
Saturday, May 21, 2022

1.5.1 Major Frameworks

- Theano is one of the earliest deep learning frameworks. It was developed by Yoshua Bengio and Ian Goodfellow. It is a Python-based computing library for low-level operations, and it supports both GPU and CPU computation. Due to its low development efficiency, long model compilation times, and its developers switching to TensorFlow, Theano has now stopped maintenance.


- Scikit-learn is a complete computing library for machine learning algorithms. It has built-in support for common traditional machine learning algorithms, and it has rich documentation and examples. However, Scikit-learn is not specifically designed for neural networks: it does not support GPU acceleration, and its implementation of neural network-related layers is lacking.


- Caffe was developed by Jia Yangqing in 2013. It is mainly used for applications of convolutional neural networks and is not suitable for other types of neural networks. Caffe's main development language is C++, and it also provides interfaces for other languages such as Python. It supports both GPU and CPU. Because Caffe was developed relatively early and was highly visible in the industry, Facebook launched an upgraded version of it, Caffe2, in 2017. Caffe2 has now been integrated into the PyTorch library.

- Torch is a very good scientific computing library, developed on the less popular programming language Lua. Torch is highly flexible, and much of its excellent design was inherited by PyTorch. However, due to the small number of Lua users, Torch never achieved mainstream adoption.

- MXNet was developed by Chen Tianqi and Li Mu and is the official deep learning framework of Amazon. It adopts a mix of imperative and symbolic programming, which gives it high flexibility, fast running speed, and rich documentation and examples.
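The difference between the two paradigms can be sketched in plain Python (a framework-agnostic toy, not actual MXNet code): an imperative program computes each value as the line runs, while a symbolic program first describes the computation as deferred operations and only evaluates them when executed.

```python
# Toy contrast between imperative and symbolic programming styles.
# Illustrative sketch only; real frameworks build far richer graphs.

def imperative_add_scaled(a, b, scale):
    # Imperative: every operation executes immediately, so
    # intermediate values can be inspected or printed at any point.
    s = a + b          # computed right now
    return s * scale   # computed right now

def make_symbolic_add_scaled():
    # Symbolic: describe the computation first; nothing runs yet.
    def graph(feed):
        s = feed["a"] + feed["b"]     # evaluated only when run
        return s * feed["scale"]
    return graph                      # the "compiled" computation

print(imperative_add_scaled(2, 3, 10))            # 50, computed eagerly

graph = make_symbolic_add_scaled()                # build phase: no computation
print(graph({"a": 2, "b": 3, "scale": 10}))       # run phase: 50
```

Symbolic graphs are harder to debug but let a framework optimize the whole computation before running it, which is why MXNet mixes the two styles.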

- PyTorch is a deep learning framework launched by Facebook, based on the original Torch framework, with Python as the main development language. PyTorch borrowed the design style of Chainer and adopted imperative programming, which makes it very convenient to build and debug networks. Although PyTorch was only released in 2017, it has developed rapidly thanks to its sophisticated and compact interface design. After the 1.0 version, the original PyTorch and Caffe2 were merged to make up for PyTorch's deficiencies in industrial deployment. Overall, PyTorch is an excellent deep learning framework.

- Keras is a high-level framework implemented on top of the underlying operations provided by frameworks such as Theano and TensorFlow. It provides a large number of high-level interfaces for rapid training and testing. For common applications, developing with Keras is very efficient. But because Keras has no low-level implementation of its own and must abstract over the underlying framework, its running efficiency is not high and its flexibility is average.

- TensorFlow is a deep learning framework released by Google in 2015. The initial version supported only symbolic programming. Thanks to its early release and Google's influence in the field of deep learning, TensorFlow quickly became the most popular deep learning framework. However, due to frequent changes in interface design, redundant functionality, and the difficulty of developing and debugging with symbolic programming, TensorFlow 1.x was once widely criticized by the industry. In 2019, Google launched the official version of TensorFlow 2, which runs in dynamic-graph priority mode and avoids many defects of TensorFlow 1.x. TensorFlow 2 has been widely recognized by the industry.


At present, TensorFlow and PyTorch are the two most widely used deep learning frameworks in industry. TensorFlow has a complete solution and user base in the industry. Thanks to its streamlined and flexible interface design, PyTorch can quickly build and debug networks, and it has received rave reviews in academia. The release of TensorFlow 2 makes it easier for users to learn TensorFlow and to seamlessly deploy models to production. This book uses TensorFlow 2 as the main framework to implement deep learning algorithms.

Here are the connections and differences between TensorFlow and Keras. Keras can be understood as a set of high-level API design specifications, of which Keras itself provides an official implementation. The same specifications are also implemented in TensorFlow as the tf.keras module, and tf.keras will be used as the sole high-level interface to avoid interface redundancy. Unless otherwise specified, Keras in this book refers to tf.keras.
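As a minimal sketch of what using the tf.keras high-level interface looks like (assuming TensorFlow 2 is installed; the layer widths and the 784-dimensional input are arbitrary illustrative choices, not values from this chapter):

```python
import tensorflow as tf  # TensorFlow 2.x

# Build a small fully connected classifier with the tf.keras
# Sequential API; sizes here are arbitrary for illustration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                       # e.g. a flattened image
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),    # 10-class output
])

# Configure training with one compile() call.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.summary()  # prints layer shapes and parameter counts
```

The same few lines would look nearly identical with standalone Keras, which is exactly the point: tf.keras implements the Keras specification on top of TensorFlow's low-level operations.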





1.4 DEEP LEARNING APPLICATIONS

As introduced earlier, there is an abundance of scenarios and applications where Deep Learning is being used. Let us look at a few applications of Deep Learning for a more profound understanding of where exactly DL is applied.

1.3 WHAT IS THE NEED OF A TRANSITION FROM MACHINE LEARNING TO DEEP LEARNING?

Machine Learning has been around for a very long time. Machine Learning helped and motivated scientists and researchers to come up with newer algorithms to meet the expectations of technology enthusiasts. The major limitation of Machine Learning lies in the explicit human intervention required for the extraction of features in the data that we work with (Figure 1.1). Deep Learning allows for automated feature extraction and learning of the model, adapting all by itself to the dynamism of data.

Apple => Manual feature extraction => Learning => Machine learning => Apple

Limitation of Machine Learning.

Apple => Automatic feature extraction and learning => Deep learning => Apple

Advantages of Deep Learning.

Deep Learning very closely tries to imitate the structure and pattern of biological neurons. This single concept, though it makes the model more complex, still helps to come out with effective predictions. Human intelligence is supposed to be the best of all types of intelligence in the universe. Researchers are still striving to understand the complexity of how the human brain works. The Deep Learning module acts like a black box, which takes inputs, does the processing in the black box, and gives the desired output. With the help of GPUs and TPUs, it lets us work with complex algorithms at a faster pace. The model developed could be reused for similar future applications.



1.2 THE NEED: WHY DEEP LEARNING?

Deep Learning applications have become an indispensable part of contemporary life. Whether we acknowledge it or not, there is no single day in which we do not use our virtual assistants like Google Home, Alexa, Siri, and Cortana at home. We commonly see our parents use Google Voice Search for getting search results easily, without requiring the effort of typing. Shopaholics cannot imagine shopping online without the appropriate recommendations scrolling in. We never perceive how intensely Deep Learning has invaded our normal lifestyles. We already have automatic cars in the market, like the MG Hector, which can perform according to our communication. We already have the luxury of smartphones, smart homes, smart electrical appliances, and so forth. We are invariably taken to a new status of lifestyle and comfort with the technological advancements that happen in the field of Deep Learning.

1.1 INTRODUCTION

Artificial Intelligence and Machine Learning have been buzzwords for more than a decade now, promising to make machines artificially intelligent. Growing computational speed and enormous amounts of data have stimulated academics to dive deep and unleash the tremendous research potential that lies within. Even though Machine Learning helped us start building intricate and robust systems, Deep Learning has curiously entered as a subset of AI, producing incredible results and outputs in the field.

Deep Learning architecture is built very similar to the working of a human brain, whereby scientists teach the machine to learn in a way that humans learn. This definitely is a tedious and challenging task, as the working of the human brain itself is a complex phenomenon. Research in the field has resulted in valuable outcomes, making things easily understandable for scholars and scientists to build worthy applications for the welfare of society. They have made the various layers in neural nets in Deep Learning auto-adapt and learn according to the volume of datasets and complexity of algorithms.

The efficacy of Deep Learning algorithms is in no way comparable to that of traditional Machine Learning. Deep Learning has helped industrialists deal with unsolved problems in a convincing way, opening a wide horizon with ample opportunity. Natural language processing, speech and image recognition, the entertainment sector, online retailing sectors, banking and finance sectors, the automotive industry, chatbots, recommender systems, and voice assistants to self-driving cars are some of the major advancements in the field of Deep Learning.

CHAPTER 1. Introduction to Deep Learning

 LEARNING OBJECTIVES

After reading through this chapter, the reader will understand the following:

- The need for Deep Learning

- What is the need of a transition from Machine Learning to Deep Learning?

- The tools and languages available for Deep Learning

- Further reading


Tuesday, May 17, 2022

1.5 Deep Learning Frameworks

A craftsman who wants to do his work well must first sharpen his tools. Having learned the basic knowledge of deep learning, let's pick the tools used to implement deep learning algorithms.