
1.5.3 Demo

The core of deep learning is the design of algorithms; deep learning frameworks are merely our tools for implementing those algorithms. In the following, we demonstrate three core functions of the TensorFlow deep learning framework to help us understand the role frameworks play in algorithm design.

a) Accelerated Calculation

The neural network is essentially composed of a large number of basic mathematical operations such as matrix multiplication and addition. One important function of TensorFlow is to use the GPU to conveniently perform parallel computing acceleration. In order to demonstrate the acceleration effect of the GPU, we can compare the mean running time of multiple matrix multiplications on the CPU and the GPU as follows.

We create two matrices A and B with shapes [1, n] and [n, 1], respectively. The size of the matrices can be adjusted using the parameter n. The code is as follows:

import tensorflow as tf

n = 10000 # matrix size; an example value, adjust to compare CPU and GPU

# Create two matrices running on CPU
with tf.device('/cpu:0'):
    cpu_a = tf.random.normal([1, n])
    cpu_b = tf.random.normal([n, 1])
    print(cpu_a.device, cpu_b.device)

# Create two matrices running on GPU
with tf.device('/gpu:0'):
    gpu_a = tf.random.normal([1, n])
    gpu_b = tf.random.normal([n, 1])
    print(gpu_a.device, gpu_b.device)

Let's implement the functions for the CPU and GPU operations and measure the computation time of the two functions through the timeit.timeit() function. It should be noted that additional environment initialization work is generally required for the first calculation, so this time should not be counted. We remove this time through a warm-up run and then measure the calculation time as follows:

import timeit

def cpu_run(): # CPU function
    with tf.device('/cpu:0'):
        c = tf.matmul(cpu_a, cpu_b)
    return c

def gpu_run(): # GPU function
    with tf.device('/gpu:0'):
        c = tf.matmul(gpu_a, gpu_b)
    return c

# First calculation needs warm-up
cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('warmup:', cpu_time, gpu_time)

# Calculate and print mean running time
cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('run time:', cpu_time, gpu_time)

We plot the computation time under CPU and GPU environments at different matrix sizes, as shown in Figure 1-21. It can be seen that when the matrix size is small, the CPU and GPU times are almost the same, which does not reflect the advantages of GPU parallel computing. When the matrix size is larger, the CPU computation time increases significantly, while the GPU takes full advantage of parallel computing and its computation time barely changes.
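The following is a minimal sketch of one way such a comparison could be produced; the list of sizes and the matplotlib plotting calls are illustrative assumptions, not part of the original demo:

import timeit
import tensorflow as tf
import matplotlib.pyplot as plt

sizes = [64, 256, 1024, 4096]  # example sizes, chosen arbitrarily
cpu_times, gpu_times = [], []
for n in sizes:
    with tf.device('/cpu:0'):
        cpu_a = tf.random.normal([1, n])
        cpu_b = tf.random.normal([n, 1])
    with tf.device('/gpu:0'):
        gpu_a = tf.random.normal([1, n])
        gpu_b = tf.random.normal([n, 1])

    def cpu_run():
        with tf.device('/cpu:0'):
            return tf.matmul(cpu_a, cpu_b)

    def gpu_run():
        with tf.device('/gpu:0'):
            return tf.matmul(gpu_a, gpu_b)

    cpu_run(); gpu_run()  # warm-up runs, excluded from timing
    cpu_times.append(timeit.timeit(cpu_run, number=10))
    gpu_times.append(timeit.timeit(gpu_run, number=10))

# Plot time against matrix size for both devices
plt.plot(sizes, cpu_times, label='CPU')
plt.plot(sizes, gpu_times, label='GPU')
plt.xlabel('matrix size n')
plt.ylabel('time for 10 runs (s)')
plt.legend()
plt.show()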

b) Automatic Gradient Calculation

When using TensorFlow to construct the forward calculation process, in addition to obtaining numerical results, TensorFlow also automatically builds a computational graph. TensorFlow provides automatic differentiation, which can calculate the derivative of the output with respect to the network parameters without manual derivation. Consider the following function:

y = aw^2 + bw + c

The derivative of the output y with respect to the variable w is

dy/dw = 2aw + b

Consider the derivative at (a, b, c, w) = (1, 2, 3, 4). We can get dy/dw = 2*1*4 + 2 = 10.

With TensorFlow, we can directly calculate the derivative given the expression of a function, without manually deriving the expression of the derivative. TensorFlow can derive it automatically. The code is implemented as follows:

import tensorflow as tf

# Create 4 tensors

a = tf.constant(1.)

b = tf.constant(2.)

c = tf.constant(3.)

w = tf.constant(4.)

with tf.GradientTape() as tape: # Track derivatives
    tape.watch([w]) # Add w to the derivative watch list
    # Define the function
    y = a * w**2 + b * w + c

# Automatic derivative calculation
[dy_dw] = tape.gradient(y, [w])
print(dy_dw) # Print the derivative

The result of the program is 

tf.Tensor(10.0, shape=(), dtype=float32)

It can be seen that the result of TensorFlow's automatic differentiation is consistent with the result of the manual calculation.

c) Common Neural Network Interface

In addition to underlying mathematical functions such as matrix multiplication and addition, TensorFlow also provides a series of convenient functions for deep learning systems, such as commonly used neural network operations, common network layers, network training, and model saving, loading, and deployment. Using TensorFlow, you can easily use these functions to complete common production processes efficiently and stably.
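As a brief illustration of these high-level interfaces, the following minimal sketch builds, runs, and saves a small fully connected network with tf.keras; the layer sizes, input shape, and file name are illustrative choices, not from the text:

import tensorflow as tf

# Build a small fully connected network with the high-level Sequential interface
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),  # hidden layer (size chosen arbitrarily)
    tf.keras.layers.Dense(10)                      # output layer
])

x = tf.random.normal([4, 784])  # a batch of 4 dummy input vectors
out = model(x)                  # forward pass; layers are built on the first call
print(out.shape)                # (4, 10)

model.save('model.h5')          # save the architecture and weights to a file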


1.5.2 TensorFlow 2 and 1.x

TensorFlow 2 is a completely different framework from TensorFlow 1.x in terms of user experience. TensorFlow 2 is not compatible with TensorFlow 1.x code, and the two differ greatly in programming style and functional interface design. TensorFlow 1.x code must be migrated manually, as automated migration methods are not reliable. Google is about to stop updating TensorFlow 1.x, so it is not recommended to learn TensorFlow 1.x now.

TensorFlow 2 supports the dynamic graph priority mode. You obtain both the computational graph and the numerical results during the calculation, and you can debug the code and print data in real time. The network is built like building blocks, stacked layer by layer, which is in line with software development thinking.

Taking the simple addition 2.0 + 4.0 as an example, in TensorFlow 1.x we need to create a computational graph first, as follows:

import tensorflow as tf

# 1. Create computation graph with tf 1.x
# Create 2 input variables with fixed name and type
a_ph = tf.placeholder(tf.float32, name='variable_a')
b_ph = tf.placeholder(tf.float32, name='variable_b')
# Create output operation and name
c_op = tf.add(a_ph, b_ph, name='variable_c')

The process of creating a computational graph is analogous to establishing the formula c = a + b through symbols. It only records the computational steps of the formula and does not actually calculate the numerical results. The numerical results can only be obtained by running the output c and assigning the values a = 2.0 and b = 4.0, as follows:

# 2. Run computational graph with tf 1.x
# Create running environment
sess = tf.InteractiveSession()
# Initialization
init = tf.global_variables_initializer()
sess.run(init) # Run the initialization
# Run the computation graph and assign the result to c_numpy
c_numpy = sess.run(c_op, feed_dict={a_ph: 2., b_ph: 4.})
# Print the output
print('a+b=', c_numpy)

It can be seen that performing even a simple addition operation in TensorFlow 1.x is tedious, let alone creating complex neural network algorithms. This programming method of creating a computational graph first and running it later is called symbolic programming.

Next, we use TensorFlow 2 to complete the same operation as follows:

import tensorflow as tf

# Use TensorFlow 2 to run
# 1. Create and initialize variables
a = tf.constant(2.)
b = tf.constant(4.)

# 2. Run and get the result directly
print('a+b=', a + b)

As you can see, the calculation process is very simple, and there are no extra calculation steps.

The method of getting both the computational graph and the numerical results at the same time is called imperative programming, also known as dynamic graph mode. TensorFlow 2 and PyTorch are both developed using the dynamic graph priority mode, which is easy to debug. In general, the dynamic graph mode is highly efficient for development, but it may not be as efficient as the static graph mode at run time. TensorFlow 2 also supports converting the dynamic graph mode to the static graph mode through tf.function, achieving a win-win of development and running efficiency. In the remaining part of this book, we use TensorFlow to refer to TensorFlow 2 in general.
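As a minimal sketch of this conversion (the add function below is a made-up example, not from the text), decorating a Python function with tf.function traces it into a static graph on its first call, and later calls reuse the compiled graph:

import tensorflow as tf

@tf.function  # trace the Python function into a static computation graph
def add(a, b):
    return a + b

# The first call traces and compiles the graph; later calls reuse it
print(add(tf.constant(2.), tf.constant(4.)))  # tf.Tensor(6.0, shape=(), dtype=float32)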


1.5.1 Major Frameworks

- Theano is one of the earliest deep learning frameworks. It was developed by Yoshua Bengio and Ian Goodfellow. It is a Python-based computing library aimed at low-level operations. Theano supports both GPU and CPU operations. Due to Theano's low development efficiency, long model compilation times, and its developers switching to TensorFlow, Theano has now stopped maintenance.


- Scikit-learn is a complete computing library for machine learning algorithms. It has built-in support for common traditional machine learning algorithms, and it has rich documentation and examples. However, scikit-learn is not specifically designed for neural networks. It does not support GPU acceleration, and the implementation of neural network-related layers is also lacking.


- Caffe was developed by Jia Yangqing in 2013. It is mainly used for applications using convolutional neural networks and is not suitable for other types of neural networks. Caffe's main development language is C++, and it also provides interfaces for other languages such as Python. It supports both GPU and CPU. Due to its early development time and high visibility in the industry, Facebook launched an upgraded version of Caffe, Caffe2, in 2017. Caffe2 has now been integrated into the PyTorch library.

- Torch is a very good scientific computing library, developed based on the less popular programming language Lua. Torch is highly flexible, a quality that PyTorch later inherited. However, due to the small number of Lua users, Torch never achieved mainstream adoption.

- MXNet was developed by Chen Tianqi and Li Mu and is the official deep learning framework of Amazon. It adopts a mixed approach of imperative and symbolic programming, which gives it high flexibility, fast running speed, and rich documentation and examples.

- PyTorch is a deep learning framework launched by Facebook based on the original Torch framework, using Python as the main development language. PyTorch borrowed the design style of Chainer and adopted imperative programming, which makes it very convenient to build and debug networks. Although PyTorch was only released in 2017, it has developed rapidly thanks to its sophisticated and compact interface design. After the 1.0 version, the original PyTorch and Caffe2 were merged to make up for PyTorch's deficiencies in industrial deployment. Overall, PyTorch is an excellent deep learning framework.

- Keras is a high-level framework implemented on top of the low-level operations provided by frameworks such as Theano and TensorFlow. It provides a large number of high-level interfaces for rapid training and testing. For common applications, developing with Keras is very efficient. But because it has no low-level implementation of its own and must abstract over the underlying framework, its running efficiency is not high and its flexibility is average.

- TensorFlow is a deep learning framework released by Google in 2015. The initial version only supported symbolic programming. Thanks to its early release and Google's influence in the field of deep learning, TensorFlow quickly became the most popular deep learning framework. However, due to frequent changes in interface design, redundant functional design, and the difficulty of developing and debugging with symbolic programming, TensorFlow 1.x was once criticized by the industry. In 2019, Google launched the official version of TensorFlow 2, which runs in dynamic graph priority mode and avoids many defects of TensorFlow 1.x. TensorFlow 2 has been widely recognized by the industry.


At present, TensorFlow and PyTorch are the two most widely used deep learning frameworks in industry. TensorFlow has a complete solution and user base in the industry. Thanks to its streamlined and flexible interface design, PyTorch can quickly build and debug networks, and it has received rave reviews in academia. After TensorFlow 2 was released, it became easier for users to learn TensorFlow and seamlessly deploy models to production. This book uses TensorFlow 2 as the main framework to implement deep learning algorithms.

Here are the connections and differences between TensorFlow and Keras. Keras can be understood as a set of high-level API design specifications, and Keras itself has an official implementation of those specifications. The same specifications are also implemented in TensorFlow, in the tf.keras module, and tf.keras will be used as the unique high-level interface to avoid interface redundancy. Unless otherwise specified, Keras in this book refers to tf.keras.
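For example, as a minimal sketch using the public TensorFlow import paths, the tf.keras module is accessed through the tensorflow package rather than the standalone keras package:

import tensorflow as tf
from tensorflow.keras import layers  # tf.keras submodule with common layer classes

# Both names below refer to the same tf.keras Dense layer class
layer1 = tf.keras.layers.Dense(10)
layer2 = layers.Dense(10)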





1.4 DEEP LEARNING APPLICATIONS

As introduced earlier, there is an abundance of scenarios and applications where Deep Learning is being used. Let us look at a few applications of Deep Learning for a more profound understanding of where exactly DL is applied.

1.3 WHAT IS THE NEED OF A TRANSITION FROM MACHINE LEARNING TO DEEP LEARNING?

Machine Learning has been around for a very long time. Machine Learning helped and motivated scientists and researchers to come up with newer algorithms to meet the expectations of technology enthusiasts. The major limitation of Machine Learning lies in the explicit human intervention required for the extraction of features from the data we work with (Figure 1.1). Deep Learning allows for automated feature extraction and learning, with the model adapting all by itself to the dynamism of the data.

Figure 1.1: Apple => Manual feature extraction => Learning => Machine learning => "Apple" (the limitation of Machine Learning), versus Apple => Automatic feature extraction and learning => Deep learning => "Apple" (the advantage of Deep Learning).

Deep Learning very closely tries to imitate the structure and pattern of biological neurons. This single concept, though it makes the approach more complex, helps produce effective predictions. Human intelligence is supposed to be the best of all types of intelligence in the universe, and researchers are still striving to understand the complexity of how the human brain works. A Deep Learning module acts like a black box, which takes inputs, does the processing in the black box, and gives the desired output. With the help of GPUs and TPUs, it lets us work with complex algorithms at a faster pace. The model developed can be reused for similar future applications.



1.2 THE NEED: WHY DEEP LEARNING?

Deep Learning applications have become an indispensable part of contemporary life. Whether we acknowledge it or not, there is no single day in which we do not use virtual assistants like Google Home, Alexa, Siri, and Cortana at home. We commonly see our parents use Google Voice Search to get search results easily without the effort of typing. Shopaholics cannot imagine shopping online without the appropriate recommendations scrolling in. We never perceive how intensely Deep Learning has invaded our normal lifestyles. We already have cars in the market, like the MG Hector, which can act according to our voice commands. We already have the luxury of smartphones, smart homes, smart electrical appliances, and so forth. We are invariably taken to a new level of lifestyle and comfort by the technological advancements in the field of Deep Learning.

1.1 INTRODUCTION

Artificial Intelligence and Machine Learning have been buzzwords for more than a decade now; they are what make a machine artificially intelligent. Computational speed and enormous amounts of data have stimulated academics to dive deep and unleash the tremendous research potential that lies within. Even though Machine Learning helped us start building intricate and robust systems, Deep Learning has curiously entered as a subset of AI, producing incredible results and outputs in the field.

Deep Learning architecture is built very similarly to the working of a human brain, whereby scientists teach the machine to learn in the way that humans learn. This is definitely a tedious and challenging task, as the working of the human brain itself is a complex phenomenon. Research in the field has produced valuable outcomes, making things easier for scholars and scientists to understand and enabling them to build worthy applications for the welfare of society. The various layers in the neural nets of Deep Learning auto-adapt and learn according to the volume of the datasets and the complexity of the algorithms.

The efficacy of Deep Learning algorithms is in no way comparable to that of traditional Machine Learning; Deep Learning has helped industrialists deal with unsolved problems in a convincing way, opening a wide horizon with ample opportunity. Natural language processing, speech and image recognition, the entertainment sector, online retail, banking and finance, the automotive industry, chatbots, recommender systems, and voice assistants through to self-driving cars are some of the major advancements in the field of Deep Learning.