
Saturday, May 21, 2022

1.5.2 TensorFlow 2 and 1.x

TensorFlow 2 is a completely different framework from TensorFlow 1.x in terms of user experience: it is not compatible with TensorFlow 1.x code, and it differs greatly in programming style and functional interface design. TensorFlow 1.x code must be migrated largely by hand, since automated migration methods are not reliable. Google is about to stop updating TensorFlow 1.x, so learning TensorFlow 1.x is not recommended now.

TensorFlow 2 supports the dynamic-graph-first (eager) mode. You can obtain both the computational graph and the numerical results during the calculation, debug the code, and print data in real time. A network is built like building blocks, stacked layer by layer, which is in line with software development thinking.
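The "building block" style can be sketched with a minimal eager-mode example; the layer sizes and the use of tf.keras.Sequential here are illustrative, not from the text:

```python
import tensorflow as tf

# Stack layers like building blocks (sizes are illustrative)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10),
])

# In dynamic graph (eager) mode the result is computed immediately,
# so intermediate values can be printed and debugged in real time.
x = tf.random.normal([4, 32])  # a batch of 4 samples with 32 features
y = model(x)                   # runs right away; no session needed
print(y.shape)                 # shape is (4, 10)
```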

Taking simple addition 2.0 + 4.0 as an example, in TensorFlow 1.x, we need to create a calculation graph first as follows:

import tensorflow as tf

# 1. Create computation graph with TF 1.x
# Create 2 input variables with fixed name and type
a_ph = tf.placeholder(tf.float32, name='variable_a')
b_ph = tf.placeholder(tf.float32, name='variable_b')
# Create output operation and name
c_op = tf.add(a_ph, b_ph, name='variable_c')

The process of creating a computational graph is analogous to establishing the formula c = a + b through symbols: it only records the computational steps of the formula and does not actually calculate the numerical results. The numerical results can only be obtained by running the output c and feeding in the values a = 2.0 and b = 4.0 as follows:

# 2. Run computational graph with TF 1.x
# Create running environment
sess = tf.InteractiveSession()
# Initialization
init = tf.global_variables_initializer()
sess.run(init)  # Run the initialization
# Run the computational graph and return the value to c_numpy
c_numpy = sess.run(c_op, feed_dict={a_ph: 2., b_ph: 4.})
# Print out the output
print('a+b=', c_numpy)

It can be seen that performing even a simple addition in TensorFlow 1.x is tedious, let alone creating complex neural network algorithms. This programming style of first creating a computational graph and then running it is called symbolic programming.

Next, we use TensorFlow 2 to complete the same operation as follows:

import tensorflow as tf

# Use TensorFlow 2 to run
# 1. Create and initialize variables
a = tf.constant(2.)
b = tf.constant(4.)
# 2. Run and get result directly
print('a+b=', a + b)

As you can see, the calculation process is very simple, and there are no extra calculation steps.

The method of obtaining both the computational graph and the numerical results at the same time is called imperative programming, also known as dynamic graph mode. TensorFlow 2 and PyTorch are both developed in dynamic-graph-first mode, which is easy to debug. In general, dynamic graph mode is highly efficient for development, but it may not run as efficiently as static graph mode. TensorFlow 2 also supports converting dynamic graph code to static graph mode through tf.function, achieving a win-win of development and running efficiency. In the remaining part of this book, we use TensorFlow to refer to TensorFlow 2 in general.
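The tf.function conversion mentioned above can be sketched as follows; the function body is illustrative, reusing the same addition example:

```python
import tensorflow as tf

# An ordinary Python function running in dynamic graph (eager) mode
def add(a, b):
    return a + b

# Wrap it with tf.function to trace it into a static graph,
# which can run faster on repeated calls
add_graph = tf.function(add)

a = tf.constant(2.)
b = tf.constant(4.)
print('eager:', add(a, b).numpy())        # computed immediately
print('graph:', add_graph(a, b).numpy())  # traced into a graph on first call
```

Both calls return the same value; only the execution mode differs, which is why the conversion preserves development convenience while gaining runtime efficiency.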

