
Friday, February 18, 2022

TensorFlow 2.0

While representing operations in the dataflow graph as primitives allows flexibility in defining new layers within the Python client API, it can also result in a lot of "boilerplate" code and repetitive syntax. For this reason, the high-level API Keras was developed to provide a high-level abstraction: layers are represented using Python classes, while a particular runtime environment (such as TensorFlow) executes them as a backend, so the same TensorFlow operators can have different underlying implementations on CPUs, GPUs, or TPUs.
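As a brief illustration of this idea (a minimal sketch of our own, not code from this chapter), a layer can be defined as a Python class by subclassing tf.keras.layers.Layer, while the numerical operations inside it are dispatched to whichever backend implementation is available on the current hardware:

import tensorflow as tf

class SimpleDense(tf.keras.layers.Layer):
    """A toy fully connected layer expressed as a Python class (illustrative only)."""
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # Weights are created once the input shape is known
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal')
        self.b = self.add_weight(shape=(self.units,), initializer='zeros')

    def call(self, inputs):
        # The matmul below runs on CPU, GPU, or TPU depending on the runtime
        return tf.nn.sigmoid(tf.matmul(inputs, self.w) + self.b)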

While developed as a framework-agnostic library, Keras has been included as part of TensorFlow's main release in version 2.0. For the purposes of readability, we will implement most of our models in this book in Keras, while reverting to the underlying TensorFlow 2.0 code where it is necessary to implement particular operations or highlight the underlying logic. Please see Table 2.3 for a comparison of how various neural network algorithm concepts are implemented at a low (TensorFlow) or high (Keras) level in these libraries.

Table 2.3: TensorFlow and Keras implementations of neural network concepts

Object                  TensorFlow implementation    Keras implementation
Neural network layer    Tensor computation           Python layer classes
Gradient calculation    Graph runtime operator       Python optimizer class
Loss function           Tensor computation           Python loss function
Neural network model    Graph runtime session        Python model class instance


To show you the difference between the abstraction offered by Keras versus TensorFlow 1.0 in implementing basic neural network models, let's look at an example of writing a simple multilayer perceptron (see Chapter 3, Building Blocks of Deep Neural Networks) using both of these frameworks. In the first case, in TensorFlow 1.0, you can see that a lot of the code involves explicitly specifying variables, functions, and matrix operations, along with the gradient function and runtime session used to compute the updates to the network.


This is a multilayer perceptron in TensorFlow 1.0:

import numpy as np
import pandas as pd
import tensorflow as tf  # TensorFlow 1.x API

# x_train, y_train, and x_test are assumed to be the flattened MNIST arrays
# loaded earlier in the chapter
X = tf.placeholder(dtype=tf.float64)
Y = tf.placeholder(dtype=tf.float64)
num_hidden = 128

# Build a hidden layer
w_hidden = tf.Variable(np.random.randn(784, num_hidden))
b_hidden = tf.Variable(np.random.randn(num_hidden))
p_hidden = tf.nn.sigmoid(tf.add(tf.matmul(X, w_hidden), b_hidden))

# Build another hidden layer
w_hidden2 = tf.Variable(np.random.randn(num_hidden, num_hidden))
b_hidden2 = tf.Variable(np.random.randn(num_hidden))
p_hidden2 = tf.nn.sigmoid(tf.add(tf.matmul(p_hidden, w_hidden2), b_hidden2))

# Build the output layer
w_output = tf.Variable(np.random.randn(num_hidden, 10))
b_output = tf.Variable(np.random.randn(10))
p_output = tf.nn.softmax(tf.add(tf.matmul(p_hidden2, w_output), b_output))

loss = tf.reduce_mean(tf.losses.mean_squared_error(labels=Y, predictions=p_output))
accuracy = 1 - tf.sqrt(loss)

# Training operation (not shown in the original listing); plain gradient
# descent is assumed here
minimization_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

feed_dict = {
    X: x_train.reshape(-1, 784),
    Y: pd.get_dummies(y_train)
}

with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    for step in range(10000):
        J_value = session.run(loss, feed_dict)
        acc = session.run(accuracy, feed_dict)
        if step % 100 == 0:
            print("Step:", step, " Loss:", J_value, " Accuracy:", acc)
        session.run(minimization_op, feed_dict)

    pred00 = session.run([p_output], feed_dict={X: x_test.reshape(-1, 784)})


In contrast, the implementation of the same model in Keras is vastly simplified through the use of abstract concepts embodied in Python classes, such as layers, models, and optimizers. The underlying details of the computation are encapsulated in these classes, making the logic of the code more readable.


Note also that in TensorFlow 2.0 the notion of running sessions (lazy execution, in which the network is only computed if explicitly compiled and called) has been dropped in favor of eager execution, in which the session and graph are invoked dynamically when network functions such as call and compile are executed, with the network behaving like any other Python class without explicitly creating a session scope. The notion of a global namespace in which variables are declared with tf.Variable() has also been replaced with a default garbage collection mechanism.
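To illustrate this difference, here is a small sketch (our own example, assuming TensorFlow 2.x is installed) showing that operations in eager mode execute immediately and return concrete values, with no session required:

import tensorflow as tf  # assumes TensorFlow 2.x, where eager execution is the default

x = tf.constant([[1.0, 2.0]])
w = tf.Variable([[0.5], [0.5]])  # no global variable namespace; freed by normal Python garbage collection
y = tf.matmul(x, w)              # executes immediately, no session.run() needed
print(y.numpy())                 # [[1.5]]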

This is a multilayer perceptron in Keras:

import tensorflow as tf
import pandas as pd  # for pd.get_dummies below

l = tf.keras.layers

model = tf.keras.Sequential([
    l.Flatten(input_shape=(784,)),
    l.Dense(128, activation='relu'),
    l.Dense(128, activation='relu'),
    l.Dense(10, activation='softmax')
])

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.summary()

model.fit(x_train.reshape(-1, 784), pd.get_dummies(y_train),
          epochs=15, batch_size=128, verbose=1)

Now that we have covered some of the details of what the TensorFlow library is and why it is well suited to the development of deep neural network models (including the generative models we will implement in this book), let's get started building up our research environment. While we could simply use a Python package manager such as pip to install TensorFlow on our laptop, we want to make sure our process is as robust and reproducible as possible; this will make it easier to package our code to run on different machines, or to keep our computations consistent by specifying the exact versions of each Python library we use in an experiment. We will start by installing an Integrated Development Environment (IDE) that will make our research easier: VSCode.
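As a small aside (a sketch of our own, assuming only TensorFlow and the Python standard library are available), one simple way to record the exact versions used in an experiment from within Python is:

import sys
import tensorflow as tf

# Log the exact interpreter and library versions used for this experiment
print("Python:", sys.version.split()[0])
print("TensorFlow:", tf.__version__)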



