
Saturday, February 19, 2022

Kubernetes: Robust management of multi-container applications

The Kubernetes project - sometimes abbreviated as k8s - was born out of an internal container management project at Google known as Borg. Kubernetes comes from the Greek word for navigator, as denoted by the seven-spoke wheel of the project's logo. Kubernetes is written in the Go programming language and provides a robust framework to deploy and manage Docker container applications on the underlying resources managed by cloud providers (such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)).

Kubernetes is fundamentally a tool to control applications composed of one or more Docker containers deployed in the cloud: this collection of containers is known as a pod. Each pod can have one or more copies (to allow redundancy), which is known as a ReplicaSet. The two main components of a Kubernetes deployment are a control plane and nodes. The control plane hosts the centralized logic for deploying and managing pods, and consists of (Figure 2.4):

- Kube-api-server: This is the main application that listens to commands from the user to deploy or update a pod, or to manage external access to pods via ingress.

- Kube-controller-manager: An application to manage functions such as controlling the number of replicas per pod.

- Cloud-controller-manager: Manages functions particular to a cloud provider.

- Etcd: A key-value store that maintains the environment and state variables of different pods.

- Kube-scheduler: An application that is responsible for finding workers to run a pod.

While we could set up our own control plane, in practice we will usually have this function managed by our cloud provider, such as Google Kubernetes Engine (GKE) or Amazon's Elastic Kubernetes Service (EKS). The Kubernetes nodes - the individual machines in the cluster - each run an application known as a kubelet, which monitors the pod(s) running on that node.
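As a minimal sketch of how the pod and replica concepts above fit together, a Deployment manifest asking Kubernetes to keep three copies of a single-container pod running might look like the following (all names and the image are our own illustration, not from the source):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # number of pod copies to keep running
  selector:
    matchLabels:
      app: web
  template:            # the pod template
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21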

Now that we have a high-level view of the Kubernetes system, let's look at the important commands you will need to interact with a Kubernetes cluster, update its components, and start and stop applications.
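Those interactions go through the kubectl command-line client, which sends requests to the kube-api-server. A few representative commands, as a sketch (the manifest file and deployment names here are illustrative):

kubectl get pods                           # list running pods
kubectl apply -f deployment.yaml           # create or update resources from a manifest
kubectl scale deployment web --replicas=5  # change the number of replicas
kubectl delete deployment web              # stop and remove the application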




Connecting Docker containers with docker-compose

So far we have only discussed a few basic Docker commands, which would allow us to run a single service in a single container. However, you can probably appreciate that in the "real world" we usually need to have one or more applications running concurrently - for example, a website will have both a web application that fetches and processes data in response to activity from an end user and a database instance to log that information. In complex applications, the website might even be composed of multiple small web applications or microservices that are specialized to particular use cases, such as the front end, user data, or an order management system. For these kinds of applications, we will need to have more than one container communicating with each other. The docker-compose tool (https://docs.docker.com/compose/) is written with such applications in mind: it allows us to specify several Docker containers in an application file using the YAML format. For example, a configuration for a website with an instance of the Redis database might look like:

version: '3'
services:
    web:
        build: .
        ports:
            - "5000:5000"
        volumes:
            - .:/code
            - logvolume01:/var/log
        links:
            - redis
    redis:
        image: redis
volumes:
    logvolume01: {}

The two application containers here are web and the redis database. The file also specifies the volumes (disks) linked to these two applications. Using this configuration, we can run the command:

docker-compose up

This starts all the containers specified in the YAML file and allows them to communicate with each other. However, even though Docker containers and docker-compose allow us to construct complex applications using consistent execution environments, we may potentially run into issues with robustness when we deploy these services to the cloud. For example, in a web application, we cannot be assured that the virtual machines that the application is running on will persist over long periods of time, so we need processes to manage self-healing and redundancy. This is also relevant to distributed machine learning pipelines, in which we do not want to have to write our own backup logic to restart a sub-segment of work. Also, while Docker has the docker-compose functionality to link together several containers in an application, it does not have robust rules for how communication should happen among those containers, or how to manage them as a unit. For these purposes, we turn to the Kubernetes library.


Important Docker commands and syntax

To understand how Docker works, it is useful to walk through the template used for all Docker containers, a Dockerfile. As an example, we will use the TensorFlow container notebook example from the Kubeflow project (https://github.com/kubeflow/kubeflow/blob/master/components/example-notebook-servers/jupyter-tensorflow-full/cpu.Dockerfile).

This file is a set of instructions for how Docker should take a base operating environment, add dependencies, and execute a piece of software once it is packaged:


FROM public.ecr.aws/jlr09q0g6/notebook-servers/jupyter-tensorflow:master-abf9ec48


# install - requirements.txt

COPY --chown=jovyan:users requirements.txt /tmp/requirements.txt

RUN python3 -m pip install -r /tmp/requirements.txt --quiet --no-cache-dir \

    && rm -f /tmp/requirements.txt

While the exact commands will differ between containers, this will give you a flavor for the way we can use containers to manage an application - in this case, running a Jupyter notebook for interactive machine learning experimentation using a consistent set of libraries. Once we have installed the Docker runtime for our particular operating system, we would execute such a file by running:

docker build -f <Dockerfilename> -t <image name:tag> .
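For instance, building the Kubeflow notebook image above from the directory containing the Dockerfile might look like this (the image name and tag are our own illustration):

docker build -f cpu.Dockerfile -t my-notebook:v1 .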

When we do this, a number of things happen. First, we retrieve the base filesystem, or image, from a remote repository, which is not unlike the way we collect JAR files from Artifactory when using Java build tools such as Gradle or Maven, or Python's pip installer. With this filesystem or image, we then set required variables for the Docker build command such as the username and TensorFlow version, and runtime environment variables for the container. We determine what shell program will be used to run the command, then we install dependencies we will need to run TensorFlow and the notebook application, and we specify the command that is run when the Docker container is started. Then we save this snapshot with an identifier composed of a base image name and one or more tags (such as version numbers, or, in many cases, simply a timestamp to uniquely identify this image). Finally, to actually start the notebook server running this container, we would issue the command:

docker run <image name:tag>

By default, Docker will run the executable command in the Dockerfile; in our present example, that is the command to start the notebook server. However, this does not have to be the case: we could have a Dockerfile that simply builds an execution environment for an application, and issue a command to run within that environment. In that case, the command would look like:

docker run <image name:tag> <command>
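For example, to open an interactive shell inside the environment we built earlier (using our illustrative image name):

docker run -it my-notebook:v1 /bin/bash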

Finally, to publish the image to a registry so that it can be shared or deployed, we would run:

docker push <image name:tag>

Note that the image name can contain a reference to a particular registry, such as a local registry or one hosted on one of the major cloud providers, such as Elastic Container Service (ECS) on AWS, Azure Kubernetes Service (AKS), or Google Container Registry. Publishing to a remote registry allows developers to share images, and us to make containers accessible to deploy in the cloud.
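As a sketch, tagging and pushing our illustrative image to a Google Container Registry project might look like this (the registry host, project, and image names are hypothetical):

docker tag my-notebook:v1 gcr.io/my-project/my-notebook:v1
docker push gcr.io/my-project/my-notebook:v1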


Docker: A lightweight virtualization solution

A consistent challenge in developing robust software applications is to make them run the same on a machine different from the one on which they are developed. These differences in environments could encompass a number of variables: operating systems, programming language library versions, and hardware such as CPU models.


Traditionally, one approach to dealing with this heterogeneity has been to use a Virtual Machine (VM). While VMs are useful to run applications on diverse hardware and operating systems, they are also limited by being resource-intensive (Figure 2.3): each VM running on a host requires the overhead resources to run a completely separate operating system, along with all the applications and dependencies within the guest system.

However, in some cases this is an unnecessary level of overhead; we do not necessarily need to run an entirely separate operating system, just a consistent environment, including libraries and dependencies, within a single operating system. This need for a lightweight framework to specify runtime environments prompted the creation of the Docker project for containerization in 2013. In essence, a container is an environment for running an application, including all dependencies and libraries, allowing reproducible deployment of web applications and other programs, such as a database or the computations in a machine learning pipeline. For our use case, we will use it to provide a reproducible Python execution environment (Python language version and libraries) to run the steps in our generative machine learning pipelines.

We will need to have Docker installed for many of the examples that will appear in the rest of this chapter and the projects in this book. For instructions on how to install Docker for your particular operating system, please refer to the directions at https://docs.docker.com/install. To verify that you have installed the application successfully, you should be able to run the following command in your terminal, which will print a confirmation message if Docker is working correctly:

docker run hello-world



VSCode

Visual Studio Code (VSCode) is an open-source code editor developed by Microsoft that can be used with many programming languages, including Python. It allows debugging and is integrated with version control tools such as Git; we can even run Jupyter notebooks (which we will describe later in this chapter) within VSCode. Instructions for installation vary by whether you are using a Linux, macOS, or Windows operating system; please see the individual instructions at https://code.visualstudio.com for your system. Once installed, we need to clone a copy of the source code for the projects in this book using Git, with the command:


git clone git@github.com:PacktPublishing/Hands-On-Generative-AI-with-Python-and-TensorFlow-2.git


This command will copy the source code for the projects in this book to our laptop, allowing us to locally run and modify the code. Once you have the code copied, open the GitHub repository for this book using VSCode (Figure 2.1). We are now ready to start installing some of the tools we will need; open the file install.sh.

One feature that will be of particular use to us is the fact that VSCode has an integrated terminal (Figure 2.2) where we can run commands: you can access this by selecting View, then Terminal from the drop-down list, which will open a command-line prompt:

Select the TERMINAL tab, and bash for the interpreter; you should now be able to enter normal commands. Change the directory to Chapter_2, where we will run our installation script, which you can open in VSCode.
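In the integrated terminal, that amounts to something like the following (assuming the repository was cloned into your home directory):

cd ~/Hands-On-Generative-AI-with-Python-and-TensorFlow-2/Chapter_2
bash install.sh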


The installation script we will run will download and install the various components we will need in our end-to-end TensorFlow lab; the overarching framework we will use for these experiments will be the Kubeflow library, which handles the various data and training pipelines that we will utilize for our projects in the later chapters of this volume. In the rest of this chapter, we will describe how Kubeflow is built on Docker and Kubernetes, and how to set up Kubeflow on several popular cloud providers.


Kubernetes, the technology on which Kubeflow is based, is fundamentally a way to manage containerized applications created using Docker, which allows reproducible, lightweight execution environments to be created and persisted for a variety of applications. While we will make use of Docker for creating reproducible experimental runtimes, to understand its place in the overall landscape of virtualization solutions (and why it has become so important to modern application development), let us take a detour to describe the background of Docker in more detail.

Friday, February 18, 2022

TensorFlow 2.0

While representing operations in the dataflow graph as primitives allows flexibility in defining new layers within the Python client API, it can also result in a lot of "boilerplate" code and repetitive syntax. For this reason, the high-level API Keras was developed to provide a high-level abstraction: layers are represented using Python classes, while a particular runtime environment (such as TensorFlow, whose operators can have different underlying implementations on CPUs, GPUs, or TPUs) executes them as a backend.

While developed as a framework-agnostic library, Keras has been included as part of TensorFlow's main release in version 2.0. For the purposes of readability, we will implement most of our models in this book in Keras, while reverting to the underlying TensorFlow 2.0 code where it is necessary to implement particular operations or highlight the underlying logic. Please see Table 2.3 for a comparison between how various neural network algorithm concepts are implemented at a low (TensorFlow) or high (Keras) level in these libraries.

Object                  TensorFlow implementation     Keras implementation
Neural network layer    Tensor computation            Python layer classes
Gradient calculation    Graph runtime operator        Python optimizer class
Loss function           Tensor computation            Python loss function
Neural network model    Graph runtime session         Python model class instance


To show you the difference between the abstraction that Keras offers versus TensorFlow 1.0 in implementing basic neural network models, let's look at an example of writing a multilayer perceptron (see Chapter 3, Building Blocks of Deep Neural Networks) using both of these frameworks. In the first case, in TensorFlow 1.0, you can see that a lot of the code involves explicitly specifying variables, functions, and matrix operations, along with the gradient function and runtime session to compute the updates to the network.


This is a multilayer perceptron in TensorFlow 1.0:

import numpy as np
import pandas as pd
import tensorflow as tf  # TensorFlow 1.x

X = tf.placeholder(dtype=tf.float64)
Y = tf.placeholder(dtype=tf.float64)
num_hidden = 128

# Build a hidden layer
w_hidden = tf.Variable(np.random.randn(784, num_hidden))
b_hidden = tf.Variable(np.random.randn(num_hidden))
p_hidden = tf.nn.sigmoid(tf.add(tf.matmul(X, w_hidden), b_hidden))

# Build another hidden layer
w_hidden2 = tf.Variable(np.random.randn(num_hidden, num_hidden))
b_hidden2 = tf.Variable(np.random.randn(num_hidden))
p_hidden2 = tf.nn.sigmoid(tf.add(tf.matmul(p_hidden, w_hidden2), b_hidden2))

# Build the output layer
w_output = tf.Variable(np.random.randn(num_hidden, 10))
b_output = tf.Variable(np.random.randn(10))
p_output = tf.nn.softmax(tf.add(tf.matmul(p_hidden2, w_output), b_output))

loss = tf.reduce_mean(tf.losses.mean_squared_error(labels=Y, predictions=p_output))
accuracy = 1 - tf.sqrt(loss)

# The original listing used an undefined minimization_op; a plain
# gradient descent step on the loss is assumed here
minimization_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

# x_train, y_train, and x_test are assumed to be loaded beforehand
feed_dict = {
    X: x_train.reshape(-1, 784),
    Y: pd.get_dummies(y_train)
}

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for step in range(10000):
        J_value = session.run(loss, feed_dict)
        acc = session.run(accuracy, feed_dict)
        if step % 100 == 0:
            print("Step:", step, " Loss:", J_value, " Accuracy:", acc)
        session.run(minimization_op, feed_dict)
    pred00 = session.run([p_output], feed_dict={X: x_test.reshape(-1, 784)})


In contrast, the implementation of the same network in Keras is vastly simplified through the use of abstract concepts embodied in Python classes, such as layers, models, and optimizers. Underlying details of the computation are encapsulated in these classes, making the logic of the code more readable.


Note also that in TensorFlow 2.0 the notion of running sessions (lazy execution, in which the network is only computed if explicitly compiled and called) has been dropped in favor of eager execution, in which the session and graph are invoked dynamically when network functions such as call and compile are executed, with the network behaving like any other Python class without explicitly creating a session scope. The notion of a global namespace in which variables are declared with tf.Variable() has also been replaced with a default garbage collection mechanism.
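As a minimal sketch of eager execution (the values here are our own illustration), operations in TensorFlow 2.0 are evaluated immediately, with no Session required:

import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.Variable([[3.0], [4.0]])
result = tf.matmul(a, b)  # computed eagerly, no session.run() needed
print(result.numpy())     # [[11.]]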

This is a multilayer perceptron in Keras:

import pandas as pd
import tensorflow as tf

l = tf.keras.layers
model = tf.keras.Sequential([
        l.Flatten(input_shape=(784,)),
        l.Dense(128, activation='relu'),
        l.Dense(128, activation='relu'),
        l.Dense(10, activation='softmax')
])
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
# x_train and y_train are assumed to be loaded beforehand
model.fit(x_train.reshape(-1, 784), pd.get_dummies(y_train),
          epochs=15, batch_size=128, verbose=1)
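After training, performance on held-out data can be checked in the same style (x_test and y_test are assumed to be loaded alongside the training data):

model.evaluate(x_test.reshape(-1, 784), pd.get_dummies(y_test))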

Now that we have covered some of the details of what the TensorFlow library is and why it is well suited to the development of deep neural network models (including the generative models we will implement in this book), let's get started building up our research environment. While we could simply use a Python package manager such as pip to install TensorFlow on our laptop, we want to make sure our process is as robust and reproducible as possible - this will make it easier to package our code to run on different machines, or keep our computations consistent by specifying the exact versions of each Python library we use in an experiment. We will start by installing an Integrated Development Environment (IDE) that will make our research easier - VSCode.