
Sunday, February 13, 2022

2. Setting Up a TensorFlow Lab

Now that you have seen all the amazing applications of generative models in Chapter 1, An Introduction to Generative AI: "Drawing" Data from Models, you might be wondering how to get started with implementing projects that use these kinds of algorithms. In this chapter, we will walk through a number of tools that we will use throughout the rest of the book to implement the deep neural networks that are used in various generative AI models. Our primary tool is the TensorFlow 2.0 framework; we will also use a number of additional resources to make the implementation process easier (summarized in Table 2.1).


We can broadly categorize these tools as follows:

- Resources for replicable dependency management (Docker, Anaconda); a short environment sanity check follows Table 2.1 below

- Exploratory tools for data munging and algorithm hacking (Jupyter)

- Utilities to deploy these resources to the cloud and manage their lifecycle (Kubernetes, Kubeflow, Terraform)


| Tool | Project site | Use |
| --- | --- | --- |
| Docker | www.docker.com | Application runtime dependency encapsulation |
| Anaconda | www.anaconda.com | Python language package management |
| Jupyter | jupyter.org | Interactive Python runtime and plotting / data exploration tool |
| Kubernetes | kubernetes.io | Docker container orchestration and resource management |
| Kubeflow | www.kubeflow.org | Machine learning workflow engine developed on Kubernetes |
| Terraform | www.terraform.io | Infrastructure scripting language for configurable and consistent deployments of Kubeflow and Kubernetes |
| VSCode | code.visualstudio.com | Integrated development environment (IDE) |

Table 2.1: A summary of the tools we will use in this book
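Once an environment built with these tools is in place, a quick sanity check run from a Jupyter notebook or a plain Python shell confirms which TensorFlow version and which accelerators the environment actually exposes. The snippet below is a minimal illustrative sketch, not code taken from this chapter.

```python
# Minimal environment sanity check (illustrative sketch, not code from the chapter).
import sys

import tensorflow as tf

print("Python runtime :", sys.version.split()[0])
print("TensorFlow     :", tf.__version__)

# List the devices TensorFlow can see in this environment; an empty GPU list
# means training will fall back to the CPU.
print("CPUs visible   :", tf.config.list_physical_devices("CPU"))
print("GPUs visible   :", tf.config.list_physical_devices("GPU"))
```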


On our journey to bring our code from our laptops to the cloud in this chapter, we will first describe some background on how TensorFlow works when running locally. We will then describe a wide array of software tools that will make it easier to run an end-to-end TensorFlow lab locally or in the cloud, such as notebooks, containers, and cluster managers. Finally, we will walk through a simple practical example of setting up a reproducible research environment, running local and distributed training, and recording our results. We will also examine how we might parallelize TensorFlow across multiple CPU/GPU units within a machine (vertical scaling) and multiple machines in the cloud (horizontal scaling) to accelerate training. By the end of this chapter, we will be ready to extend this laboratory framework to tackle implementing projects using various generative AI models.
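As a concrete sketch of the vertical-scaling case, TensorFlow's tf.distribute API provides MirroredStrategy, which replicates a model across the GPUs of a single machine and averages gradients across the replicas; MultiWorkerMirroredStrategy extends the same pattern across multiple machines for horizontal scaling. The toy model and random data below are assumptions for illustration, not an example from this chapter.

```python
# Sketch of single-machine (vertical) scaling with tf.distribute; the toy model
# and random data are illustrative assumptions, not an example from the book.
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across all GPUs on this machine and
# averages gradients across the replicas; with no GPUs it simply runs on the CPU.
# tf.distribute.MultiWorkerMirroredStrategy plays the analogous role across a
# cluster of machines (horizontal scaling).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# The model and optimizer must be created inside the strategy's scope so that
# their variables are mirrored onto every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Dummy data just to show that training runs; each global batch is split
# evenly across the replicas.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=64, verbose=2)
```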


First, let's start by diving more into the details of TensorFlow, the library we will use to develop models throughout the rest of this book. What problem does TensorFlow solve for neural network model development? What approaches does it use? How has it evolved over the years? To answer these questions, let us review some of the history behind deep neural network libraries that led to the development of TensorFlow.

