
Tuesday, May 24, 2022

1.6.3 TensorFlow Installation

TensorFlow, like other Python libraries, can be installed using the Python package management tool pip with the "pip install" command. When installing TensorFlow, you need to decide whether to install the more powerful GPU version or the general-performance CPU version, depending on whether your computer has an NVIDIA GPU graphics card.
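
If you are not sure whether your machine has a usable NVIDIA GPU, one quick check (assuming the NVIDIA driver is already installed) is to run the nvidia-smi command, which lists the detected graphics cards:

# Check for an NVIDIA GPU and driver (run on the cmd command line)
nvidia-smi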

# Install numpy

pip install numpy

With the preceding command, you should be able to automatically download and install the numpy library. Now let's install the latest GPU version of TensorFlow. The command is as follows:

# Install TensorFlow GPU version

pip install -U tensorflow

The preceding command should automatically download and install the TensorFlow GPU version, which is currently the official version of TensorFlow 2.x. The "-U" parameter specifies that if the package is already installed, it should be upgraded.

Now let's test whether the GPU version of TensorFlow is successfully installed. Enter "ipython" on the "cmd" command line to enter the ipython interactive terminal, and then enter the "import tensorflow as tf" command. If no errors occur, continue by entering "tf.test.is_gpu_available()" to test whether the GPU is available. This command prints a series of messages. The lines beginning with "I" (Information) describe the available GPU devices, and the command returns "True" or "False" at the end, indicating whether a GPU device is available, as shown in Figure 1-35. If it returns True, the TensorFlow GPU version is successfully installed; if False, the installation failed.

You may need to check the CUDA, cuDNN, and environment variable configuration steps again, or copy the error message and search for help online.
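
For reference, the verification described above can be typed as the following short session. This is only a sketch of the commands from the text; tf.config.list_physical_devices('GPU') is an additional TensorFlow 2.x call that lists the detected GPU devices:

# Verify the GPU installation (run inside the ipython terminal)
import tensorflow as tf
print(tf.test.is_gpu_available())              # returns True if a GPU is usable
print(tf.config.list_physical_devices('GPU'))  # lists the detected GPU devices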

If you don't have a GPU, you can install the CPU version. The CPU version cannot use a GPU to accelerate calculations, so the computational speed is relatively slow. However, because the models introduced for learning purposes in this book are generally not computationally expensive, the CPU version can also be used. It is also possible to add an NVIDIA GPU device after gaining a better understanding of deep learning in the future. If the installation of the TensorFlow GPU version fails, you can also use the CPU version directly. The command to install the CPU version is

# Install TensorFlow CPU version

pip install -U tensorflow-cpu

After installation, enter the "import tensorflow as tf" command in the ipython terminal to verify that the CPU version is successfully installed. After TensorFlow is installed, you can view the version number through "tf.__version__". Figure 1-36 shows an example. Note that the code works for all TensorFlow 2.x versions.
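
A minimal version check inside the ipython terminal looks like this; the exact version string printed depends on the release you installed:

# Print the installed TensorFlow version
import tensorflow as tf
print(tf.__version__)  # e.g. a 2.x version string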

The preceding manual process of installing CUDA and cuDNN, configuring the Path environment variable, and installing TensorFlow is the standard installation method. Although the steps are tedious, it is of great help in understanding the functional role of each library. In fact, for novices, you can complete the preceding steps with just two commands, as follows:

# Create virtual environment tf2 with tensorflow-gpu setup required

# to automatically install CUDA, cuDNN, and TensorFlow GPU

conda create -n tf2 tensorflow-gpu

# Activate tf2 environment

conda activate tf2

This quick installation method is called the minimal installation method. It also shows the convenience of using the Anaconda distribution.

TensorFlow installed through the minimal method requires activating the corresponding virtual environment before use, which needs to be distinguished from the standard method. The standard method installs TensorFlow into Anaconda's default environment, base, which generally does not require manual activation.
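
For example, to use the TensorFlow installed with the minimal method, you would first activate the tf2 environment created above and then start the interactive terminal:

# Activate the tf2 environment before using TensorFlow
conda activate tf2
ipython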

Commonly used Python libraries can also be installed at the same time. The command is as follows:

# Install common python libraries

pip install -U ipython numpy matplotlib pillow pandas

When TensorFlow is running, it will consume all GPU memory by default, which is very unfriendly to other computations, especially when the computer has multiple users or programs using GPU resources at the same time. Occupying all GPU resources will prevent other programs from running. Therefore, it is generally recommended to set the GPU memory usage of TensorFlow to growth mode, that is, to request GPU memory based on the actual model size. The code implementation is as follows:

# Set GPU resource usage method
import tensorflow as tf

# Get GPU device list
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Set GPU memory usage to growth mode
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        # Memory growth must be set before the GPUs are initialized; print the error
        print(e)
