
Sunday, April 10, 2022

1.1.2 System View

From the computer's point of view, the operating system is the program most intimately involved with the hardware. In this context, we can view an operating system as a resource allocator. A computer system has many resources that may be required to solve a problem: CPU time, memory space, storage space, I/O devices, and so on. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly.

A slightly different view of an operating system emphasizes the need to control the various I/O devices and user programs. An operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

1.1.1 User View

The user's view of the computer varies according to the interface being used. Many computer users sit with a laptop or in front of a PC consisting of a monitor, keyboard, and mouse. Such a system is designed for one user to monopolize its resources. The goal is to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and security and none paid to resource utilization - how various hardware and software resources are shared.

[Figure: abstract view of the components of a computer system - user; application programs (compilers, web browsers, development kits, etc.); operating system; computer hardware (CPU, memory, I/O devices, etc.)]

Increasingly, many users interact with mobile devices such as smartphones and tablets - devices that are replacing desktop and laptop computer systems for some users. These devices are typically connected to networks through cellular or other wireless technologies. The user interface for mobile computers generally features a touch screen, where the user interacts with the system by pressing and swiping fingers across the screen rather than using a physical keyboard and mouse. Many mobile devices also allow users to interact through a voice recognition interface, such as Apple's Siri.

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have numeric keypads and may turn indicator lights on or off to show status, but they and their operating systems and applications are designed primarily to run without user intervention.


1.1 What Operating Systems Do

 We begin our discussion by looking at the operating system's role in the overall computer system. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and a user.

The hardware - the central processing unit (CPU), the memory, and the input/output (I/O) devices - provides the basic computing resources for the system. The application programs - such as word processors, spreadsheets, compilers, and web browsers - define the ways in which these resources are used to solve users' computing problems. The operating system controls the hardware and coordinates its use among the various application programs for the various users.

We can also view a computer system as consisting of hardware, software, and data. The operating system provides the means for proper use of these resources in the operation of the computer system. An operating system is similar to a government. Like a government, it performs no useful function by itself. It simply provides an environment within which other programs can do useful work.

To understand more fully the operating system's role, we next explore operating systems from two viewpoints: that of the user and that of the system.


Saturday, April 2, 2022

Vanilla GAN

We have covered quite a bit of ground in understanding the basics of GANs. In this section, we will apply that understanding and build a GAN from scratch. This generative model will consist of a repeating block architecture, similar to the one presented in the original paper. We will try to replicate the task of generating MNIST digits using our network.

The overall GAN setup can be seen in Figure 6.8. The figure outlines a generator model with noise vector z as input and repeating blocks that transform and scale up the vector to the required dimensions. Each block consists of a dense layer followed by Leaky ReLU activation and a batch-normalization layer. We simply reshape the output from the final block to transform it into the required output image size.

The discriminator, on the other hand, is a simple feedforward network. This model takes an image as input (a real image or the fake output from the generator) and classifies it as real or fake. This simple setup of two competing models helps us to train the overall GAN.

We will be relying on TensorFlow 2 and using the high-level Keras API wherever possible. The first step is to define the discriminator model. In this implementation, we will use a very basic multi-layer perceptron (MLP) as the discriminator model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense, LeakyReLU

def build_discriminator(input_shape=(28, 28,), verbose=True):
    """
    Utility method to build an MLP discriminator

    Parameters:
        input_shape:
            type:tuple, shape of input image for classification.
                Default shape is (28,28) -> MNIST
        verbose:
            type:boolean. Print model summary if set to true.
    Returns:
        tensorflow.keras.model object
    """
    model = Sequential()
    model.add(Input(shape=input_shape))
    # flatten the 28x28 image into a 784-dimensional vector
    model.add(Flatten())
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    # single sigmoid unit for the real/fake decision
    model.add(Dense(1, activation='sigmoid'))

    if verbose:
        model.summary()
    return model

We will use the Sequential API to prepare this simple model, with just four layers and the final output layer with sigmoid activation. Since we have a binary classification task, we have only one unit in the final layer. We will use binary cross-entropy loss to train the discriminator model.

The generator model is also a multi-layer perceptron with multiple layers scaling up the noise vector z to the desired size. Since our task is to generate MNIST-like output samples, the final reshape layer will convert the flat vector into a 28x28 output shape. Note that we will make use of batch normalization to stabilize model training. The following snippet shows a utility method for building the generator model:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Dense, LeakyReLU,
                                     BatchNormalization, Reshape)

def build_generator(z_dim=100, output_shape=(28, 28), verbose=True):
    """
    Utility method to build an MLP generator

    Parameters:
        z_dim:
            type:int(positive). Size of input noise vector to be used as model input.
                Default value is 100
        output_shape:
            type:tuple. Shape of output image.
                Default shape is (28,28) -> MNIST
        verbose:
            type:boolean. Print model summary if set to true.
    Returns:
        tensorflow.keras.model object
    """
    model = Sequential()
    model.add(Input(shape=(z_dim,)))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    # tanh output in [-1, 1], reshaped to the target image size
    model.add(Dense(np.prod(output_shape), activation='tanh'))
    model.add(Reshape(output_shape))

    if verbose:
        model.summary()
    return model

We simply use these utility methods to create generator and discriminator model objects. The following snippet uses these two model objects to create the GAN object as well:

from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.optimizers import Adam

discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(0.0002, 0.5),
                      metrics=['accuracy'])

generator = build_generator()

z_dim = 100  # noise vector size; must match the generator's z_dim
z = Input(shape=(z_dim,))
img = generator(z)

# For the combined model we will only train the generator
discriminator.trainable = False

# The discriminator takes generated images as input
# and determines validity
validity = discriminator(img)

# The combined model (stacked generator and discriminator)
# trains the generator to fool the discriminator
gan_model = Model(z, validity)
gan_model.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))

The final piece of the puzzle is defining the training loop. As described in the previous section, we will train both (discriminator and generator) models alternately. Doing so is straightforward with high-level Keras APIs. The following code snippet first loads the MNIST dataset and scales the pixel values between -1 and +1:
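A rough sketch of what such a loop could look like, assuming the generator, discriminator, and gan_model objects built above, a hypothetical batch size of 64, and plain 0/1 labels (the book's exact code may differ):

import numpy as np
from tensorflow.keras.datasets import mnist

def train_gan(epochs=10000, batch_size=64, z_dim=100):
    # load MNIST and scale pixel values to [-1, 1] to match the generator's tanh output
    (X_train, _), (_, _) = mnist.load_data()
    X_train = X_train / 127.5 - 1.0

    real_labels = np.ones((batch_size, 1))
    fake_labels = np.zeros((batch_size, 1))

    for epoch in range(epochs):
        # train the discriminator on one real batch and one fake batch
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        real_imgs = X_train[idx]
        noise = np.random.normal(0, 1, (batch_size, z_dim))
        fake_imgs = generator.predict(noise, verbose=0)
        d_loss_real = discriminator.train_on_batch(real_imgs, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_imgs, fake_labels)

        # train the generator through the combined model, using "real" labels
        # because the generator tries to make the discriminator output 1
        noise = np.random.normal(0, 1, (batch_size, z_dim))
        g_loss = gan_model.train_on_batch(noise, real_labels)

        if epoch % 1000 == 0:
            print(epoch, d_loss_real, d_loss_fake, g_loss)

Calling train_gan() then runs the alternating discriminator/generator updates described in the training notes that follow.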




Friday, March 25, 2022

Maximum likelihood game

The minimax game can be transformed into a maximum likelihood game where the aim is to maximize the likelihood of the generator probability density. This is done to ensure that the generator probability density is similar to the real/training data probability density. In other words, the game can be transformed into minimizing the divergence between Pg and Pdata. To do so, we make use of Kullback-Leibler divergence (KL divergence) to calculate the similarity between the two distributions of interest. The overall value function can be denoted as:

The cost function for the generator transforms to:
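One common way of writing this transformed generator cost, following Goodfellow's NIPS 2016 GAN tutorial (here \sigma denotes the logistic sigmoid; the book's exact notation may differ):

J^{(G)} = -\frac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[\exp\!\left(\sigma^{-1}\!\left(D(G(z))\right)\right)\right]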

One important point to note is that KL divergence is not a symmetric measure, that is, KL(Pdata || Pg) != KL(Pg || Pdata). The model typically uses KL(Pg || Pdata) to achieve better results.
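For reference, the standard definition of the KL divergence between two densities P and Q, which is what makes the direction matter:

\mathrm{KL}(P \,\|\, Q) = \mathbb{E}_{x \sim P}\!\left[\log \frac{P(x)}{Q(x)}\right]

Swapping P and Q changes which distribution the expectation is taken under, so the two directions penalize different kinds of mismatch between Pg and Pdata.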

The three different cost functions discussed so far have slightly different trajectories and thus lead to different properties at different stages of training. These three functions can be visualized as shown in Figure 6.7.


Non-saturating generator cost

In practice, we do not train the generator to minimize log(1-D(G(z))), as this function does not provide sufficient gradients for learning. During the initial learning phases, when G is poor, the discriminator is able to classify the fake from the real with high confidence. This leads to the saturation of log(1-D(G(z))), which hinders improvements in the generator model. We thus tweak the generator to maximize log(D(G(z))) instead:
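Written as a cost to be minimized, this non-saturating generator objective is commonly expressed as follows (the 1/2 factor is a scaling convention and is often dropped):

J^{(G)} = -\frac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[\log D\!\left(G(z)\right)\right]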

This provides stronger gradients for the generator to learn. This is shown in Figure 6.6. The x-axis denotes D(G(z)). The top line shows the objective, which is minimizing the likelihood of the discriminator being correct. The bottom line (updated objective) works by maximizing the likelihood of the discriminator being wrong.

Figure 6.6 illustrates how a slight change helps achieve better gradients during the initial phases of training.

Training GANs

Training a GAN is like playing this game of two adversaries. The generator is learning to generate good enough fake samples, while the discriminator is working hard to discriminate between real and fake. More formally, this is termed the minimax game, where the value function V(G,D) is described as follows:
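As written in the original GAN paper (pdata is the training data distribution and pz the noise prior):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]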

This is also called a zero-sum game, whose equilibrium is the same as the Nash equilibrium. We can better understand the value function V(G,D) by separating out the objective function for each of the players. The following equations describe the individual objective functions:
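A common way of writing these per-player objectives, following Goodfellow's tutorial convention (the 1/2 factors are a scaling choice and may not match the book's exact form):

J^{(D)} = -\frac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] - \frac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right], \qquad J^{(G)} = -J^{(D)}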

where Jd is the discriminator objective function in the classical sense, Jg is the generator objective equal to the negative of the discriminator objective, and Pdata is the distribution of the training data. The rest of the terms have their usual meaning. This is one of the simplest ways of defining the game and the corresponding objective functions. Over the years, different ways have been studied, some of which we will cover in this chapter.

The objective functions help us to understand the aim of each of the players. If we assume both probability densities are non-zero everywhere, we can get the optimal value of D(x) as:
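As derived in the original GAN paper (pg denotes the distribution induced by the generator):

D^{*}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}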

We will revisit this equation in the latter part of the chapter. For now, the next step is to present a training algorithm wherein the discriminator and generator models train towards their respective objectives. The simplest yet widely used way of training a GAN (and by far the most successful one) is as follows.

Repeat the following steps N times, where N is the total number of iterations:

1. Repeat the following steps k times:

* Sample a minibatch of size m from the generator: {z1, z2, ..., zm} ~ pmodel(z)

* Sample a minibatch of size m from the actual data: {x1, x2, ..., xm} ~ pdata(x)

* Update the discriminator loss, Jd

2. Set the discriminator as non-trainable

3. Sample a minibatch of size m from the generator: {z1, z2, ..., zm} ~ pmodel(z)

4. Update the generator loss, Jg

In their original paper, Goodfellow et al. used k=1, that is, they trained discriminator and generator models alternately. There are some variants and hacks where it is observed that training the discriminator more often than the generator helps with better convergence.


The following figure (Figure 6.5) showcases the training phases of the generator and discriminator models. The smaller dotted line is the discriminator model, the solid line is the generator model, and the larger dotted line is the actual training data. The vertical lines at the bottom demonstrate the sampling of data points from the distribution of z, that is, x = pmodel(z). The lines point to the fact that the generator contracts in the regions of high density and expands in the regions of low density. Part (a) shows the initial stage of the training phase, where the discriminator (D) is a partially correct classifier. Parts (b) and (c) show how improvements in D guide changes in the generator, G. Finally, in part (d) you can see that pmodel = pdata and the discriminator is no longer able to differentiate between fake and real samples, that is, D(x) = 1/2.