
Friday, March 25, 2022

Non-saturating generator cost

In practice, we do not train the generator to minimize log(1 - D(G(z))), as this function does not provide sufficient gradients for learning. During the initial learning phases, when G is poor, the discriminator is able to classify fake from real with high confidence. This leads to the saturation of log(1 - D(G(z))), which hinders improvements in the generator model. We therefore tweak the generator to maximize log(D(G(z))) instead.
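
In standard notation (following the non-saturating formulation of Goodfellow et al.), this swap of the generator objective can be written as:

\max_G \; \mathbb{E}_{z \sim p_z(z)}\left[\log D(G(z))\right] \quad \text{instead of} \quad \min_G \; \mathbb{E}_{z \sim p_z(z)}\left[\log(1 - D(G(z)))\right]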

This provides stronger gradients for the generator to learn, as shown in Figure 6.6. The x-axis denotes D(G(z)). The top line shows the original objective, which minimizes the likelihood of the discriminator being correct. The bottom line (the updated objective) works by maximizing the likelihood of the discriminator being wrong.

Figure 6.6 illustrates how a slight change helps achieve better gradients during the initial phases of training.

Training GANs

Training a GAN is like playing a game between two adversaries. The generator is learning to generate good enough fake samples, while the discriminator is working hard to discriminate between real and fake. More formally, this is termed the minimax game, where the value function V(G, D) is described as follows:
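
In the standard formulation from Goodfellow et al. (2014), this value function is:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log(1 - D(G(z)))\right]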

This is also called a zero-sum game, whose equilibrium is the same as the Nash equilibrium. We can better understand the value function V(G, D) by separating out the objective function for each of the players. The following equations describe the individual objective functions:
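
One common way of writing them (following Goodfellow's formulation, and consistent with the description below, in which the generator objective is the negative of the discriminator's) is:

J_d = -\frac{1}{2}\,\mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] - \frac{1}{2}\,\mathbb{E}_{z \sim p_z(z)}\left[\log(1 - D(G(z)))\right], \qquad J_g = -J_d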

where Jd is the discriminator objective function in the classical sense, Jg is the generator objective, equal to the negative of the discriminator objective, and pdata is the distribution of the training data. The rest of the terms have their usual meaning. This is one of the simplest ways of defining the game and its corresponding objective functions. Over the years, different formulations have been studied, some of which we will cover in this chapter.

The objective functions help us understand the aim of each player. If we assume both probability densities are non-zero everywhere, we get the optimal value of D(x) as:
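
This is the standard result from Goodfellow et al. (2014):

D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_{model}(x)}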

We will revisit this equation in the latter part of the chapter. For now, the next step is to present a training algorithm wherein the discriminator and generator models train towards their respective objectives. The simplest, yet most widely used, way of training a GAN (and by far the most successful one) is as follows.

Repeat the following steps N times, where N is the total number of iterations:

1. Repeat the following steps k times:

* Sample a minibatch of size m from the generator: {z1, z2, ..., zm} = pmodel(z)

* Sample a minibatch of size m from the actual data: {x1, x2, ..., xm} = pdata(x)

* Update the discriminator loss, Jd

2. Set the discriminator as non-trainable

3. Sample a minibatch of size m from the generator: {z1, z2, ..., zm} = pmodel(z)

4. Update the generator loss, Jg

In their original paper, Goodfellow et al. used k = 1, that is, they trained the discriminator and generator models alternately. There are some variants and hacks where it has been observed that training the discriminator more often than the generator helps with better convergence.
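
The following is a minimal sketch of this alternating loop in TensorFlow 2. The network architectures, latent dimension, and hyperparameters are illustrative assumptions rather than the book's exact code; the point is the structure of k discriminator updates followed by one generator update with the non-saturating loss.

import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 64        # dimension of z (assumed)
data_dim = 28 * 28     # dimension of a flattened training sample (assumed)

# Hypothetical generator and discriminator; any differentiable models will do.
generator = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(data_dim, activation="tanh"),
])
discriminator = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(data_dim,)),
    layers.Dense(1),   # outputs logits; the sigmoid is folded into the loss below
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
d_opt = tf.keras.optimizers.Adam(1e-4)
g_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_batch, batch_size=64, k=1):
    # Step 1: update the discriminator k times on real (label 1) and fake (label 0) minibatches.
    for _ in range(k):
        z = tf.random.normal((batch_size, latent_dim))
        fake_batch = generator(z)
        with tf.GradientTape() as tape:
            d_real = discriminator(real_batch)
            d_fake = discriminator(fake_batch)
            d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        grads = tape.gradient(d_loss, discriminator.trainable_variables)
        d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))

    # Steps 2-4: leave the discriminator untouched and update only the generator,
    # using the non-saturating objective (push D(G(z)) towards 1).
    z = tf.random.normal((batch_size, latent_dim))
    with tf.GradientTape() as tape:
        d_fake = discriminator(generator(z))
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return d_loss, g_loss

Each call to train_step would sit inside an outer loop over the N iterations, fed with minibatches of real training data of shape (batch_size, data_dim).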


The following figure (Figure 6.5) showcases the training phases of the generator and discriminator models. The smaller dotted line is the discriminator model, the solid line is the generator model, and the larger dotted line is the actual training data. The vertical lines at the bottom demonstrate the sampling of data points from the distribution of z and their mapping to x = G(z). The lines point to the fact that the generator contracts in the regions of high density and expands in the regions of low density. Part (a) shows the initial stage of training, where the discriminator (D) is a partially correct classifier. Parts (b) and (c) show how improvements in D guide changes in the generator, G. Finally, in part (d) you can see where pmodel = pdata and the discriminator is no longer able to differentiate between fake and real samples, that is, D(x) = 1/2.



Thursday, March 24, 2022

The generator model

This is the primary model of interest in the whole game. This model generates samples that are intended to resemble the samples from our training set. The model takes random unstructured noise as input (typically denoted as z) and tries to create a varied set of outputs. The generator model is usually a differentiable function; it is often represented by a deep neural network, but is not restricted to that.

We denote the generator as G and its output as G(z). We typically use a z of lower dimension compared to the dimension of the original data, x, that is, z_dim <= x_dim. This is done as a way of compressing or encoding real-world information into a lower-dimensional space.
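
As a minimal sketch, assuming flattened image-like data and an arbitrary latent size (the sizes and layer choices here are illustrative assumptions), the generator can be a small Keras network mapping a low-dimensional z to a data-shaped output:

import tensorflow as tf
from tensorflow.keras import layers

z_dim = 100        # dimension of the noise vector z (assumed)
x_dim = 28 * 28    # dimension of a flattened training sample (assumed)

generator = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(z_dim,)),
    layers.Dense(x_dim, activation="tanh"),   # G(z): a sample shaped like x
])

z = tf.random.normal((16, z_dim))   # a minibatch of unstructured noise
fake_samples = generator(z)         # G(z), 16 generated samples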

In simple words, the generator trains to generate samples good enough to fool the discriminator, while the discriminator trains to properly classify real (training samples) versus fake (output from the generator). Thus, this game of adversaries uses a generator model, G, which tries to make D(G(z)) as close to 1 as possible, while the discriminator is incentivized to make D(G(z)) close to 0, where 1 denotes real and 0 denotes fake samples. The GAN model achieves equilibrium when the generator starts to easily fool the discriminator, that is, the discriminator reaches its saddle point. While, in theory, GANs have several advantages over other methods in the family tree described previously, they pose their own set of problems. We will discuss some of them in the upcoming sections.


Tuesday, March 22, 2022

The discriminator model

This model represents a differentiable function that tries to assign a probability close to 1 to samples drawn from the training distribution. This can be any classification model, but we usually prefer a deep neural network for this. This is the throw-away model (similar to the decoder part of autoencoders).

The discriminator is also used to classify whether the output from the generator is real or fake. The main utility of this model is to help develop a robust generator. We denote the discriminator model as D and its output as D(x). When it is used to classify output from the generator model, the discriminator model is denoted as D(G(z)), where G(z) is the output from the generator model.
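
A minimal sketch of such a discriminator, again assuming flattened inputs of dimension x_dim (the architecture is an illustrative assumption):

import tensorflow as tf
from tensorflow.keras import layers

x_dim = 28 * 28    # dimension of a flattened sample (assumed)

discriminator = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(x_dim,)),
    layers.Dense(1, activation="sigmoid"),   # D(x): probability that x is real
])

x = tf.random.normal((16, x_dim))   # stand-in batch; real or generated data in practice
p_real = discriminator(x)           # values near 1 mean "real", near 0 mean "fake"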

Generative adversarial networks

GANs have a pretty interesting origin story. It all began as a discussion/argument in a bar, with Ian Goodfellow and friends discussing work related to generating data using neural networks. The argument ended with everyone downplaying each other's methods. Goodfellow went back home and coded the first version of what we now call a GAN. To his amazement, the code worked on the first try. A more verbose description of the chain of events was shared by Goodfellow himself in an interview with Wired magazine.

As mentioned, GANs are implicit density models that sample directly from the underlying distribution. They do this by defining a two-player game of adversaries. The adversaries compete against each other under well-defined reward functions, and each player tries to maximize its rewards. Without going into the details of game theory, the framework can be explained as follows.


The taxonomy of generative models

Generative models are a class of models in the unsupervised machine learning space. They help us to model the underlying distributions responsible for generating the dataset under consideration. There are different methods/frameworks for working with generative models. The first set of methods corresponds to models that represent data with an explicit density function. Here we define a probability density function, P, explicitly and develop a model that maximizes the likelihood of sampling from this distribution.

There are two further types within explicit density methods: tractable and approximate density methods. PixelRNNs are an active area of research for tractable density methods. When we try to model complex real-world data distributions, for example, natural images or speech signals, defining a parametric function becomes challenging. To overcome this, you learned about RBMs and VAEs in Chapter 4, Teaching Networks to Generate Digits, and Chapter 5, Painting Pictures with Neural Networks Using VAEs, respectively. These techniques work by approximating the underlying probability density functions explicitly. VAEs work by maximizing the likelihood estimate of a lower bound, while RBMs use Markov chains to estimate the distribution. The overall landscape of generative models can be described using Figure 6.2:

GANs fall under implicit density modeling methods. Implicit density methods give up the property of explicitly defining the underlying distribution, but work by defining methods to draw samples from such distributions. The GAN framework is a class of methods that can sample directly from the underlying distributions. This alleviates some of the complexities associated with the methods we have covered so far, such as defining the underlying probability distribution functions and the quality of outputs. Now that you have a high-level understanding of generative models, let's dive deeper into the details of GANs.


Monday, March 21, 2022

6. Image Generation with GANs

Generative modeling is a powerful concept that provides us with immense potential to approximate or model the underlying processes that generate data. In the previous chapters, we covered concepts associated with deep learning in general and, more specifically, with restricted Boltzmann machines (RBMs) and variational autoencoders (VAEs). This chapter will introduce another family of generative models called Generative Adversarial Networks (GANs).

Heavily inspired by the concepts of game theory and picking up some of the best components from previously discussed techniques, GANs provide a powerful framework for working in the generative modeling space. Since their invention in 2014 by Goodfellow et al., GANs have benefited from tremendous research and are now being used to explore creative domains such as art, fashion, and photography.

The following are two amazing high-quality samples from a variant of GANs called StyleGAN (Figure 6.1). The photograph of the kid is actually of a fictional person who does not exist. The art sample is also generated by a similar network. StyleGANs are able to generate high-quality, sharp images by using the concept of progressive growth (we will cover this in detail in later sections). These outputs were generated using the StyleGAN2 model trained on datasets such as the Flickr-Faces-HQ, or FFHQ, dataset.

This chapter will cover:

- The taxonomy of generative models

- A number of improved GANs, such as DCGAN, Conditional-GAN, and so on

- The progressive GAN setup and its various components

- Some of the challenges associated with GANs

- Hands-on examples