
Friday, March 25, 2022

Training GANs

Training a GAN is like playing a game between two adversaries. The generator learns to generate convincing fake samples, while the discriminator works hard to discriminate between real and fake ones. More formally, this is termed the minimax game, where the value function V(G,D) is described as follows:
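Using the notation that appears below (p_data for the data distribution, p_model for the distribution the generator's input z is drawn from), this is the standard minimax value function from Goodfellow et al. (2014):

$$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{model}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$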

This is also called a zero-sum game, whose equilibrium coincides with the Nash equilibrium. We can better understand the value function V(G,D) by separating out the objective function for each of the players. The following equations describe the individual objective functions:
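Splitting V(G,D) per player, and using the zero-sum relationship stated below, the objectives can be written as:

$$J_D = -\,\mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] - \mathbb{E}_{z \sim p_{model}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

$$J_G = -J_D$$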

where Jd is the discriminator objective function in the classical sense, Jg is the generator objective, equal to the negative of the discriminator's, and Pdata is the distribution of the training data. The rest of the terms have their usual meaning. This is one of the simplest ways of defining the game and its corresponding objective functions. Over the years, different formulations have been studied, some of which we will cover in this chapter.

The objective functions help us to understand the aim of each of the players. If we assume both probability densities are non-zero everywhere, we can get the optimal value of D(x) as:
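For a fixed G, maximizing V point-wise over D(x) yields the familiar optimal-discriminator result:

$$D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_{model}(x)}$$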

We will revisit this equation in the latter part of the chapter. For now, the next step is to present a training algorithm wherein the discriminator and generator models train towards their respective objectives. The simplest yet widely used way of training a GAN (and by far the most successful one) is as follows.

Repeat the following steps N times, where N is the total number of iterations:

1. Repeat the following steps k times:

* Sample a minibatch of m noise samples for the generator: {z1, z2, ..., zm} ~ pmodel(z)

* Sample a minibatch of size m from the actual data: {x1, x2, ..., xm} ~ pdata(x)

* Update the discriminator's parameters using its loss, Jd

2. Set the discriminator as non-trainable

3. Sample a minibatch of m noise samples for the generator: {z1, z2, ..., zm} ~ pmodel(z)

4. Update the generator's parameters using its loss, Jg

In their original paper, Goodfellow et al. used k=1; that is, they trained the discriminator and generator models alternately. There are some variants and hacks where training the discriminator more often than the generator has been observed to help convergence.
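The alternating loop above can be sketched end to end. The following is a minimal NumPy sketch on a toy 1-D problem; the "real" data distribution N(4, 1.25), the linear generator and logistic discriminator, and the learning rate are illustrative assumptions, not from the text. It uses the zero-sum objectives Jd and Jg = -Jd described earlier, with hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c.
w, b = 0.1, 0.0          # discriminator parameters
a, c = 1.0, 0.0          # generator parameters
lr, m, k = 0.05, 64, 1   # learning rate, minibatch size, discriminator steps

def d_loss_and_grads(x_real, x_fake):
    # Jd = -E[log D(x_real)] - E[log(1 - D(x_fake))]
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    loss = -np.mean(np.log(d_real + 1e-8)) - np.mean(np.log(1 - d_fake + 1e-8))
    # Gradient of Jd w.r.t. the logits, then chain rule into w and b.
    g_logit = np.concatenate([-(1 - d_real), d_fake]) / m
    x_all = np.concatenate([x_real, x_fake])
    return loss, np.sum(g_logit * x_all), np.sum(g_logit)

for step in range(200):                        # N total iterations
    for _ in range(k):                         # 1. update the discriminator k times
        z = rng.standard_normal(m)             # noise minibatch for the generator
        x_fake = a * z + c                     # fake samples G(z)
        x_real = rng.normal(4.0, 1.25, m)      # minibatch from the "real" data
        loss_d, gw, gb = d_loss_and_grads(x_real, x_fake)
        w -= lr * gw
        b -= lr * gb
    # 2.-4. Freeze D (w, b are simply not updated below) and update G.
    z = rng.standard_normal(m)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    # Jg = -Jd; only the fake term depends on G, so minimizing
    # E[log(1 - D(G(z)))] gives d(Jg)/d(logit) = -d_fake / m.
    g_logit = -d_fake / m
    ga = np.sum(g_logit * w * z)               # chain rule through x = a*z + c
    gc = np.sum(g_logit * w)
    a -= lr * ga
    c -= lr * gc
```

With k=1 this reproduces the alternating schedule used in the original paper; increasing k trains the discriminator more often per generator step, as mentioned above.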


The following figure (Figure 6.5) showcases the training phases of the generator and discriminator models. The smaller dotted line represents the discriminator model, the solid line the generator model, and the larger dotted line the actual training data distribution. The vertical arrows at the bottom demonstrate the mapping of data points sampled from the distribution of z, that is, x = G(z). The arrows indicate that the generator contracts in regions of high density and expands in regions of low density. Part (a) shows the initial stage of training, where the discriminator (D) is a partially correct classifier. Parts (b) and (c) show how improvements in D guide changes in the generator, G. Finally, in part (d) you can see the point where pmodel = pdata and the discriminator is no longer able to differentiate between fake and real samples, that is, D(x) = 1/2.


