
Thursday, March 24, 2022

The generator model

This is the primary model of interest in the adversarial game. It generates samples that are intended to resemble the samples from our training set. The model takes random, unstructured noise as input (typically denoted as z) and tries to create a varied set of outputs. The generator model is usually a differentiable function; it is often represented by a deep neural network, but it is not restricted to that.
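As an illustration, here is a minimal sketch of such a generator: a small fully connected network that maps a noise vector z to a sample G(z). The framework (PyTorch), the layer sizes, and the 100-dimensional noise / 784-dimensional output are assumptions made for this example, not details from the text.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration: 100-dim noise, 784-dim samples (e.g. flattened 28x28 images)
Z_DIM, X_DIM = 100, 784

# A minimal generator: a differentiable function G(z) implemented as a small MLP
generator = nn.Sequential(
    nn.Linear(Z_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, X_DIM),
    nn.Tanh(),  # squash outputs to [-1, 1], matching data normalized to that range
)
```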

We denote the generator as G and its output as G(z). We typically use a lower-dimensional z compared to the dimension of the original data, x, that is, z_dim <= x_dim. This is done as a way of compressing or encoding real-world information into a lower-dimensional space.
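Continuing the sketch above, sampling a batch of noise vectors and passing them through the generator makes this dimensional relationship concrete (the batch size of 16 is arbitrary):

```python
# Continuing the generator sketch above (Z_DIM = 100, X_DIM = 784 are illustrative)
z = torch.randn(16, Z_DIM)           # 16 noise vectors drawn from a standard normal
fake_samples = generator(z)          # each 100-dim z is decoded into a 784-dim sample
print(z.shape, fake_samples.shape)   # torch.Size([16, 100]) torch.Size([16, 784])
```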

In simple words, the generator trains to generate samples good enough to fool the discriminator, while the discriminator trains to correctly classify real samples (from the training set) versus fake ones (output from the generator). Thus, this game of adversaries uses a generator model, G, which tries to make D(G(z)) as close to 1 as possible, while the discriminator is incentivized to make D(G(z)) close to 0, where 1 denotes real and 0 denotes fake samples. The GAN model achieves equilibrium when the generator starts to easily fool the discriminator, that is, when the discriminator reaches its saddle point. While, in theory, GANs have several advantages over the other methods in the family tree described previously, they pose their own set of problems. We will discuss some of them in the upcoming sections.
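To make the two objectives concrete, the following self-contained sketch pairs a generator with a simple discriminator and writes both losses using binary cross-entropy. Again, PyTorch, the architectures, and the dimensions are assumptions for illustration rather than details given in the text.

```python
import torch
import torch.nn as nn

Z_DIM, X_DIM = 100, 784  # illustrative dimensions

# Generator G(z): maps noise to the data space
generator = nn.Sequential(
    nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, X_DIM), nn.Tanh()
)

# Discriminator D(x): outputs the probability that x is a real sample
discriminator = nn.Sequential(
    nn.Linear(X_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid()
)

bce = nn.BCELoss()
real_x = torch.rand(16, X_DIM) * 2 - 1   # stand-in for a batch of real training samples
z = torch.randn(16, Z_DIM)               # batch of noise vectors

# Discriminator objective: push D(x) toward 1 for real samples
# and D(G(z)) toward 0 for generated (fake) samples
d_real_loss = bce(discriminator(real_x), torch.ones(16, 1))
d_fake_loss = bce(discriminator(generator(z).detach()), torch.zeros(16, 1))
d_loss = d_real_loss + d_fake_loss

# Generator objective: push D(G(z)) toward 1, i.e. fool the discriminator
g_loss = bce(discriminator(generator(z)), torch.ones(16, 1))
```

In a full training loop, d_loss and g_loss would be minimized in alternating steps with separate optimizers for the discriminator and the generator.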

