Generative models are a class of models in the unsupervised machine learning space. They model the underlying distribution responsible for generating the dataset under consideration. There are different methods/frameworks for working with generative models. The first set of methods corresponds to models that represent the data with an explicit density function. Here, we define a probability density function, P, explicitly and train a model that maximizes the likelihood of the observed data under this distribution.
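As a concrete, if deliberately simplistic, illustration of explicit density modeling, the sketch below fits a one-dimensional Gaussian to data by maximum likelihood. The Gaussian model and NumPy usage are assumptions for illustration only, not one of the chapter's own examples:

```python
import numpy as np

# Toy dataset: samples we assume were drawn from some unknown distribution
data = np.random.default_rng(seed=0).normal(loc=2.0, scale=0.5, size=1000)

# Explicit density model: a Gaussian with parameters (mu, sigma).
# For a Gaussian, the maximum-likelihood estimates have closed forms:
mu_hat = data.mean()    # MLE of the mean
sigma_hat = data.std()  # MLE of the standard deviation

# Average log-likelihood of the data under the fitted density p(x; mu, sigma)
log_likelihood = -0.5 * np.log(2 * np.pi * sigma_hat**2) \
                 - (data - mu_hat) ** 2 / (2 * sigma_hat**2)
print(mu_hat, sigma_hat, log_likelihood.mean())
```

For real-world data such as images, no such closed-form parametric fit exists, which is exactly the difficulty the next paragraph describes.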
There are two further types within explicit density methods: tractable and approximate density methods. PixelRNNs are an active area of research for tractable density methods. When we try to model complex real-world data distributions, for example, natural images or speech signals, defining a parametric function becomes challenging. To overcome this, you learned about RBMs and VAEs in Chapter 4, Teaching Networks to Generate Digits, and Chapter 5, Painting Pictures with Neural Networks Using VAEs, respectively. These techniques work by explicitly approximating the underlying probability density functions. VAEs work by maximizing a lower bound on the data likelihood, while RBMs use Markov chain sampling to estimate the distribution.
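To recall from Chapter 5 what maximizing this lower bound means, the evidence lower bound (ELBO) on the log-likelihood can be written as follows. This is the standard formulation, with $q_\phi(z \mid x)$ the approximate posterior and $p(z)$ the latent prior, rather than a quote from the chapter:

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$

Maximizing the right-hand side pushes up the (intractable) log-likelihood on the left without ever computing it directly.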
The overall landscape of generative models can be described using Figure 6.2. GANs fall under implicit density modeling methods. Implicit density methods give up the property of explicitly defining the underlying distribution, but work by defining procedures to draw samples from it. The GAN framework is a class of methods that can sample directly from the underlying distribution. This alleviates some of the complexities associated with the methods we have covered so far, such as the need to define an underlying probability density function and the limits that places on output quality. Now that you have a high-level understanding of generative models, let's dive deeper into the details of GANs.
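To make the idea of implicit sampling concrete, the following minimal sketch (using TensorFlow 2/Keras; the toy architecture and layer sizes are illustrative assumptions, not the book's model) draws noise from a simple prior and pushes it through a generator network, producing samples without ever writing down a density function:

```python
import tensorflow as tf

# A toy generator: maps latent noise z to a 28x28 "image".
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Implicit density modeling: we never define p(x) explicitly.
# Instead, we sample z from a simple prior and transform it.
z = tf.random.normal(shape=(16, 64))  # 16 latent vectors from N(0, I)
samples = generator(z)                # 16 generated "images"
print(samples.shape)                  # (16, 28, 28)
```

Training (covered in the rest of the chapter) adjusts the generator's weights so that these samples become indistinguishable from real data, but the sampling mechanism itself stays exactly this simple.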