In this chapter, you saw how deep neural networks can be used to create representations of complex data such as images that capture more of their variance than traditional dimensionality reduction techniques such as PCA. This was demonstrated using the MNIST digits, where a neural network can spatially separate the different digits in a two-dimensional grid more cleanly than the principal components of those images. The chapter showed how deep neural networks can approximate complex posterior distributions, such as those over images, using variational methods to sample from an approximation of an intractable distribution. This leads to the VAE algorithm, which maximizes the variational lower bound on the data likelihood, equivalently minimizing the KL divergence between the approximate and true posterior.
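As a reminder of how the variational lower bound decomposes, here is a minimal NumPy sketch for a Gaussian encoder with a standard normal prior. The encoder outputs `mu` and `log_var` are placeholder values, and the reconstruction log-likelihood is stubbed with a hypothetical decoder term for illustration; only the closed-form KL expression matches the Gaussian case discussed in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one input: mean and log-variance of q(z|x).
mu = np.array([0.4, -0.7])
log_var = np.array([-0.2, 0.1])

# Closed-form KL divergence between the diagonal Gaussian q(z|x) and the
# standard normal prior p(z) = N(0, I); this is always non-negative.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# One-sample Monte Carlo estimate of the reconstruction term E_q[log p(x|z)],
# stubbed here with a placeholder log-likelihood (a real model would use the
# decoder's output distribution).
z = mu + np.exp(0.5 * log_var) * rng.normal(size=2)
log_px_given_z = -0.5 * np.sum(z**2)  # stand-in for a decoder log-likelihood

# Maximizing the ELBO tightens the bound on log p(x).
elbo = log_px_given_z - kl
```

The KL term acts as a regularizer pulling the approximate posterior toward the prior, while the reconstruction term rewards latent codes that explain the input.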
You also learned how the latent vector in this algorithm can be reparameterized to reduce the variance of the gradient estimates, leading to better convergence in stochastic minibatch gradient descent. You saw how the latent distributions produced by the encoders in these models, which are usually independent across dimensions, can be transformed into more realistic correlated distributions using IAF. Finally, we implemented these models on the CIFAR-10 dataset and showed how they can be used to reconstruct the images and generate new images from random vectors.
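The reparameterization step summarized above can be sketched in a few lines of NumPy. The values of `mu` and `sigma` are hypothetical encoder outputs; the key point is that the noise `eps` is drawn from a fixed standard normal and only shifted and scaled by the parameters, so gradients can flow through `mu` and `sigma` during backpropagation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical encoder outputs for a 2-D latent space.
mu = np.array([0.5, -1.0])
sigma = np.array([0.3, 0.8])

# Reparameterization trick: sample eps ~ N(0, I), then shift and scale it.
# The randomness is decoupled from the parameters, which a deep learning
# framework can therefore differentiate through.
eps = rng.normal(size=(10000, 2))
z = mu + sigma * eps

# The samples follow the intended distribution: mean near mu, spread near sigma.
print(z.mean(axis=0))
print(z.std(axis=0))
```

Sampling `z` directly from `N(mu, sigma**2)` would produce the same distribution, but the gradient with respect to `mu` and `sigma` would then have to pass through the sampling operation itself, which is what the reparameterized form avoids.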
The next chapter will introduce GANs and show how we can use them to add stylistic filters to input images, using the StyleGAN model.