
Saturday, March 12, 2022

Summary

In this chapter, you learned about one of the most important models from the beginnings of the deep learning revolution, the DBN. You saw that DBNs are constructed by stacking together RBMs, and how these undirected models can be trained using CD.
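
To make the CD training idea concrete, here is a minimal sketch (not the chapter's exact code) of a single CD-1 update for a Bernoulli RBM in TensorFlow 2. The names `cd1_update` and `sample_bernoulli` and the learning rate are illustrative assumptions, and `W`, `b_visible`, and `b_hidden` are assumed to be `tf.Variable`s.

```python
import tensorflow as tf

def sample_bernoulli(probs):
    # Draw binary samples with the given per-unit activation probabilities
    return tf.cast(tf.random.uniform(tf.shape(probs)) < probs, tf.float32)

def cd1_update(v_data, W, b_visible, b_hidden, learning_rate=0.01):
    # Positive phase: hidden activations driven by the data
    h_prob = tf.sigmoid(tf.matmul(v_data, W) + b_hidden)
    h_sample = sample_bernoulli(h_prob)

    # Negative phase: one Gibbs step to reconstruct visible and hidden units
    v_recon = tf.sigmoid(tf.matmul(h_sample, W, transpose_b=True) + b_visible)
    h_recon = tf.sigmoid(tf.matmul(v_recon, W) + b_hidden)

    # CD-1 approximates the log-likelihood gradient by the difference between
    # data-driven and reconstruction-driven statistics
    batch_size = tf.cast(tf.shape(v_data)[0], tf.float32)
    dW = (tf.matmul(v_data, h_prob, transpose_a=True)
          - tf.matmul(v_recon, h_recon, transpose_a=True)) / batch_size
    db_v = tf.reduce_mean(v_data - v_recon, axis=0)
    db_h = tf.reduce_mean(h_prob - h_recon, axis=0)

    # Apply the update directly to the RBM parameters (assumed tf.Variables)
    W.assign_add(learning_rate * dW)
    b_visible.assign_add(learning_rate * db_v)
    b_hidden.assign_add(learning_rate * db_h)
```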

This chapter then described a greedy, layer-wise procedure for priming a DBN by sequentially training each of a stack of RBMs, which can then be fine-tuned using the wake-sleep algorithm or backpropagation. We then explored practical examples of using the TensorFlow 2 API to create an RBM layer and a DBN model, illustrating the use of the GradientTape class to compute updates using CD.
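
One common way to fit CD into the GradientTape workflow, assuming the RBM parameters are `tf.Variable`s, is to express the update as the gradient of a free-energy difference between the data and samples drawn by Gibbs sampling. The sketch below uses hypothetical names (`free_energy`, `cd_step`) and is not the book's exact implementation, but it shows the general pattern.

```python
import tensorflow as tf

def free_energy(v, W, b_visible, b_hidden):
    # Free energy of a Bernoulli RBM: F(v) = -v.b_v - sum_j softplus(b_h_j + (vW)_j)
    return (-tf.reduce_sum(v * b_visible, axis=1)
            - tf.reduce_sum(tf.math.softplus(tf.matmul(v, W) + b_hidden), axis=1))

def cd_step(v_data, W, b_visible, b_hidden, optimizer, gibbs_steps=1):
    # Negative-phase samples via k steps of Gibbs sampling
    v_model = v_data
    for _ in range(gibbs_steps):
        h_prob = tf.sigmoid(tf.matmul(v_model, W) + b_hidden)
        h_sample = tf.cast(tf.random.uniform(tf.shape(h_prob)) < h_prob, tf.float32)
        v_prob = tf.sigmoid(tf.matmul(h_sample, W, transpose_b=True) + b_visible)
        v_model = tf.cast(tf.random.uniform(tf.shape(v_prob)) < v_prob, tf.float32)
    v_model = tf.stop_gradient(v_model)  # do not backpropagate through the sampler

    with tf.GradientTape() as tape:
        # Minimizing this free-energy difference reproduces the CD parameter update
        loss = tf.reduce_mean(free_energy(v_data, W, b_visible, b_hidden)
                              - free_energy(v_model, W, b_visible, b_hidden))
    grads = tape.gradient(loss, [W, b_visible, b_hidden])
    optimizer.apply_gradients(zip(grads, [W, b_visible, b_hidden]))
    return loss
```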

You also learned how, following the wake-sleep algorithm, we can compile the DBN as a normal Deep Neural Network and perform backpropagation for supervised training. We applied these models to MNIST data and saw how an RBM can generate digits once training converges, with features resembling the convolutional filters described in Chapter 3, Building Blocks of Deep Neural Networks.
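
A minimal sketch of that compilation step, assuming the greedily pre-trained weights are available as a hypothetical list `rbm_weights` of `(W, b_hidden)` NumPy pairs, might look like the following; the layer sizes, optimizer, and training settings are illustrative only.

```python
import tensorflow as tf

def build_finetune_model(rbm_weights, num_classes=10):
    # Stack the pre-trained RBM layers into a standard Keras classifier
    model = tf.keras.Sequential([tf.keras.Input(shape=(784,))])
    for W, b_hidden in rbm_weights:
        layer = tf.keras.layers.Dense(W.shape[1], activation="sigmoid")
        model.add(layer)
        layer.set_weights([W, b_hidden])  # start from the greedily pre-trained weights
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example fine-tuning on MNIST (supervised backpropagation):
# (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
# model = build_finetune_model(rbm_weights)
# model.fit(x_train, y_train, epochs=5, batch_size=128)
```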

While the examples in the chapter involved significantly extending the basic layer and model classes of the TensorFlow Keras API, they should give you an idea of how to implement your own low-level alternative training procedures. Going forward, we will mostly stick to using the standard fit() and predict() methods, starting with our next topic, Variational Autoencoders, a sophisticated and computationally efficient way to generate image data.

