Tuesday, March 1, 2022

Summary

In this chapter, we've covered the basic vocabulary of deep learning: how early research into perceptrons and MLPs led to simple learning rules being abandoned in favor of backpropagation. We also looked at specialized neural network architectures such as CNNs, inspired by the visual cortex, and recurrent networks, specialized for sequence modeling. Finally, we examined variants of the gradient descent algorithm originally proposed for backpropagation, which offer advantages such as momentum, and described weight initialization schemes that place the network's parameters in a range that is easier to navigate toward a local minimum.
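
To make two of these ideas concrete, here is a minimal NumPy sketch of a momentum update and a Glorot-style uniform initialization. The function names and hyperparameter values (lr=0.01, beta=0.9) are illustrative choices for this sketch, not taken from the chapter.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    # Momentum keeps an exponentially decaying running average of past
    # gradients; updates build up speed along consistent descent
    # directions and damp oscillations. lr and beta are illustrative.
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

def glorot_uniform(fan_in, fan_out, rng=None):
    # Glorot/Xavier initialization draws weights uniformly from
    # [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)),
    # keeping activation variance roughly constant across layers.
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Toy usage: one momentum step on the quadratic loss L(w) = w^2 / 2,
# whose gradient with respect to w is simply w.
w = glorot_uniform(1, 1)   # a 1x1 "weight matrix"
v = np.zeros_like(w)
w, v = sgd_momentum_step(w, grad=w, velocity=v)
```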

With this context in place, we are all set to dive into projects in generative modeling, beginning with the generation of MNIST digits using Deep Belief Networks in Chapter 4, Teaching Networks to Generate Digits.
