Up until now we've primarily discussed the basics of neural networks by referencing feedforward networks, where every input is connected to every output in each layer.
While these feedforward networks are useful for illustrating how deep networks are trained, they are only one class within a broader set of architectures used in modern applications, including generative models. Thus, before covering some of the techniques that make training large networks practical, let's review these alternative deep models.
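To make the feedforward structure concrete, here is a minimal NumPy sketch of two stacked fully connected layers, in which every input unit connects to every output unit through a weight matrix. The layer sizes, random weights, and ReLU nonlinearity are illustrative assumptions, not details from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, W, b):
    # Fully connected layer: affine transform x @ W + b,
    # followed by a ReLU nonlinearity (a common choice).
    return np.maximum(0.0, x @ W + b)

# A small feedforward network: 4 inputs -> 3 hidden units -> 2 outputs.
x = rng.normal(size=(1, 4))                     # one example with 4 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # every input feeds every hidden unit
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # every hidden unit feeds every output

h = dense_layer(x, W1, b1)   # hidden activations, shape (1, 3)
y = dense_layer(h, W2, b2)   # network output, shape (1, 2)
print(y.shape)
```

Because each layer is just a dense matrix multiply, the number of parameters grows with the product of layer widths, which is one reason the alternative architectures discussed next impose more structure on their connections.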