
Monday, May 16, 2022

1.2.2 Deep Learning

In 2006, Geoffrey Hinton et al. found that multilayer neural networks could be trained better through layer-by-layer pre-training and achieved a lower error rate than SVM on the MNIST handwritten digit dataset, ushering in the third revival of artificial intelligence. In that paper, Geoffrey Hinton first proposed the concept of deep learning. In 2011, Xavier Glorot proposed the Rectified Linear Unit (ReLU) activation function, which is now one of the most widely used activation functions. In 2012, Alex Krizhevsky proposed the eight-layer deep neural network AlexNet, which used the ReLU activation function and Dropout technology to prevent overfitting. At the same time, it abandoned the layer-by-layer pre-training method and trained the network directly on two NVIDIA GTX 580 GPUs. AlexNet won first place in the ILSVRC-2012 image recognition competition, with a stunning top-5 error rate 10.9% lower than that of the second-place entry.
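
To make the two techniques just mentioned concrete, the short sketch below (plain NumPy, with illustrative function names that are my own rather than code from any cited paper) shows the core ideas: ReLU simply clips negative activations to zero, and inverted dropout, one common variant, randomly zeroes a fraction of activations during training while rescaling the survivors.

import numpy as np

def relu(x):
    # ReLU keeps positive values and clips negative values to zero.
    return np.maximum(0.0, x)

def dropout(x, rate=0.5, training=True):
    # Inverted dropout (an illustrative variant, not necessarily AlexNet's exact recipe):
    # randomly zero out activations during training and rescale the survivors
    # so that the expected activation value stays the same.
    if not training or rate == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= rate).astype(x.dtype)
    return x * mask / (1.0 - rate)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))                      # [0.  0.  0.  1.5 3. ]
print(dropout(relu(x), rate=0.5))   # roughly half the values zeroed, the rest doubled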

Since the AlexNet model was developed, various models have been published successively, including the VGG series and the ResNet series. The ResNet series increases the number of layers in the network to hundreds or even thousands while maintaining the same or even better performance, making it one of the most representative models of deep learning.

In addition to the amazing results in supervised learning, huge achievements have also been made in unsupervised learning and reinforcement learning. In 2014, Ian Goodfellow proposed generative adversarial networks (GANs), which learn the true distribution of samples through adversarial training in order to generate highly realistic samples. Since then, a large number of GAN models have been proposed, and the latest image generation models can produce images whose fidelity is hard to distinguish from real photos with the naked eye. In 2016, DeepMind applied deep neural networks to the field of reinforcement learning and proposed the DQN algorithm, which achieved a level comparable to or even higher than that of humans in 49 games on the Atari game platform. In the field of Go, the AlphaGo and AlphaGo Zero programs from DeepMind successively defeated top human Go players such as Lee Sedol and Ke Jie. On the multi-agent collaboration platform of the game Dota 2, the OpenAI Five program developed by OpenAI defeated the TI8 champion team OG in a restricted game environment, demonstrating a large number of professional high-level plays. Figure 1-9 lists the major milestones of AI development between 2006 and 2019.
