
Sunday, May 15, 2022

1.2.1 Shallow Neural Networks

In 1943, psychologist Warren McCulloch and logician Walter Pitts proposed the earliest mathematical model of neurons based on the structure of biological neurons, called the MP neuron model after their last-name initials. The model is f(x) = h(g(x)), where g(x) = Σi xi with xi ∈ {0, 1}, and the output is determined by the value of g(x), as shown in Figure 1-4: if g(x) ≥ 0, the output is 1; if g(x) < 0, the output is 0. The MP neuron model has no learning ability and can only complete fixed logic judgments.
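The following is a minimal sketch of the MP neuron described above. The thresholding rule follows the text (output 1 if g(x) ≥ 0, else 0); the fixed offset theta used to turn it into an AND gate is an illustrative assumption, not from the text.

```python
import numpy as np

def mp_neuron(x, theta=0):
    # g(x): sum of the binary inputs, minus an assumed fixed offset theta
    g = np.sum(x) - theta
    # h(g): fixed threshold at 0 as in the text; there are no learnable weights
    return 1 if g >= 0 else 0

# With theta = 2, the neuron implements a two-input AND gate -- a fixed
# logic judgment that cannot be changed by learning.
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, mp_neuron(x, theta=2))
```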

In 1958, American psychologist Frank Rosenblatt proposed the first neuron model that can automatically learn weights, called the perceptron. As shown in Figure 1-5, the error between the output value o and the true value y is used to adjust the weights of the neuron {w1, w2, ..., wn}. Frank Rosenblatt then implemented the perceptron model on the "Mark 1 Perceptron" hardware. As shown in Figures 1-6 and 1-7, the input is an image sensor with 400 pixels, and the output has eight nodes. It could successfully identify some English letters. It is generally believed that 1943-1969 was the first prosperous period of artificial intelligence development.
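Below is a minimal sketch of the perceptron learning rule sketched above: the error between the output o and the true label y adjusts the weights. The function name, the learning rate eta, and the OR-gate training data are illustrative assumptions, not from the text.

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, epochs=10):
    w = np.zeros(X.shape[1])   # weights w1..wn, initialized to zero
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            o = 1 if np.dot(w, xi) + b >= 0 else 0   # current output o
            w += eta * (yi - o) * xi                  # adjust weights by the error (y - o)
            b += eta * (yi - o)
    return w, b

# Example: learn the (linearly separable) OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print(w, b)
```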

In 1969, the American scientist Marvin Minsky and others pointed out the main flaw of linear models such as the perceptron in the book Perceptrons: they found that perceptrons cannot handle simple linearly inseparable problems such as XOR. This directly led to a trough in perceptron- and neural-network-related research. It is generally considered that 1969-1982 was the first winter of artificial intelligence.
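To make the XOR limitation concrete, the toy data below (an illustrative sketch, not from the text) shows why a single perceptron fails on it.

```python
# XOR truth table: the points labeled 1 ((0,1) and (1,0)) and the points
# labeled 0 ((0,0) and (1,1)) lie on opposite diagonals, so no single line
# w1*x1 + w2*x2 + b = 0 can separate them. A single-layer perceptron such
# as the sketch above therefore keeps misclassifying at least one sample,
# no matter how many epochs it trains.
X_xor = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_xor = [0, 1, 1, 0]
```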

Although this was a trough period for AI, many significant studies were still published. The most important is the backpropagation (BP) algorithm, which remains the core foundation of modern deep learning algorithms. In fact, the mathematical idea behind the BP algorithm had been derived as early as the 1960s, but it had not yet been applied to neural networks. In 1974, American scientist Paul Werbos first proposed in his doctoral dissertation that the BP algorithm could be applied to neural networks. Unfortunately, this result did not receive enough attention. In 1986, David Rumelhart et al. published a paper in Nature using the BP algorithm for feature learning; since then, the BP algorithm has gained widespread attention.

In 1982, with the introduction of John Hopfield's cyclically connected Hopfield network, the second wave of the artificial intelligence renaissance began, lasting from 1982 to 1995. During this period, convolutional neural networks, recurrent neural networks, and backpropagation algorithms were developed one after another. In 1986, David Rumelhart, Geoffrey Hinton, and others applied the BP algorithm to multilayer perceptrons. In 1989, Yann LeCun and others applied the BP algorithm to handwritten digit image recognition and achieved great success, which is known as LeNet. The LeNet system was successfully commercialized in zip code recognition, bank check recognition, and many other systems. In 1997, one of the most widely used recurrent neural network variants, Long Short-Term Memory (LSTM), was proposed by Jürgen Schmidhuber. In the same year, a bidirectional recurrent neural network was also proposed.

Unfortunately, the study of neural networks gradually entered a trough with the rise of traditional machine learning algorithms represented by support vector machines (SVMs), which is known as the second winter of artificial intelligence. Support vector machines have a rigorous theoretical foundation, require a small number of training samples, and also have good generalization capabilities. In contrast, neural networks lacked theoretical foundation and were hard to interpret; deep networks were difficult to train, and their performance was ordinary. Figure 1-8 shows the significant milestones of AI development between 1943 and 2006.


