This week in The History of AI at AIWS.net – “Learning Multiple Layers of Representation” by Geoffrey Hinton was published in October 2007. The paper proposed new approaches to deep learning: instead of backpropagation, a technique Hinton had helped popularize earlier, it proposes training multilayer neural networks one layer at a time, as generative models. Hinton pursued this direction because backpropagation faced limitations, such as requiring labeled training data. The paper can be read here.
Deep learning is a subfield of machine learning, which is itself part of Artificial Intelligence. It is a family of methods based on artificial neural networks with representation learning, and it is “deep” in that the networks use multiple layers, each building a more abstract representation of the input. Today it has been utilised in many fields with strong results.
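To make “multiple layers of representation” concrete, here is a minimal, illustrative sketch (not code from Hinton’s paper) of a small feedforward network in which each layer transforms the previous layer’s output into a new representation. The layer sizes, random weights, and ReLU activation are arbitrary choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity; without it, stacked layers would
    # collapse into a single linear transformation.
    return np.maximum(0.0, x)

# Illustrative layer sizes: input of 8 features, then three
# successive hidden representations.
layer_sizes = [8, 16, 8, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Return the representation produced by each layer in turn."""
    reps = []
    h = x
    for w in weights:
        h = relu(h @ w)   # each layer re-represents its input
        reps.append(h)
    return reps

x = rng.standard_normal(8)
reps = forward(x)
print([r.shape for r in reps])
```

The point of the sketch is only the structure: the network is “deep” because the output of one layer becomes the input of the next, so later layers operate on increasingly processed representations rather than on the raw input.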
Geoffrey Hinton is an English-Canadian cognitive psychologist and computer scientist. He is most notable for his work on neural networks and deep learning. Hinton, along with Yoshua Bengio and Yann LeCun (who was a postdoctoral researcher under Hinton), is considered one of the “Fathers of Deep Learning.” The three were awarded the 2018 ACM Turing Award, often called the Nobel Prize of Computer Science, for their work on deep learning.
This paper is important in the History of AI because it introduced a new perspective on deep learning: rather than building on a single ground-breaking concept like backpropagation, Hinton demonstrated another method for training deep networks. Geoffrey Hinton also plays an important role in Deep Learning, a field of Machine Learning within Artificial Intelligence.