This week in The History of AI at AIWS.net – Marvin Minsky and Seymour Papert published an expanded edition of Perceptrons

This week in The History of AI at AIWS.net – Marvin Minsky and Seymour Papert published an expanded edition of Perceptrons in 1988. The original book, published in 1969, explored the concept of the “perceptron” but also highlighted its limitations, most famously the inability of a single-layer perceptron to compute functions such as XOR. The revised and expanded edition added a chapter countering criticisms made in the twenty years since the original publication. Perceptrons was pessimistic in its predictions for AI, and is thought to have been a cause of the first AI winter.
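The limitation at the heart of the book can be reproduced in a few lines of Python (an illustrative sketch, not code from the book): the classic perceptron learning rule masters AND, which is linearly separable, but can never master XOR, which is not.

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    """Classic perceptron learning rule on 2-input binary data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(params, samples):
    w, b = params
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(accuracy(train_perceptron(AND), AND))  # 1.0: AND is linearly separable
print(accuracy(train_perceptron(XOR), XOR))  # below 1.0: no line separates XOR
```

No choice of weights lets a single linear threshold unit classify all four XOR cases correctly, which is exactly the kind of result Minsky and Papert proved formally.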

Marvin Minsky was an important pioneer in the field of AI. He penned the research proposal for the Dartmouth workshop, which coined the term “Artificial Intelligence”, and participated in it when it was held the following summer. Minsky also co-founded the MIT AI laboratory, which went through several names over the years, as well as the MIT Media Laboratory. In popular culture, he served as an adviser on Stanley Kubrick’s acclaimed film 2001: A Space Odyssey. He won the Turing Award in 1969.

Seymour Papert was a South African-born mathematician and computer scientist, associated mainly with MIT, where he taught and conducted research. He too was a pioneer of Artificial Intelligence. Papert was also a co-creator of the Logo programming language, which is widely used in education.

The History of AI initiative considers this republication important because it revisited and advanced the discourse on AI. The original book was also a contributing cause of the first AI winter, a pivotal event in the history of AI, and Marvin Minsky was one of the founders of the field. Thus, HAI sees Perceptrons (republished in 1988) as meaningful in the development of Artificial Intelligence.

Dr. Lorraine Kisselburgh, leader of ACM’s Technology Policy Council, joins the History of AI Board

Dr. Lorraine Kisselburgh is the inaugural Chair of ACM’s global Technology Policy Council, where she oversees technology policy engagement in the US, Europe, and other regions. Drawing on the expertise of ACM’s 100,000 computer scientists and professional members, ACM’s public policy activities provide nonpartisan technical expertise to policy leaders, stakeholders, and the general public on technology policy issues, including the 2017 Statement on Algorithmic Transparency and Accountability and the 2020 Principles for Facial Recognition Technologies.

The History of AI Board warmly welcomes Dr. Lorraine Kisselburgh.

This week in The History of AI at AIWS.net – the ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the Turing Award in 2018

This week in The History of AI at AIWS.net – the ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 Turing Award for breakthroughs that made deep neural networks a critical component of computing. The Turing Award is one of the most prestigious awards in the field, often considered the Nobel Prize of Computer Science. Past winners include Marvin Minsky and Judea Pearl, both of whom made enormous contributions to Artificial Intelligence.

Yoshua Bengio is a Canadian computer scientist, most notable for his work on neural networks and deep learning. He is an influential scholar and one of the most cited computer scientists. In the 1990s and 2000s, he made foundational advances in the field of deep learning. Bengio is also a Fellow of the Royal Society.

Yann LeCun is a French computer scientist, renowned for his work on deep learning and artificial intelligence, as well as for contributions to robotics and computational neuroscience. He is a Silver Professor at the Courant Institute of Mathematical Sciences at NYU. In addition, LeCun is the Chief AI Scientist at Facebook.

Geoffrey Hinton is an English-Canadian cognitive psychologist and computer scientist, most notable for his work on neural networks. He co-authored the seminal paper on backpropagation, “Learning representations by back-propagating errors”, in 1986, and is also known for his work on deep learning. Hinton, Yoshua Bengio, and Yann LeCun (who was a postdoctoral researcher under Hinton) are considered the “Fathers of Deep Learning”.

The History of AI Initiative considers this award and its recipients important because of the central role they played in Deep Learning, a field of Machine Learning within Artificial Intelligence. The award is an acknowledgement of how far AI has developed, and is thus a part of the History of AI.

This week in The History of AI at AIWS.net – “Learning Multiple Layers of Representation” by Geoffrey Hinton was published

This week in The History of AI at AIWS.net – “Learning Multiple Layers of Representation” by Geoffrey Hinton was published in October 2007. The paper proposed new approaches to deep learning: in place of backpropagation, a technique Hinton had helped establish earlier, it proposes multilayer neural networks trained as generative models, learning one layer of representation at a time. The motivation was that backpropagation faced limitations such as requiring labeled training data.

Deep learning is part of the broader machine learning field within Artificial Intelligence. It is a family of methods based on artificial neural networks with representation learning, and it is “deep” in that it uses multiple layers in the network, each transforming the output of the previous one. Today it is used in many fields with strong results.
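The layering can be sketched minimally in Python (illustrative weights, not a trained model): a “deep” network is simply layers composed, with the output of one layer feeding the next.

```python
import math

def dense(x, weights, biases):
    """One fully connected layer with a sigmoid nonlinearity."""
    return [
        1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        for w, b in zip(weights, biases)
    ]

# Two stacked layers: the hidden layer re-represents the input,
# and the output layer reads off that representation.
x = [0.5, -1.0]
h = dense(x, [[0.1, 0.2], [0.3, -0.4]], [0.0, 0.1])  # hidden layer, 2 units
y = dense(h, [[0.5, -0.5]], [0.2])                   # output layer, 1 unit
print(y)  # a single activation between 0 and 1
```

Adding further `dense` calls in the same pattern makes the network deeper; each extra layer operates on the representation learned by the one below it.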

Geoffrey Hinton is an English-Canadian cognitive psychologist and computer scientist, most notable for his work on neural networks and deep learning. Hinton, Yoshua Bengio, and Yann LeCun (who was a postdoctoral researcher under Hinton) are considered the “Fathers of Deep Learning”. They were awarded the 2018 ACM Turing Award, considered the Nobel Prize of Computer Science, for their work on deep learning.

This paper is important in the History of AI because it introduced a new perspective on deep learning: rather than extending a ground-breaking concept like backpropagation, Hinton demonstrated another method in the field. Geoffrey Hinton also plays an important role in Deep Learning, a field of Machine Learning within Artificial Intelligence.

This week in The History of AI at AIWS.net – the sudden collapse of the market for specialised AI hardware

This week in The History of AI at AIWS.net – the sudden collapse of the market for specialised AI hardware in 1987. In the 1980s, specialised AI hardware such as Lisp machines became very popular because of the effectiveness of expert systems in the corporate world, but the machines were expensive to buy and maintain. By the end of the decade, general-purpose computers from Apple and IBM had caught up with Lisp machines in power, in line with Moore’s Law, and at a far lower price. With customers no longer needing the more expensive specialised hardware, the market for it collapsed.

This collapse of the market led to what is dubbed the Second AI Winter. It coincided with the end of Japan’s Fifth Generation Computer project and the Strategic Computing Initiative in the USA. The expensive nature of expert systems and the lack of demand slowed development in the field, and companies built around Lisp machines went bankrupt or moved away from the field entirely. Thus, the winter spelled the end of expert systems as a major force in AI and computing.

Expert systems are computer systems that emulate human decision-making. They are designed to solve problems through reasoning and can perform at the level of human experts. One of the earliest, SAINT, was developed by James Robert Slagle under the supervision of Marvin Minsky. Lisp machines were designed to run such systems: they ran the Lisp programming language natively and were, in a way, among the first commercial single-user workstation computers.
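The style of reasoning such systems used can be sketched with a toy forward-chaining rule engine (the facts and rules here are purely illustrative, not modelled on SAINT or any commercial product): known facts are matched against if-then rules until no new conclusions can be drawn.

```python
# Each rule pairs a set of required facts with a conclusion.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
]

def infer(facts, rules):
    """Forward chaining: apply rules repeatedly until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fever", "has_cough", "fatigue"}, rules)
print(derived)  # includes both "possible_flu" and "recommend_rest"
```

Real expert systems held thousands of such rules, hand-written by knowledge engineers interviewing human experts, which is a large part of why they were so expensive to build and maintain.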

The fall of expert systems highlights lessons that remain valuable for the History of AI and for current AI development: it shows how many in the AI field failed to adapt. The end of expert systems in popular usage and the beginning of the Second AI Winter are also important milestones in the development of Artificial Intelligence. Thus, the HAI initiative considers this event an important marker in the history of AI.

This week in The History of AI at AIWS.net – David Rumelhart, Geoffrey Hinton, and Ronald Williams published “Learning representations by back-propagating errors”

This week in The History of AI at AIWS.net – David Rumelhart, Geoffrey Hinton, and Ronald Williams published “Learning representations by back-propagating errors” in October 1986. In this paper, they describe “a new learning procedure, back-propagation, for networks of neurone-like units.” The paper brought the term backpropagation into wide use and established the technique in neural network research.
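As an illustrative sketch (not the paper’s own notation or network), the core idea can be written out for a tiny two-weight chain of sigmoid units: the error at the output is propagated backwards through the chain rule, and the resulting analytic gradients can be verified against numerical finite differences.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(w1, w2, x):
    h = sigmoid(w1 * x)   # hidden unit activation
    y = sigmoid(w2 * h)   # output unit activation
    return h, y

def loss(w1, w2, x, t):
    return 0.5 * (forward(w1, w2, x)[1] - t) ** 2

def backprop(w1, w2, x, t):
    """Gradients of the loss w.r.t. both weights via the chain rule."""
    h, y = forward(w1, w2, x)
    delta_out = (y - t) * y * (1 - y)         # error signal at the output
    dw2 = delta_out * h                       # gradient for the output weight
    delta_hid = delta_out * w2 * h * (1 - h)  # error propagated backwards
    dw1 = delta_hid * x                       # gradient for the hidden weight
    return dw1, dw2

# Sanity-check the analytic gradients against central finite differences.
w1, w2, x, t = 0.7, -0.3, 1.5, 1.0
g1, g2 = backprop(w1, w2, x, t)
eps = 1e-6
num_g1 = (loss(w1 + eps, w2, x, t) - loss(w1 - eps, w2, x, t)) / (2 * eps)
num_g2 = (loss(w1, w2 + eps, x, t) - loss(w1, w2 - eps, x, t)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-8, abs(g2 - num_g2) < 1e-8)  # True True
```

Repeating this backward pass and nudging each weight against its gradient is what lets multi-layer networks learn internal representations, the result the paper demonstrated.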

David E. Rumelhart was an American psychologist, notable for his contributions to the study of human cognition spanning mathematical psychology, symbolic artificial intelligence, and connectionism. At the time the paper was published (1986), he was a Professor in the Department of Psychology at the University of California, San Diego. In 1987 he moved to Stanford, where he remained a Professor until 1998. Rumelhart received a MacArthur Fellowship in 1987 and was elected to the National Academy of Sciences in 1991.

Geoffrey Hinton is an English-Canadian cognitive psychologist and computer scientist, most notable for his work on neural networks and deep learning. Hinton, Yoshua Bengio, and Yann LeCun (who was a postdoctoral researcher under Hinton) are considered the “Fathers of Deep Learning”. They were awarded the 2018 ACM Turing Award, considered the Nobel Prize of Computer Science, for their work on deep learning.

Ronald Williams is a computer scientist and a pioneer of neural networks. He is a Professor of Computer Science at Northeastern University. In addition to co-authoring “Learning representations by back-propagating errors”, he has made contributions to recurrent neural networks and reinforcement learning.

The History of AI Initiative considers this paper important because it introduced backpropagation. The paper also created a boom in research into neural networks, a core component of AI. Geoffrey Hinton, one of its authors, would go on to play an important role in Deep Learning, a field of Machine Learning within Artificial Intelligence.