This week in The History of AI at AIWS.net – David Rumelhart, Geoffrey Hinton, and Ronald Williams published “Learning representations by back-propagating errors”

This week in The History of AI at AIWS.net – David Rumelhart, Geoffrey Hinton, and Ronald Williams published “Learning representations by back-propagating errors” in October 1986. In this paper, they describe “a new learning procedure, back-propagation, for networks of neurone-like units.” The paper introduced the term back-propagation and brought the concept to the attention of the neural network community. The paper can be found here.
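
For readers who want a concrete feel for the procedure, the following is a minimal, illustrative sketch in Python with NumPy. It is not the authors’ original code, and the network size, learning rate, and task (XOR) are arbitrary choices made for illustration. It shows the idea the paper describes: pass an input forward through layers of sigmoid (“neurone-like”) units, propagate the error derivative backward through the layers with the chain rule, and adjust the weights by gradient descent.

# Minimal back-propagation sketch (illustrative only, not the 1986 code):
# a network with one hidden layer of sigmoid units trained on XOR.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR inputs and targets (a task a single-layer perceptron cannot learn).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: 2 inputs -> 4 hidden units -> 1 output unit.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0  # learning rate (illustrative choice)

for epoch in range(10000):
    # Forward pass: compute unit activations layer by layer.
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # network output

    # Backward pass: propagate error derivatives back through the
    # network using the chain rule.
    err = out - y                        # derivative of squared error w.r.t. output
    d_out = err * out * (1 - out)        # delta at the output unit
    d_h = (d_out @ W2.T) * h * (1 - h)   # delta at the hidden units

    # Gradient-descent weight and bias updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 3))  # should approach [[0], [1], [1], [0]] after training

The essential point, as the paper explains, is that the derivative of the error with respect to every weight in the network can be computed efficiently by working backward from the output layer, which makes gradient descent practical for networks with hidden units.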

David E. Rumelhart was an American psychologist. He is notable for his contributions to the study of human cognition through mathematical psychology, symbolic artificial intelligence, and connectionism. At the time the paper was published (1986), he was a Professor in the Department of Psychology at the University of California, San Diego. In 1987 he moved to Stanford University, where he remained a professor until 1998. Rumelhart received a MacArthur Fellowship in 1987 and was elected to the National Academy of Sciences in 1991.

Geoffrey Hinton is an English-Canadian cognitive psychologist and computer scientist. He is best known for his work on neural networks and deep learning. Hinton, together with Yoshua Bengio and Yann LeCun (a former postdoctoral researcher under Hinton), is widely regarded as one of the “Fathers of Deep Learning”. The three were awarded the 2018 ACM Turing Award, often described as the Nobel Prize of computer science, for their work on deep learning.

Ronald Williams is a computer scientist and a pioneer of neural networks. He is a Professor of Computer Science at Northeastern University. In addition to co-authoring “Learning representations by back-propagating errors”, he has made contributions to recurrent neural networks and reinforcement learning.

The History of AI Initiative considers this paper important because it introduced back-propagation and set off a boom in research on neural networks, a core component of AI. Geoffrey Hinton, one of the paper’s authors, would go on to play an important role in deep learning, a subfield of machine learning within artificial intelligence.