# Topics > Entities > Personalities > Geoffrey Hinton

## Airicist

Hinton was awarded the 2018 Turing Award alongside Yoshua Bengio and Yann LeCun for their work on deep learning.

cs.toronto.edu/~hinton

Geoffrey Hinton on Wikipedia

Projects:

Capsule Neural Network (CapsNet)

[Coursera] Neural Networks for Machine Learning — Geoffrey Hinton 2016

----------


## Airicist

The Next Generation of Neural Networks 

Uploaded on Dec 4, 2007




> Google Tech Talks
> November 29, 2007
> 
> In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: There was never enough labeled data to learn millions of complicated features and the learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. An application to very fast document retrieval will be described.
> 
> Speaker: Geoffrey Hinton
> Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He holds a Canada Research Chair in Machine Learning. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research.
> 
> Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He received an honorary doctorate from the University of Edinburgh in 2001. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998) and the ITAC/NSERC award for contributions to information technology (1992).
> ...
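The abstract above describes greedy layer-wise unsupervised learning: train one layer of features at a time, with reconstruction of the unlabeled data (rather than label prediction) as the objective. As a rough illustration of the "one layer at a time" idea (not Hinton's actual RBM-based procedure; this sketch substitutes a plain linear autoencoder and synthetic random data), each layer is trained to reconstruct the activations of the layer below, and its learned features are then fed upward:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_autoencoder(x, n_hidden, lr=0.01, epochs=200):
    """Fit one linear autoencoder layer by gradient descent on squared
    reconstruction error. (The 2006 work used RBMs trained with
    contrastive divergence; a linear autoencoder keeps the sketch short
    while preserving the greedy, unsupervised flavor.)"""
    n_visible = x.shape[1]
    W = rng.normal(0, 0.1, (n_visible, n_hidden))   # encoder weights
    V = rng.normal(0, 0.1, (n_hidden, n_visible))   # decoder weights
    for _ in range(epochs):
        h = x @ W                      # hidden features
        err = h @ V - x                # reconstruction error
        gV = h.T @ err / len(x)        # gradient w.r.t. decoder
        gW = x.T @ (err @ V.T) / len(x)  # gradient w.r.t. encoder
        V -= lr * gV
        W -= lr * gW
    return W

def greedy_pretrain(x, layer_sizes):
    """Learn one layer of features at a time: each new layer is trained
    to reconstruct the activations of the layer below it."""
    weights = []
    for size in layer_sizes:
        W = train_linear_autoencoder(x, size)
        weights.append(W)
        x = x @ W                      # feed learned features upward
    return weights

# Unlabeled "data": 100 samples of 20-dimensional noise (hypothetical).
x = rng.normal(size=(100, 20))
ws = greedy_pretrain(x, [16, 8])
print([w.shape for w in ws])           # [(20, 16), (16, 8)]
```

In the full 2006 recipe, the stacked layers are afterwards fine-tuned jointly, either as a deep autoencoder or, once labels are available, with backpropagation.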

----------


## Airicist

Article "The meaning of AlphaGo, the AI program that beat a Go champ"
Geoffrey Hinton, the godfather of ‘deep learning’—which helped Google’s AlphaGo beat a grandmaster—on the past, present and future of AI

by Adrian Lee
March 18, 2016

----------


## Airicist

Geoffrey Hinton talk "What is wrong with convolutional neural nets?"

Published on Apr 3, 2017




> Brain & Cognitive Sciences - Fall Colloquium Series. Recorded December 4, 2014.
> 
> Talk given at MIT.
> 
> Geoffrey Hinton talks about his capsules project.

----------


## Airicist

Article "Google’s AI wizard unveils a new twist on neural networks"

by Tom Simonite
November 1, 2017

----------


## Airicist

Geoffrey Hinton - The Neural Network Revolution

Published on Jan 12, 2018




> Geoffrey Hinton is an Engineering Fellow at Google where he manages the Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. He is also the Chief Scientific Adviser of the new Vector Institute and an Emeritus Professor at the University of Toronto. 
> 
> Recorded: December 4th, 2017

----------


## Airicist

Article "Google's AI guru wants computers to think more like brains"

by Tom Simonite
December 12, 2018

----------


## Airicist

Article "Geoffrey Hinton discusses how AI could inform our understanding of the brain"

by Kyle Wiggers
May 9, 2019

----------


## Airicist

Geoff Hinton speaks about his latest research and the future of AI

Dec 17, 2020




> Geoff Hinton is one of the pioneers of deep learning, and shared the 2018 Turing Award with colleagues Yoshua Bengio and Yann LeCun. In 2017, he introduced capsule networks, an alternative to convolutional neural networks that takes into account the pose of objects in a 3D world, addressing a problem in computer vision: the apparent positions of an object's parts change when the object is viewed from different angles.
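The capsule idea mentioned above encodes an entity as a vector whose length represents the probability that the entity is present and whose direction represents its pose. This can be seen in the "squash" nonlinearity from Sabour, Frosst & Hinton's 2017 paper "Dynamic Routing Between Capsules"; the sketch below is a minimal NumPy rendering of that one function, not the full routing-by-agreement algorithm:

```python
import numpy as np

def squash(v, eps=1e-9):
    """CapsNet 'squash' nonlinearity: scales a capsule's output vector
    to length < 1 so the length can be read as a probability, while the
    vector's direction (the pose) is preserved."""
    norm2 = np.sum(v * v, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

v = np.array([3.0, 4.0])        # raw capsule output, length 5
s = squash(v)
print(np.linalg.norm(s))         # ~0.9615, i.e. 25/26
```

Note that the squashed vector points in the same direction as the input; only its length changes, saturating toward 1 for long vectors and toward 0 for short ones.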

----------


## Airicist2

Season 2 Ep 22 Geoff Hinton on revolutionizing artificial intelligence... again

Jun 1, 2022




> Over the past ten years, AI has experienced breakthrough after breakthrough in everything from computer vision to speech recognition, protein folding prediction, and so much more.
> 
> Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. A recipient of the Turing Award, the computer-science equivalent of the Nobel Prize, his work has been cited over half a million times.
> 
> Hinton has spent about half a century on deep learning, most of that time researching in relative obscurity. That all changed in 2012, when Hinton and his students showed that deep learning is better at image recognition than any other approach to computer vision, and by a very large margin. That result, known as the ImageNet moment, changed the whole AI field: pretty much everyone dropped what they had been doing and switched to deep learning.
> 
> Geoff joins Pieter in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain. The episode covers how existing neural networks and backpropagation models operate differently than how the brain actually works; the purpose of sleep; and why it’s better to grow our computers than manufacture them.
> 
> What's in this episode:
> ...

----------

