Geoffrey Hinton


The Next Generation of Neural Networks

Uploaded on Dec 4, 2007

Google Tech Talks
November 29, 2007

In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: there was never enough labeled data to learn millions of complicated features, and learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. An application to very fast document retrieval will be described.
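The layer-at-a-time generative pretraining the abstract describes can be sketched in plain NumPy: stack restricted Boltzmann machines, train each with one-step contrastive divergence on the layer below, then feed its hidden activations upward as the next layer's "data". This is only an illustrative toy, not the talk's implementation; all layer sizes, learning rates, and iteration counts here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine trained with 1-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activity driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one step of reconstruction ("fantasy" data).
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Approximate gradient of the log-likelihood: data stats minus model stats.
        n = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return ((v0 - v1) ** 2).mean()   # reconstruction error, just for monitoring

# Greedy layer-wise stacking: train one layer at a time on the
# representation produced by the layer beneath it. No labels anywhere.
data = (rng.random((64, 20)) < 0.3).astype(float)   # toy binary "unlabeled data"
layers = [RBM(20, 12), RBM(12, 6)]
x = data
for rbm in layers:
    for _ in range(50):
        err = rbm.cd1_step(x)
    x = rbm.hidden_probs(x)   # becomes the "data" for the next layer
```

After this unsupervised phase, the learned weights would serve as the starting point for supervised fine-tuning on whatever labeled data is available.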

Speaker: Geoffrey Hinton
Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He holds a Canada Research Chair in Machine Learning. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research.

Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He received an honorary doctorate from the University of Edinburgh in 2001. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998) and the ITAC/NSERC award for contributions to information technology (1992).

A simple introduction to Geoffrey Hinton's research can be found in his articles in Scientific American in September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception and symbol processing and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, Helmholtz machines and products of experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.
 

Geoffrey Hinton talk "What is wrong with convolutional neural nets?"

Published on Apr 3, 2017

Brain & Cognitive Sciences - Fall Colloquium Series. Recorded December 4, 2014.

Talk given at MIT.

Geoffrey Hinton talks about his capsules project.
 

Geoffrey Hinton - The Neural Network Revolution

Published on Jan 12, 2018

Geoffrey Hinton is an Engineering Fellow at Google where he manages the Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. He is also the Chief Scientific Adviser of the new Vector Institute and an Emeritus Professor at the University of Toronto.

Recorded: December 4th, 2017
 

Geoff Hinton speaks about his latest research and the future of AI

Dec 17, 2020

Geoff Hinton is one of the pioneers of deep learning, and shared the 2018 Turing Award with colleagues Yoshua Bengio and Yann LeCun. In 2017, he introduced capsule networks, an alternative to convolutional neural networks that takes into account the pose of objects in a 3D world, addressing a problem in computer vision in which elements of an object appear to shift position when viewed from different angles.
 

Season 2 Ep 22 Geoff Hinton on revolutionizing artificial intelligence... again

Jun 1, 2022

Over the past ten years, AI has experienced breakthrough after breakthrough in everything from computer vision to speech recognition, protein folding prediction, and so much more.

Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. A recipient of the Turing Award, the equivalent of the Nobel prize for computer science, he has over half a million citations of his work.

Hinton has spent about half a century on deep learning, most of the time researching in relative obscurity. But that all changed in 2012 when Hinton and his students showed deep learning is better at image recognition than any other approach to computer vision, and by a very large margin. That result, known as the ImageNet moment, changed the whole AI field. Pretty much everyone dropped what they had been doing and switched to deep learning.

Geoff joins Pieter in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain. The episode covers how existing neural networks and backpropagation models operate differently than how the brain actually works; the purpose of sleep; and why it’s better to grow our computers than manufacture them.

What's in this episode:

00:00:00 - Introduction
00:02:48 - Understanding how the brain works
00:06:59 - Why we need unsupervised local objective functions
00:09:39 - Masked auto-encoders
00:10:55 - Current methods in end to end learning
00:18:36 - Spiking neural networks
00:23:00 - Leveraging spike times
00:29:55 - The story behind AlexNet
00:36:15 - Transition from pure academia to Google
00:40:23 - The secret auction of Hinton’s company at NeurIPS
00:44:18 - Hinton’s start in psychology and carpentry
00:54:34 - Why computers should be grown rather than manufactured
01:06:57 - The function of sleep and Boltzmann Machines
01:11:49 - Need for negative data
01:19:35 - Visualizing data using t-SNE
 

This Canadian genius created modern AI

Jun 25, 2018

For nearly 40 years, Geoff Hinton has been trying to get computers to learn like people do, a quest almost everyone thought was crazy or at least hopeless - right up until the moment it revolutionized the field. In this Hello World video, Bloomberg Businessweek's Ashlee Vance meets the Godfather of AI.
 

S3 E9 Geoff Hinton, the "Godfather of AI", quits Google to warn of AI risks (Host: Pieter Abbeel)

May 10, 2023

What's in this episode:

00:00:00 Geoffrey Hinton
00:01:46 Sponsors: Index Ventures and Weights and Biases
00:02:45 Backpropagation on digital computers might be better than whatever the brain has
00:06:30 Will the AI take over control from humans?
00:10:10 Predictive AI vs. Goal-Oriented AI
00:13:34 AI smarter than people
00:16:55 Risks of AI
00:19:35 Let's not forget about the tremendous good AI will do
00:22:44 Letter for a 6-month stop to larger AI model development
00:26:28 Role of regulation
00:32:45 The existential threat of AI taking control
00:36:06 AI as the benevolent super-capable “United Nations” advisor to humanity
00:40:03 What would happen if the AI did take control?
00:41:44 Fusing human and artificial intelligence / Neuralink
00:44:00 Assuming AI will take over, is there room to steer how life will be then
00:47:05 The bottom line: the purpose of life
00:54:00 Technical opportunities to contribute towards a good AI-powered future
00:56:18 What are you going to do now?
 

The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI

Jun 22, 2023

Geoffrey Hinton, known to many as the “Godfather of AI,” recently made headlines around the world after leaving his job at Google to speak more freely about the risks posed by unchecked development of artificial intelligence, including popular tools like ChatGPT and Google’s PaLM.

Why does he believe digital intelligence could hold an advantage over biological intelligence? How did he suddenly arrive at this conclusion after a lifetime of work in the field? Most importantly, what – if anything – can be done to safeguard the future of humanity? The University of Toronto University Professor Emeritus addresses these questions and more in The Godfather in Conversation.

00:00 Intro
01:03 Digital intelligence
02:27 Biological intelligence
03:47 Why worry?
04:39 Machine learning
07:07 Neural Nets
13:22 Neural nets and language
17:18 Challenges
18:49 Breakthrough moment
20:41 AlexNet
24:35 Pace of Innovation
26:04 ChatGPT
27:46 Public Reaction
29:49 Benefits for society
33:25 Pace of innovation
35:48 Sudden realization
37:13 Role of government
40:08 Big tech
42:32 Advice to researchers
43:50 Understanding risk
45:20 What’s next?


"Godfather of AI" Geoffrey Hinton: The 60 minutes interview

Oct 9, 2023

There’s no guaranteed path to safety as artificial intelligence advances, Geoffrey Hinton, AI pioneer, warns. He shares his thoughts on AI’s benefits and dangers with Scott Pelley.
 

AI could be smarter than people in 20 years, says 'godfather of AI' | Spotlight

Feb 8, 2024

Geoffrey Hinton has been described as the 'godfather of AI.' But where the artificial intelligence pioneer was once optimistic, he now warns of its dangers. He spoke with Canada Tonight's Travis Dhanraj about the future of tech.
 

Geoffrey Hinton | Will digital intelligence replace biological intelligence?

Feb 2, 2024

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 - 0:07:20 Opening remarks and introduction
0:07:21 - 0:08:43 Overview
0:08:44 - 0:20:08 Two different ways to do computation
0:20:09 - 0:30:11 Do large language models really understand what they are saying?
0:30:12 - 0:49:50 The first neural net language model and how it works
0:49:51 - 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 - 1:03:18 Does digital intelligence have subjective experience?
1:03:19 - 1:55:36 Q&A
1:55:37 - 1:58:37 Closing remarks

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
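The knowledge-sharing mechanism the abstract describes, identical copies of a model averaging their weight changes, can be illustrated with a toy NumPy simulation. Here several "agents" hold the same linear model, each computes a gradient update on its own private data shard, and the averaged update is applied to the shared weights, so every agent benefits from data it never saw. All shapes, rates, and shard counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])   # hidden target the agents try to learn

# Each agent sees a different, private data shard.
shards = []
for _ in range(4):
    X = rng.normal(size=(64, 3))
    shards.append((X, X @ w_true))

def grad(w, X, y):
    # Gradient of mean squared error for the linear model y ≈ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(X)

w = np.zeros(3)   # identical weights replicated on every agent
lr = 0.05
for _ in range(200):
    # Every agent computes a weight change on its own shard;
    # the changes are averaged and applied to the shared model.
    updates = [-lr * grad(w, X, y) for X, y in shards]
    w += np.mean(updates, axis=0)
```

Because the copies are bit-for-bit identical, the averaged change is exactly the gradient of the pooled loss, which is the efficiency the abstract contrasts with the slow output-mimicking transfer available to mortal analog hardware.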

About Geoffrey Hinton

Geoffrey Hinton received his PhD in artificial intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. In 2013, Google acquired Hinton’s neural networks startup, DNN research, which developed out of his research at U of T. Subsequently, Hinton was a Vice President and Engineering Fellow at Google until 2023. He is a founder of the Vector Institute for Artificial Intelligence where he continues to serve as Chief Scientific Adviser.

Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification. Hinton is among the most widely cited computer scientists in the world.

Hinton is a fellow of the UK Royal Society, the Royal Society of Canada, the Association for the Advancement of Artificial Intelligence, and a foreign member of the US National Academy of Engineering and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart Prize, the IJCAI Award for Research Excellence, the Killam Prize for Engineering, the IEEE Frank Rosenblatt Medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold Medal, the NEC C&C Award, the BBVA Award, the Honda Prize, and most notably the ACM A.M. Turing Award.
 

Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" Romanes Lecture

Feb 29, 2024

Professor Geoffrey Hinton, CC, FRS, FRSC, the ‘Godfather of AI’, delivered Oxford's annual Romanes Lecture at the Sheldonian Theatre on Monday, 19 February 2024.

The public lecture, entitled ‘Will digital intelligence replace biological intelligence?’, discussed the dangers of artificial intelligence (AI) and how to ensure it does not take control of humans and, consequently, wipe out humanity. He said that the fact that digital intelligence is immortal and does not evolve should make it less susceptible to religion and wars, but ‘if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it,’ he added.

The British-Canadian computer scientist and cognitive psychologist also spoke of how AI could replace humans in the workforce and how it could be used to spread misinformation. He had previously believed that it could take AI systems up to a century to become ‘super intelligent’. He now thinks that it could happen much sooner than he had anticipated.
 