
Thread: Geoffrey Hinton

  1. #1

  2. #2


    The Next Generation of Neural Networks

    Uploaded on Dec 4, 2007

    Google Tech Talks
    November 29, 2007

    In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: there was never enough labeled data to learn millions of complicated features, and learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. An application to very fast document retrieval will be described.
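
    The layer-at-a-time recipe the abstract describes matches Hinton's deep belief net work of that era: each layer is typically a restricted Boltzmann machine (RBM) trained to model its input, and the learned features then serve as the "data" for the next layer. Below is a minimal sketch of one such layer trained with a single step of contrastive divergence (CD-1); the layer sizes, learning rate, and names are illustrative choices, not details from the talk.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Illustrative sizes: 784 binary visible units (e.g. 28x28 pixels),
        # 256 hidden feature detectors.
        n_visible, n_hidden = 784, 256
        W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        b_v = np.zeros(n_visible)   # visible biases
        b_h = np.zeros(n_hidden)    # hidden biases

        def cd1_update(v0, lr=0.1):
            """One CD-1 step on a batch v0: nudge the model so its
            reconstructions ("fantasies") look more like the real,
            unlabeled data."""
            global W, b_v, b_h
            p_h0 = sigmoid(v0 @ W + b_h)                        # up-pass: infer features
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
            p_v1 = sigmoid(h0 @ W.T + b_v)                      # down-pass: reconstruct
            p_h1 = sigmoid(p_v1 @ W + b_h)                      # up-pass on reconstruction
            # Positive statistics (data) minus negative statistics (reconstruction).
            W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
            b_v += lr * (v0 - p_v1).mean(axis=0)
            b_h += lr * (p_h0 - p_h1).mean(axis=0)
            return p_h0  # hidden activations become the input to the next layer

    Stacking works by freezing this layer and training the next RBM on the returned hidden activations; labels, where available, are used only afterwards to fine-tune the whole stack.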

    Speaker: Geoffrey Hinton
    Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He holds a Canada Research Chair in Machine Learning. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research.

    Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He received an honorary doctorate from the University of Edinburgh in 2001. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998) and the ITAC/NSERC award for contributions to information technology (1992).

    A simple introduction to Geoffrey Hinton's research can be found in his articles in Scientific American in September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception and symbol processing and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, Helmholtz machines and products of experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.

  3. #3
    Article "The meaning of AlphaGo, the AI program that beat a Go champ"
    Geoffrey Hinton, the godfather of ‘deep learning’—which helped Google’s AlphaGo beat a grandmaster—on the past, present and future of AI

    by Adrian Lee
    March 18, 2016

  4. #4


    Geoffrey Hinton talk "What is wrong with convolutional neural nets?"

    Published on Apr 3, 2017

    Brain & Cognitive Sciences - Fall Colloquium Series, recorded December 4, 2014.

    Talk given at MIT.

    Geoffrey Hinton talks about his capsules project.

  5. #5

  6. #6


    Geoffrey Hinton - The Neural Network Revolution

    Published on Jan 12, 2018

    Geoffrey Hinton is an Engineering Fellow at Google where he manages the Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. He is also the Chief Scientific Adviser of the new Vector Institute and an Emeritus Professor at the University of Toronto.

    Recorded: December 4th, 2017

  7. #7

  8. #8

  9. #9


    Geoff Hinton speaks about his latest research and the future of AI

    Dec 17, 2020

    Geoff Hinton is one of the pioneers of deep learning, and shared the 2018 Turing Award with colleagues Yoshua Bengio and Yann LeCun. In 2017, he introduced capsule networks, an alternative to convolutional neural networks that takes the pose of objects in a 3D world into account, addressing a problem in computer vision: the apparent positions of an object's parts change when the object is viewed from different angles.
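
    As a rough illustration of the capsule idea: each capsule outputs an activity vector whose orientation encodes an entity's pose and whose length encodes the probability that the entity is present. The sketch below shows the "squashing" nonlinearity from the 2017 capsules paper (Sabour, Frosst, and Hinton), which maps a raw capsule vector to a length in [0, 1) without changing its direction; the function name and epsilon are choices made here for illustration.

        import numpy as np

        def squash(s, eps=1e-9):
            """Capsule nonlinearity: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
            Short vectors shrink toward zero length ("entity absent"),
            long vectors approach length 1 ("entity present"), and the
            direction (the pose) is preserved."""
            norm = np.linalg.norm(s, axis=-1, keepdims=True)
            return (norm ** 2 / (1.0 + norm ** 2)) * (s / (norm + eps))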

  10. #10


    Season 2 Ep 22: Geoff Hinton on revolutionizing artificial intelligence... again

    Jun 1, 2022

    Over the past ten years, AI has experienced breakthrough after breakthrough in everything from computer vision to speech recognition, protein folding prediction, and so much more.

    Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. A recipient of the Turing Award, often called the Nobel Prize of computer science, he has over half a million citations of his work.

    Hinton has spent about half a century on deep learning, most of that time researching in relative obscurity. That changed in 2012, when Hinton and his students showed that deep learning beats any other approach to computer vision at image recognition, and by a very large margin. That result, now known as the ImageNet moment, changed the whole AI field: pretty much everyone dropped what they had been doing and switched to deep learning.

    Geoff joins Pieter in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain. The episode covers how existing neural networks and backpropagation differ from how the brain actually works; the purpose of sleep; and why it’s better to grow our computers than to manufacture them.

    What's in this episode:

    00:00:00 - Introduction
    00:02:48 - Understanding how the brain works
    00:06:59 - Why we need unsupervised local objective functions
    00:09:39 - Masked auto-encoders
    00:10:55 - Current methods in end-to-end learning
    00:18:36 - Spiking neural networks
    00:23:00 - Leveraging spike times
    00:29:55 - The story behind AlexNet
    00:36:15 - Transition from pure academia to Google
    00:40:23 - The secret auction of Hinton’s company at NeurIPS
    00:44:18 - Hinton’s start in psychology and carpentry
    00:54:34 - Why computers should be grown rather than manufactured
    01:06:57 - The function of sleep and Boltzmann Machines
    01:11:49 - Need for negative data
    01:19:35 - Visualizing data using t-SNE



