Miscellaneous


On Deep Learning with Ian Goodfellow, Andrew Trask, Kelvin Lwin, Siraj Raval and the Udacity Team

Streamed live on Mar 17, 2017

A panel on the state of deep learning, streamed live on March 17, 2017 at 6pm PST. Brought to you by Udacity's Deep Learning Nanodegree Foundation program.
 

Forget catastrophic forgetting: AI that learns after deployment

Published on May 16, 2017

Neurala CTO Anatoly Gorshechnikov on Lifelong Deep Learning Neural Networks. One of the major hassles of deep learning is that whenever new data becomes available, the network must be fully retrained on a server on the combined old and new data; training on the new data alone would erase previously learned knowledge, a problem known as 'catastrophic forgetting' that severely impairs the ability to develop a truly autonomous AI. We present patent-pending technology that solves this problem by training new objects on the fly, without retraining on the old ones. Our results show not only state-of-the-art accuracy but also real-time performance suitable for deploying AI directly on the edge, moving AI out of the server room and into the hands of consumers. Imagine a toy that can learn to recognize and react to its owner, or a drone that can learn and detect objects of interest identified while in flight. (Recorded at the NVIDIA GTC conference in San Jose, 2017.)
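One illustrative way to sidestep catastrophic forgetting (a minimal sketch of a generic approach, not Neurala's patent-pending method, whose details the talk does not disclose) is to freeze a pretrained feature extractor and classify by nearest class-mean in feature space. Adding a class then only requires averaging a handful of feature vectors, so previously learned classes are untouched:

    import numpy as np

    class NearestMeanClassifier:
        """Hypothetical sketch: learn new classes on the fly on top of a
        frozen feature extractor, without retraining old classes."""

        def __init__(self, extract_features):
            self.extract = extract_features  # frozen, pretrained network
            self.means = {}                  # label -> mean feature vector

        def add_class(self, label, examples):
            # Learning a new object "on the fly": average its features.
            feats = np.stack([self.extract(x) for x in examples])
            self.means[label] = feats.mean(axis=0)

        def predict(self, x):
            # Classify by nearest class-mean in feature space.
            f = self.extract(x)
            return min(self.means,
                       key=lambda label: np.linalg.norm(f - self.means[label]))

Because nothing about the stored class means changes when a new class is added, there is no forgetting; accuracy, of course, depends entirely on the quality of the frozen features.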
 

The Deep End of Deep Learning | Hugo Larochelle | TEDxBoston

Published on Oct 12, 2016

Artificial Neural Networks are inspired by some of the "computations" that occur in human brains, which are real neural networks. In the past 10 years, much progress has been made with Artificial Neural Networks and Deep Learning, driven by accelerated computing power (GPUs), widely shared open-source software libraries, and in-the-moment debate and corroboration via social media. Hugo Larochelle shares his observations of what has been made possible by the underpinnings of Deep Learning.

Hugo Larochelle is a Research Scientist at Twitter and an Assistant Professor at the Université de Sherbrooke (UdeS). Before 2011, he spent two years in the machine learning group at the University of Toronto as a postdoctoral fellow under the supervision of Geoffrey Hinton. He obtained his Ph.D. at the Université de Montréal under the supervision of Yoshua Bengio. He is the recipient of two Google Faculty Awards. His professional involvement includes serving as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), a member of the editorial board of the Journal of Artificial Intelligence Research (JAIR), and program chair for the International Conference on Learning Representations (ICLR) in 2015, 2016, and 2017.
 

Stanford Seminar - Crowdsourcing for machine learning

Published on Jun 6, 2017

CS547: Human-Computer Interaction Seminar
Crowdsourcing for Machine Learning
Speaker: Dan Weld, University of Washington
 

The progress we've made in machine learning - Tom Dietterich

Published on Oct 31, 2017

The National Academies of Sciences, Engineering, and Medicine organized a two-day workshop on the capabilities and applications of artificial intelligence and machine learning for the intelligence community on August 9-10, 2017.
 

Developing bug-free machine learning systems using formal mathematics

Published on Nov 5, 2017

Noisy data, non-convex objectives, model misspecification, and numerical instability can all cause undesired behaviors in machine learning systems. As a result, detecting actual implementation errors can be extremely difficult. We demonstrate a methodology in which developers use an interactive proof assistant to both implement their system and to state a formal theorem defining what it means for their system to be correct. The process of proving this theorem interactively in the proof assistant exposes all implementation errors since any error in the program would cause the proof to fail. As a case study, we implement a new system, Certigrad, for optimizing over stochastic computation graphs, and we generate a formal (i.e. machine-checkable) proof that the gradients sampled by the system are unbiased estimates of the true mathematical gradients. We train a variational autoencoder using Certigrad and find the performance comparable to training the same model in TensorFlow.
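As a toy illustration of this workflow (a hedged sketch in Lean 4; Certigrad itself is written in Lean, but the example below is invented for this summary and is not Certigrad code), one implements a function, states a theorem defining its correctness, and proves it interactively; any implementation bug makes the proof fail rather than silently corrupting results:

    -- Hypothetical example of "implement, specify, prove" in Lean 4.
    def double (n : Nat) : Nat := n + n

    -- The formal specification: double really multiplies by two.
    theorem double_correct (n : Nat) : double n = 2 * n := by
      unfold double
      omega  -- linear-arithmetic tactic closes the goal; a buggy
             -- definition (say, n + n + 1) would make this step fail

Certigrad applies the same discipline at much larger scale, where the theorem states that the gradients sampled by the system are unbiased estimates of the true mathematical gradients.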
 

Probabilistic Machine Learning - Prof. Zoubin Ghahramani

Published on Nov 12, 2017

Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, Co-Director of Uber AI Labs, and the Cambridge Director of the Alan Turing Institute, the UK's national institute for Data Science.

He is also the Deputy Academic Director of the Leverhulme Centre for the Future of Intelligence. He has worked and studied at the University of Pennsylvania, MIT, the University of Toronto, the Gatsby Unit at UCL, and CMU.

His research spans Neuroscience, AI, Machine Learning and Statistics. In 2015 he was elected a Fellow of the Royal Society.

Recorded on 7 March 2017
 

Machine learning - a new programming paradigm

Published on Jun 4, 2018

In this video from Red Hat Summit 2018, Cassie Kozyrkov demystifies machine learning and AI. She describes how they're simply a different way to program computers: instead of writing explicit instructions, you explain what you want with examples. See why this concept is powerful and how to think about applying it to solve your problems.
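As a minimal sketch of that idea (the talk is library-agnostic; scikit-learn and the toy rule y = 2x + 1 below are assumptions made purely for illustration), notice that the rule is never written down, only demonstrated through examples:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # The "program" is a set of examples, not explicit instructions.
    X = np.array([[0], [1], [2], [3]])  # example inputs
    y = np.array([1, 3, 5, 7])          # desired outputs (following y = 2x + 1)

    model = LinearRegression().fit(X, y)    # the machine recovers the rule
    print(model.predict(np.array([[10]])))  # ~[21.], the rule generalized

The programmer's job shifts from specifying how to compute the answer to curating examples of what a good answer looks like.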
 

Learning to dress: synthesizing human dressing motion via deep reinforcement learning

Published on Sep 10, 2018

Video results for the paper "Learning To Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning", presented at SIGGRAPH Asia 2018.
 

Machine learning: living in the age of AI | A WIRED film

Published on Jun 20, 2019

“Machine Learning: Living in the Age of AI” examines the extraordinary ways in which people are interacting with AI today. Hobbyists and teenagers are now developing tech powered by machine learning, and WIRED shows the impact of AI on schoolchildren, farmers, and senior citizens, as well as the implications of this rapidly accelerating technology. The film was directed by filmmaker Chris Cannucciari, produced by WIRED, and supported by McCann Worldgroup.
 

Deep Learning State of the Art (2020) | MIT Deep Learning Series

Jan 10, 2020

Lecture on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of state-of-the-art (SOTA) benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.

Website: https://deeplearning.mit.edu
Slides: http://bit.ly/2QEfbAm
Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
0:00 - Introduction
0:33 - AI in the context of human history
5:47 - Deep learning celebrations, growth, and limitations
6:35 - Deep learning early key figures
9:29 - Limitations of deep learning
11:01 - Hopes for 2020: deep learning community and research
12:50 - Deep learning frameworks: TensorFlow and PyTorch
15:11 - Deep RL frameworks
16:13 - Hopes for 2020: deep learning and deep RL frameworks
17:53 - Natural language processing
19:42 - Megatron, XLNet, ALBERT
21:21 - Write with transformer examples
24:28 - GPT-2 release strategies report
26:25 - Multi-domain dialogue
27:13 - Commonsense reasoning
28:26 - Alexa prize and open-domain conversation
33:44 - Hopes for 2020: natural language processing
35:11 - Deep RL and self-play
35:30 - OpenAI Five and Dota 2
37:04 - DeepMind Quake III Arena
39:07 - DeepMind AlphaStar
41:09 - Pluribus: six-player no-limit Texas hold'em poker
43:13 - OpenAI Rubik's Cube
44:49 - Hopes for 2020: Deep RL and self-play
45:52 - Science of deep learning
46:01 - Lottery ticket hypothesis
47:29 - Disentangled representations
48:34 - Deep double descent
49:30 - Hopes for 2020: science of deep learning
50:56 - Autonomous vehicles and AI-assisted driving
51:50 - Waymo
52:42 - Tesla Autopilot
57:03 - Open question for Level 2 and Level 4 approaches
59:55 - Hopes for 2020: autonomous vehicles and AI-assisted driving
1:01:43 - Government, politics, policy
1:03:03 - Recommendation systems and policy
1:05:36 - Hopes for 2020: Politics, policy and recommendation systems
1:06:50 - Courses, Tutorials, Books
1:10:05 - General hopes for 2020
1:11:19 - Recipe for progress in AI
1:14:15 - Q&A: what made you interested in AI
1:15:21 - Q&A: Will machines ever be able to think and feel?
1:18:20 - Q&A: Is RL a good candidate for achieving AGI?
1:21:31 - Q&A: Are autonomous vehicles responsive to sound?
1:22:43 - Q&A: What does the future with AGI look like?
1:25:50 - Q&A: Will AGI systems become our masters?
 