RI Seminar: Sergey Levine: Deep Robotic Learning
Apr 7, 2017
Sergey Levine
Assistant Professor, UC Berkeley
Abstract
Deep learning methods have provided us with remarkably powerful, flexible, and robust solutions in a wide range of passive perception areas: computer vision, speech recognition, and natural language processing. However, active decision-making domains such as robotic control present a number of additional challenges: standard supervised learning methods do not extend readily to robotic decision making, where supervision is difficult to obtain. In this talk, I will discuss experimental results that hint at the potential of deep learning to transform robotic decision making and control, present a number of algorithms and models that allow us to combine expressive, high-capacity deep models with reinforcement learning and optimal control, and describe some of our recent work on scaling up robotic learning through collective learning with multiple robots.
Speaker Biography
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a PhD in Computer Science from Stanford in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
Sergey Levine: Robotics and Machine Learning | AI Podcast #108 with Lex Fridman
Jul 14, 2020
Sergey Levine is a professor at Berkeley and a world-class researcher in deep learning, reinforcement learning, robotics, and computer vision, including the development of algorithms for end-to-end training of neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, and deep RL algorithms. This conversation is part of the Artificial Intelligence podcast.
Outline:
0:00 - Introduction
3:05 - State-of-the-art robots vs humans
16:13 - Robotics may help us understand intelligence
22:49 - End-to-end learning in robotics
27:01 - Canonical problem in robotics
31:44 - Commonsense reasoning in robotics
34:41 - Can we solve robotics through learning?
44:55 - What is reinforcement learning?
1:06:36 - Tesla Autopilot
1:08:15 - Simulation in reinforcement learning
1:13:46 - Can we learn gravity from data?
1:16:03 - Self-play
1:17:39 - Reward functions
1:27:01 - Bitter lesson by Rich Sutton
1:32:13 - Advice for students interested in AI
1:33:55 - Meaning of life
Season 2 Ep. 1 Sergey Levine explains the challenges of real world robotics
Jan 5, 2022
In Episode One of Season Two, Host Pieter Abbeel is joined by guest (and close collaborator) Sergey Levine, professor at UC Berkeley, EECS. Sergey discusses the early years of his career, how Andrew Ng influenced him to become interested in machine learning, his current projects, and his lab's recent accomplishments.
The conversation concludes with Sergey's view on the dangers of machines not being intelligent enough and his advice for students seeking a career in robotics.
What's in this episode:
00:00:00 Introduction
00:02:07 Sergey's PhD work
00:04:55 Andrew Ng's influence
00:05:46 Defining supervised learning and reinforcement learning (RL)
00:10:03 Sergey's predictions for RL in the future
00:11:29 Sergey's switch from graphics to robotics
00:13:58 Robots learning in the real world and Sergey's current research
00:32:48 How to keep collecting useful data
00:43:31 Offline model-based optimization
00:47:12 The concept of singularity