Andrej Karpathy


Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333

Oct 29, 2022

Andrej Karpathy is a legendary AI researcher, engineer, and educator. He's the former director of AI at Tesla, a founding member of OpenAI, and an educator at Stanford.

Outline:

0:00 - Introduction
0:58 - Neural networks
6:01 - Biology
11:32 - Aliens
21:43 - Universe
33:34 - Transformers
41:50 - Language models
52:01 - Bots
58:21 - Google's LaMDA
1:05:44 - Software 2.0
1:16:44 - Human annotation
1:18:41 - Camera vision
1:23:46 - Tesla's Data Engine
1:27:56 - Tesla Vision
1:34:26 - Elon Musk
1:39:33 - Autonomous driving
1:44:28 - Leaving Tesla
1:49:55 - Tesla's Optimus
1:59:01 - ImageNet
2:01:40 - Data
2:11:31 - Day in the life
2:24:47 - Best IDE
2:31:53 - arXiv
2:36:23 - Advice for beginners
2:45:40 - Artificial general intelligence
2:59:00 - Movies
3:04:53 - Future of human civilization
3:09:13 - Book recommendations
3:15:21 - Advice for young people
3:17:12 - Future of machine learning
3:24:00 - Meaning of life
 

No Priors Ep. 80 | With Andrej Karpathy from OpenAI and Tesla

Sep 5, 2024

Andrej Karpathy joins Sarah and Elad on this week's episode of No Priors. Andrej, a founding member of OpenAI and former leader of Tesla Autopilot, needs no introduction. In this episode, he discusses the evolution of self-driving cars, comparing Tesla's and Waymo's approaches, and the technical challenges ahead. They also cover Tesla's Optimus humanoid robot, the current bottlenecks of AI development, and how AI capabilities could be further integrated with human cognition. Andrej also shares more about his new mission, Eureka Labs, his insights into AI-driven education, and what young people should study to prepare for the reality ahead.

Notes:

0:00 Introduction
0:33 Evolution of self-driving cars
2:23 The Tesla vs. Waymo approach to self-driving
6:32 Training Optimus with automotive models
10:26 Reasoning behind the humanoid form factor
13:22 Existing challenges in robotics
16:12 Bottlenecks of AI progress
20:27 Parallels between human cognition and AI models
22:12 Merging human cognition with AI capabilities
27:10 Building high performance small models
30:33 Andrej’s current work in AI-enabled education
36:17 How AI-driven education reshapes knowledge networks and status
41:26 Eureka Labs
42:25 What young people should study to prepare for the future


 

Andrej Karpathy: Software Is Changing (Again)

Jun 19, 2025
Andrej Karpathy's keynote on June 17, 2025 at AI Startup School in San Francisco.
Chapters

00:00 - Intro
01:25 - Software evolution: From 1.0 to 3.0
04:40 - Programming in English: Rise of Software 3.0
06:10 - LLMs as utilities, fabs, and operating systems
11:04 - The new LLM OS and historical computing analogies
14:39 - Psychology of LLMs: People spirits and cognitive quirks
18:22 - Designing LLM apps with partial autonomy
23:40 - The importance of human-AI collaboration loops
26:00 - Lessons from Tesla Autopilot & autonomy sliders
27:52 - The Iron Man analogy: Augmentation vs. agents
29:06 - Vibe Coding: Everyone is now a programmer
33:39 - Building for agents: Future-ready digital infrastructure
38:14 - Summary: We’re in the 1960s of LLMs — time to build
Drawing on his work at Stanford, OpenAI, and Tesla, Andrej sees a shift underway. Software is changing, again. We’ve entered the era of “Software 3.0,” where natural language becomes the new programming interface and models do the rest.

He explores what this shift means for developers, users, and the design of software itself: we're not just using new tools, but building a new kind of computer.

Thoughts (From Andrej Karpathy!)
0:49 - Imo fair to say that software is changing quite fundamentally again. LLMs are a new kind of computer, and you program them *in English*. Hence I think they are well deserving of a major version upgrade in terms of software.
6:06 - LLMs have properties of utilities, of fabs, and of operating systems → New LLM OS, fabbed by labs, and distributed like utilities (for now). Many historical analogies apply - imo we are computing circa ~1960s.
14:39 - LLM psychology: LLMs = "people spirits", stochastic simulations of people, where the simulator is an autoregressive Transformer. Since they are trained on human data, they have a kind of emergent psychology, and are simultaneously superhuman in some ways, but also fallible in many others. Given this, how do we productively work with them hand in hand?

Switching gears to opportunities...
18:16 - LLMs are "people spirits" → can build partially autonomous products.
29:05 - LLMs are programmed in English → make software highly accessible! (yes, vibe coding)
33:36 - LLMs are new primary consumer/manipulator of digital information (adding to GUIs/humans and APIs/programs) → Build for agents!
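Karpathy's "stochastic simulation" framing can be made concrete with a toy sketch: an autoregressive sampler that, like an LLM, emits one token at a time, each conditioned on what came before, with randomness at every step. The bigram table below is an invented stand-in for a real Transformer; the tokens and probabilities are made up purely for illustration.

```python
import random

# Toy autoregressive "model": a hand-written bigram table standing in
# for a Transformer. Every entry here is invented for illustration.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("end", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("ran", 0.7), ("sat", 0.3)],
    "sat": [("the", 0.5), ("end", 0.5)],
    "ran": [("the", 0.4), ("end", 0.6)],
}

def generate(prompt, max_tokens=10, seed=None):
    """Sample one token at a time, each conditioned on the previous one."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_tokens):
        last = tokens[-1]
        if last == "end" or last not in BIGRAMS:
            break
        choices, weights = zip(*BIGRAMS[last])
        tokens.append(rng.choices(choices, weights=weights)[0])
    return tokens

# The same prompt can yield different continuations across runs:
# the output is a stochastic simulation, not a lookup.
print(generate(["the"], seed=1))
print(generate(["the"], seed=2))
```

A real LLM replaces the bigram table with a Transformer conditioning on the entire context, but the loop is the same: sample, append, repeat.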
 

Andrej Karpathy — “RL is terrible; everything else is much worse”

Oct 17, 2025

The Andrej Karpathy episode.

During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self-driving took so long to crack, and what he sees as the future of education.

Timestamps

00:00:00 – AGI is still a decade away
00:30:33 – LLM cognitive deficits
00:40:53 – RL is terrible
00:50:26 – How do humans learn?
01:07:13 – AGI will blend into 2% GDP growth
01:18:24 – ASI
01:33:38 – Evolution of intelligence & culture
01:43:43 – Why self-driving took so long
01:57:08 – Future of education
 