Miscellaneous

"SITUATIONAL AWARENESS: The Decade Ahead"

by Leopold Aschenbrenner
June 2024


Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

Jun 4, 2024

Chatted with my friend Leopold Aschenbrenner about the trillion-dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project.

Timestamps
00:00:00 The trillion-dollar cluster and unhobbling
00:21:20 AI 2028: The return of history
00:41:15 Espionage & American AI superiority
01:09:09 Geopolitical implications of AI
01:32:12 State-led vs. private-led AI
02:13:12 Becoming Valedictorian of Columbia at 19
02:31:24 What happened at OpenAI
02:46:00 Intelligence explosion
03:26:47 Alignment
03:42:15 On Germany, and understanding foreign perspectives
03:57:53 Dwarkesh's immigration story and path to the podcast
04:03:16 Random questions
04:08:47 Launching an AGI hedge fund
04:20:03 Lessons from WWII
04:29:57 Coda: Frederick the Great


Inside the Trillion Dollar AI Race | Situational Awareness | Leopold Aschenbrenner

Jun 14, 2024

Listen to the full audiobook version of "Situational Awareness: The Decade Ahead" by Leopold Aschenbrenner, a former OpenAI employee. The report offers a chillingly realistic glimpse into what the next few years hold as we rapidly approach AGI, and the even more powerful superintelligence that will follow.

Chapters:
0:00:00 - Introduction
0:02:33 - I. From GPT-4 to AGI: Counting the OOMs
0:51:32 - II. From AGI to Superintelligence: the Intelligence Explosion
1:40:11 - III. The Challenges
1:42:07 - IIIa. Racing to the Trillion-Dollar Cluster
2:04:31 - IIIb. Lock Down the Labs: Security for AGI
2:31:59 - IIIc. Superalignment
3:10:45 - IIId. The Free World Must Prevail
3:39:35 - IV. The Project
4:09:26 - V. Parting Thoughts
4:16:35 - Appendix
 

Keynote: Yann LeCun, "Human-Level AI"

Oct 13, 2024

There are four essential characteristics of human intelligence that current AI systems don’t possess: reasoning, planning, persistent memory, and understanding the physical world. Once we have systems with these capabilities, it will still take a while to bring them up to human level.

World models
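
"World models" here refers to systems that learn to predict how their environment evolves. As a rough illustration only (nothing below is from the talk; the toy environment, names, and linear models are all assumptions), here is a minimal Python sketch of the idea: encode an observation into an abstract latent state, then learn to predict the next latent state from the current latent and an action.

```python
# Minimal sketch of a latent world model: encode observations into a latent
# state and learn to predict the next latent state from (latent, action).
# Everything here (names, toy environment, linear maps) is an illustrative
# assumption, not anything from LeCun's talk.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, LATENT_DIM = 8, 2, 4

# Fixed random linear encoder; the latent compresses the observation,
# so predictions can only be approximate (abstraction, not pixel-perfect).
W_enc = rng.normal(scale=0.3, size=(LATENT_DIM, STATE_DIM))

# Learned linear dynamics model over (latent, action).
W_dyn = np.zeros((LATENT_DIM, LATENT_DIM + ACTION_DIM))

def encode(obs: np.ndarray) -> np.ndarray:
    return W_enc @ obs

def predict_next(latent: np.ndarray, action: np.ndarray) -> np.ndarray:
    return W_dyn @ np.concatenate([latent, action])

# Toy environment with fixed linear dynamics, unknown to the model.
A_true = rng.normal(scale=0.3, size=(STATE_DIM, STATE_DIM + ACTION_DIM))

def env_step(obs: np.ndarray, action: np.ndarray) -> np.ndarray:
    return A_true @ np.concatenate([obs, action])

# Train by SGD on the squared next-latent prediction error.
lr = 0.05
for _ in range(5000):
    obs = rng.normal(size=STATE_DIM)
    act = rng.normal(size=ACTION_DIM)
    z, z_next = encode(obs), encode(env_step(obs, act))
    x = np.concatenate([z, act])
    err = W_dyn @ x - z_next          # prediction error in latent space
    W_dyn -= lr * np.outer(err, x)    # gradient of 0.5 * ||err||**2 w.r.t. W_dyn

# The trained model now (approximately) anticipates how the latent state evolves.
obs, act = rng.normal(size=STATE_DIM), rng.normal(size=ACTION_DIM)
pred = predict_next(encode(obs), act)
actual = encode(env_step(obs, act))
print("latent one-step prediction error:", np.linalg.norm(pred - actual))
```

The point of the sketch is that "understanding the physical world" can be operationalized as prediction in a learned abstract state space rather than over raw observations, which is the spirit of the argument; real systems would replace the linear maps with deep networks.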
 
Article "DeepMind’s 145-page paper on AGI safety may not convince skeptics"

by Kyle Wiggers
April 2, 2025

"An Approach to Technical AGI Safety and Security"

by Rohin Shah, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, Lewis Ho, Neel Nanda, Raluca Ada Popa, Rishub Jain, Rory Greig, Samuel Albanie, Scott Emmons, Sebastian Farquhar, Sébastien Krier, Senthooran Rajamanoharan, Sophie Bridgers, Tobi Ijitoye, Tom Everitt, Victoria Krakovna, Vikrant Varma, Vladimir Mikulik, Zachary Kenton, Dave Orr, Shane Legg, Noah Goodman, Allan Dafoe, Four Flynn and Anca Dragan
April 2025
 