Miscellaneous

Article "Microsoft Says New A.I. Shows Signs of Human Reasoning"
A provocative paper from researchers at Microsoft claims A.I. technology shows the ability to understand the way people do. Critics say those scientists are kidding themselves.

by Cade Metz
May 16, 2023

"Sparks of Artificial General Intelligence: Early experiments with GPT-4"

by Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg,
Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
April 13, 2023
 
"Levels of AGI: Operationalizing Progress on the Path to AGI"

by Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, Shane Legg
November 4, 2023


We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose 'Levels of AGI' based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
 
"SITUATIONAL AWARENESS: The Decade Ahead"

by Leopold Aschenbrenner
June 2024


Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

Jun 4, 2024

Chatted with my friend Leopold Aschenbrenner about the trillion dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project.

Timestamps
00:00:00 The trillion-dollar cluster and unhobbling
00:21:20 AI 2028: The return of history
00:41:15 Espionage & American AI superiority
01:09:09 Geopolitical implications of AI
01:32:12 State-led vs. private-led AI
02:13:12 Becoming Valedictorian of Columbia at 19
02:31:24 What happened at OpenAI
02:46:00 Intelligence explosion
03:26:47 Alignment
03:42:15 On Germany, and understanding foreign perspectives
03:57:53 Dwarkesh's immigration story and path to the podcast
04:03:16 Random questions
04:08:47 Launching an AGI hedge fund
04:20:03 Lessons from WWII
04:29:57 Coda: Frederick the Great


Inside the Trillion Dollar AI Race | Situational Awareness | Leopold Aschenbrenner

Jun 14, 2024

Listen to the full audiobook version of "Situational Awareness: The Decade Ahead" by Leopold Aschenbrenner, a former OpenAI employee. The report offers a chillingly realistic glimpse into what the next few years hold as we rapidly approach AGI and the even more powerful superintelligence that will follow.

Chapters:
0:00:00 - Introduction
0:02:33 - I. From GPT-4 to AGI: Counting the OOMs
0:51:32 - II. From AGI to Superintelligence: the Intelligence Explosion
1:40:11 - III. The Challenges
1:42:07 - IIIa. Racing to the Trillion-Dollar Cluster
2:04:31 - IIIb. Lock Down the Labs: Security for AGI
2:31:59 - IIIc. Superalignment
3:10:45 - IIId. The Free World Must Prevail
3:39:35 - IV. The Project
4:09:26 - V. Parting Thoughts
4:16:35 - Appendix
 

Keynote: Yann LeCun, "Human-Level AI"

Oct 13, 2024

There are four essential characteristics of human intelligence that current AI systems don’t possess: reasoning, planning, persistent memory, and understanding the physical world. Once we have systems with such capabilities, it will still take a while before we bring them up to human level.

World models
 