Roman Yampolskiy


Roman Yampolskiy on Artificial Superintelligence

Published on Sep 7, 2015

There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare to go into the trenches and get their hands dirty doing the actual work that may just end up making the difference. So if AI turns out to be like the Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor, but better: instead of fighting with guns and brawn, he is fighting with computer science, human intelligence, and code. Whether that turns out to be the case, and whether Yampolskiy will be successful, remains to be seen. But at this point I was very happy to have Roman back on my podcast for our second interview. [See his first interview here.]

During our 1-hour conversation with Prof. Yampolskiy we cover a variety of interesting topics such as: slowing down the path to the singularity; expert advice versus celebrity endorsements; crowd-funding and going viral, or “potato salad – yes; superintelligence – not so much”; his recent book on Artificial Superintelligence; intellectology, AI-complete problems, the singularity paradox, and wire-heading; why machine ethics and robot rights are misguided and AGI research is unethical; the beauty of brute-force algorithms; how his views differ from Nick Bostrom’s Superintelligence; Roman’s definition of humanity; theology and superintelligence…
 

Artificial Intelligence safety and security - Roman V. Yampolskiy, PhD

Nov 12, 2019

In the near term, the rise of AI-enabled cyberattacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses, as well as fake forensic evidence. Ironically, our best hope for defending against AI-enabled hacking is AI itself. Will AI enhance cybersecurity, or will it make it harder to keep us safe?
 

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Jun 2, 2024

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable.

Outline:

0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life
 