Sam Harris


Can we build AI without losing control over it? | Sam Harris

Published on Oct 19, 2016

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.
 

Sam Harris on global priorities, existential risk, and what matters most

Jun 4, 2020

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

Topics discussed in this episode include:

- The problem of communication
- Global priorities
- Existential risk
- Animal suffering, both in wild animals and in factory-farmed animals
- Global poverty
- Artificial general intelligence risk and AI alignment
- Ethics
- Sam's book, The Moral Landscape

Timestamps:

0:00 Intro
3:52 What are the most important problems in the world?
13:14 Global priorities: existential risk
20:15 Why global catastrophic risks are more likely than existential risks
25:09 Longtermist philosophy
31:36 Making existential and global catastrophic risk more emotionally salient
34:41 How analyzing the self makes longtermism more attractive
40:28 Global priorities & effective altruism: animal suffering and global poverty
56:03 Is machine suffering the next global moral catastrophe?
59:36 AI alignment and artificial general intelligence/superintelligence risk
01:11:25 Expanding our moral circle of compassion
01:13:00 The Moral Landscape, consciousness, and moral realism
01:30:14 Can bliss and wellbeing be mathematically defined?
01:31:03 Where to follow Sam and concluding thoughts
 

Sam Harris: Consciousness, Free Will, Psychedelics, AI, UFOs, and Meaning | Lex Fridman Podcast #185

May 20, 2021

Sam Harris is an author, podcaster, and philosopher.

Outline:

0:00 - Introduction
1:48 - Where do thoughts come from?
7:49 - Consciousness
25:21 - Psychedelics
34:44 - Nature of reality
51:40 - Free will
1:50:25 - Ego
1:59:29 - Joe Rogan
2:02:30 - How will human civilization destroy itself?
2:09:57 - AI
2:30:40 - Jordan Peterson
2:38:43 - UFOs
2:46:32 - Brazilian Jiu Jitsu
2:56:17 - Love
3:07:21 - Meaning of life
 

Sam Harris: Trump, Pandemic, Twitter, Elon, Bret, IDW, Kanye, AI & UFOs | Lex Fridman Podcast #365

Mar 14, 2023

Sam Harris is an author, podcaster, and philosopher.

Outline:

0:00 - Introduction
3:38 - Empathy and reason
11:30 - Donald Trump
54:24 - Military industrial complex
58:58 - Twitter
1:23:05 - COVID
2:06:48 - Kanye West
2:23:24 - Platforming
2:41:21 - Joe Rogan
2:58:13 - Bret Weinstein
3:11:51 - Elon Musk
3:23:59 - Artificial Intelligence
3:40:01 - UFOs
3:53:16 - Free will
4:20:31 - Hope for the future
 