Nick Bostrom



Airicist
28th March 2013, 13:12
Founding Director of the Future of Humanity Institute (https://pr.ai/showthread.php?8899)

Co-founder of the Institute for Ethics and Emerging Technologies (https://pr.ai/showthread.php?543)

Personal website - nickbostrom.com (https://nickbostrom.com)

Bostrom's Existential Risk page - existential-risk.org (https://www.existential-risk.org)

Bostrom's Simulation Argument website - simulation-argument.com (https://www.simulation-argument.com)

Nick Bostrom (https://en.wikipedia.org/wiki/Nick_Bostrom) on Wikipedia

Nick Bostrom (https://www.amazon.com/Nick-Bostrom/e/B001HCZVL8) on Amazon

Books:

"Deep Utopia: Life and Meaning in a Solved World (https://pr.ai/showthread.php?t=25315)", 2024

"Superintelligence: Paths, Dangers, Strategies (https://pr.ai/showthread.php?8897)", 2014

"Global Catastrophic Risks (https://pr.ai/showthread.php?8898)", 2011

Airicist
28th March 2013, 13:15
https://youtu.be/P0Nf3TcMiHo

The end of humanity: Nick Bostrom at TEDxOxford

Published on Mar 26, 2013


Swedish philosopher Nick Bostrom began thinking of a future full of human enhancement, nanotechnology and cloning long before they became mainstream concerns. Bostrom approaches both the inevitable and the speculative using the tools of philosophy, bioethics and probability.

Nick is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He's also the co-founder and chair of both the World Transhumanist Association, which advocates the use of technology to extend human capabilities and lifespans, and the Institute for Ethics and Emerging Technologies.

Airicist
27th April 2015, 18:26
https://youtu.be/MnT1xgZgkpk

Nick Bostrom: What happens when our computers get smarter than we are?

Published on Apr 27, 2015


Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.

Airicist
12th April 2016, 14:40
https://youtu.be/xPTIGAZQKtU

Keynote - Dr. Nick Bostrom, University of Oxford

Published on Apr 12, 2016


CeBIT Global Conferences - 17 March 2016: Keynote Dr. Nick Bostrom, Director, Future of Humanity Institute, University of Oxford

Airicist
16th January 2017, 20:58
https://youtu.be/_H-uxRq2w-c

Interactions between the AI Control Problem and the Governance Problem | Nick Bostrom

Published on Jan 30, 2017


Nick Bostrom explores the likely outcomes of human-level AI and the problems of governing AI at the January 2017 Asilomar conference organized by the Future of Life Institute.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

Airicist
2nd November 2017, 00:54
Article "Voices in AI – Episode 6: A Conversation with Nick Bostrom (https://gigaom.com/2017/10/02/voices-in-ai-episode-6-a-conversation-with-nick-bostrum)"

by Byron Reese
October 2, 2017

Airicist
2nd March 2020, 22:03
https://youtu.be/wxavuvEHwG8

Is Artificial Intelligence dangerous, and does it pose a threat to humans?

Mar 1, 2020


In this interview, I talk with Oxford University Professor Nick Bostrom, who is the New York Times best-selling author of the book "Superintelligence: Paths, Dangers, Strategies". I visited him at the Future of Humanity Institute, which he founded, to discuss whether AI is dangerous and poses a threat to humans.

"Is Artificial Intelligence (AI) A Threat To Humans? (https://www.forbes.com/sites/bernardmarr/2020/03/02/is-artificial-intelligence-ai-a-threat-to-humans)"

by Bernard Marr
March 2, 2020

Airicist
26th March 2020, 13:16
https://youtu.be/rfKiTGj-zeQ

Nick Bostrom: Simulation and Superintelligence | AI Podcast #83 with Lex Fridman

Mar 25, 2020


Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 - Introduction
2:48 - Simulation hypothesis and simulation argument
12:17 - Technologically mature civilizations
15:30 - Case 1: if something kills all possible civilizations
19:08 - Case 2: if we lose interest in creating simulations
22:03 - Consciousness
26:27 - Immersive worlds
28:50 - Experience machine
41:10 - Intelligence and consciousness
48:58 - Weighing probabilities of the simulation argument (see the formula sketch after this outline)
1:01:43 - Elaborating on Joe Rogan conversation
1:05:53 - Doomsday argument and anthropic reasoning
1:23:02 - Elon Musk
1:25:26 - What's outside the simulation?
1:29:52 - Superintelligence
1:47:27 - AGI utopia
1:52:41 - Meaning of life
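
Several segments above (2:48, 22:03, 48:58) turn on weighing the simulation argument's probabilities, so a compact restatement of the core fraction from Bostrom's 2003 paper "Are You Living in a Computer Simulation?" may be a useful reference. Notation follows that paper: f_p is the fraction of human-level civilizations that reach a posthuman stage, and \bar{N} is the average number of ancestor-simulations a posthuman civilization runs (\bar{N} folds together the fraction of posthuman civilizations interested in running such simulations and how many an interested one runs). The expected fraction of observers with human-type experiences who live in simulations is

f_{sim} = \frac{f_p \cdot \bar{N}}{f_p \cdot \bar{N} + 1}

The trilemma discussed in the episode falls out directly: f_{sim} is close to 1 unless the product f_p \cdot \bar{N} is small, which requires either f_p \approx 0 (almost no civilizations reach a posthuman stage) or \bar{N} \approx 0 (posthuman civilizations run almost no ancestor-simulations).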

Airicist2
4th February 2023, 12:26
https://youtu.be/X0SssEwsuOA

Nick Bostrom: Superintelligence & the Simulation Hypothesis

Premiered Sep 8, 2022


Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.

Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.

In his book Superintelligence, Professor Bostrom asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

00:00:00 Intro
00:01:30 Judging Nick's book by its cover. Can you find the Easter Egg on the cover?
00:06:38 How could an AI have emotions and be creative?
00:08:11 How could a computing device / AI feel pain?
00:13:28 The Turing Test.
00:15:00 Will the year 2100 be when the Turing Test is really passed by an AI?
00:17:55 Could I create an AI Galileo?
00:20:07 How does Nick describe the simulation hypothesis for which he is famous?
00:22:34 Is there a "Drake Equation" for the simulation hypothesis?
00:26:50 What do you think of the Penrose-Hameroff orchestrated objective reduction theory of consciousness and Roger's objection to the simulation hypothesis?
00:34:41 Is our human history typical? How would we know?
00:35:50 SETI and the prospect of extraterrestrial life. Should we be afraid?
00:48:53 Are computers really getting "smarter"?
00:49:48 Is compute power reaching an asymptotic saturation?
00:53:43 Audience questions - Global risk, world order, and should we kill the "singleton" if it should arise?