
Thread: Nick Bostrom

  1. #1

    Nick Bostrom

    Founding Director of the Future of Humanity Institute

    Co-founder of the Institute for Ethics and Emerging Technologies

    Personal website - nickbostrom.com

    Bostrom's Existential Risk page - existential-risk.org

    Bostrom's Simulation Argument website - simulation-argument.com

    Nick Bostrom on Wikipedia

    Nick Bostrom on Amazon

    Books:

    "Superintelligence: Paths, Dangers, Strategies", September 3, 2014

    "Global Catastrophic Risks", August 1, 2011
    Last edited by Airicist2; 4th February 2023 at 12:23.

  2. #2


    The end of humanity: Nick Bostrom at TEDxOxford

    Published on Mar 26, 2013

    Swedish philosopher Nick Bostrom began thinking of a future full of human enhancement, nanotechnology and cloning long before they became mainstream concerns. Bostrom approaches both the inevitable and the speculative using the tools of philosophy, bioethics and probability.

    Nick is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He's also the co-founder and chair of both the World Transhumanist Association, which advocates the use of technology to extend human capabilities and lifespans, and the Institute for Ethics and Emerging Technologies.

  3. #3


    Nick Bostrom: What happens when our computers get smarter than we are?

    Published on Apr 27, 2015

    Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

    TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.

  4. #4


    Keynote - Dr. Nick Bostrom, University of Oxford

    Published on Apr 12, 2016

    CeBIT Global Conferences - 17 March 2016: Keynote Dr. Nick Bostrom, Director, Future of Humanity Institute, University of Oxford

  5. #5


    Interactions between the AI Control Problem and the Governance Problem | Nick Bostrom

    Published on Jan 30, 2017

    Nick Bostrom explores the likely outcomes of human-level AI and problems regarding governing AI at the January 2017 Asilomar conference organized by the Future of Life Institute.

    The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

  6. #6

  7. #7


    Is Artificial Intelligence dangerous and poses a threat to humans?

    Mar 1, 2020

    In this interview, I talk with Oxford University Professor Nick Bostrom, who is the New York Times best-selling author of the book "Superintelligence: Paths, Dangers, Strategies". I visited him at the Future of Humanity Institute, which he founded, to discuss whether AI is dangerous and poses a threat to humans.
    Article: "Is Artificial Intelligence (AI) A Threat To Humans?"

    by Bernard Marr
    March 2, 2020

  8. #8


    Nick Bostrom: Simulation and Superintelligence | AI Podcast #83 with Lex Fridman

    Mar 25, 2020

    Nick Bostrom is a philosopher at University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risks, simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. This conversation is part of the Artificial Intelligence podcast.

    OUTLINE:
    0:00 - Introduction
    2:48 - Simulation hypothesis and simulation argument
    12:17 - Technologically mature civilizations
    15:30 - Case 1: if something kills all possible civilizations
    19:08 - Case 2: if we lose interest in creating simulations
    22:03 - Consciousness
    26:27 - Immersive worlds
    28:50 - Experience machine
    41:10 - Intelligence and consciousness
    48:58 - Weighing probabilities of the simulation argument
    1:01:43 - Elaborating on Joe Rogan conversation
    1:05:53 - Doomsday argument and anthropic reasoning
    1:23:02 - Elon Musk
    1:25:26 - What's outside the simulation?
    1:29:52 - Superintelligence
    1:47:27 - AGI utopia
    1:52:41 - Meaning of life

  9. #9


    Nick Bostrom: Superintelligence & the Simulation Hypothesis

    Premiered Sep 8, 2022

    Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.

    Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

    Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.

    In his book Superintelligence, Professor Bostrom asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

    But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

    00:00:00 Intro
    00:01:30 Judging Nick's book by its cover. Can you find the Easter Egg on the cover?
    00:06:38 How could an AI have emotions and be creative?
    00:08:11 How could a computing device / AI feel pain?
    00:13:28 The Turing Test.
    00:15:00 Will the year 2100 be when the Turing Test is really passed by an AI?
    00:17:55 Could I create an AI Galileo?
    00:20:07 How does Nick describe the simulation hypothesis for which he is famous?
    00:22:34 Is there a "Drake Equation" for the simulation hypothesis?
    00:26:50 What do you think of the Penrose-Hameroff orchestrated objective reduction theory of consciousness and Roger's objection to the simulation hypothesis?
    00:34:41 Is our human history typical? How would we know?
    00:35:50 SETI and the prospect of extraterrestrial life. Should we be afraid?
    00:48:53 Are computers really getting "smarter"?
    00:49:48 Is compute power reaching an asymptotic saturation?
    00:53:43 Audience questions - Global risk, world order, and should we kill the "singleton" if it should arise?

