
Thread: Miscellaneous

#1




    Will Superintelligent AI end the world? | Eliezer Yudkowsky | TED

    Jul 11, 2023

    Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

#2


    Super intelligent AI: 10 scientific discoveries it will make

    Jun 10, 2023

#3


    2075: When superintelligent AI takes over

    Premiered Oct 20, 2023

    Futurist Gray Scott explores the future of AI in the year 2075. Are you worried about where AI is headed? In this video, we'll look at a sci-fi scenario in which a superintelligent AI has taken over the planet by 2075, and consider what that might mean for our future.

    Ultimately, we need to be prepared for the future; that means being aware of superintelligent AI and of how this future might unfold.

#4


    The 10 stages of Artificial Intelligence

    Oct 28, 2023

    This video explores the 10 stages of AI, including God-Like AI.

#5


    How could we control superintelligent AI?

    Dec 24, 2023

    The advent of superintelligence would be transformative. A superintelligence, or ASI, is an AI that is many times more intelligent than humans. It could arise quickly in a so-called “hard takeoff” scenario if an AI is allowed to engage in recursive self-improvement: an AI that can improve itself would produce dramatically faster breakthroughs on the way to a technological singularity.

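    This “hard takeoff” intuition is easy to see in a toy growth model. The sketch below is not from the video; the growth law and every constant in it are illustrative assumptions. If each round of self-improvement raises capability in proportion to capability**alpha, then alpha = 1 gives steady exponential growth, while any alpha > 1 compounds on itself and diverges.

    Code:
    # Toy model of the recursive self-improvement dynamic described above.
    # Purely illustrative: the growth law and all constants are assumptions,
    # not anything specified in the video.
    #   alpha = 1.0 -> ordinary exponential growth (a "soft" takeoff)
    #   alpha > 1.0 -> faster-than-exponential growth that diverges in
    #                  finite time in the continuous limit (a "hard" takeoff)

    def simulate(alpha: float, rate: float = 0.1, steps: int = 60) -> list[float]:
        """Return the capability level after each step, starting from 1.0."""
        capability = 1.0
        history = [capability]
        for _ in range(steps):
            capability += rate * capability ** alpha
            if capability > 1e12:  # past this point, treat the model as diverged
                capability = float("inf")
            history.append(capability)
        return history

    if __name__ == "__main__":
        soft = simulate(alpha=1.0)  # improvement rate proportional to capability
        hard = simulate(alpha=1.5)  # returns on self-improvement compound
        for step in (0, 20, 40, 60):
            print(f"step {step:2d}:  soft = {soft[step]:>12.1f}  hard = {hard[step]:>12.1f}")

    Running it, the alpha = 1.0 series climbs only to a few hundred over 60 steps, while the alpha = 1.5 series exceeds any fixed bound within roughly 30 steps. That gap is the whole intuition behind “dramatically faster breakthroughs.”
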
    Superintelligence could lead to powerful and beneficial technologies, such as cures for biological diseases or a halt to climate change. On the other hand, it could be very hard to control and might make decisions on its own that are detrimental to humans. In the worst case, it could wipe out the human race.

    That's why there is a lot of research on AI alignment, or AI safety. The goal is to make sure an ASI’s actions are aligned with human values and morality. Current efforts include government regulation and sponsorship, industry grants, and of course academic research. Everyone can help by raising awareness of the issue and of the nuances of how economic and military pressures could lead to an uncontrollable intelligence explosion.

    This video is a Christmas special in the tradition of Doctor Who. At least, that's my excuse for why it's so long.
