
Miscellaneous



Airicist2
31st December 2023, 01:11
https://youtu.be/Yd0yQ9yxSYY?si=6NfwpJEr5yYxZy7a

Will Superintelligent AI end the world? | Eliezer Yudkowsky (https://pr.ai/showthread.php?t=11760) | TED

Jul 11, 2023


Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI is likely to kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

Airicist2
31st December 2023, 01:29
https://youtu.be/zFbMJ-2QpG8?si=XBmQT_vPI8FHR8Eb

Super intelligent AI: 10 scientific discoveries it will make

Jun 10, 2023

Airicist2
31st December 2023, 01:30
https://youtu.be/jMX7fxnzBbY?si=XmshPL8mj5NpYX7u

2075: When superintelligent AI takes over

Premiered Oct 20, 2023


Futurist Gray Scott explores the future of AI in the year 2075. Are you worried about where AI is heading? In this video, we'll look at a sci-fi scenario in which a superintelligent AI has taken over the planet by 2075, and what that might mean for our future.

Ultimately, we need to be prepared for the future; that means being aware of superintelligent AI and how this future might unfold.

Airicist2
31st December 2023, 01:33
https://youtu.be/tFx_UNW9I1U?si=3aHjZ3IwQNtN8f_B

The 10 stages of Artificial Intelligence

Oct 28, 2023


This video explores the 10 stages of AI, including God-Like AI.

Airicist2
31st December 2023, 01:39
https://youtu.be/rpFQCI4pl_o?si=8yk5xatXEGdhu_ot

How could we control superintelligent AI?

Dec 24, 2023


The advent of superintelligence would be transformative. Superintelligence, or ASI, is an AI that is many times more intelligent than humans. It could arise quickly in a so-called “hard takeoff” scenario: an AI allowed to engage in recursive self-improvement gets better at improving itself with each iteration, so its gains compound, producing dramatically faster breakthroughs on the way to a technological singularity.
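
To make the compounding intuition concrete, here is a toy sketch (mine, not from the video; the rate constant and step count are arbitrary illustrative choices) contrasting progress from constant outside effort with progress whose rate scales with the system's current capability:

# Toy growth model (illustrative only): compare constant external effort
# with recursive self-improvement, where the improvement rate scales
# with the system's current capability.

def simulate(steps, self_improving, rate=0.05):
    capability = 1.0
    for _ in range(steps):
        if self_improving:
            capability += rate * capability  # gains compound: exponential growth
        else:
            capability += rate               # fixed effort: linear growth
    return capability

print(simulate(100, self_improving=False))  # ~6.0   (1 + 100 * 0.05)
print(simulate(100, self_improving=True))   # ~131.5 (1.05 ** 100)

Real systems won't follow such a clean curve, but it shows why hard-takeoff arguments hinge on whether improvement compounds rather than on how fast any single step is.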

Superintelligence could lead to powerful and beneficial technologies, such as cures for biological diseases or a halt to climate change. On the other hand, it could be very hard to control and might make decisions of its own that are detrimental to humans. In the worst case, it could wipe out the human race.

That's why there is a lot of research on AI alignment, also known as AI safety. The goal is to make sure an ASI's actions are aligned with human values and morality. Current efforts include government regulation and sponsorship, industry grants, and of course academic research. Everyone can help by raising awareness of the issue and of the nuances of how economic and military pressures could lead to an uncontrollable intelligence explosion.

This video is a Christmas special in the tradition of Doctor Who. At least, that's my excuse for why it's so long.