The advent of superintelligence would be transformative. Superintelligence, or ASI, is an AI that vastly exceeds human intelligence. It could arise quickly in a so-called "hard takeoff" scenario through recursive self-improvement: an AI that can improve itself produces a better AI, which improves itself faster still, compounding into dramatically faster breakthroughs on the way to a technological singularity.
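To see why the "recursive" part matters, here's a toy back-of-the-envelope model (my own illustration, with a made-up growth constant $k$, not something from the video's sources). Compare capability $I$ growing in proportion to itself versus in proportion to its square, where each improvement makes the next one easier:

\[
\frac{dI}{dt} = kI \;\Rightarrow\; I(t) = I_0 e^{kt}
\qquad \text{vs.} \qquad
\frac{dI}{dt} = kI^2 \;\Rightarrow\; I(t) = \frac{I_0}{1 - k I_0 t}
\]

In the first case capability grows exponentially, which is fast but orderly. In the second, $I(t)$ actually diverges at the finite time $t = 1/(k I_0)$, which is the cartoon-math picture behind the word "singularity."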
Superintelligence could unlock powerful, beneficial technologies: curing biological diseases, halting climate change, and so on. On the other hand, it could be very hard to control and might make decisions of its own that harm humans. In the worst case, it could wipe out the human race.
That's why there is so much research on AI alignment, also called AI safety. The goal is to ensure an ASI's actions stay aligned with human values and morality. Current efforts include government regulation and sponsorship, industry grants, and of course academic research. Everyone can help by raising awareness of the issue, including the nuances of how economic and military pressures could drive an uncontrollable intelligence explosion.
This video is a Christmas special in the tradition of Doctor Who. At least, that's my excuse for why it's so long.