The biggest A.I. risks: Superintelligence and the elite silos | Ben Goertzel

Published on Mar 4, 2019

When it comes to raising superintelligent A.I., kindness may be our best bet.

- We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be "insane" to think we can control what it does.

- What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? To raise it in a way that's imbued with compassion and understanding, says Goertzel.

- One way to limit "people doing bad things out of frustration" would be to plug the entire world into the A.I. economy, so that developers from any country can monetize their code.

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics, chairman of AI software company Novamente LLC, and chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.