Miscellaneous


Richard Dawkins: A.I. might run the world better than humans do

Published on Sep 23, 2017

Will A.I. take us over, and one day look back on this time period as the dawn of their civilization? Richard Dawkins posits an interesting idea, or at the very least a premise for a good science-fiction novel.

Richard Dawkins: When we come to artificial intelligence and the possibility of its becoming conscious, we reach a profound philosophical difficulty. I am a philosophical naturalist. I am committed to the view that there's nothing in our brains that violates the laws of physics, there's nothing that could not in principle be reproduced in technology. It hasn't been done yet, we're probably quite a long way away from it, but I see no reason why in the future we shouldn't reach the point where a human-made robot is capable of consciousness and of feeling pain. We can feel pain, why shouldn't they?

And this is profoundly disturbing because it kind of goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don’t see why they would not. And so this moral consideration of how to treat artificially intelligent robots will arise in the future, and it’s a problem which philosophers and moral philosophers are already talking about.

Once again, I’m committed to the view that this is possible. I’m committed to the view that anything that a human brain can do can be replicated in silicon.

And so I'm sympathetic to the misgivings that have been expressed by highly respected figures like Elon Musk and Stephen Hawking: that, on the precautionary principle, we ought to worry about a takeover, perhaps even by robots of our own creation, especially if they reproduce themselves, potentially even evolve by reproduction, and don't need us anymore.

This is a science-fiction speculation at the moment, but I think philosophically I’m committed to the view that it is possible, and like any major advance we need to apply the precautionary principle and ask ourselves what the consequences might be.

It could be said that the sum not of human happiness but of sentient-being happiness might be improved; they might do a better job of running the world than we do, certainly than we are doing at present, and so perhaps it might not be a bad thing if we went extinct.

And our civilization, the memory of Shakespeare and Beethoven and Michelangelo, would persist in silicon rather than in brains and in our form of life. One could foresee a future time when silicon beings look back on a dawn age when the Earth was peopled by soft, squishy, watery organic beings, and who knows, that might be better. But we're really in science-fiction territory now.
 

Nira Chamberlain: maths versus AI

Published on Apr 20, 2018

How do you prevent AI from taking over the world? In this talk, Nira Chamberlain discusses how mathematics is providing crucial answers. Mathematical modelling is the most creative side of applied mathematics, which itself connects pure maths with science and technology.
 

Risk in the sky?

Published on Sep 13, 2018

The following description was updated Oct. 22 for clarification: Tests performed at the University of Dayton Research Institute’s Impact Physics Lab show that even small drones pose a risk to manned aircraft. The research was a comparative study between a bird strike and a drone strike on an aircraft wing, using a drone similar in weight to many hobby drones and a wing selected to represent a leading edge structure of a commercial transport aircraft. The drone and gel bird were the same weight and were launched at rates designed to reflect the relative combined speed of a fully intact drone traveling toward a commercial transport aircraft moving at a high approach speed.
 

The biggest A.I. risks: Superintelligence and the elite silos | Ben Goertzel

Published on Mar 4, 2019

When it comes to raising superintelligent A.I., kindness may be our best bet.

- We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be "insane" to think we can control what it does.

- What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? To raise it in a way that's imbued with compassion and understanding, says Goertzel.

- One way to limit "people doing bad things out of frustration" may be to plug the entire world into the A.I. economy, so that developers, from whatever country, can monetize their code.

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; chairman of AI software company Novamente LLC; and chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
 

Top 10 frightening developments in AI

Sep 30, 2019

The singularity is nigh. For this list, we're looking at programs and experiments that show how alien, dangerous, or just downright creepy AI can be. While AI is making promising inroads into medical research, agriculture, and education, many experts also worry it could escape our control, or become catastrophic in the wrong hands. Welcome to WatchMojo, and today we're counting down our picks for the top 10 frightening developments in artificial intelligence.
 

Will future robots and AI take over? | How sci-fi inspired science

May 28, 2020

Television and film have often depicted robots and artificial intelligence as helpful assistants doing menial chores for humans, but sometimes they have also tried to destroy humanity. What does the future hold for their real-life counterparts?
 

After AI

Jun 14, 2020

Artificial intelligence seems nearer every day, and many people worry about a conflict between us and robots and computer minds, but what would life be like After AI?
 

Is AI a species-level threat to humanity? | Elon Musk, Michio Kaku, Steven Pinker & more | Big Think

Jun 29, 2020

When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.

In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity; if it's not a species-level threat, it will still upend our world as we know it.

What's your take on this debate? Let us know in the comments!
----------------------------------------------------------------------------------
TRANSCRIPT:

MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It'll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You'll talk to your car. You'll argue with your car. Your car will give you the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance, and servicing of robots. Not to mention robots that are software programs that you talk to and that make life more convenient. However, let's not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.

SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light, it shines because it is just reflected sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.

MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot of others think this is the central thing, we have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether this heat-seeking missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. And that shows that it's a complete red herring to think that you're safe from future AI if it's not conscious. Our universe didn't use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up, and now our universe is aware of itself.

BILL GATES: I do think we have to worry about it. I don't think it's inherent that as we create our superintelligence that it will necessarily always have the same goals in mind that we do.

ELON MUSK: We just don't know what's going to happen once there's intelligence substantially greater than that of a human brain.

STEPHEN HAWKING: I think that development of full artificial intelligence could spell the end of the human race.

YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning, and it's the idea, very much inspired by the brain, a little bit, of constructing a machine that has a very large network of very simple elements that are very similar to the neurons in the brain, and then the machines learn by basically changing the efficacy of the connections between those neurons.
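The weight-adjustment idea LeCun describes can be made concrete with a toy example. The sketch below is not LeCun's code, just a minimal illustration in plain NumPy: a tiny network of sigmoid units learns the XOR function by repeatedly nudging the strength of its connections to reduce error. The layer sizes, learning rate, and step count are arbitrary assumptions chosen for the example.

```python
# A minimal sketch (illustrative only): a tiny network of simple sigmoid units
# that learns XOR by adjusting the strength ("efficacy") of the connections
# between its units via gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Connections: 2 inputs -> hidden layer of 8 units -> 1 output unit.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.5  # learning rate (an arbitrary choice for this toy example)
for step in range(10_000):
    # Forward pass: signals flow through the weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: estimate how each connection contributed to the error,
    # then nudge its weight to reduce that error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

Deep learning frameworks automate this same weight-update loop, just with far larger networks and far more data.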

MAX TEGMARK: AGI—artificial general intelligence—that's the dream of the field of AI: to build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And if that happens, you have to ask yourself if that might lead the machines to get not just a little better than us but way better at all goals—having superintelligence. And the argument for that is actually really interesting and goes back to the '60s, to the mathematician I.J. Good, who pointed out that the goal of building an intelligent machine is, in and of itself, something that you could do with intelligence. So, once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines. Except, they might do it thousands or millions of times faster...
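Good's argument is, at heart, a feedback loop, and a deliberately crude toy model shows why it produces runaway growth. The sketch below is purely illustrative (it is not from Good or Tegmark); the improvement_factor is a made-up assumption standing in for "each generation designs a somewhat more capable successor".

```python
# A purely illustrative toy model of I.J. Good's "intelligence explosion"
# argument. improvement_factor is an assumed constant: it stands in for
# "a system of capability c can design a successor of capability c * factor".
def intelligence_explosion(generations, start=1.0, improvement_factor=1.5):
    """Return the capability of each successive AI generation (toy model)."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * improvement_factor)
    return levels

if __name__ == "__main__":
    for gen, level in enumerate(intelligence_explosion(10)):
        print(f"generation {gen}: capability {level:.1f}x the starting level")
```

Under that assumption the capability curve is geometric: ten generations at a factor of 1.5 already yields a roughly 57-fold increase, which is the qualitative point Tegmark is making about machines designing machines.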
 