Robot Takeover World 2013
Published on Apr 29, 2013
Dr. Samuel L. Douglas makes a special guest appearance and talks about the Robot Takeover taking place on Earth as we speak, and how technology is advancing for the worse.
Lasers, AI, geo-engineering: tech is advancing at an alarming rate, and one day it's bound to all go tits-up. Here's a troubling look at 10 future technologies that are going to kill us all.
At this year's Aspen Ideas Festival, we asked a group of futurists, technology experts, and artists to predict the future of robotics. So, will machines revolt against humans? Maybe not, but as Duke University professor Missy Cummings explains, they may have already won. "We don't even realize where robots are in our world," she says.
The Tesla and SpaceX C.E.O. discusses his fears at Vanity Fair's New Establishment Summit.
Professor Stephen Hawking has told the BBC that artificial intelligence could spell the end for the human race.
In an interview after the launch of a new software system designed to help him communicate more easily, he said there were many benefits to new technology but also some risks.
Scientists are pushing to advance artificial intelligence and create smart machines. But now Stephen Hawking and Elon Musk have flagged that this technology could be dangerous.
Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity. The only thing that could save us is due caution, and a framework installed to prevent such a thing from happening. Yet Vassar notes that AI itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which the experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate these ideas to the public.
Before you scream in terror, know that the researchers responsible for the literal rage machine did it for a good reason: to help understand and deal with people who are enraged themselves. But according to artificial intelligence experts, it's not the angry robots we should worry about, but the ones that show no expression at all...
In Avengers: Age of Ultron, the villain (Ultron) starts out as an artificial intelligence experiment gone wrong. Is this just Hollywood storytelling, or should we be worried about a future dictated by robot overlords? To help answer this question, we’ve teamed up with the awesome Rusty Ward over at Science Friction!
Professors and researchers often speak about artificial intelligence as a type of computing that will present many positive opportunities for smarter and more efficient machines and technologies. Many in the general public, however, often associate AI with the apocalyptic fictional images they see on TV and in film.
While there are those who are overly fearful or overly excited about AI, there are many in between who understand both the potential opportunities granted by AI and the need for its deliberate and careful implementation.
You know what Skynet is, right? The insane global digital defense system that inadvertently started the apocalypse in the Terminator series? No, not that one: the NSA uses the name for something else. Which raises the question... why, NSA?
In case movie franchises like Terminator and The Matrix haven't already made it totally clear, we'll soon be calling robots "master". Here's what we've figured out so far. Welcome to WatchMojo's Top 5 Facts; the series where we reveal – you guessed it – five random facts about a fascinating topic. In today's instalment we’re counting down five things you probably didn't know about the impending Robopocalypse.
All new technology is frightening, says physicist Lawrence Krauss. But there are many more reasons to welcome machine consciousness than to fear it.
Transcript: I see no obstacle to computers eventually becoming conscious in some sense. That’ll be a fascinating experience and as a physicist I’ll want to know if those computers do physics the same way humans do physics. And there’s no doubt that those machines will be able to evolve computationally, potentially at a faster rate than humans. And in the long term the ultimate highest forms of consciousness on the planet may not be purely biological. But that’s not necessarily a bad thing. We always present computers as if they don’t have capabilities of empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It’s a learning machine, and ultimately it would learn from its experience like a biological conscious being. And therefore it’s hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.
Elon Musk and Stephen Hawking, who have expressed concern, are friends of mine, and I understand their potential concerns, but I’m frankly not as concerned about AI in the near term, at the very least, as many of my friends and colleagues are. It’s far less powerful than people imagine. I mean, you try to get a robot to fold laundry; I’ve just been told you can’t even get robots to fold laundry. Someone just wrote me that they were surprised when I cited an elevator as an old example of the fact that when you get in an elevator, it’s a primitive form of a computer, and you’re giving up control, trusting that it’s going to take you where you want to go. Cars are the same thing. Machines are useful because they’re tools that help us do what we want to do, and I think computational machines are good examples of that.

One has to be very careful in creating machines not to assume they’re more capable than they are. That’s true of cars. That’s true of the vehicles we make. That’s true of the weapons we create. That’s true of the defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are and don’t need more control and monitoring.

I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous, but ultimately machines and computational machines are improving our lives in many ways. We of course have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. The fact that teenagers aren’t talking to each other but are always looking at their phones (not just teenagers; I was just in a restaurant here in New York this afternoon, and half the people were not talking to the people they were with but were staring at their phones) may not be a good thing for social interaction, and people may have to come to terms with that. But I don’t think people view their phones as a danger. They view their phones as tools that in many ways allow them to do what they would otherwise do more effectively.
Stuart Armstrong presents a talk about the risks and rewards of long- and short-term AI, and why he chose to work in the field of extreme AI security.
Cybersecurity expert Peter W. Singer discusses the similarities between drones and computer viruses. Singer is the author of Cybersecurity and Cyberwar: What Everyone Needs to Know. You can learn more at cybersecuritybook.com.
Peter W. Singer: There's been an enormous amount of changing forces on warfare in the twenty-first century. And they range from new actors in war, like private contractors (the Blackwaters of the world), to the growth of warlord and child soldier groups, to technological shifts: the introduction of robotics, and cyber. And one of the interesting things that ties these together is how not only the who of war is being expanded but also the where and the when. So one of the things that links, for example, drones and robotics with cyber weapons is that you're seeing a shift in the geographic location of the human role. Humans are still involved. We're not in the world of the Terminator. Humans are still involved, but there's been a geographic shift where the operation can be happening in Pakistan but the person flying the plane might be back in Nevada, 7,000 miles away.
Or on the cyber side, where the software might be hitting Iranian nuclear research centrifuges, like what Stuxnet did, but the people who designed it and decided to send it are, again, thousands of miles away. And in that case it was a combined U.S./Israeli operation. One of the next steps in this, both with the physical side of robotics and the software side of cyber, is a shift in that human role, not just geographically but chronologically, where the humans are still making decisions but they're sending the weapon out into the world to then make its own decisions as it plays out there. In robotics we think about this as autonomy. With Stuxnet, it was a weapon. It was a weapon like anything else in history, you know, a stone, a drone: it caused physical damage.
But it was sent out into the world on a mission in a way no previous weapon has been: go out, find this one target, and cause harm to that target and nothing else. And so Stuxnet plays out over a period of time. It's also interesting because it's the first weapon that can be both here, there, everywhere and nowhere. Unlike a stone. Unlike a drone. It's not a thing, and so that software is hitting the target, those Iranian nuclear research facilities, but it also pops up in 25,000 other computers around the world. That's actually how we discovered it, how we know about it. The final thing that makes this interesting is that it introduces a difficult ethical wrinkle.
On one hand we can say this may have been the first ethical weapon ever developed. Again, whether we're talking about the robots or Stuxnet, they can be programmed to do things that we would describe as potentially ethical. So Stuxnet could only cause harm to its intended target. Yes, it popped up in 25,000 computers around the world, but it could only harm the ones with this particular setup, in this particular geographic location, doing nuclear research. In fact, even if you had nuclear centrifuges in your basement, it still wouldn't harm them. It could only hit those Iranian ones. Wow, that's great. But as the person who discovered it, so to speak, put it: "It's like opening Pandora's box." And not everyone is going to program it that way, with ethics in mind.
Directed/Produced by Jonathan Fowler and Dillon Fitton