
Thread: Miscellaneous

  1. #11


    How dangerous is Artificial Intelligence?

    Published on Apr 22, 2015

    In Avengers: Age of Ultron, the villain (Ultron) starts out as an artificial intelligence experiment gone wrong. Is this just Hollywood storytelling, or should we be worried about a future dictated by robot overlords? To help answer this question, we’ve teamed up with the awesome Rusty Ward over at Science Friction!

  2. #12


    Artificial Intelligence: The Intersection of Opportunity and Fear

    Published on Apr 29, 2015

    Professors and researchers often speak about artificial intelligence as a type of computing that will present many positive opportunities for smarter and more efficient machines and technologies. Many in the general public, however, often associate AI with the apocalyptic fictional images they see on TV and in film.

    While there are those who are overly fearful or overly excited about AI, there are many in between who understand both the potential opportunities granted by AI and the need for its deliberate and careful implementation.

  3. #13


    The NSA Actually Has a Skynet Program

    Published on May 17, 2015

    You know what Skynet is, right? The insane global digital defense system that inadvertently started the apocalypse in the Terminator series? No, the NSA uses the name for something else. Which raises the question: why, NSA?
    Article "The NSA has an actual Skynet program"

    by Kim Zetter
    June 8, 2015

  4. #14


    Top 5 facts about the impending Robopocalypse

    Published on Jun 22, 2015

    In case movie franchises like Terminator and The Matrix haven't already made it totally clear, we'll soon be calling robots "master". Here's what we've figured out so far. Welcome to WatchMojo's Top 5 Facts, the series where we reveal – you guessed it – five random facts about a fascinating topic. In today's instalment we're counting down five things you probably didn't know about the impending Robopocalypse.

  5. #15


    Elon Musk and Stephen Hawking fear a robot Apocalypse. But a major physicist disagrees

    Published on Jun 24, 2015

    All new technology is frightening, says physicist Lawrence Krauss. But there are many more reasons to welcome machine consciousness than to fear it.

    Transcript - I see no obstacle to computers eventually becoming conscious in some sense. That’ll be a fascinating experience and as a physicist I’ll want to know if those computers do physics the same way humans do physics. And there’s no doubt that those machines will be able to evolve computationally potentially at a faster rate than humans. And in the long term the ultimate highest forms of consciousness on the planet may not be purely biological. But that’s not necessarily a bad thing. We always present computers as if they don’t have capabilities of empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It’s a learning machine and ultimately it would learn from its experience like a biological conscious being. And therefore it’s hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

    Elon Musk, Stephen Hawking, and others who have expressed concern are friends of mine, and I understand their potential concerns, but I'm frankly not as concerned about AI, in the near term at the very least, as many of my friends and colleagues are. It's far less powerful than people imagine. You try to get a robot to fold laundry, and I've just been told you can't even get robots to fold laundry. Someone just wrote me that they were surprised when I cited an elevator as an old example of the fact that when you get in an elevator, it's a primitive form of a computer and you're giving up control, trusting that it's going to take you where you want to go. Cars are the same thing. Machines are useful because they're tools that help us do what we want to do, and I think computational machines are good examples of that.

    One has to be very careful in creating machines not to assume they're more capable than they are. That's true in cars. That's true in vehicles that we make. That's true in weapons we create. That's true in defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are and don't need more control and monitoring.

    I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous, but ultimately machines and computational machines are improving our lives in many ways. We of course have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. The fact that teenagers aren't talking to each other but are always looking at their phones – not just teenagers – I was just in a restaurant here in New York this afternoon and half the people were not talking to the people they were with but were staring at their phones. Well, that may not be a good thing for societal interaction, and people may have to come to terms with that. But I don't think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do what they would otherwise do more effectively.

  6. #16

  7. #17


    Risks of emerging technology, part 3: robotics
    May 31, 2010

  8. #18


    Predicting AI - Shanghai

    Published on Jul 13, 2015

    Stuart Armstrong presents a talk about the risks and rewards of long- and short-term AI, and why he chose to work in the field of extreme AI security.

  9. #19


    Future Day - Patrick Robotham - Existential Risk

    Published on Mar 4, 2014

  10. #20


    Published on Feb 28, 2014

    Cybersecurity expert Peter W. Singer discusses the similarities between drones and computer viruses. Singer is the author of Cybersecurity and Cyberwar: What Everyone Needs to Know. You can learn more at cybersecuritybook.com.

    Peter W. Singer: There's been an enormous amount of changing forces on warfare in the twenty-first century. And they range from new actors in war, like private contractors -- the Blackwaters of the world -- to the growth of warlord and child soldier groups, to technological shifts, from the introduction of robotics to cyber. And one of the interesting things that ties these together is how not only the who of war is being expanded but also the where and the when. So one of the things that links, for example, drones and robotics with cyber weapons is that you're seeing a shift in the geographic location of the human role. Humans are still involved. We're not in the world of the Terminator. Humans are still involved, but there's been a geographic shift where the operation can be happening in Pakistan but the person flying the plane might be back in Nevada, 7,000 miles away.

    Or on the cyber side, where the software might be hitting Iranian nuclear research centrifuges, like Stuxnet did, but the people who designed it and decided to send it are, again, thousands of miles away. And in that case it was a combined U.S./Israeli operation. One of the next steps in this, both with the physical side of robotics and the software side of cyber, is a shift in that human role -- not just geographically but chronologically -- where the humans are still making decisions but they're sending the weapon out into the world to then make its own decisions as it plays out there. In robotics we think about this as autonomy. With Stuxnet, it was a weapon. It was a weapon like anything else in history -- a stone, a drone -- it caused physical damage.

    But it was sent out into the world on a mission in a way no previous weapon has been: go out, find this one target, and cause harm to that target and nothing else. And so Stuxnet plays out over a period of time. It also is interesting because it's the first weapon that can be both here, there, everywhere and nowhere. Unlike a stone. Unlike a drone. It's not a thing, and so that software is hitting the target, those Iranian nuclear research facilities, but it also pops up in 25,000 other computers around the world. That's actually how we discovered it, how we know about it. The final thing that makes this interesting is it introduces a difficult ethical wrinkle.

    On one hand we can say these may have been the first ethical weapons ever developed. Again, whether we're talking about the robots or Stuxnet, they can be programmed to do things that we would describe as potentially ethical. So Stuxnet could only cause harm to its intended target. It popped up in 25,000 computers around the world, but it could only harm the ones with this particular setup, in this particular geographic location, doing nuclear research. In fact, even if you had nuclear centrifuges in your basement, it still wouldn't harm them. It could only hit those Iranian ones. Wow, that's great, but as the person who discovered it, so to speak, put it, "It's like opening Pandora's box." And not everyone is going to program it that way, with ethics in mind.

    Directed/Produced by Jonathan Fowler and Dillon Fitton


