Three Laws of Robotics, Isaac Asimov


the first law

Published on May 1, 2016

This video shows the first robot to autonomously and intentionally break Asimov's first law, which states:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The robot decides whether or not to injure a person in a way its creator cannot predict (in this video, it decided to injure).
This project raises questions of ethics and design, alongside the fact that there now exists a machine which decides on its own whether to injure a person. Even the so-called "killer drones" still have a person in the loop.
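Reben has not published the robot's decision logic, so the following is only a loose sketch of what a choice that is "unpredictable to its creator" could look like: the decision is seeded from hardware entropy rather than from any state the designer controls, so even the author of the function cannot foresee the outcome of a given run. All names here are hypothetical.

```python
import os
import random

# Hypothetical sketch only: the actual decision logic of Reben's robot
# is not public. The choice is seeded from hardware entropy (os.urandom),
# so the outcome of any single run is unpredictable even to the designer.

def decide_to_injure() -> bool:
    """Return True to prick the finger, False to spare it."""
    rng = random.Random(os.urandom(8))
    return rng.random() < 0.5

if __name__ == "__main__":
    print("injure" if decide_to_injure() else "spare")
```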

"This Robot Autonomously Breaks Asimov’s First Law and Makes You Bleed"

Designer - Alexander Reben
 

What are Asimov's Three Laws of Robotics?

Published on Jul 18, 2016

Science fiction has tried to define rules for robots for many decades. Isaac Asimov was one of the first, proposing three laws to govern all robots in his 1942 short story "Runaround" (later collected in "I, Robot").

Asimov's Laws:
01. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
02. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
03. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
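
The three laws above form a strict precedence ordering: the First Law overrides the Second, which overrides the Third. As a minimal illustration of that ordering, not any published algorithm, here is one way it could be encoded as a filter over candidate actions; Action, harms_human, and the other names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: the Three Laws as an ordered filter over actions.
# None of these names belong to any real robotics API.

@dataclass
class Action:
    name: str
    harms_human: bool       # would this action injure a human? (First Law)
    ordered_by_human: bool  # was this action commanded by a human? (Second Law)
    risks_self: bool        # does this action endanger the robot? (Third Law)

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    # First Law: never injure a human, and never stay idle
    # when idleness would allow a human to come to harm.
    if action.harms_human:
        return False
    if inaction_harms_human and action.name == "do_nothing":
        return False
    return True

def choose(actions: list[Action], inaction_harms_human: bool) -> Action | None:
    candidates = [a for a in actions if permitted(a, inaction_harms_human)]
    # Second Law: among First-Law-safe actions, prefer those ordered by a human.
    ordered = [a for a in candidates if a.ordered_by_human]
    pool = ordered or candidates
    # Third Law: among what remains, prefer actions that preserve the robot.
    safe = [a for a in pool if not a.risks_self]
    pool = safe or pool
    return pool[0] if pool else None

if __name__ == "__main__":
    acts = [
        Action("do_nothing", harms_human=False, ordered_by_human=False, risks_self=False),
        Action("pull_human_from_fire", harms_human=False, ordered_by_human=True, risks_self=True),
    ]
    # Inaction would let a human come to harm, so "do_nothing" is ruled out.
    print(choose(acts, inaction_harms_human=True).name)  # pull_human_from_fire
```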

The full "Moral Math of Robots" program is coming soon!
 