Transcript - Everybody’s concerned about killer robots. We should ban them. We shouldn’t do any research into them. It may be unethical to do so. There’s a wonderful paper, in fact, by a professor at the Naval Postgraduate School in Monterey, I believe, B.J. Strawser. I believe the title is “The Moral Requirement to Deploy Autonomous Drones.” And his basic point in that is really pretty straightforward: we have obligations to our military forces to protect them, and there are things we can do which may protect them. A failure to do that is itself an ethical decision, one which may be the wrong thing to do if you have the technologies.
So let me give you an interesting example. Let me scale that whole thing down to show you this doesn’t have to be about Terminator-like robots coming in and shooting at people and things like that. Think about a landmine. Now a landmine has a sensor, a little switch. You step on it and it blows up. There’s a sensor, and there’s an action that’s taken as a result of a change in its environment. Now it’s a fairly straightforward matter to take some artificial intelligence technologies right off the shelf today and just put a little camera on that. It’s not expensive, the same kind you have in your cell phone. There’s a little bit of processing power that could look at what’s actually happening around that landmine. And you might think, well okay, I can see that the person who is nearby me is carrying a gun. I can see that they’re wearing a military uniform, so I’m going to blow up. But if you see it’s just some peasant out in a field with a rake or a hoe, you can avoid blowing up under the circumstances. Oh, that’s a child, I don’t want to blow up. I’m being stepped on by an animal, okay, I’m not going to blow up. Now that is an autonomous military technology of just the sort addressed by a recent letter signed by a great many scientists. This falls into that class.
And the letter urges that devices like that be banned. But I give this as an example of a device for which there’s a good argument that if we can deploy that technology, it’s more humane, it’s more targeted and it’s more ethical to do so. Now that isn’t always the case. My point is not that that’s right and you should just go ahead willy-nilly and develop killer robots. My point is that this is a much more subtle area which requires considerably more thought and research. And we should let the people who are working on it think through these problems and make sure that they understand the kinds of sensitivities and concerns that we have as a society about the use and deployment of these types of technologies.
If robots are going to drive our cars and play with our kids, we’ll need to teach them right from wrong. Here's how a group of scientists plans to build moral machines.
If your autonomous car has to decide who lives -- you or the people it's heading for on the highway -- who should it save? Executive Editor Ken Mingis, Senior Writer Lucas Mearian and Multimedia Editor Keith Shaw drive the conversation.
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics".
Benjamin Kuipers from the University of Michigan is asking us a simple but nevertheless essential question: how can we trust a robot?
We are increasingly seeing robots and other AIs that perceive and respond to the complexities of the human environment, making decisions about how it is appropriate to act in the current situation. In effect, they are functioning as members of our society. They drive cars autonomously on our roads, they help care for children and the elderly, and they operate complex distributed systems in the infrastructures of our world. How can we design a robot to be trustworthy? How can we verify its trustworthiness? How does a robot decide what to do? The standard notion of rationality in artificial intelligence, derived from game theory, says that a rational agent should choose the action that maximizes its expected utility. In principle, "utility" can be very sophisticated, but in practice, it is typically defined as the agent's own reward. Unfortunately, scenarios like the Tragedy of the Commons and the Prisoner's Dilemma show that self-interested reward-maximization can easily lead to very poor outcomes both for the individual and for society. (Fictional and non-fictional scenarios demonstrate that these problems are quite realistic.) Trust is a critical foundation for the non-zero-sum cooperative activities that allow society, and the individuals within it, to thrive. In order to build robots that function well in society, we need to formalize trust. We draw on classical theories in philosophical ethics, and on recent progress in the cognitive sciences, to understand the roles of trust, morality, and ethics in human society. We examine efforts to express trust within the formalism of game theory, and outline a research agenda for the future.
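The abstract above invokes two ideas worth seeing side by side: the game-theoretic decision rule of maximizing expected utility, and the Prisoner's Dilemma, in which two agents each applying that rule to their own reward end up worse off than if both had cooperated. Below is a minimal sketch in Python, not taken from the talk; the payoff values and belief distributions are standard textbook choices used only for illustration.

```python
# Illustrative sketch (not from the talk): a self-interested expected-utility
# maximizer in the Prisoner's Dilemma. Payoff values are the usual textbook ones.

ACTIONS = ("cooperate", "defect")

# PAYOFF[(my_action, other_action)] = (my_reward, other_reward)
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}


def best_response(belief: dict) -> str:
    """Choose the action maximizing my own expected reward, given a probability
    distribution ('belief') over what the other agent will do."""
    def expected_reward(my_action: str) -> float:
        return sum(p * PAYOFF[(my_action, other)][0] for other, p in belief.items())
    return max(ACTIONS, key=expected_reward)


if __name__ == "__main__":
    # Whatever I believe the other agent will do, defecting maximizes my own
    # expected reward -- defection is a dominant strategy under this rule.
    for p_cooperate in (0.0, 0.5, 1.0):
        belief = {"cooperate": p_cooperate, "defect": 1.0 - p_cooperate}
        print(f"P(other cooperates) = {p_cooperate}: best response =",
              best_response(belief))

    # Yet when both agents follow the rule, each receives 1; mutual cooperation
    # would have given each 3. Individually 'rational', collectively poor.
    print("both defect:   ", PAYOFF[("defect", "defect")])
    print("both cooperate:", PAYOFF[("cooperate", "cooperate")])
```

This is the gap the talk points to: self-interested reward-maximization offers no foothold for the trust that cooperative, non-zero-sum outcomes require.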
---
About Benjamin Kuipers:
Benjamin Kuipers joined the University of Michigan in January 2009 as Professor of Computer Science and Engineering. Prior to that, he held an endowed Professorship in Computer Sciences at the University of Texas at Austin. He received his B.A. from Swarthmore College, and his Ph.D. from MIT. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the TOUR model of spatial knowledge in the cognitive map, the QSIM algorithm for qualitative simulation, the Algernon system for knowledge representation, and the Spatial Semantic Hierarchy model of knowledge for robot exploration and mapping. He has served as Department Chair at UT Austin, and is a Fellow of AAAI, IEEE, and AAAS.
Mady Delvaux, Member of the European Parliament, on her working group on the ethical and legal implications of robotics for society.
The interview was filmed at the European Robotics Forum 2016 in Ljubljana.