Quote:
Benjamin Kuipers from the University of Michigan asks a simple but essential question: how can we trust a robot?
We are increasingly seeing robots and other AIs that perceive and respond to the complexities of the human environment, making decisions about how it is appropriate to act in the current situation. In effect, they are functioning as members of our society. They drive cars autonomously on our roads, they help care for children and the elderly, and they operate complex distributed systems in the infrastructures of our world. How can we design a robot to be trustworthy? How can we verify its trustworthiness?

How does a robot decide what to do? The standard notion of rationality in artificial intelligence, derived from game theory, says that a rational agent should choose the action that maximizes its expected utility. In principle, "utility" can be very sophisticated, but in practice it is typically defined as the agent's own reward. Unfortunately, scenarios like the Tragedy of the Commons and the Prisoner's Dilemma show that self-interested reward-maximization can easily lead to very poor outcomes, both for the individual and for society. (Fictional and non-fictional scenarios demonstrate that these problems are quite realistic.)

Trust is a critical foundation for the non-zero-sum cooperative activities that allow society, and the individuals within it, to thrive. In order to build robots that function well in society, we need to formalize trust. We draw on classical theories in philosophical ethics, and on recent progress in the cognitive sciences, to understand the roles of trust, morality, and ethics in human society. We examine efforts to express trust within the formalism of game theory, and outline a research agenda for the future.
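To make the game-theoretic point in the abstract concrete, here is a minimal sketch of the Prisoner's Dilemma. It is not from the talk: the payoff values are conventional textbook numbers, and the names (`PAYOFFS`, `best_response`) are illustrative assumptions chosen for this example.

```python
# Minimal Prisoner's Dilemma sketch: self-interested utility maximization
# drives both players to defect, even though mutual cooperation pays more.
# Payoff numbers are conventional textbook values, used here for illustration.

COOPERATE, DEFECT = "C", "D"

# PAYOFFS[(my_move, their_move)] = my reward
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # mutual cooperation
    (COOPERATE, DEFECT):    0,  # I am exploited
    (DEFECT,    COOPERATE): 5,  # I exploit the other player
    (DEFECT,    DEFECT):    1,  # mutual defection
}

def best_response(their_move: str) -> str:
    """Choose the move that maximizes my own reward against a fixed opponent move."""
    return max((COOPERATE, DEFECT), key=lambda m: PAYOFFS[(m, their_move)])

# Defection is a dominant strategy: it is the best response to either move.
assert best_response(COOPERATE) == DEFECT
assert best_response(DEFECT) == DEFECT

# Yet the resulting equilibrium leaves both players worse off than cooperation.
print("Each player's reward if both defect:   ", PAYOFFS[(DEFECT, DEFECT)])        # 1
print("Each player's reward if both cooperate:", PAYOFFS[(COOPERATE, COOPERATE)])  # 3
```

Under these payoffs, two self-interested reward maximizers end up at the (1, 1) outcome even though (3, 3) was available to them; this is the sense in which trust, by making the cooperative outcome stable, matters for non-zero-sum interactions.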
---
About Benjamin Kuipers:
Benjamin Kuipers joined the University of Michigan in January 2009 as Professor of Computer Science and Engineering. Prior to that, he held an endowed Professorship in Computer Sciences at the University of Texas at Austin. He received his B.A. from Swarthmore College and his Ph.D. from MIT. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the TOUR model of spatial knowledge in the cognitive map, the QSIM algorithm for qualitative simulation, the Algernon system for knowledge representation, and the Spatial Semantic Hierarchy model of knowledge for robot exploration and mapping. He has served as Department Chair at UT Austin and is a Fellow of AAAI, IEEE, and AAAS.