
Thread: Jerry Kaplan

  1. #1

  2. #2


    Would You Buy a Car That’s Programmed to Kill You? You Just Might

    Published on Aug 26, 2015

    Author and entrepreneur Jerry Kaplan offers an interesting crash course on computational ethics, the idea that robots and machines will require programming to make them cognizant of morals, decorum, manners, and various other social nuances.

    Transcript

    Machines are becoming increasingly autonomous, by which I mean they can sense their environment and make decisions about what to do or what not to do. Of course that's based on their programming and their experience, but we don't have as direct control over what they do as we do with today's technology. Now, there are a couple of very interesting consequences of that. One of them is that they're going to be faced with having to make ethical decisions. I'll call the junior version of ethics just making socially appropriate decisions. We're taking machines and putting them in situations where they're around people, and something we take for granted, something that seems so natural to us but that machines do not take for granted and do not find natural, is the normal set of social courtesies and conventions we operate by in dealing with other people. You don't want a delivery robot running down the sidewalk so that everybody has to get out of the way; it has to be able to move through a crowd in a socially appropriate way. Or take your autonomous car. There are lots of very interesting ethical conundrums that come up, but a lot of them are just social. It pulls up to the crosswalk. Should you cross? Should you wait? How is it going to signal you? Right now the social convention is that you make eye contact with the driver, and that tells you whether to cross.

    Now, I can't make eye contact with an autonomous car, so there are lots of these rough edges around how machines ought to behave. And the situations are highly variable; you can't just make a list of them and say do this and do that. We need to program into these devices some fairly general principles (call them ethical if you like) that will allow them to guide their own behavior in directions consistent with the expectations we have in society.
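
    Kaplan doesn't say how such general principles might be encoded, but here is a minimal sketch of one possible scheme, principle-based action scoring. Everything in it (the principle functions, the action names, the maximin rule) is a hypothetical illustration, not anything from the talk:

    ```python
    # A minimal sketch of principle-based action selection: instead of a
    # lookup table of situations, the robot scores candidate actions
    # against a handful of general social principles. All names here are
    # hypothetical illustrations, not an established API.

    from typing import Callable, Dict, List

    # Each principle maps (action, context) -> a score in [0, 1],
    # where 1 means "fully consistent with the principle".
    Principle = Callable[[str, Dict], float]

    def dont_obstruct(action: str, ctx: Dict) -> float:
        # Penalize actions that force nearby pedestrians to move aside.
        return 0.2 if action == "push_through_crowd" else 1.0

    def make_progress(action: str, ctx: Dict) -> float:
        # Reward actions that advance the delivery task.
        return 1.0 if action in ("walk_with_flow", "push_through_crowd") else 0.3

    PRINCIPLES: List[Principle] = [dont_obstruct, make_progress]

    def choose_action(candidates: List[str], ctx: Dict) -> str:
        # Pick the action whose worst principle score is highest
        # (a cautious "maximin" rule rather than a simple average).
        return max(candidates, key=lambda a: min(p(a, ctx) for p in PRINCIPLES))

    print(choose_action(["push_through_crowd", "walk_with_flow", "stop_and_wait"], {}))
    # -> "walk_with_flow": it makes progress without violating courtesy.
    ```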

    Now, I'm teaching at Stanford, and I can tell you I haven't seen anything about this in the engineering curriculum. There are courses on how to be an ethical engineer, but not on how to build a device to be ethical. This is a completely new area. It sometimes goes by the name of moral programming, or computational ethics. There are some excellent books on the subject, but unfortunately, if you read those books (which I have to do, because that's my job), they're mostly pointing out the problems; nobody has a really good scheme for how to go about doing this. So we need to develop an engineering discipline of computational ethics, and we need course sequences in our engineering schools that teach how to get machines to behave appropriately in a wide variety of new circumstances.

  3. #3
    Transcript

    Let me point out some of the more serious kinds of conundrums, just to give you a feel for it, and then others that are just inconveniences. On the very serious side, there's a classic philosophical debate over what's called the trolley problem. The trolley problem is basically this: you're in a trolley, and there's a track that splits. If you take no action, the trolley is going to go to the right, where there are four people on the track, and it's going to kill those people. You can flip a switch and it'll go down the left track, where there's only one person. The ethical question is: is it ethical to flip that switch? It is true that the loss of life would be minimized, but it is also true that you have now taken an action to kill somebody. And if you're that person, you may not think that's the right thing to do. Philosophers have been studying this and many variations of it, and there's a lot of very subtle and interesting work on it. But this is about to become very real, because autonomous cars will face exactly these kinds of decisions. Say I buy an autonomous car and I'm in it; I'm the one guy. There may be circumstances in which there are four people in front of the car in some way, and to save their lives my car has to drive off the edge of a bridge.

    There's a philosophical theory called utilitarianism, which has been around for a couple of centuries at least, that would say that to maximize the good for society, my car should kill me. But I'm not buying that car. And so we have a conundrum here. I don't want people to buy a Ford instead of a Chevy because the Ford is more likely to save the driver's life no matter what, while the Chevy is going to be a little more forgiving of that and might kill the driver to save the lives of other people. I don't want that to be a selling point in cars. So we need to have a societal discussion about how this works. To show why it's so interesting, I'll give you a little twist on what I said. Right now we're talking about me buying an autonomous car. But suppose instead I'm signed up for the great Uber network in the sky of the future, and cars come when I summon them. I don't own that car, and now I feel a bit differently about it, because it's not my car; it's like I'm getting on a train. You would never allow the people on a train to vote, you know, kill that person and not me.
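
    To make the conundrum concrete, here is a toy utilitarian decision rule for the bridge scenario Kaplan just described. The casualty figures and the owner_weight knob are illustrative assumptions, not anything from the talk; the point is that a single manufacturer-chosen parameter flips the decision, which is exactly the "selling point" problem he worries about:

    ```python
    # A toy rendering of the trolley-style dilemma: a purely utilitarian
    # controller minimizes expected deaths; adding a manufacturer-chosen
    # "owner weight" changes the decision. All numbers and action names
    # are illustrative assumptions.

    def expected_deaths(action: str, owner_weight: float = 1.0) -> float:
        # swerve_off_bridge kills the 1 occupant; stay_on_course kills
        # the 4 pedestrians in the car's path.
        casualties = {"swerve_off_bridge": (1, 0), "stay_on_course": (0, 4)}
        occupants, others = casualties[action]
        return owner_weight * occupants + others

    def decide(owner_weight: float) -> str:
        return min(["swerve_off_bridge", "stay_on_course"],
                   key=lambda a: expected_deaths(a, owner_weight))

    print(decide(owner_weight=1.0))  # strict utilitarianism: "swerve_off_bridge"
    print(decide(owner_weight=5.0))  # owner-protective tuning: "stay_on_course"
    ```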

    In that case it makes more sense for the societal average interest to be what's operational. So when I think about this issue, even the fact of who owns the car changes my own moral judgment about it. Well, we need to be able to take these kinds of principles, talk about them, vet them, and put them into cars. Autonomous cars raise a number of different issues that are very, very important. So far I've just talked about life and death, but there are lots of shades of gray in between that are really quite different. In fact, I'm going to argue that we're already down this path and haven't even recognized it, for a very interesting reason: to avoid pointing out this problem, the car manufacturers do not talk about it as artificial intelligence. Let me give you an example. A common function in cars is ABS, the anti-lock braking system. What that does is detect, which it can, that you're about to skid, and then pump the brakes and do various things to maintain control of the car and keep it going in a particular direction.

    Now, what you might not know is that ABS, in many cases on certain surfaces, has a longer stopping distance than if you just jammed on the brakes, locked them, and let the car spin around. So imagine you're driving your car and, oh my god, there's a kid in the middle of the road. You just want the car to stop as quickly as it can, and you slam on the brakes. Well, with today's technology the car is going to prioritize keeping going straight, even if that means running over that kid. There are circumstances in which a decision an engineer made a while back in designing that system, that we want to keep the car stable, takes away your freedom to decide: I don't mind if the car spins out of control as long as I miss that kid. Now imagine that the ABS function had instead been described as "we're simulating the actions of a professional driver: we're taking that judgment and programming it into a machine using advanced artificial intelligence techniques, so that the car stays under control the same way a professional driver would keep it." We might have felt a little differently about it if I had presented you with that example and we were talking about it as an AI technology. But by saying it's simply a function of the car, like every other function, like the turn signals and everything else, this issue never really got raised; it never really got vetted. But as we look to the future of autonomous driving, it's going to be a problem.
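
    The ABS example boils down to a priority that was frozen into the controller. Here is a schematic sketch, with made-up stopping distances, of what that hard-coded judgment looks like in code; none of these numbers or names describe a real braking system:

    ```python
    # A schematic of the hard-coded priority Kaplan attributes to ABS:
    # the controller always chooses the stability-preserving braking mode,
    # even when a locked-wheel stop would be shorter on this surface.
    # Stopping distances below are invented illustrative values.

    def stopping_distance_m(mode: str, surface: str) -> float:
        # Hypothetical numbers: on loose gravel, locked wheels can stop
        # in a shorter distance than pulsed (ABS) braking.
        table = {("abs_pulse", "gravel"): 42.0, ("lock_wheels", "gravel"): 35.0,
                 ("abs_pulse", "dry_asphalt"): 38.0, ("lock_wheels", "dry_asphalt"): 45.0}
        return table[(mode, surface)]

    def brake_mode(surface: str) -> str:
        # The engineer's judgment, frozen into the controller: stability
        # always wins. The driver cannot express "spin out, but stop sooner".
        return "abs_pulse"

    surface = "gravel"
    chosen = brake_mode(surface)
    print(chosen, stopping_distance_m(chosen, surface))                 # abs_pulse 42.0
    print("lock_wheels", stopping_distance_m("lock_wheels", surface))   # 35.0: shorter, never selected
    ```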

    Let me move on, though, to less severe situations. You're in your autonomous car on a two-lane street, and this happens all the time: a UPS truck right in front of you comes to a stop. The guy jumps out, opens up the back, grabs a package, and heads off. Now, you as a driver are permitted a certain amount of latitude in how you behave. What would you do? You look around, you cross the double yellow line, and you pass that UPS truck. It's perfectly acceptable behavior. But may I point out, you're breaking a rule: you're crossing a double yellow line. If we were to program our cars simply to say you're never supposed to cross a double yellow line, that car is going to sit there until the guy is done, which might be a very long time if he's gone to lunch. So people are permitted latitude to break or bend rules in an appropriate way in a lot of these circumstances, and we need to talk about whether it's okay for a car to engage in that kind of behavior.
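
    The UPS-truck case is really a question of how to encode a rule that humans routinely bend. A minimal sketch contrasting a literal rule with a defeasible one; the 30-second threshold and both predicates are invented for illustration:

    ```python
    # Two ways to encode the double-yellow-line rule. A literal rule
    # leaves the car parked behind the UPS truck indefinitely; a
    # defeasible rule permits a bounded, justified exception.

    def may_cross_literal(blocked_seconds: float, oncoming_clear: bool) -> bool:
        # "You're never supposed to cross a double yellow line."
        return False

    def may_cross_defeasible(blocked_seconds: float, oncoming_clear: bool) -> bool:
        # Bend the rule only when the lane has been blocked for a while
        # and the opposing lane is verifiably clear.
        return blocked_seconds > 30.0 and oncoming_clear

    print(may_cross_literal(120.0, True))     # False: waits until the driver returns
    print(may_cross_defeasible(120.0, True))  # True: passes, as a human driver would
    ```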

    Let me give you another one. How would you feel if you went down to the movie theater, tickets were scarce, and all of a sudden you found 16 robots in line in front of you, with you at the back of the line? You might be like, wait a minute, that's not fair. Why are there 16 robots picking up tickets for whoever owns them? I'm here; we should prioritize me over those robots. I think when that begins to happen in practice, people will be up in arms, because they can see what is actually happening. But that same situation is already happening today. Try to get a ticket to Billy Joel at Madison Square Garden: scalpers run programs that snap up all of those tickets in a matter of seconds, leaving all the humans who are sitting there trying to press the return button and, god forbid, fill out the little captcha; they don't get anything. It's exactly the same situation: robots owned by and working for somebody else are grabbing an asset before you have a fair chance to acquire it. And if you could see that, people would be really mad today, but it's invisible because all this stuff is in the cloud. So we're already facing a lot of these same ethical and social issues, but they're not as visible as they need to be for us to have a meaningful public discussion about them.

  4. #4


    If Your Robot Commits Murder, Should You Go to Jail?

    Published on Sep 16, 2015

    Self-driving cars aren't the only emerging technology facing major questions about ethics and accountability. Jerry Kaplan's latest book is "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence".

    Transcript

    There's a whole other set of issues about how robots should be treated under the law. The obvious knee-jerk reaction is: well, you own the robot, so you're responsible for everything it does. But as these devices become much more autonomous, it's not at all clear that that's really the right answer, or a good answer. Say you go out and buy a great new robot, and you send it down the street to pick you up a Frappuccino at Starbucks. Maybe it's accidental, but while it's standing at the corner it happens to bump some kid into traffic, and a car runs the kid over. The police are going to come and arrest you for this action. Do you really feel you're as responsible as you would be if you had reached out and pushed that kid into traffic yourself? I would argue no, you don't. So we're going to need new kinds of laws that deal with the consequences of well-intentioned autonomous actions that robots take. Interestingly enough, there are a number of historical precedents for this. You might say, well, how can you hold a robot responsible for its behavior? You really can, actually, and let me point out a couple of things.

    The first is something most people don't realize: corporations can commit criminal acts independent of the people in the corporation. In the Deepwater Horizon Gulf Coast accident, for example, BP was charged with criminal violations even though people in the corporation were not necessarily charged with those same violations. And rightfully so. So how do we punish a corporation? We punish it by interfering with its ability to achieve its stated goals: impose huge fines, as they did in that particular case. You can make the company go out of business. You can revoke its license to operate, which is a death penalty for a corporation. You can have it monitored, as they do in antitrust cases; IBM and Microsoft, I think, have had monitors to make sure they abide by certain behavioral standards.

    Well, that same kind of approach can apply to a robot. You don't have to put a robot in jail, but you can interfere with what it's trying to do. And if these robots are adaptable, logical, and learning, they'll say, well, I get it: I can't do that, because my goal is to accomplish something in particular, and if I take this particular action, that's going to work against my interest in accomplishing it. So rehabilitation and modification of robot behavior, just as with a corporation, is much more logical than you might think.

    Another interesting historical precedent: prior to the Civil War there was a separate set of laws that applied to slaves, called the slave codes. Slaves were property, but interestingly enough, the slave owners were held liable only under certain conditions for the actions of their slaves; the slaves themselves were punished if they committed crimes. So we have a historical precedent for the kinds of ways in which we can sort this out, so that you are not in constant fear that your robot is going to bump into somebody and you're going to go to jail for 20 years for negligent homicide or whatever it might be.
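
    Kaplan's idea of punishing a robot by interfering with its goal maps naturally onto how a value-maximizing agent chooses actions. A toy sketch, with invented payoffs and action names, of how a sanction changes what such an agent does; this is an illustration of the idea, not a real robot API:

    ```python
    # "Punishing" a learning agent by interfering with its goal: if a
    # prohibited shortcut triggers a penalty that outweighs its benefit,
    # a value-maximizing agent stops choosing it. Values are invented.

    def action_value(action: str, penalty: float) -> float:
        # Base payoff toward the agent's goal (e.g., fastest Starbucks run).
        base = {"shove_through_crowd": 10.0, "wait_for_gap": 7.0}[action]
        # Sanction applied only to the prohibited behavior.
        return base - (penalty if action == "shove_through_crowd" else 0.0)

    def best_action(penalty: float) -> str:
        return max(["shove_through_crowd", "wait_for_gap"],
                   key=lambda a: action_value(a, penalty))

    print(best_action(penalty=0.0))   # unsanctioned: "shove_through_crowd"
    print(best_action(penalty=5.0))   # sanctioned: "wait_for_gap"
    ```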
