
Thread: Miscellaneous

  1. #1

  2. #2
    Industry experts and gurus are facing the fact, and Martin Ford's book Rise of the Robots has spelled out clearly how AI could overtake humans. I see computers, algorithms and machines taking over the daily tasks of a contemporary human's routine. The shift under way is like the transition from an agriculture-based economy to today's more mechanized society. The downsides of AI are far-reaching, but there are key positive takeaways as well.

    A human doing the same monotonous task can face fatigue or demotivation, but a machine does not tire, so its productivity stays consistent. Since AI and machine learning learn from recurring patterns, a time could come when they master every pattern and become superintelligent, just as happened in chess: world champion Garry Kasparov won his 1996 match against IBM's Deep Blue 4-2, but the machine's potential had been unleashed, and Deep Blue won the 1997 rematch. AI can also reduce the chance of error and enhance accuracy.

    I believe the claim that superintelligence is inevitable by 2100 is a myth; the fact remains that nobody knows whether it will happen in decades or in centuries. The final word is that intelligence enables control, which is why a human can control an animal or a machine. Now that machine intelligence is being set in place, the question is whether we will see a future controlled by robots and computer programs, though such a future would still take years to unfold.

  3. #3


    How regulation today could avoid tomorrow’s A.I. disaster | Joanna Bryson

    Published on Apr 8, 2018

    Joanna Bryson isn't a fan of companies that can't be adults and hold themselves responsible. Too many tech companies, she argues, think that they're above the law and that they should create what they want, no matter who it hurts, and have society pick up the pieces later. This libertarian attitude might be fine in theory, or if you're a college philosophy major. But if you're a major company releasing something like unmanned flying machine guns upon the world, perhaps there should be some oversight. Tech companies, she argues, could potentially create something catastrophic that they can't take back. Which is why regulation over these tech behemoths is needed now more than ever.

    If we're coding AI and we understand that there are moral consequences, does that mean the programmer has to understand them? It isn't only the programmer, although we do think we need to train programmers to watch out for these kinds of situations, to know how and when to blow the whistle. There is a problem of people being over-reactive, and that costs companies, and I understand that, but we also have a sort of Nuremberg situation where we need everybody to be responsible. Ultimately, though, it isn't just about the programmers: the programmers work within the context of a company, and the companies work in the context of regulation, so it is about the law, it's about society.

    One of the papers that came out in 2017, by Professor Alan Winfield, argued that while legislatures can't be expected to keep up with the pace of technological change, they can keep up with which professional societies they trust. They already do this in various disciplines; it's just new for AI. You say you have to meet the moral standards of at least one professional organization, and that organization gives its rules about what's okay. That allows a kind of loose coupling. It would be wrong for professional organizations to enforce the law, to go after people and sue them; that's not what professional organizations are for. What they are for is keeping up with their own field and setting things like codes of conduct. So you want to bring those two things together, the executive government and the professional organizations, and have the legislature join the two.

    This is what I'm working hard to keep in the regulations: that it's always people in organizations who are accountable, so that they're motivated to demonstrate they followed due process, both the people who operate the AI and the people who developed it. It's like a car. When there's a car accident, normally the driver is at fault; sometimes the person they hit is at fault because they did something completely unpredictable. But sometimes the manufacturer did something wrong with the brakes, and that's a real problem. So we need to be able to show either that the manufacturer followed good practice and it really is the fault of the driver, or sometimes that there really isn't a fact of the matter, that the failure was unforeseeable in the past, but of course now it's happened, so in the future we'll be more careful.
    Something like that just happened recently in Europe. There was a case where somebody was driving... it wasn't a totally driverless car, but it had cruise control or something with some extra AI, and unfortunately the driver had a stroke. What happens a lot, and what automobile manufacturers have to watch for, is drivers falling asleep at the wheel, but this man had a stroke, which is different from falling asleep. He was still holding on, semi in control, but couldn't see anything; he hit a family and killed two of its three members. The survivor was the father, and he said he wasn't satisfied just to get money from insurance or a liability payout; he wanted to know that whoever had caused the accident was being held accountable.
