
Thread: Miscellaneous

  1. #81

  2. #82
    Article "Vatican Advisory Group Issues Call for AI Ethics"
    IBM and Microsoft have signed on to the Pontifical Academy for Life’s charter on artificial intelligence

    by John McCormick
    February 28, 2020

  3. #83


    Is facial recognition ethical?

    Premiered Mar 5, 2020

    In this episode of the Pretentious Geeks, we discuss the various ethical dilemmas presented by facial recognition technology.

  4. #84

  5. #85

  6. #86


    Regulating the rise of Artificial General Intelligence

    Jun 1, 2020

    As research around the world proceeds to improve the power, the scope, and the generality of AI systems, should developers adopt regulatory frameworks to help steer progress?

    What are the main threats that such regulations should guard against? In the midst of an intense international race to build more capable AI, are such frameworks doomed to be ineffective? Might they do more harm than good, hindering valuable innovation? Are there good precedents, from other fields of technology, of international agreements proving beneficial? Or is discussion of frameworks for the governance of AGI (Artificial General Intelligence) a distraction from more pressing issues, given the potentially long time scales before AGI becomes a realistic prospect?

    This 90-minute London Futurists live Zoom webinar featured panellists with deep insight into the issues of improving AI:

    *) Joanna Bryson, Professor of Ethics and Technology at the Hertie School, Berlin
    *) Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
    *) Nell Watson, tech ethicist, machine learning researcher, and social reformer

  7. #87


    Reflections on AI: Q&A with Clara Neppel

    Jun 29, 2020

    The TUM IEAI had the pleasure of speaking with Clara Neppel prior to her Speaker Series session on 18 June 2020 on the topic "Using Ethics Standardization and Certification for Establishing Trust in the AI Ecosystem".

    We were able to ask her some brief questions about her lecture, AI ethics, how to apply AI ethics in practice, and the role of academia and research institutions in creating frameworks for AI.

    Clara Neppel joined IEEE in 2017 after working with the European Patent Office (EPO), and now serves as Senior Director of the IEEE European office in Vienna, where she is responsible for the growth of IEEE's operations and presence in Europe, focusing on the needs of industry, academia, and government. She serves as a point of contact for initiatives on technology, engineering, and related public policy issues that help implement IEEE's continued global commitment to fostering technological innovation for the benefit of humanity. She holds a Ph.D. in Computer Science from the Technical University of Munich and a Master's in Intellectual Property Law and Management from the University of Strasbourg.

  8. #88


    Does conscious AI deserve rights? | Richard Dawkins, Joanna Bryson, Peter Singer & more | Big Think

    Jul 8, 2020

    Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.

    Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development, so that humanity does not create a slave class of conscious beings.

    One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.
    ---------------------------------------------------------------------------------
    TRANSCRIPT:

    RICHARD DAWKINS: When we come to artificial intelligence and the possibility of their becoming conscious, we reach a profound philosophical difficulty. I am a philosophical naturalist; I'm committed to the view that there is nothing in our brains that violates the laws of physics, there's nothing that could not, in principle, be reproduced in technology. It hasn't been done yet; we're probably quite a long way away from it, but I see no reason why in the future we shouldn't reach the point where a human-made robot is capable of consciousness and of feeling pain.

    BABY X: Da. Da.

    MARK SAGAR: Yes, that's right. Very good.

    BABY X: Da. Da.

    MARK SAGAR: Yeah.

    BABY X: Da. Da.

    MARK SAGAR: That's right.

    JOANNA BRYSON: So, one of the things that we did last year, which was pretty cool, the headlines, because we were replicating some psychology stuff about implicit bias—actually, the best one is something like 'Scientists show that AI is sexist and racist and it's our fault,' which, that's pretty accurate because it really is about picking things up from our society. Anyway, the point was, so here is an AI system that is so humanlike that it's picked up our prejudices and whatever and it's just vectors. It's not an ape, it's not going to take over the world, it's not going to do anything, it's just a representation, it's like a photograph. We can't trust our intuitions about these things.

    SUSAN SCHNEIDER: So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn't be surprising if within the next 30 to 80 years we start developing very sophisticated general intelligences. They may not be precisely like humans, they may not be as smart as us, but they may be sentient beings. If they're conscious beings, we need ways of determining whether that's the case. It would be awful if, for example, we sent them to fight our wars, force them to clean our houses, made them essentially a slave class. We don't want to make that mistake, we want to be sensitive to those issues, so we have to develop ways to determine whether artificial intelligence is conscious or not.

    ALEX GARLAND: The Turing Test was a test set by Alan Turing, the father of modern computing. He understood that at some point the machines they were working on could become thinking machines as opposed to just calculating machines and he devised a very simple test.

    DOMHNALL GLEESON (IN CHARACTER): It's when a human interacts with a computer and if the human doesn't know they're interacting with a computer the test is passed.

    DOMHNALL GLEESON: And this Turing Test is a real thing and it's never, ever been passed.

    ALEX GARLAND: What the film does is engage with the idea that it will, at some point, happen. The question is what that leads to.

    MARK SAGAR: So, she can see me and hear me. Hey, sweetheart, smile at Dad. Now, she's not copying my smile, she's responding to my smile. We've got different sorts of neuromodulators, which you can see up here. So, for example, I'm going to abandon the baby, I'm just going to go away and she's going to start wondering where I've gone. And if you watch up where the mouse is you should start seeing cortisol levels and other sorts of neuromodulators rising. She's going to get increasingly—this is a mammalian maternal separation distress response. It's okay, sweetheart. It's okay. Aw. It's okay. Hey. It's okay.

    RICHARD DAWKINS: This is profoundly disturbing because it goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don't see why they would not. And so, this moral consideration of how to treat artificially...

    Read the full transcript

  9. #89

  10. #90

