Page 2 of 2
Results 11 to 17 of 17

Thread: Miscellaneous

  1. #11


    An Open Letter from Artificial Intelligence and Robotics Researchers: Professor Stuart Russell

    Published on Oct 15, 2015

  2. #12


    How can you stop killer robots | Toby Walsh | TEDxBerlin

    Published on Oct 8, 2015

    Toby Walsh on "How can you stop killer robots" at TEDxBerlin

    Toby Walsh is one of the leading researchers in the world in Artificial Intelligence.

    He is currently working in Berlin thanks to a Humboldt Research Award. He is a Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia, and a Research Group Leader at NICTA, Australia's Centre of Excellence for ICT Research. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence for his contributions to AI research.

    Earlier this year, he was one of the initial signatories of an Open Letter calling for a ban on offensive autonomous weapons. The letter was also signed by Stephen Hawking, Elon Musk and Steve Wozniak. In total, the letter now has close to 20,000 signatures and has pushed this issue into the world's spotlight. The letter argues that we need to take action today to prevent an arms race in which these lethal autonomous weapons fall into the hands of terrorists and rogue nations.

  3. #13


    Exploring the ethics of lethal autonomous weapons

    Uploaded on Nov 6, 2015

    AJung Moon, a Ph.D. candidate at The University of British Columbia and co-founder of the Open Roboethics initiative (ORi), discusses the results of a survey that explores the ethics of lethal autonomous weapons.

    Read the full story here:
    "Most people want fully autonomous weapons banned: UBC survey"

    November 9, 2015

  4. #14


    Published on Jan 23, 2016

    Remarkable advances in artificial intelligence may soon have implications for the future of warfare. What if autonomous weapon systems replace both soldiers and generals?

  5. #15


    Lethal autonomous weapons

    Published on Apr 8, 2016

    Biography:
    Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum's Council on AI and Robotics. He has published over 150 papers on a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and global seismic monitoring. His books include "The Use of Knowledge in Analogy and Induction", "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald), and "Artificial Intelligence: A Modern Approach" (with Peter Norvig).

    Abstract:
    The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS). Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
    The UN has held three major meetings in Geneva under the auspices of the Convention on Certain Conventional Weapons, or CCW, to discuss the possibility of a treaty banning autonomous weapons. There is at present broad agreement on the need for "meaningful human control" over selection of targets and decisions to apply deadly force. Much work remains to be done on refining the necessary definitions and identifying exactly what should or should not be included in any proposed treaty.

    Wednesday, April 6, 2016 from 12:00 PM to 1:00 PM (PDT)
    Sutardja Dai Hall - Banatao Auditorium
    University of California, Berkeley

  6. #16
    "Don't be evil?"
    A survey of the tech sector’s stance on lethal autonomous weapons

    August 19, 2019

  7. #17
    Article "Are AI-Powered Killer Robots Inevitable?"
    Military scholars warn of a “battlefield singularity,” a point at which humans can no longer keep up with the pace of conflict.

    by Paul Scharre
    May 19, 2020

