Article "Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization’"
by David Z. Morris
July 15, 2017
Article "Elon Musk leads 116 experts calling for outright ban on killer robots"
Open letter signed by Tesla chief and Google’s Mustafa Suleyman urges UN to block use of lethal autonomous weapons to prevent third age of war
by Samuel Gibbs
August 20, 2017
https://youtu.be/SM__RSJXeHA
Richard Dawkins: A.I. might run the world better than humans do
Published on Sep 23, 2017
Quote:
Will A.I. take us over, and one day look back on this time period as the dawn of their civilization? Richard Dawkins posits an interesting idea, or at the very least a premise to a good science-fiction novel.
Richard Dawkins: When we come to artificial intelligence and the possibility of their becoming conscious we reach a profound philosophical difficulty. I am a philosophical naturalist. I am committed to the view that there’s nothing in our brains that violates the laws of physics, there’s nothing that could not in principle be reproduced in technology. It hasn’t been done yet, we’re probably quite a long way away from it, but I see no reason why in the future we shouldn’t reach the point where a human made robot is capable of consciousness and of feeling pain. We can feel pain, why shouldn’t they?
And this is profoundly disturbing because it kind of goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don’t see why they would not. And so this moral consideration of how to treat artificially intelligent robots will arise in the future, and it’s a problem which philosophers and moral philosophers are already talking about.
Once again, I’m committed to the view that this is possible. I’m committed to the view that anything that a human brain can do can be replicated in silicon.
And so I’m sympathetic to the misgivings that have been expressed by highly respected figures like Elon Musk and Stephen Hawking that, on the precautionary principle, we should worry about a takeover, perhaps even by robots of our own creation, especially if they reproduce themselves, potentially even evolve by reproduction, and don’t need us anymore.
This is a science-fiction speculation at the moment, but I think philosophically I’m committed to the view that it is possible, and like any major advance we need to apply the precautionary principle and ask ourselves what the consequences might be.
It could be said that the sum of not human happiness but the sum of sentient-being happiness might be improved; they might do a better job of running the world than we are, certainly than we are at present, and so perhaps it might not be a bad thing if we went extinct.
And our civilization, the memory of Shakespeare and Beethoven and Michelangelo, would persist in silicon rather than in brains and our form of life. And one could foresee a future time when silicon beings look back on a dawn age when the earth was peopled by soft, squishy, watery organic beings. And who knows, that might be better, but we’re really in science-fiction territory now.
Article "Artificial Intelligence Is Our Future. But Will It Save Or Destroy Humanity?"
by Patrick Caughill
September 29, 2017
Article "Stuart Russell wrote the textbook on AI - now he wants to save us from catastrophe"
Stuart Russell co-authored one of the most influential textbooks on artificial intelligence, and now more than ever society needs to consider what happens next if a general AI is actually achieved.
by Tamlin Magee
October 16, 2017
https://youtu.be/d5_N67la9tw
Nira Chamberlain: maths versus AI
Published on Apr 20, 2018
Quote:
How do you prevent AI from taking over the world? In this talk, Nira Chamberlain discusses how mathematics is providing crucial answers. Mathematical modelling is the most creative side of applied mathematics, which itself connects pure maths with science and technology.
Article "How Artificial Intelligence Could Increase the Risk of Nuclear War"
by Doug Irving
April 24, 2018
Article "Google’s Sergey Brin warns of the threat from AI in today’s ‘technology renaissance’"
Google co-founder says the company is giving ‘serious thought’ to problems like job destruction
by James Vincent
April 28, 2018
https://youtu.be/7gt8a_ETPRE
Risk in the sky?
Published on Sep 13, 2018
Quote:
The following description was updated Oct. 22 for clarification: Tests performed at the University of Dayton Research Institute’s Impact Physics Lab show that even small drones pose a risk to manned aircraft. The research was a comparative study between a bird strike and a drone strike on an aircraft wing, using a drone similar in weight to many hobby drones and a wing selected to represent a leading edge structure of a commercial transport aircraft. The drone and gel bird were the same weight and were launched at rates designed to reflect the relative combined speed of a fully intact drone traveling toward a commercial transport aircraft moving at a high approach speed.
https://youtu.be/dLRLYPiaAoA
27
Published on Mar 25, 2016