View Full Version : Miscellaneous



Airicist
5th May 2013, 18:27
https://youtu.be/ETM9qoBqEtk

Robot Takeover World 2013

Published on Apr 29, 2013


Dr. Samuel L Douglas makes a special guest appearance and talks about the robot takeover taking place on Earth as we speak, and how technology is advancing for the worse.

Airicist
3rd April 2014, 17:07
https://youtu.be/Y4a0mS7Rio8

10 future technologies that are going to kill us all

Published on Nov 12, 2013


Lasers, AI, geo-engineering - tech is advancing at an alarming rate, and one day it's bound to all go tits-up. Here's a troubling look at 10 future technologies that are going to kill us all.

"11 technologies that are going to kill us all (https://www.techradar.com/news/world-of-tech/future-tech/11-technologies-that-are-going-to-kill-us-all-1147532)"
Big dogs, tiny planes and frickin' laser beams

by Gary Marshall
May 2, 2013

Airicist
8th May 2014, 23:03
Article "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough? (https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html)'"

by Stephen Hawking, Stuart Russell, Max Tegmark, Frank Wilczek
May 1, 2014

Airicist
19th July 2014, 00:57
https://vimeo.com/101083370

When will robots take over the world?
July 18, 2014


At this year's Aspen Ideas Festival, we asked a group of futurists, technology experts, and artists to predict the future of robotics. So, will machines revolt against humans? Maybe not, but as Duke University professor Missy Cummings explains, they may have already won. "We don't even realize where robots are in our world," she says.

Airicist
17th November 2014, 12:19
https://youtu.be/Ze0_1vczikA

Elon Musk: Artificial Intelligence Could Wipe Out Humanity

Published on Oct 8, 2014


The Tesla and SpaceX C.E.O. discusses his fears at Vanity Fair's New Establishment Summit.

Article "Elon Musk: Robots Could Delete Humans Like Spam (https://www.businessinsider.com/elon-musk-robots-could-delete-humans-like-spam-2014-10)"

by James Cook
October 9, 2014

Article "Elon Musk: Robots Could Start Killing Us All Within Five Years (https://www.businessinsider.com/elon-musk-killer-robots-will-be-here-within-five-years-2014-11)"

by James Cook
November 17, 2014

Airicist
5th December 2014, 02:09
https://youtu.be/fFLVyWBDTfo

Stephen Hawking: 'AI could spell end of the human race'

Published on Dec 2, 2014


Professor Stephen Hawking (https://pr.ai/showthread.php?7507) has told the BBC that artificial intelligence could spell the end for the human race.
In an interview after the launch of a new software system designed to help him communicate more easily, he said there were many benefits to new technology but also some risks.

Article "Stephen Hawking warns artificial intelligence could end mankind (https://www.bbc.com/news/technology-30290540)"

by Rory Cellan-Jones
December 2, 2014

Airicist
25th January 2015, 19:29
https://youtu.be/CbNy67HrvOo

Top scientists worried about artificial intelligence

Published on Jan 19, 2015


Scientists are pushing to advance artificial intelligence and create smart machines. But now Stephen Hawking and Elon Musk have flagged that this technology could be dangerous.

Airicist
29th January 2015, 15:30
Article "Bill Gates Also Worries Artificial Intelligence Is A Threat (https://www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat)"

by Eric Mack
January 28, 2015

"Hi Reddit, I’m Bill Gates and I’m back for my third AMA. Ask me anything. (https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third)"

Bill Gates:

"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Airicist
24th February 2015, 18:19
https://youtu.be/qsKsBualNT8

Unchecked AI will bring on human extinction, with Michael Vassar

Published on Feb 24, 2015


Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity. The only thing that could save us is if due caution were observed and a framework installed to prevent such a thing from happening. Yet Vassar makes note that AI itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate these ideas to the public.

Airicist
24th February 2015, 18:41
https://youtu.be/fAWchTML44U

Scientists Build Robot That Understands Rage

Published on May 27, 2015


Before you scream in terror, the researchers responsible for the literal rage machine did it for a good reason - to help understand and deal with people who are enraged themselves. But according to artificial intelligence experts, it's not the angry robots we should worry about, but the ones that show no expression at all...

Airicist
22nd April 2015, 19:07
https://youtu.be/1DW2HuGSUBc

How dangerous is Artificial Intelligence?

Published on Apr 22, 2015


In Avengers: Age of Ultron, the villain (Ultron) starts out as an artificial intelligence experiment gone wrong. Is this just Hollywood storytelling, or should we be worried about a future dictated by robot overlords? To help answer this question, we’ve teamed up with the awesome Rusty Ward over at Science Friction!

Airicist
30th April 2015, 01:41
https://youtu.be/Vod6JRYZj8Y

Artificial Intelligence: The Intersection of Opportunity and Fear

Published on Apr 29, 2015


Professors and researchers often speak about artificial intelligence as a type of computing that will present many positive opportunities for smarter and more efficient machines and technologies. Many in the general public, however, often associate AI with the apocalyptic fictional images they see on TV and in film.

While there are those who are overly fearful or overly excited about AI - there are many in between who understand both the potential opportunities granted by AI and the need for its deliberate and careful implementation.

Airicist
17th May 2015, 20:34
https://youtu.be/T2ppbNHk2FM

The NSA Actually Has a Skynet Program

Published on May 17, 2015


You know what Skynet is, right? The insane global digital defense system that inadvertently started the apocalypse in the Terminator series? No, the NSA uses it for something else. Which begs the question... why, NSA?

Article "The NSA has an actual Skynet program (https://www.wired.com/2015/05/nsa-actual-skynet-program)"

by Kim Zetter
June 8, 2015

Airicist
22nd June 2015, 22:08
https://youtu.be/FeKSLkREjUY

Top 5 facts about the impending Robopocalypse

Published on Jun 22, 2015


In case movie franchises like Terminator and The Matrix haven't already made it totally clear, we'll soon be calling robots "master". Here's what we've figured out so far. Welcome to WatchMojo's Top 5 Facts; the series where we reveal – you guessed it – five random facts about a fascinating topic. In today's instalment we’re counting down five things you probably didn't know about the impending Robopocalypse.

Airicist
24th June 2015, 20:30
https://youtu.be/K5sJA2uBJoo

Elon Musk and Stephen Hawking fear a robot Apocalypse. But a major physicist disagrees

Published on Jun 24, 2015


All new technology is frightening, says physicist Lawrence Krauss. But there are many more reasons to welcome machine consciousness than to fear it.

Transcript - I see no obstacle to computers eventually becoming conscious in some sense. That’ll be a fascinating experience and as a physicist I’ll want to know if those computers do physics the same way humans do physics. And there’s no doubt that those machines will be able to evolve computationally potentially at a faster rate than humans. And in the long term the ultimate highest forms of consciousness on the planet may not be purely biological. But that’s not necessarily a bad thing. We always present computers as if they don’t have capabilities of empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It’s a learning machine and ultimately it would learn from its experience like a biological conscious being. And therefore it’s hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

Elon Musk and others who have expressed concern and Stephen Hawking are friends of mine and I understand their potential concerns but I’m frankly not as concerned about AI in the near term at the very least as many of my friends and colleagues are. It’s far less powerful than people imagine. I mean you try to get a robot to fold laundry and I’ve just been told you can’t even get robots to fold laundry. Someone just wrote me they were surprised when I cited an elevator as an old example of the fact that when you get in an elevator it’s a primitive form of a computer and you’re giving up control of the fact that it’s going to take you where you want to go. Cars are the same thing.

Machines are useful because they’re tools that help us do what we want to do. And I think computation machines are good examples of that. One has to be very careful in creating machines to not assume they’re more capable than they are. That’s true in cars. That’s true in vehicles that we make. That’s true in weapons we create. That’s true in defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are and don’t need more control and monitoring.

I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous but ultimately machines and computational machines are improving our lives in many ways. We of course have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. The fact that teenagers aren’t talking to each other but always looking at their phones – not just teenagers – I was just in a restaurant here in New York this afternoon and half the people were not talking to the people they were with but were staring at their phones. Well that may not be a good thing for societal interaction and people may have to come to terms with that. But I don’t think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do what they would otherwise do, more effectively.

Airicist
25th June 2015, 19:19
Article "The Coming Robot Dystopia (https://www.foreignaffairs.com/articles/2015-06-16/coming-robot-dystopia)"
All Too Inhuman

by Illah Reza Nourbakhsh (https://pr.ai/showthread.php?11656)
June 16, 2015

Airicist
27th June 2015, 09:16
https://vimeo.com/12179181

Risks of emerging technology part 3 robotics
May 31, 2010

Airicist
13th July 2015, 13:23
https://youtu.be/uOQ_8Fq3q14

Predicting AI - Shanghai

Published on Jul 13, 2015


Stuart Armstrong presents a talk about the risks and rewards of long and short term AI - and why he chose to work in the field of extreme AI security.

Airicist
26th July 2015, 09:03
https://youtu.be/u3S0HD4_frI

Future Day - Patrick Robotham - Existential Risk

Published on Mar 4, 2014

Airicist
26th July 2015, 09:05
https://youtu.be/HarfTKXzDtk

Published on Feb 28, 2014


Cybersecurity expert Peter W. Singer discusses the similarities between drones and computer viruses. Singer is the author of Cybersecurity and Cyberwar: What Everyone Needs to Know. You can learn more at cybersecuritybook.com.

Peter W. Singer: There's been an enormous amount of changing forces on warfare in the twenty-first century. And they range from new actors in war like private contractors, the Blackwaters of the world, to the growth of warlord and child soldier groups, to technologic shifts. The introduction of robotics to cyber. And one of the interesting things that ties these together is how not only the who of war is being expanded but also the where and the when. So one of the things that links, for example, drones and robotics with cyber weapons is that you're seeing a shift in both the geographic location of the human role. Humans are still involved. We're not in the world of the Terminator. Humans are still involved but there's been a geographic shift where the operation can be happening in Pakistan but the person flying the plane might be back in Nevada 7,000 miles away.

Or on the cyber side where the software might be hitting Iranian nuclear research centrifuges like what Stuxnet did but the people who designed it and decided to send it are, again, thousands of miles away. And in that case it was a combined U.S./Israeli operation. One of the next steps in this both with the physical side of robotics and the software side of cyber is a shift in that human role -- not just geographically but chronologically where the humans are still making decisions but they're sending the weapon out in the world to then make its own decisions as it plays out there. In robotics we think about this as autonomy. With Stuxnet it was a weapon. It was a weapon like anything else in history, you know, a stone, a drone -- it caused physical damage.

But it was sent out in the world on a mission in a way no previous weapon has done. Go out, find this one target and cause harm to that target and nothing else. And so it plays out over a matter of, you know, Stuxnet plays out over a series of time. It also is interesting because it's the first weapon that can be both here, there, everywhere and nowhere. Unlike a stone. Unlike a drone. It's not a thing and so that software is hitting the target, those Iranian nuclear research facilities, but it also pops up in 25,000 other computers around the world. That's actually how we discover it, how we know about it. The final thing that makes this interesting is it introduces a difficult ethical wrinkle.

On one hand we can say this may have been the first ethical weapons ever developed. Again whether we're talking about the robots or Stuxnet, they can be programmed to do things that we would describe as potentially ethical. So Stuxnet could only cause harm to its intended target. Yet popped up in 25,000 computers around the world but it could only harm the ones with this particular setup, this particular geographic location of doing nuclear research. In fact, even if you had nuclear centrifuges in your basement, it still wouldn't harm them. It could only hit those Iranian ones. Wow, that's great but as the person who discovered it so to speak put it, "It's like opening Pandora's box." And not everyone is going to program it that way with ethics in mind.

Directed/Produced by Jonathan Fowler and Dillon Fitton

Airicist
26th July 2015, 09:05
https://vimeo.com/86028243

Accepting Artificial Intelligence - A robot designed to decrease the fear of future robot technology
February 6, 2014


This video shows the result of my graduation project: a robot with face detection, designed to decrease the fear of future (robot) technology for people with technophobia.

About the project: Accepting Artificial Intelligence
Because technology is changing more rapidly every day, not everyone has a positive view of the future. Some people fear and distrust technology, especially technology of the future; this is called technophobia. And because there is a good chance that we can reach AI with human intelligence this century, which can give us many benefits, it is important for those who fear this development to get a more positive view of the future. Therefore I developed a robot that will help take away the fear of people with technophobia.

Video production:
Robin de Bruin & Sara Dubbeldam

Music:
Amon Tobin - Piece of Paper

Airicist
26th July 2015, 14:03
https://youtu.be/lNK2a8VQ1hc

The Terrifying Promise of Robot Bugs

Published on May 5, 2013


Imitating nature to build a better (or possibly more terrifying) future. We've been trying to build flapping-wing robots for hundreds of years, and now, ornithopters are finally being developed, and may be used mostly for military purposes.

Piezoelectrics make those little bugs possible, and also enhance the ability of robot arms to feel, in other news from the International Journal of Robotics.

Airicist
26th July 2015, 14:05
Article "Robotics forecast: cool with a chance of lost humanity (https://arstechnica.com/science/2013/04/robotics-forecast-cool-with-a-chance-of-lost-humanity)"

by Scott K. Johnson
April 13, 2013

Airicist
15th September 2015, 23:10
Article "How safe can artificial intelligence be? (https://www.bbc.com/news/science-environment-34249500)"

by David Shukman
September 15, 2015

Airicist
8th October 2015, 11:03
https://youtu.be/3M9bXTD8I7M

How to tell if your robot is obsessed with you

Published on Oct 1, 2015


Thomas Kuc became friends with a robot and got way more than he bargained for.

Airicist
15th October 2015, 18:47
https://youtu.be/s8KGHKIF1mU

Killer robots, the end of humanity, and all that: What should a good AI researcher do?

Published on Aug 12, 2015


Buenos Aires, July 29, 2015.

Talk by Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley; Adjunct Professor of Neurological Surgery, University of California, San Francisco.

Hear an update on the campaign to ban lethal autonomous weapons, as well as the fears that AI poses an existential threat to mankind.

Airicist
11th November 2015, 10:22
https://vimeo.com/144847615

Tim Urban (https://www.linkedin.com/in/tim-urban-56927430) - The Road to Superintelligence

Published on Nov 10, 2015

"The AI Revolution: The Road to Superintelligence (https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)"

by Tim Urban (https://www.linkedin.com/in/tim-urban-56927430)
January 22, 2015

"The AI Revolution: Our Immortality or Extinction (https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html)"

by Tim Urban (https://www.linkedin.com/in/tim-urban-56927430)
January 27, 2015

Airicist
21st March 2016, 14:03
Article "Artificial Intelligence Risk – What Researchers Think is Worth Worrying About (https://emerj.com/ai-market-research/artificial-intelligence-risk)"

by Daniel Faggella (https://pr.ai/showthread.php?14762)
March 20, 2016

Article "Exploring the risks of artificial intelligence (https://techcrunch.com/2016/03/21/exploring-the-risks-of-artificial-intelligence)"

by Daniel Faggella (https://pr.ai/showthread.php?14762)
March 21, 2016

Airicist
2nd May 2016, 18:30
https://youtu.be/mVUbBXLVCYI

Top 10 ways to survive the robopocalypse

Published on May 2, 2016

Airicist
14th May 2016, 13:21
Article "TechEmergence Surveys Experts on AI Risks: Some Surprises Emerge (https://www.singularityweblog.com/techemergence-surveys-experts-on-ai-risks)"

by Daniel Faggella (https://pr.ai/showthread.php?14762)
May 9, 2016

Airicist
19th May 2016, 22:45
Article "How to Create a Malevolent Artificial Intelligence (https://www.technologyreview.com/s/601519/how-to-create-a-malevolent-artificial-intelligence)"
If cybersecurity experts are to combat malevolent artificial intelligence, they will need to know how such a system can emerge, say computer scientists.

by Federico Pistono, Roman V. Yampolskiy
May 19, 2016

"Unethical Research: How to Create a Malevolent Artificial Intelligence (https://arxiv.org/ftp/arxiv/papers/1605/1605.02817.pdf)"

by Federico Pistono, Roman V. Yampolskiy

Airicist
25th May 2016, 00:19
Article "Fearing the Robot Rebellion (https://www.huffpost.com/entry/fearing-the-robot-rebelli_b_10111398)"

by Leona Foxx
May 24, 2016

Airicist
26th May 2016, 20:50
https://youtu.be/BekpP5ARKSY

Why is Elon Musk afraid of A.I.?

Published on May 26, 2016


Elon Musk, along with a bevy of smart people, have expressed concern over our experiments with artificial intelligence, particularly with the weaponization of sentient AI.

So have these guys been watching too much Terminator, or is there a larger existential crisis we should be worried about?

Airicist
10th June 2016, 20:20
https://youtu.be/g0Pb671AFO0

Prof. Max Tegmark and Nick Bostrom speak to the UN about the threat of AI

Published on Jun 10, 2016


Rising to the Challenges of International Security and the Emergence of Artificial Intelligence

7 October 2015, United Nations Headquarters, New York

Airicist
14th July 2016, 09:43
Article "A.I. should be regarded as an apprentice to people, not a tool, one researcher says (https://www.computerworld.com/article/3095327/big-data/how-human-aware-ai-could-save-us-from-the-robopocalypse.html)"

by Katherine Noyes
July 13, 2016

Airicist
16th July 2016, 21:31
Article "What's the worst that could happen? From 'enslaving mankind' to 'destroying the universe', experts reveal how AI could turn evil (https://www.dailymail.co.uk/sciencetech/article-3605349/What-s-worst-happen-enslaving-mankind-destroying-universe-experts-reveal-AI-turn-evil.html)"
Other scenarios include turning humans into cyborgs using implants
AI could also set up a total surveillance state, or exploit an existing one
The list was drawn up by Roman Yampolskiy and Federico Pistono
They say by expecting the worst, scientists can help safeguard humanity

by Ellie Zolfagharifard
May 23, 2016

Airicist
12th August 2016, 16:37
https://youtu.be/gimu5nXWaWU

A.I. Apocalypse: more myth than reality | Steven Pinker

Published on Aug 12, 2016


Steven Pinker believes there's some interesting gender psychology at play when it comes to the robopocalypse. Could artificial intelligence become evil or are alpha male scientists just projecting?

"AI Won't Takeover the World, and What Our Fears of the Robopocalypse Reveal (https://bigthink.com/videos/steven-pinker-on-artificial-intelligence-apocalypse)"

August 12, 2016

Airicist
19th October 2016, 18:21
https://youtu.be/8nt3edWLgIg

Can we build AI without losing control over it? | Sam Harris

Published on Oct 19, 2016


Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Airicist
10th November 2016, 10:00
https://youtu.be/1kmRfrdzsKk

Cafe Neu Romance 2016: Philip Hilm: The existential risk from artificial intelligence

Published on Nov 9, 2016


On 25 October 2016, Philip Hilm presented his lecture The existential risk from artificial intelligence at the Institute of Intermedia of the Czech Technical University in Prague.

Philip Hilm earlier had a career as a professional poker player and is now an artificial intelligence researcher.

Airicist
24th May 2017, 14:24
https://youtu.be/Wu8s0tp9yzY

AI Can Now Self-Reproduce—Should Humans Be Worried? | Eric Weinstein

Published on May 22, 2017


Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be outsmarted first by fairly dumb AI, says Eric Weinstein. Humans rarely create products with a reproductive system—you never have to worry about waking up one morning to see that your car has spawned a new car on the driveway (and if it did: cha-ching!), but artificial intelligence has the capability to respond to selective pressures, to self-replicate and spawn daughter programs that we may not easily be able to terminate. Furthermore, there are examples in nature of organisms without brains parasitizing more complex and intelligent organisms, like the mirror orchid. Rather than spend its energy producing costly nectar as a lure, it merely fools the bee into mating with its lower petal through pattern imitation: this orchid hijacks the bee's brain to meet its own agenda. Weinstein believes all the elements necessary for AI programs to parasitize humans and have us serve its needs already exists, and although it may be a "crazy-sounding future problem which no humans have ever encountered," Weinstein thinks it would be wise to devote energy to these possibilities that are not as often in the limelight.

Transcript: There are a bunch of questions next to or adjacent to general artificial intelligence that have not gotten enough alarm because, in fact, there’s a crowding out of mindshare. I think that we don’t really appreciate how rare the concept of selection is in the machines and creations that we make. So in general, if I have two cars in the driveway I don’t worry that if the moon is in the right place in the sky and the mood is just right that there’ll be a third car at a later point, because in general I have to go to a factory to get a new car. I don’t have a reproductive system built into my sedan. Now almost all of the other physiological systems—what are there, perhaps 11?—have a mirror.

So my car has a brain, so it’s got a neurological system. It’s got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are really self-replicative? And the fact of the matter is that it’s very tough to do at the atomic layer but there is a command in many computer languages called Spawn. And Spawn can effectively create daughter programs from a running program.
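The "Spawn" idea the transcript mentions can be sketched in a few lines. This is a toy illustration, not anything from the talk: the language (Python), the use of the standard `subprocess` module, and the `spawn_daughter` name are all my assumptions, standing in for the generic spawn command Weinstein refers to.

```python
# Toy sketch (hypothetical): one running program launching a "daughter"
# process that executes its own code, the minimal ingredient Weinstein
# points to when he says programs can spawn daughter programs.
import subprocess
import sys

def spawn_daughter(message: str) -> str:
    """Launch a child Python interpreter and return what it printed."""
    # The daughter is a genuinely separate process with its own code.
    child_code = f"print('daughter says: ' + {message!r})"
    result = subprocess.run(
        [sys.executable, "-c", child_code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(spawn_daughter("hello"))  # prints: daughter says: hello
```

A genuinely self-replicating program would pass along its own source (e.g. the contents of `__file__`) rather than a fixed snippet, which is why lineages of such programs can outlive the parent process.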

Now as soon as you have the ability to reproduce you have the possibility that systems of selective pressures can act because the abstraction of life will be just as easily handled whether it’s based in our nucleotides, in our A, C, Ts and Gs, or whether it’s based in our bits and our computer programs. So one of the great dangers is that what we will end up doing is creating artificial life, allowing systems of selective pressures to act on it and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they’re not fully intelligent.
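The combination described above, replication with variation plus selective pressure, can be simulated in a dozen lines. A hedged sketch under my own assumptions (the `evolve` and `fit` names and the numeric stand-ins for "programs" are invented for illustration; real evolving programs would be far messier):

```python
# Toy model of selection acting on self-replicating "programs",
# here represented as plain numbers for simplicity.
import random

def evolve(population, fitness, generations, seed=0):
    """Each generation, every 'program' spawns a mutated daughter,
    and selection keeps only the fittest half of the combined pool."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    for _ in range(generations):
        # replication with small random variation ("daughter programs")
        offspring = [p + rng.gauss(0, 0.1) for p in population]
        # selective pressure: survivors are the fittest individuals
        pool = sorted(population + offspring, key=fitness, reverse=True)
        population = pool[:len(population)]
    return population

# Toy fitness: how close a number-"program" is to some target behaviour.
target = 1.0
fit = lambda x: -abs(x - target)
final = evolve([0.0] * 8, fit, generations=50)
```

The point matches Weinstein's: nothing in this loop is intelligent, yet the population still adapts toward the target, because selection acts on anything that copies itself with variation.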

Further if we look to natural selection and sexual selection in the biological world we find some very strange systems, plants or animals with no mature brain to speak of effectively outsmart species which do have a brain by hijacking the victim species’ brain to serve the non-thinking species. So, for example, I’m very partial to the mirror orchid which is an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male in that pollinator species detects a sexual possibility the flower does not need to give up costly and energetic nectar in order to attract the pollinator. And so if the plant can fool the pollinator to attempt to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled then that plant is rewarded. So the plant is actually using the brain of the pollinator species, let’s say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males.

Airicist
20th August 2017, 22:11
Article "Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization’ (https://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2)"

by David Z. Morris
July 15, 2017

Airicist
20th August 2017, 22:12
Article "Elon Musk leads 116 experts calling for outright ban on killer robots (https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war)"
Open letter signed by Tesla chief and Google’s Mustafa Suleyman urges UN to block use of lethal autonomous weapons to prevent third age of war

by Samuel Gibbs
August 20, 2017

Airicist
24th September 2017, 19:19
https://youtu.be/SM__RSJXeHA

Richard Dawkins (https://pr.ai/showthread.php?t=21817): A.I. might run the world better than humans do

Published on Sep 23, 2017


Will A.I. take us over, and one day look back on this time period as the dawn of their civilization? Richard Dawkins posits an interesting idea, or at the very least a premise to a good science-fiction novel.

Richard Dawkins: When we come to artificial intelligence and the possibility of their becoming conscious we reach a profound philosophical difficulty. I am a philosophical naturalist. I am committed to the view that there’s nothing in our brains that violates the laws of physics, there’s nothing that could not in principle be reproduced in technology. It hasn’t been done yet, we’re probably quite a long way away from it, but I see no reason why in the future we shouldn’t reach the point where a human made robot is capable of consciousness and of feeling pain. We can feel pain, why shouldn’t they?

And this is profoundly disturbing because it kind of goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don’t see why they would not. And so this moral consideration of how to treat artificially intelligent robots will arise in the future, and it’s a problem which philosophers and moral philosophers are already talking about.

Once again, I’m committed to the view that this is possible. I’m committed to the view that anything that a human brain can do can be replicated in silicon.

And so I’m sympathetic to the misgivings that have been expressed by highly respected figures like Elon Musk and Stephen Hawking that on the precautionary principle we should worry about a takeover, perhaps even by robots of our own creation, especially if they reproduce themselves and potentially even evolve by reproduction and don’t need us anymore.

This is a science-fiction speculation at the moment, but I think philosophically I’m committed to the view that it is possible, and like any major advance we need to apply the precautionary principle and ask ourselves what the consequences might be.

It could be said that the sum of not human happiness but the sum of sentient-being happiness might be improved; they might do a better job of running the world than we are, certainly than we are doing at present, and so perhaps it might not be a bad thing if we went extinct.

And our civilization, the memory of Shakespeare and Beethoven and Michelangelo, would persist in silicon rather than in brains and our form of life. And one could foresee a future time when silicon beings look back on a dawn age when the earth was peopled by soft squishy watery organic beings, and who knows, that might be better, but we’re really in science-fiction territory now.

Airicist
30th September 2017, 10:37
Article "Artificial Intelligence Is Our Future. But Will It Save Or Destroy Humanity? (https://futurism.com/artificial-intelligence-is-our-future-but-will-it-save-or-destroy-humanity)"

by Patrick Caughill
September 29, 2017

Airicist
31st January 2018, 23:14
Article "Stuart Russell wrote the textbook on AI - now he wants to save us from catastrophe (https://www.techworld.com/tech-innovation/stuart-russell-wrote-textbook-on-ai-he-wants-save-us-from-catastrophe-3665467)"
Stuart Russell co-authored one of the most influential textbooks on artificial intelligence, and now more than ever society needs to consider what happens next if a general AI is actually achieved.

by Tamlin Magee
October 16, 2017

Airicist
22nd April 2018, 00:47
https://youtu.be/d5_N67la9tw

Nira Chamberlain : maths versus AI

Published on Apr 20, 2018


How do you prevent AI from taking over the world? In this talk, Nira Chamberlain discusses how mathematics is providing crucial answers. Mathematical modelling is the most creative side of applied mathematics which itself connects pure maths with science and technology.

Airicist
25th April 2018, 10:27
Article "How Artificial Intelligence Could Increase the Risk of Nuclear War (https://www.rand.org/blog/articles/2018/04/how-artificial-intelligence-could-increase-the-risk.html)"

by Doug Irving
April 24, 2018

Airicist
28th April 2018, 13:04
Article "Google’s Sergey Brin warns of the threat from AI in today’s ‘technology renaissance’ (https://www.theverge.com/2018/4/28/17295064/google-ai-threat-sergey-brin-founders-letter-technology-renaissance)"
Google co-founder says the company is giving ‘serious thought’ to problems like job destruction

by James Vincent
April 28, 2018

Airicist
30th October 2018, 07:20
https://youtu.be/7gt8a_ETPRE

Risk in the sky? (https://www.udayton.edu/blogs/udri/18-09-13-risk-in-the-sky.php)

Published on Sep 13, 2018


The following description was updated Oct. 22 for clarification: Tests performed at the University of Dayton Research Institute’s Impact Physics Lab show that even small drones pose a risk to manned aircraft. The research was a comparative study between a bird strike and a drone strike on an aircraft wing, using a drone similar in weight to many hobby drones and a wing selected to represent a leading edge structure of a commercial transport aircraft. The drone and gel bird were the same weight and were launched at rates designed to reflect the relative combined speed of a fully intact drone traveling toward a commercial transport aircraft moving at a high approach speed.

Airicist
8th February 2019, 22:05
https://youtu.be/dLRLYPiaAoA

27

Published on Mar 25, 2016

Airicist
4th March 2019, 17:10
https://youtu.be/LvdSIdILCAo

The biggest A.I. risks: Superintelligence and the elite silos (https://bigthink.com/videos/ai-superintelligence) | Ben Goertzel

Published on Mar 4, 2019


When it comes to raising superintelligent A.I., kindness may be our best bet.

- We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be "insane" to think we can control what it does.

- What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? To raise it in a way that's imbued with compassion and understanding, says Goertzel.

- One way to limit "people doing bad things out of frustration" may be to plug the entire world into the A.I. economy so that developers, from whatever country, can monetize their code.

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; chairman of AI software company Novamente LLC; and chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

Airicist
31st May 2019, 22:48
Article "N.Y. Mulls Commission to Govern AI in Autonomous Machines (https://www.govtech.com/products/NY-Mulls-Commission-to-Govern-AI-in-Autonomous-Machines.html)"
Killer humanoids are just one of the areas where lawmakers are calling for regulation. A bill to create a group that would look at the issues around this emerging technology is on track to pass.

by Michael Gormley
May 31, 2019

Airicist
26th August 2019, 23:10
Article "Paperclip-making robots 'wipe out humanity' in killer AI Doomsday experiment (https://www.dailystar.co.uk/news/latest-news/paperclip-making-robots-wipe-out-18982289)"
A famous philosopher tested the idea of an AI turning Earth into a giant paperclip factory...and it went badly wrong.

by Sofie Jackson
August 26, 2019

Airicist
30th September 2019, 18:07
https://youtu.be/Nnf8P5A_saE

Top 10 frightening developments in AI

Sep 30, 2019


The singularity is nigh. For this list, we're looking at programs and experiments that show how alien, dangerous, or just downright creepy AI can be. While AI is making promising inroads into medical research, agriculture, and education, many experts also worry it could escape our control, or become catastrophic in the wrong hands. Welcome to WatchMojo, and today we're counting down our picks for the top 10 frightening developments in artificial intelligence.

Airicist
26th October 2019, 16:34
https://youtu.be/y3RIHnK0_NE

Boston Dynamics: new robot makes soldiers obsolete

Oct 26, 2019

Airicist
28th May 2020, 14:11
https://youtu.be/oRJf8NGkxaI

Will future robots and AI take over? | How sci-fi inspired science

May 28, 2020


Television and film have often depicted robots and artificial intelligence as helpful assistants doing menial chores for humans, but sometimes also as trying to destroy humanity. What does the future hold for their real-life counterparts?

Airicist
10th June 2020, 14:33
Article "What makes AI algorithms dangerous? (https://bdtechtalks.com/2020/06/10/ai-weapons-of-math-destruction)"

by Ben Dickson
June 10, 2020

Airicist
14th June 2020, 21:19
https://youtu.be/jr3Ewkfvvm4

After AI

Jun 14, 2020


Artificial Intelligence seems nearer every day, and many people worry about a conflict between us and robots & computer minds, but what would life be like After AI?

Airicist
29th June 2020, 15:59
https://youtu.be/91TRVubKcEM

Is AI a species-level threat to humanity? | Elon Musk, Michio Kaku, Steven Pinker & more | Big Think

Jun 29, 2020


When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.

In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity; if it's not a species-level threat, it will still upend our world as we know it.

What's your take on this debate? Let us know in the comments!
----------------------------------------------------------------------------------
TRANSCRIPT (https://bigthink.com/videos/will-evil-ai-kill-humanity):

MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It'll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You'll talk to your car. You'll argue with your car. Your car will give you the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance, servicing of robots. Not to mention, robots that are software programs that you talk to and make life more convenient. However, let's not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.

SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light, it shines because it is just reflected sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.

MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot of others think this is the central thing, we have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether this heat-seeking missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. And that shows that it's a complete red herring to think that you're safe from future AI if it's not conscious. Our universe didn't use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up and now our universe is aware of itself.

BILL GATES: I do think we have to worry about it. I don't think it's inherent that as we create our super intelligence that it will necessarily always have the same goals in mind that we do.

ELON MUSK: We just don't know what's going to happen once there's intelligence substantially greater than that of a human brain.

STEPHEN HAWKING: I think that development of full artificial intelligence could spell the end of the human race.

YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning, and it's an idea very much inspired, a little bit, by the brain: constructing a machine that has a very large network of very simple elements, very similar to the neurons in the brain, and then the machine learns by basically changing the efficacy of the connections between those neurons.

MAX TEGMARK: AGI—artificial general intelligence—that's the dream of the field of AI: to build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And, if that happens, you have to ask yourself if that might lead the machines to get not just a little better than us but way better at all goals—having superintelligence. And the argument for that is actually really interesting and goes back to the '60s, to the mathematician I.J. Good, who pointed out that the goal of building an intelligent machine is, in and of itself, something that you could do with intelligence. So, once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines. Except, they might do it thousands or millions of times faster...
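Good's argument above can be illustrated with a toy back-of-the-envelope model. This is purely a sketch: the doubling factor, the starting timescale, and the assumption that a twice-as-capable designer works twice as fast are all illustrative assumptions, not claims from the transcript.

```python
# Toy model of I.J. Good's "intelligence explosion": each AI generation
# designs its successor, and a smarter designer finishes its work faster,
# so the time per generation shrinks geometrically.

capability = 1.0           # relative design ability (1.0 = human engineers)
years_for_next_gen = 10.0  # assumed time for humans to build the first AGI
elapsed = 0.0

for generation in range(1, 11):
    elapsed += years_for_next_gen
    capability *= 2          # assume each generation is twice as capable...
    years_for_next_gen /= 2  # ...so it designs its successor twice as fast
    print(f"gen {generation}: capability x{capability:.0f} at year {elapsed:.2f}")

# Elapsed time converges toward 20 years (10 + 5 + 2.5 + ...), while
# capability grows without bound: the crux of the "explosion" intuition.
```

Under these assumptions, ten generations arrive in under 20 years with a thousandfold capability gain; the point is not the particular numbers but that shrinking generation times pack unbounded growth into bounded time.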

Airicist
21st October 2020, 22:50
Article "The true dangers of AI are closer than we think (https://www.technologyreview.com/2020/10/21/1009492/william-isaac-deepmind-dangers-of-ai)"
Forget superintelligent AI: algorithms are already creating real harm. The good news: the fight back has begun.

by Karen Hao
October 21, 2020

Airicist
29th November 2020, 21:46
Article "Five Views of AI Risk: Understanding the darker side of AI (https://towardsdatascience.com/five-views-of-ai-risk-eddb2fcea3c2)"
Get started on your journey towards Responsible AI

by Anand Rao
November 28, 2020

Airicist
12th January 2021, 22:28
Article "Containment algorithms won’t stop super-intelligent AI, scientists warn (https://thenextweb.com/neural/2021/01/12/containment-algorithms-wont-stop-super-intelligent-ai-scientists-warn)"
Theoretical calculations suggest it would be impossible to build an algorithm that could control such machines

by Thomas Macaulay
January 12, 2021
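The impossibility claim in this article rests on a halting-problem-style diagonal argument. The sketch below is a toy illustration of that style of argument, not the paper's formal proof; the function names (`make_adversary`, `is_harmful`, `naive_checker`) are illustrative, not from the paper.

```python
# Toy diagonal argument: any purported perfect "harm checker" can be
# defeated by a program that consults the checker and does the opposite.

def make_adversary(is_harmful):
    """Given a claimed perfect predictor is_harmful(program), build a
    program whose behavior contradicts the predictor's verdict on it."""
    def adversary():
        if is_harmful(adversary):
            return "behave safely"  # predicted harmful -> act safely
        else:
            return "cause harm"     # predicted safe -> act harmfully
    return adversary

def naive_checker(program):
    return False  # optimistically declares every program safe

adv = make_adversary(naive_checker)
print(naive_checker(adv), adv())  # the checker says False (safe), yet
                                  # the adversary returns "cause harm"
```

Whatever verdict the checker gives, its own adversary behaves the other way, so no checker can be right about every program; the cited paper makes a formal version of this argument for containment algorithms.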

Airicist
18th February 2021, 19:12
"Do You Trust this Computer? (https://pr.ai/showthread.php?t=18200)", documentary film, Chris Paine, 2018, USA

Airicist
16th September 2021, 16:58
"Urgent action needed over artificial intelligence risks to human rights (https://news.un.org/en/story/2021/09/1099972)"
States should place moratoriums on the sale and use of artificial intelligence (AI) systems until adequate safeguards are put in place, the UN human rights chief, Michelle Bachelet, said on Wednesday.

September 15, 2021

Airicist2
5th November 2021, 16:13
Article "Calculations Suggest It'll Be Impossible to Control a Super-Intelligent AI (https://www.sciencealert.com/calculations-suggest-it-ll-be-impossible-to-control-a-super-intelligent-ai)"

by David Nield (https://www.linkedin.com/in/davidnield)
November 5, 2021

Airicist2
5th February 2022, 22:08
https://youtu.be/UzE_xnlfkzE

How dangerous is artificial superintelligence?

Feb 5, 2022

Airicist2
7th June 2022, 19:20
Article "AGI Ruin: A List of Lethalities (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)"

by Eliezer Yudkowsky (https://pr.ai/showthread.php?t=11760)
June 6, 2022

Airicist2
20th September 2022, 01:11
Article "Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity (https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity)"
Superintelligent AI is "likely" to cause an existential catastrophe for humanity, according to a new paper, but we don't have to wait to rein in algorithms.

by Edward Ongweso Jr (https://www.linkedin.com/in/edward-ongweso-801325186)
September 13, 2022

Airicist2
1st November 2022, 23:10
"With All of This Talk of Nuclear Weapons, Let’s Not Forget About the AI Arms Race (https://www.linkedin.com/pulse/all-talk-nuclear-weapons-lets-forget-ai-arms-race-alex-mcfarland)"

by Alex McFarland (https://www.linkedin.com/in/alex-mcfarland-03b7bb189)
November 2, 2022

Airicist2
16th February 2023, 02:59
Article "Elon Musk, who co-founded firm behind ChatGPT, warns A.I. is ‘one of the biggest risks’ to civilization (https://www.cnbc.com/2023/02/15/elon-musk-co-founder-of-chatgpt-creator-openai-warns-of-ai-society-risk.html)"

by Ryan Browne (https://www.linkedin.com/in/ryanbrownejourno)
February 16, 2023

Airicist2
13th March 2023, 20:06
Article "How to Start an AI Panic (https://www.wired.com/story/plaintext-how-to-start-an-ai-panic)"
The Center for Humane Technology stoked conversation about the dangers of social media. Now it’s warning that artificial intelligence is as dangerous as nuclear weapons.

by Steven Levy (https://www.linkedin.com/in/levysteven)
March 10, 2023

Airicist2
29th March 2023, 20:41
"Pause Giant AI Experiments: An Open Letter (https://futureoflife.org/open-letter/pause-giant-ai-experiments)"
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Article "Elon Musk, Other AI Experts Call for Pause in Technology’s Development (https://www.wsj.com/articles/elon-musk-other-ai-bigwigs-call-for-pause-in-technologys-development-56327f)"
Appeal causes tension among artificial-intelligence stakeholders amid concern over pace of advancement

by Deepa Seetharaman (https://www.linkedin.com/in/dseetharaman)
March 29, 2023

Airicist2
30th March 2023, 22:54
https://www.youtube.com/watch?v=xTe8HNC_ipc

Why do tech leaders want to pause AI development? | Engadget Podcast

Streamed live March 30, 2023

Airicist2
31st March 2023, 19:29
Article "Why the FLI Open Letter Won't Work (https://thealgorithmicbridge.substack.com/p/why-the-fli-open-letter-wont-work)"
History sure does rhyme and AI is no exception

by Alberto Romero (https://www.linkedin.com/in/alberromgar)
March 31, 2023

Airicist2
27th April 2023, 12:17
Article "US senator open letter calls for AI security at ‘forefront’ of development (https://venturebeat.com/security/us-senator-open-letter-calls-for-ai-security-at-forefront-of-development)"

by Tim Keary (https://www.linkedin.com/in/tim-keary-7742b1135)
April 26, 2023

Airicist2
28th May 2023, 13:36
"Governance of superintelligence (https://openai.com/blog/governance-of-superintelligence)"
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.

by Sam Altman (https://pr.ai/showthread.php?t=13445), Greg Brockman (https://pr.ai/showthread.php?t=13442), Ilya Sutskever (https://pr.ai/showthread.php?t=12359)
May 22, 2023

Airicist2
30th May 2023, 20:45
Article "Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement (https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai)"
It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.

by James Vincent (https://www.linkedin.com/in/james-vincent-0575597a)
May 30, 2023

Airicist2
2nd June 2023, 18:24
Article "AI Is Not an Arms Race (https://time.com/6283609/artificial-intelligence-race-existential-threat)"

by Katja Grace (https://www.linkedin.com/in/katja-grace-04b09635)
May 31, 2023

Airicist2
3rd June 2023, 17:01
https://youtu.be/JL5OFXeXenA

The Urgent Risks of Runaway AI – and What to Do about Them | Gary Marcus (https://pr.ai/showthread.php?t=16243) | TED

May 12, 2023


Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we're building reliable systems (or misinformation machines), explores the failures of today's AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future. (Followed by a Q&A with head of TED Chris Anderson)

Airicist2
16th July 2023, 23:10
Article "The risks of AI are real but manageable (https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable)"
The world has learned a lot about handling problems caused by breakthrough innovations.

by Bill Gates (https://pr.ai/showthread.php?t=5419)
July 11, 2023

Airicist2
23rd July 2023, 23:30
Article "An AI Pause Is Humanity's Best Bet For Preventing Extinction (https://time.com/6295879/ai-pause-is-humanitys-best-bet-for-preventing-extinction)"

by Otto Barten (https://www.linkedin.com/in/ottobarten) and Joep Meindertsma (https://www.linkedin.com/in/joepsauren)
July 20, 2023

Airicist2
25th July 2023, 01:49
Article "The Illusion Of AI’s Existential Risk (https://www.noemamag.com/the-illusion-of-ais-existential-risk)"
Focusing on the prospect of human extinction by AI in the distant future may prevent us from addressing AI’s disruptive dangers to society today.

by Blake Richards, Blaise Agüera y Arcas (https://www.linkedin.com/in/blaise-aguera-y-arcas-85626a42), Guillaume Lajoie (https://www.linkedin.com/in/guillaume-lajoie-7898bb11) and Dhanya Sridhar
July 18, 2023

Airicist2
2nd August 2023, 12:26
https://youtu.be/w9npWiTOHX0

Artificial Escalation

Jul 17, 2023


This work of fiction seeks to depict key drivers that could result in a global AI catastrophe:
- Accidental conflict escalation at machine speeds;
- AI integrated too deeply into high-stakes functions;
- Humans giving away too much control to AI;
- Humans unable to tell what is real and what is fake; and
- An arms race that ultimately has only losers.

The good news is, all of these risks can be avoided. This story does not have to be our fate.

Please share this video and learn more at futureoflife.org/project/artificial-escalation (https://futureoflife.org/project/artificial-escalation)

This video has been informed by a 2020 paper from the Stockholm International Peace Research Institute (SIPRI):
Boulanin, Vincent et al. ‘Artificial Intelligence, Strategic Stability and Nuclear Risk’. sipri.org/publications/2020/other-publications/artificial-intelligence-strategic-stability-and-nuclear-risk (https://www.sipri.org/publications/2020/other-publications/artificial-intelligence-strategic-stability-and-nuclear-risk)

Airicist2
2nd August 2023, 12:51
https://youtu.be/tKnbmeazC1A

Jason Crawford on progress and risks from AI

Jul 21, 2023


Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org

Timestamps:
00:00 Eras of human progress
06:47 Flywheels of progress
17:56 Main causes of progress
21:01 Progress and risk
32:49 Safety as part of progress
45:20 Slowing down specific technologies?
52:29 Four lenses on AI risk
58:48 Analogies causing disagreement
1:00:54 Solutionism about AI
1:10:43 Insurance, subsidies, and bug bounties for AI risk
1:13:24 How is AI different from other technologies?
1:15:54 Future scenarios of economic growth

Airicist2
11th September 2023, 09:00
Article "What if AI treats humans the way we treat animals? (https://www.vox.com/the-highlight/23777171/ai-animals-rights-cruelty-transhumanism-bostrom)"
The dehumanizing philosophy of AI is built on a hatred of our animal nature.

by Marina Bolotnikova (https://www.linkedin.com/in/marina-bolotnikova)
September 7, 2023

Airicist2
9th October 2023, 23:39
Article "The risks and promise of artificial intelligence, according to the "Godfather of AI" Geoffrey Hinton (https://www.cbsnews.com/news/artificial-intelligence-risks-dangers-geoffrey-hinton-60-minutes)"

by Scott Pelley (https://twitter.com/scottpelley), Aliza Chasan (https://www.linkedin.com/in/aliza-chasan-07665b66), Aaron Weisz (https://www.linkedin.com/in/aaron-weisz), Ian Flickinger (https://www.linkedin.com/in/ian-flickinger-)
October 8, 2023

Airicist2
27th October 2023, 00:59
Article "OpenAI forms new team to assess ‘catastrophic risks’ of AI (https://www.theverge.com/2023/10/26/23933783/openai-preparedness-team-catastrophic-risks-ai)"
OpenAI’s new preparedness team will address the potential dangers associated with AI, including nuclear threats.

by Emma Roth (https://www.linkedin.com/in/emma-roth08)
October 26, 2023

Airicist2
9th November 2023, 14:17
Article "Does AI Pose an Existential Risk to Humanity? Two Sides Square Off (https://www.wsj.com/tech/ai/ai-risk-humanity-experts-thoughts-4b271757)"
Yes, say some: The threat of mass destruction from rogue AIs—or bad actors using AI—is real. Others say: Not so fast

November 8, 2023

openai.com/safety/preparedness (https://openai.com/safety/preparedness)

Head of Preparedness - Aleksander Madry (https://www.linkedin.com/in/aleksander-madry-61115b233)

Airicist2
10th November 2023, 12:00
2023 AI Safety Summit, November 1-2, 2023 (https://pr.ai/showthread.php?t=24864), Bletchley Park, Milton Keynes, United Kingdom

Airicist2
19th November 2023, 20:52
https://youtu.be/NFF_wj5jmiQ?si=n7iIWqFXnd3-bIc1

Algorithms rule us all - VPRO documentary - 2018

Premiered Oct 27, 2018


Whether you get a job or a mortgage, who is released early from prison: algorithms increasingly determine the big decisions in our lives. Algorithms rule us all, because algorithms are faster and more efficient than people. But do they always make better decisions? And what does a society look like in which we are steered by big data and computer code?
Companies, and increasingly governments too, use algorithms to automate bureaucratic processes. These algorithms, sets of instructions and rules fed by big data, determine our lives more and more without our noticing. For example, Facebook's algorithm determines which (political) advertisements we see, and large groups of workers in the on-demand economy never even see a boss. From job application to dismissal, they are managed by an algorithm. Where should they complain if something does not go as it should?
Algorithms are also emerging in the judiciary. For example, an American prisoner had to serve longer than comparable prisoners because the algorithm that establishes a risk score gave him an inexplicably high result. And unlike the decisions made by a judge, the assessment of an algorithm turns out to be virtually impossible to challenge. Recently the British company Cambridge Analytica turned out to have developed models, based on large amounts of Facebook data, that could influence the voting behavior of voters. These psychographics show that algorithms can steer not only individual lives but also democracy.

Mathematicians and programmers are beginning to realize that the algorithms underlying all these automated decision systems are not neutral and may contain errors. And because smart code decides faster than people do, the results are sometimes not just flawed but downright dangerous. Should we let ourselves be blindly guided by the decisions of the algorithm?

Slave to the Algorithm (https://topdocumentaryfilms.com/slave-algorithm)
2018

Airicist2
19th November 2023, 20:55
https://youtu.be/gxZEvOab1fQ?si=iromZ9VKkjoKZTIa

"Algorithmes - vers un monde manipulé (https://www.imdb.com/title/tt27052790)" ("Algorithms: toward a manipulated world"), documentary, Dorothe Dörholt, 2023

Jun 25, 2023

Airicist2
16th December 2023, 11:50
Article "AI presents growing risk to financial markets, US regulator warns (https://www.ft.com/content/1296448b-ade5-476b-b6ac-81eff32b0e22)"
Financial Stability Oversight Council flags emerging technology as a ‘vulnerability’ for the first time in latest report

by Stephen Gandel, Brooke Masters and James Politi
December 15, 2023

Airicist2
22nd January 2024, 19:41
Article "AI Can Be Trained for Evil and Conceal Its Evilness From Trainers, Anthropic Says (https://decrypt.co/213118/ai-can-be-trained-for-evil-and-conceal-its-evilness-from-trainers-antropic-says)"
If a “backdoored” language model can fool you once, it is more likely to be able to fool you in the future, while keeping ulterior motives hidden.

by Jose Antonio Lanz (https://www.linkedin.com/in/lanzjose)
January 17, 2024

Airicist2
8th February 2024, 12:28
"Global Risks Report 2024 (https://www.weforum.org/publications/global-risks-report-2024)"

January 10, 2024

Airicist2
24th February 2024, 05:59
Article "‘Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse (https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse)"
From the academic who warns of a robot uprising to the workers worried for their future – is it time we started paying attention to the tech sceptics?

by Tom Lamont (http://tomlamontjournalist.com)
February 17, 2024

Airicist2
6th June 2024, 00:23
Article "OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance (https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html)"
A group of current and former employees is calling for sweeping changes to the artificial intelligence industry, including greater transparency and protections for whistle-blowers.

by Kevin Roose (https://www.linkedin.com/in/kevin-roose)
June 4, 2024

righttowarn.ai (https://righttowarn.ai)

Airicist2
7th June 2024, 13:33
Article "An unchecked AI could usher in a new dark age (https://www.businessinsider.com/ai-new-dark-age-risks-regulations-2024-5)"

by Natalie Musumeci (https://www.linkedin.com/in/natalie-musumeci-15218133)
June 6, 2024