Article "How to Start an AI Panic"
The Center for Humane Technology stoked conversation about the dangers of social media. Now it’s warning that artificial intelligence is as dangerous as nuclear weapons.
by Steven Levy
March 10, 2023
Article "How to Start an AI Panic"
The Center for Humane Technology stoked conversation about the dangers of social media. Now it’s warning that artificial intelligence is as dangerous as nuclear weapons.
by Steven Levy
March 10, 2023
"Pause Giant AI Experiments: An Open Letter"
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Article "Elon Musk, Other AI Experts Call for Pause in Technology’s Development"
Appeal causes tension among artificial-intelligence stakeholders amid concern over pace of advancement
by Deepa Seetharaman
March 29, 2023
Why do tech leaders want to pause AI development? | Engadget Podcast
Streamed live March 30, 2023
Article "Why the FLI Open Letter Won't Work"
History sure does rhyme, and AI is no exception
by Alberto Romero
March 31, 2023
Article "US senator open letter calls for AI security at ‘forefront’ of development"
by Tim Keary
April 26, 2023
"Governance of superintelligence"
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.
by Sam Altman, Greg Brockman, Ilya Sutskever
May 22, 2023
Article "Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement"
It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.
by James Vincent
May 30, 2023
Article "AI Is Not an Arms Race"
by Katja Grace
May 31, 2023
The Urgent Risks of Runaway AI – and What to Do about Them | Gary Marcus | TED
May 12, 2023
Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we're building reliable systems (or misinformation machines), explores the failures of today's AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future. (Followed by a Q&A with head of TED Chris Anderson)