Miscellaneous


Regulating the rise of Artificial General Intelligence

Jun 1, 2020

As researchers around the world continue to improve the power, scope, and generality of AI systems, should developers adopt regulatory frameworks to help steer progress?

What are the main threats that such regulations should guard against? In the midst of an intense international race to obtain better AI, are such frameworks doomed to be ineffective? Might they do more harm than good, hindering valuable innovation? Are there good precedents from other fields of technology where international agreements proved beneficial? Or is discussion of frameworks for the governance of AGI (Artificial General Intelligence) a distraction from more pressing issues, given the potentially long time scales before AGI becomes a realistic prospect?

This 90-minute London Futurists live Zoom webinar featured a number of panellists with deep insight into the issues of improving AI:

*) Joanna Bryson, Professor of Ethics and Technology at the Hertie School, Berlin
*) Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
*) Nell Watson, tech ethicist, machine learning researcher, and social reformer
 

Musing on understanding & AI - Hugo de Garis, Adam Ford, Michel de Haan

July 27, 2020

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI - the Chinese Room argument - and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding, similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Aha! moments - where the penny drops - what's going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging - aha moments
10:18 Webs of knowledge - contextual understanding
12:16 Early childhood development - concept formation and navigation
13:11 The intuitive ability for concept navigation isn't complete
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn't understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive form of understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches - engineering, and copy the brain
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world - which, when strong AI comes about, will dissolve into illusions, revealing how they actually work under the hood?
27:40 Compression and understanding
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy: data, information, knowledge, understanding, wisdom
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding - is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp rehashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution carbon copy - a model that accurately reflects true nature - or is it a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first - understanding or generality?
40:47 Minsky's 'Society of Mind'
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature
50:48 Deism - James Gates and error correction in supersymmetry
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect - does this make Deism plausible? What about the Fermi Paradox? (The equation is spelled out after this list.)
55:06 Hyperintelligence is tiny - the transcension hypothesis - therefore civilisations go tiny, an explanation for the Fermi Paradox
56:36 Why would *all* civilisations go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood)
01:03:09 Does AlphaGo understand Go, or DeepBlue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines - David Chalmers' zombie argument
01:07:26 Complex enough algorithms - is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral 'Turing' tests for understanding
01:13:05 Shape sorters and reverse shape sorters
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? The need for adaptivity - understanding concept boundaries, predicting where they occur, and mining outwards from these boundaries...
01:15:11 Neural nets and adaptivity
01:16:41 AlphaGo documentary - worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?
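
For reference, the Drake equation mentioned at 52:35, in its standard form (the symbol definitions below are the conventional ones, not specific to this discussion):

    N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where N is the number of detectable civilisations in our galaxy, R_* the average rate of star formation, f_p the fraction of stars with planets, n_e the average number of potentially life-supporting planets per star with planets, f_l the fraction of those on which life actually appears, f_i the fraction of those that develop intelligent life, f_c the fraction that release detectable signals, and L the average length of time such civilisations remain detectable.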

Filmed in the Dandenong Ranges in Victoria, Australia.
 

ETA Artificial General Intelligence V2

Jan 16, 2021

What we usually think of as Artificial Intelligence (AI) today - the human-like robots and holograms of our fiction, talking and acting like real people, with human-level or even superhuman intelligence and capabilities - is actually called Artificial General Intelligence (AGI), and it does NOT yet exist anywhere on earth.

What we actually have for AI today is much simpler and much narrower Deep Learning (DL), which can do only some very specific tasks better than people. It has fundamental limitations that will not allow it to become AGI, so if AGI is our goal, we need to innovate and come up with better networks and better methods for shaping them into an artificial brain.

This is a proposed approach to developing an AGI, code-named 'Eta'.

US Provisional Patent Application Number 63138058, filed 15 Jan 2021, EFS ID 41663980

orbai.ai/artificial-general-intelligence.htm
 
Article "Microsoft Says New A.I. Shows Signs of Human Reasoning"
A provocative paper from researchers at Microsoft claims A.I. technology shows the ability to understand the way people do. Critics say those scientists are kidding themselves.

by Cade Metz
May 16, 2023

"Sparks of Artificial General Intelligence:Early experiments with GPT-4"

by Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
April 13, 2023
 
"Levels of AGI: Operationalizing Progress on the Path to AGI"

by Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, Shane Legg
November 4, 2023


We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose 'Levels of AGI' based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
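
As an illustrative sketch only - the level names and percentile thresholds below follow the paper, but the class names, layout, and example classifications are hypothetical, not the authors' code - the two axes of the ontology could be encoded as:

from dataclasses import dataclass
from enum import Enum

class Performance(Enum):
    # Depth axis: the percentile of skilled adult humans the system matches or exceeds
    NO_AI = 0        # not AI at all, e.g. a calculator
    EMERGING = 1     # equal to or somewhat better than an unskilled human
    COMPETENT = 2    # at least 50th percentile of skilled adults
    EXPERT = 3       # at least 90th percentile of skilled adults
    VIRTUOSO = 4     # at least 99th percentile of skilled adults
    SUPERHUMAN = 5   # outperforms 100% of humans

class Generality(Enum):
    # Breadth axis: the range of tasks at which that performance level is reached
    NARROW = "narrow"    # a clearly scoped task or set of tasks
    GENERAL = "general"  # a wide range of non-physical tasks

@dataclass
class AGIClassification:
    performance: Performance
    generality: Generality

# Example: the paper's illustrations place a system like AlphaGo at
# Superhuman/Narrow, and the frontier LLMs of late 2023 around Emerging/General.
alphago = AGIClassification(Performance.SUPERHUMAN, Generality.NARROW)
frontier_llm = AGIClassification(Performance.EMERGING, Generality.GENERAL)

The point of the two-axis grid, as with the driving-automation levels it is analogous to, is that a single system can sit high on one axis and low on the other, which is why performance and generality are rated separately.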
 