Thread: Miscellaneous

  1. #21

  2. #22

  3. #23


    Regulating the rise of Artificial General Intelligence

    Jun 1, 2020

    As research around the world proceeds to improve the power, the scope, and the generality of AI systems, should developers adopt regulatory frameworks to help steer progress?

    What are the main threats that such regulations should be guarding against? In the midst of an intense international race to obtain better AI, are such frameworks doomed to be ineffective? Might such frameworks do more harm than good, hindering valuable innovation? Are there good examples of precedents, from other fields of technology, of international agreements proving beneficial? Or is discussion of frameworks for the governance of AGI (Artificial General Intelligence) a distraction from more pressing issues, given the potential long time scales ahead before AGI becomes a realistic prospect?

    This 90-minute London Futurists live Zoom webinar featured a number of panellists with deep insight into the issues of improving AI:

    *) Joanna Bryson, Professor of Ethics and Technology at the Hertie School, Berlin
    *) Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
    *) Nell Watson, tech ethicist, machine learning researcher, and social reformer

  4. #24


    Musing on understanding & AI - Hugo de Garis, Adam Ford, Michel de Haan

    July 27, 2020

    Started out as an interview but ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
    00:11 The concept of understanding is under-recognised as an important aspect of developing AI
    00:44 Re-framing perspectives on AI - the Chinese Room argument - and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
    04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
    05:08 Ah Ha! moments - where the penny drops - what's going on when this happens?
    07:48 Is there an ideal form of understanding? Coherence & debugging - ah ha moments
    10:18 Webs of knowledge - contextual understanding
    12:16 Early childhood development - concept formation and navigation
    13:11 The intuitive ability for concept navigation isn't complete
    Is the concept of understanding a catch-all?
    14:29 Is it possible to develop AGI that doesn't understand? Are generality and understanding the same thing?
    17:32 Why is understanding (the nature of) understanding important?
    Is understanding reductive? Can it be broken down?
    19:52 What would the most basic, primitive form of understanding be?
    22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
    Approaches - engineering, and copy the brain
    24:34 Is common sense the same thing as understanding? How are they different?
    26:24 What concepts do we take for granted around the world - concepts which, when strong AI comes about, will dissolve into illusions and then show us how things actually work under the hood?
    27:40 Compression and understanding
    29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
    31:07 A hierarchy of intel - data, information, knowledge, understanding, wisdom
    33:37 What is wisdom? Experience can help situate knowledge in a web of understanding - is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
    35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
    36:00 Is understanding like a high-resolution, carbon-copy-like model that accurately reflects true nature, or is it a mechanical process?
    37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
    38:37 What comes first - understanding or generality?
    40:47 Minsky's 'Society of Mind'
    42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
    48:15 Anthropomorphism in AI literature
    50:48 Deism - James Gates and error correction in supersymmetry
    52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
    52:35 The Drake equation, and the concept of the Artilect - does this make Deism plausible? What about the Fermi Paradox?
    55:06 Hyperintelligence is tiny - the transcension hypothesis - therefore civs go tiny - an explanation for the Fermi Paradox
    56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
    01:01:52 The Great Filter and the Fermi Paradox
    01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course without the luxury of peering under the hood)
    01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.
    01:04:23 More on behavioral tests for AI understanding.
    01:06:00 Zombie machines - David Chalmers' zombie argument
    01:07:26 Complex enough algorithms - is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
    01:08:11 Revisiting behavioral 'Turing' tests for understanding
    01:13:05 Shape sorters and reverse shape sorters
    01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity - understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries...
    01:15:11 Neural nets and adaptivity
    01:16:41 AlphaGo documentary - worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?

    Filmed in the Dandenong Ranges in Victoria, Australia.

  5. #25

  6. #26

  7. #27


    ETA Artificial General Intelligence V2

    Jan 16, 2021

    What we usually think of as Artificial Intelligence (AI) today, when we see human-like robots and holograms in our fiction talking and acting like real people with human-level or even superhuman intelligence and capabilities, is actually called Artificial General Intelligence (AGI), and it does NOT exist anywhere on Earth yet.

    What we actually have for AI today is much simpler and much narrower Deep Learning (DL), which can only do some very specific tasks better than people. It has fundamental limitations that will not allow it to become AGI, so if that is our goal, we need to innovate and come up with better networks and better methods for shaping them into an artificial brain.
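
    To make that contrast concrete, here is a minimal, purely illustrative sketch of what today's narrow deep learning typically looks like: one fixed network trained end-to-end for exactly one task (digit classification) and nothing beyond it. The framework (PyTorch), dataset, and hyperparameters are assumptions chosen for illustration, not anything taken from the ORBAI 'Eta' proposal.

        # Minimal sketch of today's "narrow" deep learning: one fixed network,
        # one fixed task (digit classification), learned end-to-end.
        # Illustrative only - not from the ORBAI 'Eta' proposal.
        import torch
        from torch import nn
        from torchvision import datasets, transforms

        # A single-task dataset: the model will never see anything but digits.
        train_data = datasets.MNIST(root="data", train=True, download=True,
                                    transform=transforms.ToTensor())
        loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)

        model = nn.Sequential(          # fixed architecture, fixed output space
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),         # exactly 10 digit classes, nothing else
        )
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss()

        for images, labels in loader:   # one pass over the training data
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    Everything such a model "knows" lives in weights tuned for this single mapping from pixels to ten labels, which is the narrowness the post is pointing at.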

    This is a proposed approach to developing an AGI, code-named 'Eta'.

    US Provisional Patent Application Number 63138058, filed 15 Jan 2021, EFS ID 41663980
    orbai.ai/artificial-general-intelligence.htm

  8. #28
    Article "DeepMind scientists: Reinforcement learning is enough for general AI"

    by Ben Dickson
    June 7, 2021

    "Reward is enough"

    by David Silver, Satinder Singh, Doina Precup, Richard S. Sutton
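
    For readers unfamiliar with the framing the paper argues from, below is a minimal, hypothetical sketch of reinforcement learning in its simplest tabular form: an agent on a toy five-state chain that acquires its behaviour from nothing but a scalar reward signal. The toy environment and hyperparameters are illustrative assumptions, not anything taken from Silver et al.'s paper.

        # Minimal tabular Q-learning sketch of "learning from reward alone".
        # Toy 5-state chain environment; names and numbers are illustrative.
        import random

        N_STATES = 5                    # states 0..4, state 4 is the goal
        ACTIONS = (-1, +1)              # step left or right along the chain
        q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        alpha, gamma, epsilon = 0.1, 0.9, 0.1

        for episode in range(500):
            s = 0
            while s != N_STATES - 1:
                # epsilon-greedy action selection over the learned Q-values
                if random.random() < epsilon:
                    a = random.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda act: q[(s, act)])
                s_next = min(max(s + a, 0), N_STATES - 1)
                r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
                best_next = max(q[(s_next, act)] for act in ACTIONS)
                # standard Q-learning update, driven purely by the reward signal
                q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
                s = s_next

        # Greedy policy after training: typically moves right (+1) from every state.
        print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)})

    The agent ends up walking to the goal purely by maximising reward; "Reward is enough" argues that this kind of reward-driven learning could, in principle, be sufficient to give rise to the abilities we associate with general intelligence.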

  9. #29

  10. #30
