
Thread: Marcus Hutter

  1. #1

  2. #2


    Marcus Hutter - What is Intelligence? AIXI & Induction

    Published on Sep 17, 2012

    What is Intelligence?
    Intelligence is a very difficult concept (maybe that's the reason why many people try to avoid it or narrow it down). I've worked on this question for many, many years now, and we went through the literature (psychology, philosophy, AI) to see what definitions individuals, researchers, and groups came up with; they are very diverse. But there seems to be one recurrent theme, and if you want to put it in one sentence, then you could define intelligence as:
    "an agent's ability to achieve goals in a wide range of environments", or to succeed in a wide range of environments.
    If you now look at this sentence and ask, "how can this single sentence capture the complexity of intelligence?", there are two answers to that. First: many aspects of intelligence are emergent properties of this definition, like being able to learn - if I want to succeed or solve a problem I need to acquire new knowledge, so learning is an emergent phenomenon of this definition.
    And the second answer is: this is just a sentence containing a few words; what you really have to do, and that's the hard part, is to transform it into meaningful equations and then study these equations. And that's what I have done over the last 12 years.
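    One such equation is the universal intelligence measure that Legg and Hutter proposed for exactly this sentence. Sketched from memory of that work rather than quoted from the talk, it scores a policy π by its expected total reward V in every computable environment μ, weighting simple environments (low Kolmogorov complexity K) more heavily:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}
```

    Here E is the class of computable environments, so "achieving goals in a wide range of environments" becomes a single weighted sum.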

    Bounded Rationality:
    It is an interesting question whether resource bounds should be included in any definition of intelligence or not, and the natural answer is of course they should. Well, there are several problems. The first one is that nobody has ever come up with a reasonable theory of bounded rationality (people have tried), so it seems to be very hard. And this is not specific to AI or intelligence; it seems to be symptomatic in science. If you look at the various fields (physics being the crown discipline), theories have been developed: Newton's mechanics, general relativity, quantum field theory, the Standard Model of particle physics. They are more and more precise, but they get less and less computable, and having a computable theory is not a guiding principle in developing these theories. Of course, at some point you have to test these theories and you want to do something with them, and then you need a computable theory - this is a very difficult issue (and you have to approximate them or do something about it). But building computational resources into the fundamentals of the theories - that is not, at least in physics and the other disciplines, how things work.
    You design theories so that they describe your phenomenon as well as possible, and the computational aspect is secondary. Of course, if a theory is incomputable and you can't do anything with it, you have to come up with another one, but this always comes second. Only in computer science (and this comes naturally) do computer scientists think first about how to design an efficient algorithm to solve the problem, and since AI traditionally sits in the computer science department, the mainstream thought is "how can I build a resource-bounded artificially intelligent system?". And I agree that ultimately this is what we want. But the problem is so hard, I think, that we (or a large fraction of the scientists) should take this approach: model the problem first, define the problem first, and once we are confident that we have solved this problem, then go to the second phase and try to approximate the theory, try to make a computational theory out of it. And then there are many, many possibilities: you could still try to create a resource-bounded theory of intelligence, which will be very, very hard if you want it to be very principled, or you do some heuristics... or... or... or... many options. Or the short answer may be: I am not smart enough to come up with a resource-bounded theory of intelligence, therefore I only developed one without resource constraints (that would be the short answer).

    AIXI:
    Ok, so now we have this informal definition that intelligence is an agent's ability to succeed, or achieve goals, in a wide range of environments. The point is that you can formalize this, and we have done that; it is called AIXI. Or rather, Universal AI is the general theory, and AIXI is the particular agent which acts optimally in this sense.
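    For reference, the AIXI agent from Hutter's Universal AI framework is usually written as the following expectimax expression (reproduced from memory, so treat the exact indexing as approximate): at step k the agent picks the action maximizing expected future reward, where future observation-reward sequences are weighted by the total length ℓ(q) of the programs q that would produce them on a universal Turing machine U:

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\big[ r_k + \cdots + r_m \big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}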
    That works as follows: it has a planning component and a learning component. What the learning component does is - think about a robot walking around in the environment - at the beginning it has no data or knowledge about the world, so what it has to do is acquire data/knowledge of the world and then build its own model of the world, of how the world works. There are very powerful general theories on how to learn a model from data, even for very complex scenarios. This theory is rooted in Kolmogorov complexity and algorithmic information theory - the basic idea is that you look for the simplest model which describes your data sufficiently well.
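    The "simplest model that describes the data sufficiently well" idea can be illustrated with a toy two-part minimum-description-length score. This is a hypothetical simplification for illustration, not Hutter's actual algorithm: the Kolmogorov-complexity-based theory is incomputable, so the crude bit counts below merely stand in for it. Each candidate model is scored by the bits needed to state the model plus the bits needed to state its prediction errors, and the cheapest total wins:

```python
import math

def poly(x, params):
    # Evaluate a polynomial with the given coefficients (lowest degree first).
    return sum(c * x ** i for i, c in enumerate(params))

def mdl_score(params, data):
    # Two-part code length: bits to state the model plus bits to state residuals.
    model_bits = 32 * len(params)  # crude assumption: 32 bits per parameter
    data_bits = sum(math.log2(2.0 + abs(y - poly(x, params))) for x, y in data)
    return model_bits + data_bits

# Data generated by the simple law y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(20)]

# Hypothetical candidate models: too simple, just right, and overparameterized.
candidates = {
    "constant": [20.0],                # underfits: cheap model, expensive residuals
    "linear":   [1.0, 2.0],            # the true law: cheap model, zero residuals
    "cubic":    [1.0, 2.0, 0.0, 0.0],  # fits perfectly but wastes model bits
}

best = min(candidates, key=lambda name: mdl_score(candidates[name], data))
print(best)  # prints "linear"
```

    The cubic fits the data exactly as well as the linear model, but its extra parameters cost description length, so the simpler sufficient model is preferred - the same trade-off the Kolmogorov-complexity formulation makes rigorous.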

  3. #3


    Marcus Hutter - The essence of Artificial General Intelligence

    Published on Jan 27, 2018

    The Essence of Artificial General Intelligence
    - A theoretical approach to Artificial General Intelligence
    - Approximating Universal Intelligence
    - Future directions in approximating UAI

    Filmed at IJCAI in Melbourne.

  4. #4


    Marcus Hutter - Advances in Universal Artificial Intelligence - AGI17

    Published on Dec 22, 2018

    Abstract: There is great interest in understanding and constructing generally intelligent systems approaching and ultimately exceeding human intelligence. Universal AI is such a mathematical theory of machine super-intelligence. More precisely, AIXI is an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions. After a brief discussion of its philosophical, mathematical, and computational ingredients, I will give a formal definition and measure of intelligence, which is maximized by AIXI. AIXI can be viewed as the most powerful Bayes-optimal sequential decision maker, for which I will present general optimality results. This also motivates some variations such as knowledge-seeking and optimistic agents, and feature reinforcement learning. Finally I present some recent approximations, implementations, and applications of this modern top-down approach to AI.

  5. #5


    Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | AI Podcast #75 with Lex Fridman

    Feb 26, 2020

    Marcus Hutter is a senior research scientist at DeepMind and a professor at the Australian National University. Throughout his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed many interesting ideas in and around the field of artificial general intelligence, including the AIXI model, a mathematical approach to AGI that incorporates ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning.

    0:00 - Introduction
    3:32 - Universe as a computer
    5:48 - Occam's razor
    9:26 - Solomonoff induction
    15:05 - Kolmogorov complexity
    20:06 - Cellular automata
    26:03 - What is intelligence?
    35:26 - AIXI - Universal Artificial Intelligence
    1:05:24 - Where do rewards come from?
    1:12:14 - Reward function for human existence
    1:13:32 - Bounded rationality
    1:16:07 - Approximation in AIXI
    1:18:01 - Godel machines
    1:21:51 - Consciousness
    1:27:15 - AGI community
    1:32:36 - Book recommendations
    1:36:07 - Two moments to relive (past and future)

