I really appreciate Matt's comments about this, even though I am wary
of the field.  It is important to have some idea of why the AI
problem is so hard, and that insight is best conveyed with
descriptive explanations like Matt's message.  Of course, if no one is
asking why, the poster has to wonder whether he should explain it.

However, I do not believe that the proposition that the shortest
program that can reproduce the trial results would constitute a
solution to an AI problem is a sound philosophical basis for AGI.  We
need to be able to show that the program can learn about new things.
Since this requirement has to be expressed as an open-ended statement
in some vague general form, it is impossible, or at least very hard,
to define a definitive test that could be used to establish the
shortest program that achieves the goal.  Instead we use techniques
that seem to be adaptable and then try to figure out how to
systematically deal with all of the errors that these methods tend to
produce.

Jim Bromer

On Mon, Dec 1, 2008 at 12:04 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sun, 11/30/08, Philip Hunt <[EMAIL PROTECTED]> wrote:
>
>> Can someone explain AIXI to me?
>
> AIXI models an intelligent agent interacting with an environment as a pair of 
> interacting Turing machines. At each step, the agent outputs a symbol to the 
> environment, and the environment outputs a symbol and a numeric reward signal 
> to the agent. The goal of the agent is to maximize the accumulated reward.
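To make the protocol concrete, here is a minimal Python sketch of that interaction loop. The agent and environment classes are toy stand-ins invented for illustration; they are not part of Hutter's formalism, which uses Turing machines on both sides.

```python
# Sketch of the agent/environment interaction loop described above.
# ConstantAgent and EchoEnvironment are illustrative stand-ins only.

class ConstantAgent:
    """A trivial agent that always emits the same symbol."""
    def __init__(self, symbol):
        self.symbol = symbol
        self.history = []          # observed (symbol, reward) pairs

    def act(self):
        return self.symbol

    def observe(self, symbol, reward):
        self.history.append((symbol, reward))

class EchoEnvironment:
    """A toy environment: pays reward 1 for the symbol '1', else 0."""
    def step(self, action):
        reward = 1.0 if action == '1' else 0.0
        return action, reward      # echoes the symbol back with a reward

def run_episode(agent, environment, steps):
    """Each step: agent outputs a symbol, environment returns a symbol
    and a numeric reward; the agent's goal is to maximize the total."""
    total = 0.0
    for _ in range(steps):
        action = agent.act()
        symbol, reward = environment.step(action)
        agent.observe(symbol, reward)
        total += reward
    return total
```
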
>
> Hutter proved that the optimal solution is for the agent to guess, at each 
> step, that the environment is simulated by the shortest program that is 
> consistent with the interaction observed so far.
>
> Hutter also proved that the optimal solution is not computable because the 
> agent can't know which of its guesses are halting Turing machines. The best 
> it can do is pick numbers L and T, try all 2^L programs up to length L for T 
> steps each in order of increasing length, and guess the first one that is 
> consistent. If there are no matches, then it needs to choose larger L and T 
> and try again. That solution is called AIXI^TL. Its time complexity is O(T 
> 2^L). In general, it may require L up to the length of the observed 
> interaction (because there is a fast program that outputs the agent's 
> observations from a list of length L).
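The brute-force core of that search can be sketched as follows. Since real Turing machines can't be enumerated in a few lines, the `run` argument here is an assumed stand-in for an interpreter that executes a bit-string program for at most T steps; the point is only the enumeration order and the O(2^L) blowup.

```python
from itertools import product

def shortest_consistent_program(run, observations, L, T):
    """Sketch of the AIXI^TL search described above: try all programs
    up to length L, in order of increasing length, running each for at
    most T steps, and return the first one whose output matches the
    observed interaction.

    `run(program, T)` is a hypothetical interpreter supplied by the
    caller; it returns the program's output after at most T steps, or
    None if there is none.  Returns None when no program of length <= L
    matches, in which case larger L and T must be tried.
    """
    for length in range(L + 1):
        for bits in product('01', repeat=length):   # 2^length programs
            program = ''.join(bits)
            if run(program, T) == observations:
                return program
    return None
```

Even this toy version visits on the order of 2^L candidate programs, which is where the O(T 2^L) cost comes from.
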
>
> In a separate paper ( http://www.vetta.org/documents/ui_benelearn.pdf ), Legg 
> and Hutter propose defining universal intelligence as the expected reward of 
> an AIXI agent in random environments.
>
> The value of AIXI is not that it solves the general intelligence problem, but 
> rather it explains why the problem is so hard. It also justifies a general 
> principle that is already used in science and in practical machine learning 
> algorithms: to choose the simplest hypothesis that fits the data. It formally 
> defines "simple" as the length of the shortest program that outputs a 
> description of the hypothesis.
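In practice the length of the shortest program is uncomputable, but an off-the-shelf compressor gives a usable upper bound on it. A rough sketch of the hypothesis-selection principle, using zlib as the stand-in measure of description length:

```python
import zlib

def description_length(hypothesis: str) -> int:
    """Crude upper bound on 'the length of the shortest program that
    outputs a description of the hypothesis': its compressed size in
    bytes.  Kolmogorov complexity is uncomputable, so any real
    compressor can only bound it from above."""
    return len(zlib.compress(hypothesis.encode('utf-8'), 9))

def simplest(hypotheses):
    """Among hypotheses assumed to fit the data equally well, pick the
    one with the shortest (compressed) description."""
    return min(hypotheses, key=description_length)
```
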
>
> For example, to avoid overfitting in neural networks, you should use the 
> smallest number of connections and the least amount of training needed to fit 
> the training data, then stop. In this case, the complexity of your neural 
> network is the length of the shortest program that outputs the configuration 
> of your network and its weights. Even if you don't know what that program is, 
> and haven't chosen a programming language, you may reasonably expect that 
> fewer connections, smaller weights, and coarser weight quantization will 
> result in a shorter program.
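That expectation is easy to check empirically with a compressor standing in for "the shortest program": quantizing the weights more coarsely, or using fewer of them, shrinks the compressed size of the serialized network. A small sketch (the serialization format is an arbitrary choice made for illustration):

```python
import random
import struct
import zlib

def serialized_size(weights, decimals=None):
    """Compressed size in bytes of a weight list, as a rough proxy for
    the length of the shortest program that outputs the network's
    weights.  decimals=None keeps full precision; a small value
    coarsens the quantization."""
    if decimals is not None:
        weights = [round(w, decimals) for w in weights]
    raw = struct.pack(f'{len(weights)}d', *weights)   # arbitrary encoding
    return len(zlib.compress(raw, 9))

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]
```

Full-precision Gaussian weights are essentially incompressible, while coarsely quantized ones collapse to a few repeated byte patterns, so the quantized network has a much shorter description.
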
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

