--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > --- Stan Nilsen <[EMAIL PROTECTED]> wrote:
> > 
> >> Matt,
> >>
> >> Thanks for the links sent earlier.  I especially like the paper by Legg 
> >> and Hutter regarding measurement of machine intelligence.  The other 
> >> paper I find difficult, probably it's deeper than I am.
> > 
> > The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
> > formal model of an agent and an environment as a pair of interacting
> > Turing machines exchanging symbols.  In addition, at each step the
> > environment also sends a "reward" signal to the agent.  The goal of the
> > agent is to maximize the accumulated reward.  Hutter proves that if the
> > environment is computable or has a computable probability distribution,
> > then the optimal behavior of the agent is to guess at each step that the
> > environment is simulated by the shortest program consistent with all of
> > the interaction observed so far.  This optimal behavior is not computable
> > in general, which means there is no upper bound on intelligence.
> 
> Nonsense.  None of this follows from the AIXI paper.  I have explained 
> why several times in the past, but since you keep repeating these kinds 
> of declarations about it, I feel obliged to repeat that these assertions 
> are speculative extrapolations that are completely unjustified by the 
> paper's actual content.

Yes it does.  Hutter proved that the optimal behavior of an agent in a
Solomonoff distribution of environments is not computable.  If it were
computable, then there would be a finite solution that was maximally
intelligent according to Hutter and Legg's definition of universal
intelligence.
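
To make the induction principle concrete, here is a toy sketch (my own
illustration, not Hutter's construction) of "predict using the shortest
program consistent with the observations so far."  The program space here is
a hand-picked set of generators with assumed description lengths; true
Solomonoff induction enumerates all Turing machines, which is exactly why the
optimal agent is not computable.

```python
# Toy "shortest consistent program" predictor (illustrative assumption:
# a tiny fixed hypothesis space with made-up description lengths in bits;
# Solomonoff induction would range over all programs).

CANDIDATES = [
    (3, lambda n: [0] * n),                          # all zeros
    (4, lambda n: [i % 2 for i in range(n)]),        # alternating 0,1
    (8, lambda n: [(i * i) % 3 for i in range(n)]),  # i^2 mod 3
]

def predict_next(observed):
    """Predict the next symbol using the shortest consistent candidate."""
    consistent = [(bits, gen) for bits, gen in CANDIDATES
                  if gen(len(observed)) == observed]
    if not consistent:
        return None  # no candidate explains the data
    bits, gen = min(consistent, key=lambda c: c[0])
    # Extend the winning generator by one step and read off the new symbol.
    return gen(len(observed) + 1)[-1]

print(predict_next([0, 1, 0, 1]))  # alternating generator wins -> 0
```

With an unbounded program space the `min` over consistent programs cannot be
finitely computed, which is the gap between this sketch and the real thing.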



-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=78395068-9af1e2