--- Shane Legg <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> Shane Legg's definition of universal intelligence requires (I believe)
> complexity but not adaptability.
> 
> 
> In a universal intelligence test the agent never knows what environment
> it is facing.  It can only try to learn from experience and adapt in
> order to perform well.  This means that a system which is not adaptive
> will have a very low universal intelligence.  Moreover, even a single
> environment may change over time, and thus the agent must adapt in
> order to keep performing well.
> 
> Shane

I was thinking of your other paper, which showed that a Turing machine
cannot learn to predict environments of higher algorithmic complexity than
its own; that is where the complexity requirement comes from.  But I did
not see any formal definition of "adaptability" or any requirement for it
in the measure itself.  An obvious counterexample would be AIXI, which by
construction has maximal universal intelligence without any explicit
adaptability requirement.
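
For concreteness, my understanding of the measure (my notation, from
memory of the Legg/Hutter papers) is roughly

    Upsilon(pi) = \sum_mu 2^{-K(mu)} V_mu^pi

where the sum ranges over computable environments mu, K(mu) is the
Kolmogorov complexity of mu, and V_mu^pi is the expected total reward of
agent pi in mu.  Nothing in this expression refers to adaptability
directly; adaptive behavior matters only insofar as it raises the V terms.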

I realize that there are no known *efficient* intelligent systems that
aren't adaptive, in the sense that each is an iterative process of testing
and incremental update.  Examples include evolution, the human brain, and
software development.
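
To make the pattern concrete, here is a toy sketch (entirely my own, not
from any of the papers) of the test-and-incrementally-update loop that all
three examples share, written as a random hill climber:

    import random

    def adapt(loss, candidate, steps=1000, step_size=0.1):
        """Minimize a black-box loss by repeated test and update."""
        best, best_loss = candidate, loss(candidate)
        for _ in range(steps):
            # Test: propose a small random variation and evaluate it.
            trial = [x + random.gauss(0, step_size) for x in best]
            trial_loss = loss(trial)
            # Incremental update: keep the variation only if it helps.
            if trial_loss < best_loss:
                best, best_loss = trial, trial_loss
        return best

    # Example: pull a 2-d point toward the minimum of a simple bowl.
    print(adapt(lambda p: sum(x * x for x in p), [5.0, -3.0]))

Evolution, learning, and debugging all fit this shape: generate a
variation, test it against the environment, and keep it only if it
performs better.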

In another post I mentioned Kauffman's observation that complex systems tend
to reside on the boundary between stability and chaos.  I believe this is
because stable systems are not complex and chaotic systems are not adaptive.
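
A toy way to see the three regimes (again my own illustration, using the
logistic map rather than Kauffman's Boolean networks) is to watch how fast
two nearby trajectories separate:

    def separation(r, x0=0.4, eps=1e-9, steps=50):
        """Distance between two initially eps-close logistic-map orbits."""
        a, b = x0, x0 + eps
        for _ in range(steps):
            a, b = r * a * (1 - a), r * b * (1 - b)
        return abs(a - b)

    for r, label in [(2.8, "stable"), (3.57, "edge of chaos"),
                     (4.0, "chaotic")]:
        print(f"r={r} ({label}): separation = {separation(r):.3g}")

In the stable regime the orbits collapse together (no complexity); in the
chaotic regime they diverge so fast that small adjustments are useless (no
adaptability); the interesting behavior sits near the boundary.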


-- Matt Mahoney, [EMAIL PROTECTED]
