> Perhaps someone can clarify some issues for me.
>
> I'm not good at math -- I can't follow the AIXI materials and I don't
> know what Solomonoff induction is.  So it's unclear to me how a
> certain goal is mathematically defined in this uncertain, fuzzy
> universe.
>
> What I'm assuming, at this point, is that AIXI and Solomonoff
> induction depend on operating in a "somehow predictable" universe -- a
> universe with less than maximal entropy, so that its data is to some
> extent "compressible".  Is that more or less correct?
>
> And in that case, "goals" can be defined by feedback given to the
> system, because the behaviour patterns it induces from the feedback
> *predictably* lead to the desired outcomes, more or less?
>
> I'd appreciate it if someone could tell me whether I'm right or wrong
> on this, or point me to some plain-English resources on these issues,
> should they exist.  Thanks.
>
> --
> Cliff

The theorems about the AIXItl system tell you how the system
learns to behave according to a computable reward function.

They say that the AIXItl system can learn to maximize reward as well as any
other system, provided it's given a certain (large) amount of additional
computational resources beyond what that system uses.
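
Your compressibility intuition is basically right, by the way.  Here's a
toy sketch of what Solomonoff induction does: a Bayesian mixture over
hypotheses, each weighted by 2^(-description length), so that short
programs -- i.e. compressible explanations -- dominate the prior.  The
four hypotheses and their bit-lengths below are invented stand-ins for
illustration; the real mixture runs over ALL computable programs.

# Each hypothesis: (description_length_in_bits, next_bit_probability_fn)
# where next_bit_probability_fn(history) -> P(next bit = 1).
HYPOTHESES = [
    (2, lambda h: 0.5),                               # "fair coin" (short program)
    (3, lambda h: 0.9),                               # "mostly ones"
    (3, lambda h: 0.1),                               # "mostly zeros"
    (5, lambda h: 1.0 if len(h) % 2 == 0 else 0.0),   # "alternating 1,0,1,0,..."
]

def mixture_prediction(history, weights):
    """Weighted P(next bit = 1) under the current posterior."""
    total = sum(weights)
    return sum(w * p(history) for w, (_, p) in zip(weights, HYPOTHESES)) / total

def update(history, bit, weights):
    """Bayesian update: reweight each hypothesis by its likelihood of `bit`."""
    new = []
    for w, (_, p) in zip(weights, HYPOTHESES):
        p1 = p(history)
        new.append(w * (p1 if bit == 1 else 1.0 - p1))
    return new

# Occam prior: shorter descriptions get exponentially more weight.
weights = [2.0 ** -length for length, _ in HYPOTHESES]

# A compressible data stream: alternating bits.
history = []
for bit in [1, 0, 1, 0, 1, 0, 1, 0]:
    print(f"P(next=1) = {mixture_prediction(history, weights):.3f}, saw {bit}")
    weights = update(history, bit, weights)
    history.append(bit)

After a few bits, nearly all the posterior weight sits on the short
"alternating" program -- which is exactly what "the data is compressible"
buys you.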

If the universe is totally random, then NO AI system can display significant
intelligence in it.

In a random universe, the theorems just tell you that AIXItl doesn't do any
worse than other AI systems -- because they all suck.

But if a universe displays probabilistic rather than deterministic patterns,
then AIXItl (and other AI systems such as Novamente) can do quite well.
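
To make those last two points concrete, here's a quick numerical sketch.
The coin-flip environments and the crude frequency-counting "agent" are
mine, purely for illustration -- nothing from the AIXI papers:

import random

def run(env_bit, steps=10_000, seed=0):
    """Reward per step for a bettor that guesses the majority bit so far."""
    rng = random.Random(seed)
    ones = 0
    reward = 0
    for t in range(1, steps + 1):
        guess = 1 if ones * 2 >= t - 1 else 0   # bet on the majority bit
        bit = env_bit(rng)
        reward += 1 if guess == bit else 0
        ones += bit
    return reward / steps

random_env = lambda rng: rng.randint(0, 1)               # fair coin: no pattern
biased_env = lambda rng: 1 if rng.random() < 0.7 else 0  # probabilistic pattern

print("random universe :", run(random_env))   # hovers near 0.5: chance level
print("biased universe :", run(biased_env))   # approaches 0.7: pattern exploited

The same bettor that is pinned at chance level in the patternless
universe extracts nearly all the available reward once there is a
probabilistic regularity to latch onto.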

-- Ben



