Tuesday, February 11, 2003, 11:05:04 PM, Cliff Stabbert wrote:

SL> However even within this scenario the concept of "fixed goal" is
SL> something that we need to be careful about.  The only real goal
SL> of the AIXI system is to get as much reward as possible from its
SL> environment.  A "goal" is just our description of what that means.
SL> If the AI gets reward for winning at chess then quickly it will get
SL> very good at chess.  If it then starts getting punished for winning
SL> it will then quickly switch to losing at chess.  Has the goal of
SL> the system changed?  Perhaps not.  Perhaps the goal always was:
SL> Win at chess up to point x in time and then switch to losing.
SL> So we could say that the goal was always fixed, it's just that up
SL> to point x in time the AI thought the goal was to always win and it
SL> wasn't until after point x in time that it realised that the real
SL> goal was actually slightly more complex.  In which case does it make
SL> any sense to talk about AIXI as being limited by having fixed goals?
SL> I think not.

I should add that the example you gave is what raised my questions: it
seems to me an essentially untrainable case, because it presents a
*non-repeatable* scenario.

If I were to give an AGI a 1,000-page book whose first 672 pages each
bore the word "Not", it might predict that the 673rd page will also read
"Not".  But I could choose to make that page blank, and in that
scenario, as in the one above, I don't see how any algorithm, no matter
how clever, could make the right prediction (unless it included my
realtime brainscans, etc.).
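
To make that concrete, here's a rough Python sketch (my own
illustration, nothing to do with AIXI's actual machinery) of a simple
Laplace-rule predictor facing the book.  Any learner that has picked up
on the regularity must assign the blank page low probability, which is
exactly why the author can always surprise it:

    def laplace_prob(history, symbol, alphabet=("Not", "")):
        """Laplace rule of succession: P(next page == symbol | history)."""
        return (history.count(symbol) + 1) / (len(history) + len(alphabet))

    pages = ["Not"] * 672                       # first 672 pages all read "Not"
    print(laplace_prob(pages, "Not"))           # ~0.9985 for another "Not"
    print(laplace_prob(pages, ""))              # ~0.0015 for a blank page
    # The better the predictor has learned the pattern, the more certainly
    # it is wrong when the writer deliberately deviates on page 673.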

--
Cliff
