On 25/07/2013 16:04, Matt Mahoney wrote:

> AIXI won't solve AGI because language, vision, robotics, and art
> are not goal directed optimization processes. Reinforcement learning
> is responsible for a very tiny fraction of what you know because it is
> a low bandwidth signal.

AIXI has sensory channels besides its reward signal, though.

> AIXI is not a good model of reinforcement learning in humans and
> other animals. A positive (negative) reinforcement signal makes you
> more (less) likely to repeat behavior that preceded it. That is not
> the same as rationally changing your behavior in a way that increases
> expected reinforcement. If they were the same thing, then your desire
> to use a drug would not depend on whether you have already tried it.
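That distinction can be sketched in a toy comparison (a hypothetical illustration - the action names, rewards, and probabilities are made up, not from the discussion):

```python
# Toy contrast: model-free reinforcement vs. expected-reward planning.
# All names and numbers below are illustrative assumptions.

def model_free_value(experiences, action, default=0.0):
    """Crude model-free estimate: average reward actually received after
    taking `action`. Untried actions keep the default value, so the
    agent's 'desire' depends on whether it has already tried them."""
    rewards = [r for a, r in experiences if a == action]
    return sum(rewards) / len(rewards) if rewards else default

def planner_value(world_model, action):
    """An idealised AIXI-style agent evaluates expected reward from its
    model of the world, whether or not the action was ever tried."""
    return sum(p * r for p, r in world_model[action])

# This agent has never tried 'drug', only 'food'.
experiences = [("food", 1.0), ("food", 0.8)]
world_model = {"drug": [(0.9, 2.0), (0.1, -10.0)],   # believed outcomes
               "food": [(1.0, 0.9)]}

print(model_free_value(experiences, "drug"))  # 0.0 - no experience, no pull
print(planner_value(world_model, "drug"))     # 0.8 - the model alone drives it
```

The model-free learner has no pull toward the untried action, while the planner is drawn to it purely on the strength of its world model - which is the asymmetry Matt describes.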

It seems as though the AIXI approach is better.  That's what we might
expect from an idealised model.

> Not all reinforcement signals are the same. Nausea will make you
> less likely to eat something you ate an hour earlier. Electric shock
> would have a different effect.

That's true, but that's modeled well as "instinctive" priors.  AIXI has
sensory channels to provide the relevant metadata.  The relevant knowledge
could be built in - or it could be learned from scratch - as in AIXI itself.
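The idea of innate, signal-specific metadata could be sketched like this (the window lengths and action names are hypothetical assumptions, purely for illustration):

```python
# Toy credit assignment with signal-specific eligibility windows,
# standing in for "instinctive" priors. Window lengths are made up.

WINDOWS = {"nausea": 3600.0,  # blames food eaten up to an hour earlier
           "shock": 2.0}      # blames only the immediately preceding act

def blamed_actions(history, signal, signal_time):
    """Return the actions that fall inside the signal's innate window,
    i.e. the behaviour this kind of reinforcement gets attributed to."""
    window = WINDOWS[signal]
    return [a for t, a in history if signal_time - window <= t < signal_time]

history = [(0.0, "eat berries"), (3599.0, "touch wire")]
print(blamed_actions(history, "nausea", 3600.0))  # both actions blamed
print(blamed_actions(history, "shock", 3600.0))   # only 'touch wire'
```

Whether such windows are wired in as priors or induced from experience is exactly the built-in-versus-learned-from-scratch choice above.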
--
__________
 |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424