1. AIXI won't solve AGI because AIXI is incomputable, and its
computable approximations, such as AIXItl and MC-AIXI, are
intractable beyond toy problems (a brute-force sketch of why follows
this list).

2. AIXI won't solve AGI because language, vision, robotics, and art
are not goal-directed optimization processes. Reinforcement learning
accounts for only a tiny fraction of what you know because reward is
a low-bandwidth signal compared to your sensory input (a
back-of-envelope comparison follows this list).

3. AIXI is not a good model of reinforcement learning in humans and
other animals. A positive (negative) reinforcement signal makes you
more (less) likely to repeat the behavior that preceded it. That is
not the same as rationally changing your behavior in a way that
increases expected reinforcement. If they were the same thing, your
desire to use a drug would not depend on whether you had already
tried it: an expected-reward maximizer would want the drug from its
description alone, while the craving in humans develops only after
the reinforcement has actually been experienced (the two rules are
contrasted in code after this list).

4. Not all reinforcement signals are the same. Nausea makes you less
likely to eat a food you ate an hour earlier, even though many other
behaviors intervened in between. An electric shock an hour after the
meal would create no such food aversion; it would punish whatever you
were doing just before the shock instead (see the last sketch below).

5. AIXI apparently can't even play Pac-Man very well: the MC-AIXI-CTW
experiments used a simplified, partially observable version of the
game rather than the real thing.
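
Regarding point 1, here is a toy, fully brute-force Python sketch of
the kind of value computation AIXI defines, with the incomputable
Solomonoff prior replaced by a 2^-length weight over all bitstrings
up to length L, each read through an invented deterministic
environment rule. None of the names or semantics below are Hutter's
actual construction; the point is that even this caricature
enumerates about 2^(L+1) models and then runs an expectimax tree on
top of them.

from itertools import product

ACTIONS = (0, 1)

def programs(L):
    """All bitstrings of length 1..L, used as toy environment models."""
    for n in range(1, L + 1):
        for bits in product((0, 1), repeat=n):
            yield bits

def step(prog, t, action):
    """Invented semantics: the percept is the program bit at time t,
    XORed with the action; the percept doubles as the reward."""
    return prog[t % len(prog)] ^ action

def value(envs, t, depth):
    """Unnormalized expectimax: max over actions of the prior-weighted
    reward-to-go, recursing only on the environments consistent with
    each possible percept (i.e., Bayesian conditioning)."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        by_percept = {}
        for prog, w in envs:
            by_percept.setdefault(step(prog, t, a), []).append((prog, w))
        total = 0.0
        for r, group in by_percept.items():
            mass = sum(w for _, w in group)
            total += mass * r + value(group, t + 1, depth - 1)
        best = max(best, total)
    return best

weighted = [(p, 2.0 ** -len(p)) for p in programs(6)]  # 126 models at L=6
print(value(weighted, 0, depth=4))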
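
For point 2, the bandwidth claim is easy to make concrete. Both
rates below are illustrative assumptions rather than measurements;
the 10^7 bits/s figure is the order of magnitude often quoted for
the optic nerve alone.

# Rough bandwidth comparison. ~1 bit/s of reinforcement signal versus
# ~10^7 bits/s of sensory input, over ~30 years of waking life.

SECONDS = 30 * 365 * 16 * 3600        # about 6.3e8 seconds
reward_bps = 1                        # assumed reinforcement bandwidth
sensory_bps = 10_000_000              # assumed sensory bandwidth

print(f"reinforcement: {reward_bps * SECONDS:.1e} bits")   # ~6.3e+08
print(f"sensory input: {sensory_bps * SECONDS:.1e} bits")  # ~6.3e+15
print(f"fraction:      {reward_bps / sensory_bps:.0e}")    # 1e-07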
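
The distinction in point 3 can be put side by side in a few lines.
The reward value and the TD-style learning rule are illustrative
choices, not a model of any particular drug.

# (a) An expected-reward maximizer acts on a *described* reward it has
# never experienced. (b) A model-free learner's preference moves only
# after the behavior is actually followed by the signal.

described_reward = 10.0        # "this drug feels wonderful" -- by report only

# (a) Rational expected-reward maximization: craves it before trying it.
print(described_reward > 0)    # True

# (b) Model-free reinforcement with a TD(0)-style update.
alpha = 0.5                    # learning rate
value = 0.0                    # learned value of "take the drug"
print(value > 0)               # False: no craving before first use

experienced_reward = described_reward          # assume the report is accurate
value += alpha * (experienced_reward - value)  # first use delivers the signal
print(value > 0)               # True: the preceding behavior is strengthened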
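
Point 4 amounts to saying that different signals assign credit over
different time windows and to different kinds of events, which is
roughly the classic Garcia taste-aversion finding. The windows and
the event encoding below are invented for illustration.

# Toy credit assignment: nausea reaches back hours and binds to taste
# events; shock reaches back seconds and binds to recent actions.

events = [          # (seconds_ago, event)
    (3600, "taste:shrimp"),
    (5,    "motor:reach"),
]

WINDOW = {
    "nausea": (7200, "taste"),   # assumed: hours back, taste events only
    "shock":  (10,   "motor"),   # assumed: seconds back, recent actions only
}

def punished_by(signal):
    horizon, kind = WINDOW[signal]
    return [e for t, e in events if t <= horizon and e.startswith(kind)]

print(punished_by("nausea"))  # ['taste:shrimp'] -> aversion to the food
print(punished_by("shock"))   # ['motor:reach']  -> avoidance of the action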

-- 
-- Matt Mahoney, [email protected]

