(moved to the magic list:
https://groups.google.com/forum/?hl=en#!topic/magic-list/mJXerSgHE4Q
and your post there:
https://groups.google.com/forum/?hl=en#!topic/magic-list/J1GTdRFgV1k
)


On Thu, May 1, 2014 at 2:03 AM, Alex Miller <[email protected]> wrote:

> Laurent,
>
> I agree that Solomonoff induction should do a good job predicting the
> next bit (especially if the programming language is symmetric with respect
> to 1s and 0s).
>
> However, if I'm not mistaken, AIXI relies on SI being able to model all
> possible futures optimally, rather than just the next bit. Is this wrong?
> (I haven't yet come up with a full reinforcement-learning counterexample
> that would demonstrate this presumed failure, though. Perhaps it could be
> a game against an intelligent opponent with the 2x2 identity matrix as
> the payoff matrix.)
>
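(For concreteness, here is a minimal sketch of the kind of game described in the quoted paragraph: the agent's payoff matrix is the 2x2 identity, so the agent scores 1 only when its action matches the opponent's. The opponent model below is a hypothetical stand-in, not AIXI or SI; it just illustrates how an adversary that predicts a deterministic agent perfectly can deny it all reward.)

```python
# Agent's payoff matrix: the 2x2 identity. The agent earns 1 only when
# its action equals the opponent's action, 0 otherwise.
PAYOFF = [[1, 0],
          [0, 1]]

def agent_payoff(agent_action, opponent_action):
    """Agent's reward for one round of the matching game."""
    return PAYOFF[agent_action][opponent_action]

def adversarial_opponent(predicted_agent_action):
    """An 'intelligent' opponent (stand-in, not AIXI): it predicts the
    agent's move and plays the opposite action to force a mismatch."""
    return 1 - predicted_agent_action

# A deterministic agent that always plays 0 is perfectly predictable,
# so the adversarial opponent mismatches it every round.
total = 0
agent_action = 0
for _ in range(10):
    opponent_action = adversarial_opponent(agent_action)
    total += agent_payoff(agent_action, opponent_action)

print(total)  # a perfectly predicted deterministic agent scores 0
```

The point of the sketch is only that per-round payoff against an intelligent opponent depends on modeling the opponent's whole strategy, not merely on predicting a single next bit well.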



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now