Hey Alex,

I'm not sure exactly where Solomonoff induction ends and AIXI begins,
so what follows may conflate the two.

It looks to me like your question is making a huge philosophical assumption
-- you are assuming it is possible to have a Bernoulli distribution that
can't be simulated on a Turing machine.

If such a thing can exist, and you feed it to AIXI, I agree with you about
what you will get.

If such a thing can't exist, then AIXI (run for long enough) will end up
with a perfect predictor of your Bernoulli draws.


To put it another way: if you feel that some things are inherently
non-modelable, you could extend AIXI to work in that world view.
Specifically, you could allow hypotheses that predict floats (e.g., 0.5)
for the next bit. If that were allowed, you would get the right answer here.
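To make that concrete, here is a minimal sketch (my illustration, not part of AIXI proper) of a predictor that outputs a float for the next bit. It uses Laplace's rule of succession, which is the Bayes-optimal prediction under a uniform prior over the Bernoulli parameter p, and it converges to the true p:

```python
import random

def laplace_predictor(bits):
    # Laplace's rule of succession: P(next bit = 1) = (#ones + 1) / (n + 2).
    # This is the posterior mean under a uniform prior over p.
    return (sum(bits) + 1) / (len(bits) + 2)

random.seed(0)
p = 0.7  # true Bernoulli parameter, chosen arbitrarily for this sketch
bits = [1 if random.random() < p else 0 for _ in range(10_000)]
print(laplace_predictor(bits))  # converges toward the true p = 0.7
```

No deterministic hypothesis ever "wins" on such a sequence, but the float prediction is still the right answer in the limit.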


Cheers,
David


On Mon, Apr 28, 2014 at 3:35 AM, Anastasios Tsiolakidis <[email protected]> wrote:

> I think all three previous posts confirm my intuition that AIXI is a
> madman's AGI and would just not work, period. Surely your expectation,
> Alex, is unjustified: any particular pattern could be hiding in any length
> of sample from the distribution, and deciding that a phenomenon is random
> is the least common denominator; it is self-defeating for an intelligence.
> Anyway, AIXI is the last thing that will have an impact, if any, on
> applied AGI; let's not waste our time with it.
>
> Of course, being able to investigate where some random digits are coming
> from is a whole different ball game, and I wholeheartedly support
> epistemological investigations, even of the most abstract and mathematical
> kind. But AIXI, ....
>
> AT
>
> On 28.04.2014, at 11:39, "Tim Tyler" <[email protected]> wrote:
>
> On 28/04/2014 03:48, Alex Miller wrote:
>
>  Suppose we give a long sample from the Bernoulli distribution to AIXI.
> I would expect an intelligent agent to figure out that it cannot see
> any patterns there and to continue the sequence accordingly.
>
>
> If it is trying to accurately predict the bits, consistently betting on 0
> (if p <= 0.5)
> or 1 (if p >= 0.5) is actually the best strategy against the Bernoulli
> distribution.
> --
> __________
>  |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.
>
>
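Tim's quoted claim (that consistently betting the more likely bit is the best strategy for predictive accuracy) is easy to check with a quick simulation; the value p = 0.7 below is an arbitrary illustrative choice:

```python
import random

random.seed(1)
p = 0.7  # P(bit = 1); arbitrary illustrative value >= 0.5
n = 100_000
bits = [1 if random.random() < p else 0 for _ in range(n)]

# Strategy 1: always bet the more likely bit (1, since p >= 0.5).
fixed_hits = sum(b == 1 for b in bits)

# Strategy 2: "probability matching" -- bet 1 with probability p each time.
match_hits = sum(b == (1 if random.random() < p else 0) for b in bits)

print(fixed_hits / n)  # about max(p, 1 - p) = 0.7
print(match_hits / n)  # about p**2 + (1 - p)**2 = 0.58
```

Always betting the majority bit scores about max(p, 1 - p), while probability matching scores about p^2 + (1 - p)^2, which is strictly worse whenever p != 0.5.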



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424