Quick argument for the same point: AIXI is uncomputable, but it only
considers computable models. The anthropic principle requires a
rational entity to include itself in every model it assigns nonzero
probability. AIXI obviously cannot do so, since an uncomputable entity
cannot appear in any computable model.

This argument does not apply to computable approximations of AIXI,
however. Still, those approximations might fail for similar reasons:
a strict AIXI approximation approximates an entity that cannot reason
about itself, so any ability to do so is an artifact of the
approximation.

On Fri, Jun 20, 2008 at 8:09 PM, Wei Dai <[EMAIL PROTECTED]> wrote:
> Eliezer S. Yudkowsky pointed out in a 2003 "agi" post titled "Breaking
> Solomonoff induction... well, not really" [1] that
> Solomonoff Induction is flawed because it fails to incorporate anthropic
> reasoning. But apparently he thought this doesn't "really" matter because in
> the long run Solomonoff Induction will converge with the correct reasoning.
> Here I give two counterexamples to show that this convergence does not
> necessarily occur.
>
> The first example is a thought experiment where an induction/prediction
> machine is first given the following background information: Before
> predicting each new input symbol, it will be copied 9 times. Each copy will
> then receive the input "1", while the original will receive "0". The 9
> copies that received "1" will be put aside, while the original will be
> copied 9 more times before predicting the next symbol, and so on. To a human
> upload, or a machine capable of anthropic reasoning, this problem is
> simple: no matter how many "0"s it sees, it should always predict "1" with
> probability 0.9, and "0" with probability 0.1. But with Solomonoff
> Induction, as the number of "0"s it receives goes to infinity, the
> probability it predicts for "1" being the next input must converge to 0.
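
For concreteness, here is a minimal sketch of the anthropic answer to
the first example (my own toy simulation, not anything from your post;
it assumes the reasoner treats itself as equally likely to be any of
the ten machines produced in a round):

    # Toy simulation of the copying experiment. Anthropic assumption:
    # after a round there are 10 machines (9 copies that saw "1", 1
    # original that saw "0"), and "I" am equally likely to be any of them.
    import random

    def next_symbol():
        return "1" if random.randrange(10) < 9 else "0"

    trials = 100_000
    ones = sum(next_symbol() == "1" for _ in range(trials))
    print(ones / trials)  # ~0.9, the anthropic prediction for the next input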
>
> In the second example, an intelligence wakes up with no previous memory and
> finds itself in an environment that apparently consists of a set of random
> integers and some of their factorizations. It finds that whenever it outputs
> a factorization for a previously unfactored number, it is rewarded. To a
> human upload, or a machine capable of anthropic reasoning, it would be
> immediately obvious that this cannot be the true environment, since such an
> environment is incapable of supporting an intelligence such as itself.
> Instead, a more likely explanation is that it is being used by another
> intelligence as a codebreaker. But Solomonoff Induction is incapable of
> reaching such a conclusion no matter how much time we give it, since it
> takes fewer bits to algorithmically describe just a set of random numbers
> and their factorizations, than such a set embedded within a universe capable
> of supporting intelligent life. (Note that I'm assuming that these numbers
> are truly random, for example generated using quantum coin flips.)
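
To make the second environment concrete, here is a toy version as the
agent would see it (again my own illustrative sketch; small random
primes drawn from a cryptographic RNG stand in for the quantum coin
flips, and all names are made up):

    # Toy "codebreaker" environment: the agent sees random composites
    # and is rewarded for outputting their factors.
    import secrets

    def random_prime(bits=16):
        # Rejection-sample a small odd prime; secrets stands in for
        # genuinely random (e.g. quantum) coin flips.
        while True:
            n = secrets.randbits(bits) | 1
            if n > 2 and all(n % d for d in range(3, int(n ** 0.5) + 1, 2)):
                return n

    def new_puzzle():
        p, q = random_prime(), random_prime()
        return p * q, {p, q}

    def reward(answer, factors):
        # 1 if the agent's answer is a correct factorization, else 0
        return 1 if set(answer) == factors else 0

    n, factors = new_puzzle()
    print(n, reward(tuple(factors), factors))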
>
> A different way to "break" Solomonoff Induction takes advantage of the fact
> that it restricts Bayesian reasoning to computable models. I wrote about
> this in "is induction unformalizable?" [2] on the "everything" mailing list.
> Abram Demski also made similar points in recent posts on this mailing list.
>
> [1] http://www.mail-archive.com/agi@v2.listbox.com/msg00864.html
> [2]
> http://groups.google.com/group/everything-list/browse_frm/thread/c7442c13ff1396ec/804e134c70d4a203


