--- On Fri, 6/20/08, Wei Dai <[EMAIL PROTECTED]> wrote:

> Eliezer S. Yudkowsky pointed out in a 2003 "agi" post titled "Breaking
> Solomonoff induction... well, not really" [1] that Solomonoff Induction is
> flawed because it fails to incorporate anthropic reasoning. But apparently
> he thought this doesn't "really" matter because in the long run Solomonoff
> Induction will converge with the correct reasoning. Here I give two
> counterexamples to show that this convergence does not necessarily occur.

I disagree. AIXI says that the optimal behavior of an agent maximizing 
accumulated reward from a Turing-computable environment, while exchanging 
symbols with it, is to guess at each step that the environment is simulated by 
the shortest program consistent with the interaction so far. AIXI assumes the 
agent is immortal, because it may postpone reward arbitrarily long. The 
anthropic principle says that events which would have led to the agent's 
non-existence could not have occurred, and therefore had probability zero. This 
is inconsistent with Solomonoff induction except in the limit where the agent 
lives forever.
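A toy sketch of the shortest-consistent-program idea, with a tiny hypothesis 
class of repeating bit patterns standing in for all programs. The function 
names and the 2^-length prior over patterns are my own illustration, not part 
of AIXI's definition:

```python
from itertools import product

# Toy Solomonoff-style predictor: hypotheses are repeating bit
# patterns, weighted 2^-(pattern length) in place of a real
# universal prior over all programs.

def hypotheses(max_len=8):
    """All repeating bit patterns up to max_len bits."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predict(history, symbol, max_len=8):
    """P(next == symbol | history) under the toy prior."""
    total = agree = 0.0
    for pat in hypotheses(max_len):
        # Keep only patterns that reproduce the history so far.
        if all(history[i] == pat[i % len(pat)] for i in range(len(history))):
            w = 2.0 ** -len(pat)          # shorter patterns weigh more
            total += w
            if pat[len(history) % len(pat)] == symbol:
                agree += w
    return agree / total

print(predict("000000", "0"))   # close to 1: "all zeros" is the shortest fit
print(predict("0101", "0"))     # well above 1/2: period-2 pattern dominates
```

The point is only that the shortest consistent hypothesis comes to dominate 
the prediction, which is the behavior at issue in the examples below.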

> The first example is a thought experiment where an induction/prediction
> machine is first given the following background information: Before
> predicting each new input symbol, it will be copied 9 times. Each copy will
> then receive the input "1", while the original will receive "0". The 9
> copies that received "1" will be put aside, while the original will be
> copied 9 more times before predicting the next symbol, and so on. To a
> human upload, or a machine capable of anthropic reasoning, this problem is
> simple: no matter how many "0"s it sees, it should always predict "1" with
> probability 0.9, and "0" with probability 0.1. But with Solomonoff
> Induction, as the number of "0"s it receives goes to infinity, the
> probability it predicts for "1" being the next input must converge to 0.

Eliezer asked a similar question on SL4. If an agent flips a fair quantum coin 
and is copied 10 times if it comes up heads, what should the agent's 
subjective probability be that the coin will come up heads? By the anthropic 
principle, it should be 0.9: if you repeat the experiment many times and 
randomly sample one of the resulting agents, that agent will most likely have 
seen heads about 90% of the time.
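The counting argument can be checked with a small Monte Carlo simulation. This 
is a sketch under the assumption that "copied 10 times" leaves ten observers 
who remember heads against one who remembers tails, so the sampled fraction 
should be 10/11, roughly 0.91:

```python
import random

# Each trial flips a fair coin; heads produces 10 copies of the agent
# (all remembering heads), tails leaves the single original
# (remembering tails). Sampling a random agent from the resulting
# population then gives P(remembers heads) = 10/11, not 0.5.

def population(trials, copies=10, seed=0):
    rng = random.Random(seed)
    agents = []
    for _ in range(trials):
        if rng.random() < 0.5:                 # heads
            agents.extend(["heads"] * copies)
        else:                                  # tails
            agents.append("tails")
    return agents

agents = population(100_000)
frac = agents.count("heads") / len(agents)
print(round(frac, 3))   # about 10/11, i.e. roughly 0.909
```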

AIXI is not computable, so humans use the following heuristic approximation: if 
an experiment is performed N times and a certain outcome occurs R times, with N 
large, then the probability of that outcome on the next trial is estimated as 
R/N. This is not the "right" answer so much as the way we are programmed to 
think. Remember that probability is just a mathematical approximation of 
uncertainty; in reality, we cannot assign numerical values to uncertainty. A 
Solomonoff universal prior is just another model, one which depends on a choice 
of universal Turing machine (and happens to be uncomputable as well).

In your example, "putting aside" an agent is the same as killing it. So for an 
agent applying the R/N heuristic, the probability of observing "1" correctly 
converges to 0. AIXI/Solomonoff induction does not apply, because the agent is 
not in the limit case of life expectancy approaching infinity.
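From the surviving original's point of view, the R/N heuristic does drive the 
estimate for "1" to 0, since it is the one agent that receives "0" every round. 
A minimal sketch (the helper name is mine):

```python
# R/N frequency heuristic, seen from the surviving original: after n
# rounds it has observed "1" zero times out of n trials.

def rn_estimate(observations, symbol):
    """Frequency estimate P(symbol) = R/N over past observations."""
    if not observations:
        return None                       # undefined before any data
    return observations.count(symbol) / len(observations)

for n in [1, 10, 100, 1000]:
    history = ["0"] * n                   # the original only ever sees "0"
    print(n, rn_estimate(history, "1"))   # 0.0 at every n
```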

> In the second example, an intelligence wakes up with no previous memory and
> finds itself in an environment that apparently consists of a set of random
> integers and some of their factorizations. It finds that whenever it
> outputs a factorization for a previously unfactored number, it is rewarded.
> To a human upload, or a machine capable of anthropic reasoning, it would be
> immediately obvious that this cannot be the true environment, since such an
> environment is incapable of supporting an intelligence such as itself.
> Instead, a more likely explanation is that it is being used by another
> intelligence as a codebreaker. But Solomonoff Induction is incapable of
> reaching such a conclusion no matter how much time we give it, since it
> takes fewer bits to algorithmically describe just a set of random numbers
> and their factorizations, than such a set embedded within a universe
> capable of supporting intelligent life. (Note that I'm assuming that these
> numbers are truly random, for example generated using quantum coin flips.)

A human upload has more information than the intelligence in your example, 
because its memories are preserved. Under AIXI it can never guess the simpler 
model, because that model would be inconsistent with its past observations. 
There is no contradiction.

> A different way to "break" Solomonoff Induction takes advantage of the fact
> that it restricts Bayesian reasoning to computable models. I wrote about
> this in "is induction unformalizable?" [2] on the "everything" mailing
> list. Abram Demski also made similar points in recent posts on this mailing
> list.
> 
> [1] http://www.mail-archive.com/[email protected]/msg00864.html
> [2] http://groups.google.com/group/everything-list/browse_frm/thread/c7442c13ff1396ec/804e134c70d4a203

It is true that we can't prove that the universe is Turing computable. But so 
far we have not observed any uncomputable physics. If the universe is not 
Turing computable, then AIXI would not apply. But again, Occam's razor appears 
to work in practice. Both of these observations suggest, but don't prove, that 
the universe is a simulation.


-- Matt Mahoney, [EMAIL PROTECTED]




-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now