Re: [agi] Breaking Solomonoff induction (really)

2008-06-22 Thread Kaj Sotala
On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Eliezer asked a similar question on SL4. If an agent flips a fair quantum
 coin and is copied 10 times if it comes up heads, what should be the agent's
 subjective probability that the coin will come up heads? By the anthropic
 principle, it should be 0.9. That is because if you repeat the experiment
 many times and you randomly sample one of the resulting agents, it is highly
 likely that it will have seen heads about 90% of the time.

That's the wrong answer, though (as I believe I pointed out when the
question was asked over on SL4). The copying is just a red herring; it
doesn't affect the probability at all.

Since this question seems to confuse many people, I wrote a short
Python program simulating it:
http://www.saunalahti.fi/~tspro1/Random/copies.py

Set the number of trials to whatever you like (if it's high, you might
want to comment out the "A randomly chosen agent has seen..." lines to
make it run faster); the ratio will converge to 1:1 for any sufficiently
large number of trials.
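
The linked copies.py isn't reproduced in the archive, so the following is
only a minimal sketch of what such a simulation might look like (the names
and structure are my own guesses, not the actual program):

import random

def run_trials(n_trials, copies_on_heads=10):
    """Simulate the coin-and-copying setup.

    Each trial flips a fair coin.  On heads, the agent is copied
    copies_on_heads times (each copy remembers seeing heads); on tails
    there is only the single original, who remembers seeing tails.
    """
    heads_trials = tails_trials = 0
    agents_saw_heads = agents_saw_tails = 0
    for _ in range(n_trials):
        if random.random() < 0.5:            # heads
            heads_trials += 1
            agents_saw_heads += copies_on_heads
        else:                                # tails
            tails_trials += 1
            agents_saw_tails += 1
    print("coin outcomes,  heads : tails =", heads_trials, ":", tails_trials)
    print("agents who saw, heads : tails =", agents_saw_heads, ":", agents_saw_tails)

if __name__ == "__main__":
    # The coin-outcome ratio converges to 1:1; the agent-count ratio
    # converges to copies_on_heads : 1 (here 10:1).
    run_trials(100_000)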




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://www.mfoundation.org/




Re: [agi] Breaking Solomonoff induction (really)

2008-06-22 Thread Matt Mahoney
--- On Sun, 6/22/08, Kaj Sotala [EMAIL PROTECTED] wrote:

 On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  Eliezer asked a similar question on SL4. If an agent flips a fair
  quantum coin and is copied 10 times if it comes up heads, what should
  be the agent's subjective probability that the coin will come up
  heads? By the anthropic principle, it should be 0.9. That is because
  if you repeat the experiment many times and you randomly sample one
  of the resulting agents, it is highly likely that it will have seen
  heads about 90% of the time.
 
 That's the wrong answer, though (as I believe I pointed out when the
 question was asked over on SL4). The copying is just a red herring; it
 doesn't affect the probability at all.
 
 Since this question seems to confuse many people, I wrote a short
 Python program simulating it:
 http://www.saunalahti.fi/~tspro1/Random/copies.py

The question was about subjective anticipation, not the actual outcome. It 
depends on how the agent is programmed. If you extend your experiment so that 
agents perform repeated, independent trials and remember the results, you will 
find that on average agents will remember the coin coming up heads 99% of the 
time. The agents have to reconcile this evidence with their knowledge that the 
coin is fair.
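
A rough sketch of that extension, under my own assumption that every agent
flipping heads is replaced by 10 copies (the remembered fraction then works
out to copies/(copies+1), so larger copy counts push it closer to 1):

import random
from collections import defaultdict

def remembered_heads_fraction(n_trials=20, copies_on_heads=10, n_samples=100_000):
    # Expected-population version (avoids enumerating the exponentially many
    # agents): pop[k] = expected number of agents remembering k heads so far.
    pop = {0: 1.0}
    for _ in range(n_trials):
        nxt = defaultdict(float)
        for k, count in pop.items():
            nxt[k + 1] += 0.5 * count * copies_on_heads   # heads branch: copied
            nxt[k] += 0.5 * count                         # tails branch: single
        pop = dict(nxt)
    # Sample resulting agents uniformly (i.e. weighted by group size) and
    # average the fraction of heads in their memories.
    ks, weights = zip(*pop.items())
    sampled = random.choices(ks, weights=list(weights), k=n_samples)
    return sum(k / n_trials for k in sampled) / n_samples

# Converges to copies_on_heads / (copies_on_heads + 1) as n_trials grows.
print(remembered_heads_fraction())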

It is a trickier question without multiple trials. The agent then needs to
model its own thought process (which is impossible for any Turing-computable
agent to do with 100% accuracy). If the agent knows that it is programmed so
that, after observing an outcome R times out of N, it expects the probability
of that outcome to be R/N, then it would conclude "I know that I would observe
heads 99% of the time, and therefore I expect heads with probability 0.99."
But this programming would not make sense in a scenario with conditional
copying.

Here is an equivalent question. If you flip a fair quantum coin, and you are 
killed with 99% probability conditional on the coin coming up tails, then, when 
you look at the coin, what is your subjective anticipation of seeing heads?
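
A quick sketch (mine, not part of the original question) that checks the
arithmetic of this reformulation by counting what the survivors saw:

import random

def survivor_heads_fraction(n_trials=1_000_000, kill_prob_on_tails=0.99):
    # Among observers who survive to look at the coin, what fraction saw heads?
    saw_heads = saw_tails = 0
    for _ in range(n_trials):
        heads = random.random() < 0.5
        if heads:
            saw_heads += 1
        elif random.random() >= kill_prob_on_tails:   # survived despite tails
            saw_tails += 1
    return saw_heads / (saw_heads + saw_tails)

# Expected value: 0.5 / (0.5 + 0.5 * 0.01) = 0.990...
print(survivor_heads_fraction())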


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Breaking Solomonoff induction (really)

2008-06-21 Thread William Pearson
2008/6/21 Wei Dai [EMAIL PROTECTED]:
 A different way to break Solomonoff Induction takes advantage of the fact
 that it restricts Bayesian reasoning to computable models. I wrote about
 this in "is induction unformalizable?" [2] on the everything mailing list.
 Abram Demski also made similar points in recent posts on this mailing list.


I think this objection is a lot stronger when you actually move to an
implementable variant of Solomonoff Induction (it has started to make me
chuckle that a model of induction makes assumptions about the universe
that would have to be broken in order to implement it). When you restrict
the memory space of a system, many more functions become uncomputable
with respect to that system. It is not a safe assumption that the world
is computable in this restricted sense, i.e. computable with respect to a
finite system.

Also, Solomonoff induction ignores any potential physical effects of the
computation, as does all of probability theory. See section 5 of this
draft paper of mine for a formalised example of where things could go
wrong.

http://codesoup.sourceforge.net/easa.pdf

It is not quite an anthropic problem, but it is closely related. I'll
tentatively label it the observer-world interaction problem: the exact
nature of the world you see depends on the type of system you happen to
be.

All of these are problems with tacit (à la Dennett) representations of
beliefs embedded within the Solomonoff induction formalism.

  Will Pearson




Re: [agi] Breaking Solomonoff induction (really)

2008-06-21 Thread Abram Demski
Quick argument for the same point: AIXI is uncomputable, but only
considers computable models. The anthropic principle requires a
rational entity to include itself in all models that are given nonzero
probability. AIXI obviously cannot do so.

That argument does not apply to computable approximations of AIXI,
however, though they might fail for similar reasons. (Strict AIXI
approximations are approximations of an entity that can't reason about
itself, so any ability to do so is an artifact of the approximation.)

On Fri, Jun 20, 2008 at 8:09 PM, Wei Dai [EMAIL PROTECTED] wrote:
 Eliezer S. Yudkowsky pointed out in a 2003 agi post titled "Breaking
 Solomonoff induction... well, not really" [1] that
 Solomonoff Induction is flawed because it fails to incorporate anthropic
 reasoning. But apparently he thought this doesn't really matter because in
 the long run Solomonoff Induction will converge with the correct reasoning.
 Here I give two counterexamples to show that this convergence does not
 necessarily occur.

 The first example is a thought experiment where an induction/prediction
 machine is first given the following background information: Before
 predicting each new input symbol, it will be copied 9 times. Each copy will
 then receive the input "1", while the original will receive "0". The 9
 copies that received "1" will be put aside, while the original will be
 copied 9 more times before predicting the next symbol, and so on. To a human
 upload, or a machine capable of anthropic reasoning, this problem is
 simple: no matter how many "0"s it sees, it should always predict "1" with
 probability 0.9, and "0" with probability 0.1. But with Solomonoff
 Induction, as the number of "0"s it receives goes to infinity, the
 probability it predicts for "1" being the next input must converge to 0.

 In the second example, an intelligence wakes up with no previous memory and
 finds itself in an environment that apparently consists of a set of random
 integers and some of their factorizations. It finds that whenever it outputs
 a factorization for a previously unfactored number, it is rewarded. To a
 human upload, or a machine capable of anthropic reasoning, it would be
 immediately obvious that this cannot be the true environment, since such an
 environment is incapable of supporting an intelligence such as itself.
 Instead, a more likely explanation is that it is being used by another
 intelligence as a codebreaker. But Solomonoff Induction is incapable of
 reaching such a conclusion no matter how much time we give it, since it
 takes fewer bits to algorithmically describe just a set of random numbers
 and their factorizations, than such a set embedded within a universe capable
 of supporting intelligent life. (Note that I'm assuming that these numbers
 are truly random, for example generated using quantum coin flips.)

 A different way to break Solomonoff Induction takes advantage of the fact
 that it restricts Bayesian reasoning to computable models. I wrote about
 this in "is induction unformalizable?" [2] on the everything mailing list.
 Abram Demski also made similar points in recent posts on this mailing list.

 [1] http://www.mail-archive.com/agi@v2.listbox.com/msg00864.html
 [2]
 http://groups.google.com/group/everything-list/browse_frm/thread/c7442c13ff1396ec/804e134c70d4a203






[agi] Breaking Solomonoff induction (really)

2008-06-20 Thread Wei Dai

Eliezer S. Yudkowsky pointed out in a 2003 agi post titled "Breaking
Solomonoff induction... well, not really" [1] that
Solomonoff Induction is flawed because it fails to incorporate anthropic
reasoning. But apparently he thought this doesn't really matter because in
the long run Solomonoff Induction will converge with the correct reasoning.
Here I give two counterexamples to show that this convergence does not
necessarily occur.

The first example is a thought experiment where an induction/prediction
machine is first given the following background information: Before
predicting each new input symbol, it will be copied 9 times. Each copy will
then receive the input "1", while the original will receive "0". The 9
copies that received "1" will be put aside, while the original will be
copied 9 more times before predicting the next symbol, and so on. To a human
upload, or a machine capable of anthropic reasoning, this problem is
simple: no matter how many "0"s it sees, it should always predict "1" with
probability 0.9, and "0" with probability 0.1. But with Solomonoff
Induction, as the number of "0"s it receives goes to infinity, the
probability it predicts for "1" being the next input must converge to 0.
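
A small illustration of the two answers (the Laplace rule below is only a
crude computable stand-in for Solomonoff Induction, which of course cannot
be run directly; it is used here just to show the direction of convergence):

def original_stream(n_steps):
    # The input stream the original machine actually receives: all "0"s.
    return "0" * n_steps

def laplace_prediction_of_one(stream):
    # Laplace-rule estimate of P(next symbol is "1"); a crude computable
    # stand-in for a simplicity-based predictor.
    return (stream.count("1") + 1) / (len(stream) + 2)

copies_per_step = 9
# Anthropic counting: at every step, 9 copies see "1" and 1 original sees "0".
print("fraction of agents seeing '1' at each step:",
      copies_per_step / (copies_per_step + 1))                  # 0.9
for n in (10, 100, 1000):
    print("frequency-based prediction of '1' after", n, "steps:",
          laplace_prediction_of_one(original_stream(n)))        # tends to 0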

In the second example, an intelligence wakes up with no previous memory and
finds itself in an environment that apparently consists of a set of random
integers and some of their factorizations. It finds that whenever it outputs
a factorization for a previously unfactored number, it is rewarded. To a
human upload, or a machine capable of anthropic reasoning, it would be
immediately obvious that this cannot be the true environment, since such an
environment is incapable of supporting an intelligence such as itself.
Instead, a more likely explanation is that it is being used by another
intelligence as a codebreaker. But Solomonoff Induction is incapable of
reaching such a conclusion no matter how much time we give it, since it 
takes fewer bits to algorithmically describe just a set of random numbers 
and their factorizations, than such a set embedded within a universe capable 
of supporting intelligent life. (Note that I'm assuming that these numbers 
are truly random, for example generated using quantum coin flips.)


A different way to break Solomonoff Induction takes advantage of the fact
that it restricts Bayesian reasoning to computable models. I wrote about
this in "is induction unformalizable?" [2] on the everything mailing list.
Abram Demski also made similar points in recent posts on this mailing list.

[1] http://www.mail-archive.com/agi@v2.listbox.com/msg00864.html
[2] 
http://groups.google.com/group/everything-list/browse_frm/thread/c7442c13ff1396ec/804e134c70d4a203







Re: [agi] Breaking Solomonoff induction (really)

2008-06-20 Thread Matt Mahoney
--- On Fri, 6/20/08, Wei Dai [EMAIL PROTECTED] wrote:

 Eliezer S. Yudkowsky pointed out in a 2003 agi post titled "Breaking
 Solomonoff induction... well, not really" [1] that Solomonoff Induction
 is flawed because it fails to incorporate anthropic reasoning. But
 apparently he thought this doesn't really matter because in the long
 run Solomonoff Induction will converge with the correct reasoning. Here
 I give two counterexamples to show that this convergence does not
 necessarily occur.

I disagree. AIXI says that, for an agent exchanging symbols with a
Turing-computable environment, the optimal behavior for maximizing an
accumulated reward is to guess at each step that the environment is simulated
by the shortest program consistent with the interaction so far. AIXI assumes
the agent is immortal, because it may postpone reward arbitrarily long. The
anthropic principle says that events which would have led to the agent's
non-existence could not have occurred, and therefore had zero probability.
This is inconsistent with Solomonoff induction except in the limit where the
agent lives forever.
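
To make that distinction concrete, here is a small sketch (my own framing):
plain conditioning treats survival as an ordinary observation, while
observer-weighted (anthropic) conditioning also weights each outcome by the
number of observers it leaves behind.

def posterior_given_existence(prior, p_exist):
    # Ordinary Bayesian conditioning on the agent still existing: outcomes
    # that would have destroyed it lose probability mass.
    joint = {o: prior[o] * p_exist[o] for o in prior}
    z = sum(joint.values())
    return {o: p / z for o in joint}

def observer_weighted_posterior(prior, n_observers):
    # Anthropic-style conditioning: weight each outcome by how many
    # observers (copies) it produces, then renormalize.
    joint = {o: prior[o] * n_observers[o] for o in prior}
    z = sum(joint.values())
    return {o: p / z for o in joint}

prior = {"heads": 0.5, "tails": 0.5}

# Conditional copying: the agent survives either way, but heads yields 10 copies.
print(posterior_given_existence(prior, {"heads": 1.0, "tails": 1.0}))   # 0.5 / 0.5
print(observer_weighted_posterior(prior, {"heads": 10, "tails": 1}))    # ~0.91 / ~0.09

# Conditional killing: tails destroys the agent with probability 0.99.
print(posterior_given_existence(prior, {"heads": 1.0, "tails": 0.01}))  # ~0.99 / ~0.01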

 The first example is a thought experiment where an induction/prediction
 machine is first given the following background information: Before
 predicting each new input symbol, it will be copied 9 times. Each copy
 will then receive the input "1", while the original will receive "0".
 The 9 copies that received "1" will be put aside, while the original
 will be copied 9 more times before predicting the next symbol, and so
 on. To a human upload, or a machine capable of anthropic reasoning,
 this problem is simple: no matter how many "0"s it sees, it should
 always predict "1" with probability 0.9, and "0" with probability 0.1.
 But with Solomonoff Induction, as the number of "0"s it receives goes
 to infinity, the probability it predicts for "1" being the next input
 must converge to 0.

Eliezer asked a similar question on SL4. If an agent flips a fair quantum coin
and is copied 10 times if it comes up heads, what should be the agent's
subjective probability that the coin will come up heads? By the anthropic
principle, it should be 0.9. That is because if you repeat the experiment many
times and you randomly sample one of the resulting agents, it is highly likely
that it will have seen heads about 90% of the time.

AIXI is not computable, so humans use the following heuristic approximation: if 
an experiment is performed N times and a certain outcome occurs R times, and N 
is large, then the probability of this outcome is estimated to be R/N on the 
next trial. This is not the right answer in this case. Rather, it is the way 
we are programmed to think. Remember that probability is just a mathematical 
approximation of uncertainty. In reality, we cannot assign numerical values to 
uncertainty. A Solomonoff universal prior is just another model, which depends 
on a choice of universal Turing machine (and happens to be uncomputable as 
well).

In your example, putting aside an agent is the same as killing it. So the
probability of observing "1" correctly converges to 0 for an agent applying
the R/N heuristic. AIXI/Solomonoff induction does not apply because this is
not a limit case (life expectancy approaching infinity).

 In the second example, an intelligence wakes up with no previous memory
 and finds itself in an environment that apparently consists of a set of
 random integers and some of their factorizations. It finds that
 whenever it outputs a factorization for a previously unfactored number,
 it is rewarded. To a human upload, or a machine capable of anthropic
 reasoning, it would be immediately obvious that this cannot be the true
 environment, since such an environment is incapable of supporting an
 intelligence such as itself. Instead, a more likely explanation is that
 it is being used by another intelligence as a codebreaker. But
 Solomonoff Induction is incapable of reaching such a conclusion no
 matter how much time we give it, since it takes fewer bits to
 algorithmically describe just a set of random numbers and their
 factorizations than such a set embedded within a universe capable of
 supporting intelligent life. (Note that I'm assuming that these numbers
 are truly random, for example generated using quantum coin flips.)

A human upload has more information than the other intelligence because its 
memories are preserved. Under AIXI it can never guess the simpler model because 
it would be inconsistent with its past observations. There is no contradiction.

 A different way to break Solomonoff Induction takes advantage of the
 fact that it restricts Bayesian reasoning to computable models. I wrote
 about this in "is induction unformalizable?" [2] on the everything
 mailing list. Abram Demski also made similar points in recent posts on
 this mailing list.
 
 [1]
 http://www.mail-archive.com/agi@v2.listbox.com/msg00864.html
 [2]