On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Eliezer asked a similar question on SL4. If an agent flips a fair quantum
coin and is copied 10 times if it comes up heads, what should be the agent's
subjective probability that the coin will come up heads? By the anthropic
principle, it [...]
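
One way to make the two candidate answers concrete (a minimal sketch,
assuming only what the quoted question states and that copies are
weighted equally): the physical chance of heads is 1/2, but if the agent
expects to find itself as a uniformly sampled copy after the experiment,
heads accounts for 10 of the 11 possible successors, for an anthropic
weight of 10/(10+1) = 10/11, about 0.91. The gap between 1/2 and 10/11
is what makes the question nontrivial.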
2008/6/21 Wei Dai [EMAIL PROTECTED]:
A different way to break Solomonoff Induction takes advantage of the fact
that it restricts Bayesian reasoning to computable models. I wrote about
this in "is induction unformalizable?" [2] on the everything mailing list.
Abram Demski also made similar points [...]
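
To illustrate the restriction being described (a toy sketch only, with a
made-up, finite model class and made-up names; Solomonoff induction proper
mixes over all enumerable semimeasures and is uncomputable):

from fractions import Fraction

# Each model maps a history (tuple of bits) to P(next bit = 1).
MODELS = {
    "always_zero": lambda h: Fraction(0),
    "always_one":  lambda h: Fraction(1),
    "alternating": lambda h: Fraction(1 if len(h) % 2 == 0 else 0),
    "fair_coin":   lambda h: Fraction(1, 2),
}

# Prior weight 2^-len(name) stands in for 2^-(program length).
weights = {name: Fraction(1, 2 ** len(name)) for name in MODELS}

def update(weights, history, bit):
    # Bayes step: scale each model's weight by its probability of the bit.
    new = {}
    for name, w in weights.items():
        p_one = MODELS[name](history)
        new[name] = w * (p_one if bit == 1 else 1 - p_one)
    z = sum(new.values())
    return {name: w / z for name, w in new.items()}

history = ()
for bit in (1, 0, 1, 0, 1, 0):       # data from the "alternating" source
    weights = update(weights, history, bit)
    history += (bit,)

print(weights)  # posterior mass concentrates on "alternating"

# A data source with no representative in MODELS -- for the real mixture,
# an uncomputable sequence -- can never receive positive posterior
# probability, no matter how much data arrives.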
Quick argument for the same point: AIXI is uncomputable, but only
considers computable models. The anthropic principle requires a
rational entity to include itself in all models that are given nonzero
probability. AIXI obviously cannot do so.
Such an argument fails for computable approximations [...]
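
Spelling out the step that argument leans on, in the standard form of the
universal prior: the mixture is xi(x) = sum over enumerable semimeasures
nu of 2^(-K(nu)) * nu(x). A hypothesis that includes the reasoner must
reproduce the reasoner's own input-output behavior; for AIXI that behavior
is uncomputable, so no such hypothesis appears among the nu, and it gets
weight 0. A computable approximation of AIXI is itself a computable
object, so a hypothesis containing it can sit inside the mixture, which is
why the argument does not carry over.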
Eliezer S. Yudkowsky pointed out in a 2003 agi post titled "Breaking
Solomonoff induction... well, not really" [1] that
Solomonoff Induction is flawed because it fails to incorporate anthropic
reasoning. But apparently he thought this doesn't really matter because in
the long run Solomonoff [...]