2008/6/21 Wei Dai <[EMAIL PROTECTED]>:
> A different way to "break" Solomonoff Induction takes advantage of the fact
> that it restricts Bayesian reasoning to computable models. I wrote about
> this in "is induction unformalizable?" [2] on the "everything" mailing list.
> Abram Demski also made similar points in recent posts on this mailing list.
>

I think this is a much stronger objection when you actually implement
an implementable variant of Solomonoff Induction (it has started to
make me chuckle that a model of induction makes assumptions about the
universe that would have to be broken in order to implement it). When
you restrict the memory space of a system, a lot more functions
become uncomputable with respect to that system. It is not a safe
assumption that the world is computable in this restricted sense,
i.e. computable with respect to a finite system.

Also, Solomonoff induction ignores any potential physical effects of
the computation, as does all of probability theory. See section 5 of
this attempted paper of mine for a formalised example of where things
could go wrong.

http://codesoup.sourceforge.net/easa.pdf

It is not quite an anthropic problem, but it is closely related. I'll
tentatively label it the observer-world interaction problem: the
exact nature of the world you see is altered depending on the type
of system you happen to be.

All of these are problems with tacit (à la Dennett) representations
of beliefs embedded within the Solomonoff induction formalism.

  Will Pearson


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now