I'm an undergrad who's been lurking here for about a year. It seems to me
that many people on this list take Solomonoff Induction to be the ideal
learning technique (given unrestricted computational resources). I'm
wondering what justification there is for the restriction to
Turing-machine models of the universe that Solomonoff Induction uses.
Restricting an AI to computable models obviously makes it more tractable
in practice. However, Solomonoff Induction itself requires infinite
computational resources, so tractability clearly isn't the justification.
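To make the restriction concrete, here's a toy sketch of the Solomonoff prior. The real thing sums 2^-|p| over all programs p for a universal Turing machine whose output extends the observed data, which is incomputable; the "machine" below (a program is a bit string that outputs itself repeated forever) and the length cutoff are my own stand-ins, just to show the shape of the length-weighted sum and the resulting predictions:

```python
from itertools import product

def run(program, n):
    # Toy "machine": program p outputs p repeated forever, truncated to n
    # bits. This stands in for a universal machine, which we can't enumerate.
    return (program * (n // len(program) + 1))[:n]

def M(x, max_len=12):
    # Solomonoff-style prior mass on x: sum 2^-|p| over all programs
    # (up to max_len bits) whose output begins with x.
    return sum(2 ** -len(p)
               for L in range(1, max_len + 1)
               for bits in product('01', repeat=L)
               if run(p := ''.join(bits), len(x)) == x)

x = '0101'
# Predict the next bit by comparing prior mass on the two continuations.
p1 = M(x + '1') / (M(x + '0') + M(x + '1'))
print(round(p1, 3))  # well below 0.5: short programs favor continuing 0101 with 0
```

The prediction is dominated by the shortest programs consistent with the data ('01' here), which is the Occam-style behavior people on this list point to; the open question I'm raising is why the hypothesis space should contain only computable generators in the first place.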

My concern is that humans make models of the world that are not computable;
in particular, I'm thinking of the way physicists use differential
equations. Even if physics itself is computable, the fact that humans use
incomputable models of it remains. Solomonoff Induction itself is an
incomputable model of intelligence, so an AI that used Solomonoff Induction
(even if we could get the infinite computational resources needed) could
never understand its own learning algorithm. This is an odd position for a
supposedly universal model of intelligence IMHO.

My thinking is that a more universal theoretical prior would be a prior
over logically definable models, some of which will be incomputable.

Any thoughts?

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/