On 29/02/2008, Abram Demski <[EMAIL PROTECTED]> wrote:
> I'm an undergrad who's been lurking here for about a year. It seems to me
> that many people on this list take Solomonoff Induction to be the ideal
> learning technique (for unrestricted computational resources). I'm wondering
> what justification there is for the restriction to Turing-machine models of
> the universe that Solomonoff Induction uses. Restricting an AI to computable
> models will obviously make it more realistically manageable. However,
> Solomonoff induction needs infinite computational resources, so this clearly
> isn't a justification.

There is a gotcha here, at least when you are trying to get to a
computable solution (one that doesn't require infinite memory).

When you go to an FSM (which all our physical computers are), a whole
range of things opens up that are uncomputable for the FSM in question,
including the behaviour of a whole raft of FSMs more complex than it.
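
As a toy illustration of that limit, you can brute-force-check that no 2-state
DFA can even match a small finite sample of the non-regular language a^n b^n
(this is my own minimal sketch; the machine encoding and the choice of test
strings are mine, not anything from Solomonoff's framework):

```python
from itertools import product

SYMS = 'ab'
N = 2  # number of DFA states; start state fixed at 0 (relabelings are enumerated anyway)

def accepts(delta, accept, s):
    """Run the DFA given by transition table delta and accepting set accept."""
    q = 0
    for c in s:
        q = delta[(q, c)]
    return q in accept

# A tiny sample of a^n b^n: balanced strings should be accepted, unbalanced rejected.
tests = {'a' * i + 'b' * i: True for i in range(1, 4)}
tests.update({'a' * i + 'b' * j: False
              for i in range(4) for j in range(4) if i != j})

found = False
for flat in product(range(N), repeat=N * len(SYMS)):
    delta = {(q, c): flat[q * len(SYMS) + k]
             for q in range(N) for k, c in enumerate(SYMS)}
    for bits in product((False, True), repeat=N):
        accept = {q for q in range(N) if bits[q]}
        if all(accepts(delta, accept, s) == want for s, want in tests.items()):
            found = True

print(found)  # no 2-state machine matches even this tiny sample
```

By the pigeonhole argument, some two of the prefixes '', 'a', 'aa' must land
on the same state, so the machine cannot separate all the balanced strings
from the unbalanced ones; a bigger machine pushes the failure out but never
removes it.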

Keeping the same general shape of the system (trying to account for all
the detail) means we are likely to overfit: we end up trying to model
systems that are too complex for us to model, and so we fit the noise
in our data instead.

This would make the most probable TM more complex than it needs to be,
without actually improving its predictive power.
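
A rough sketch of that effect (my own toy construction, not anything from
Solomonoff's framework): fit MLE Markov models of increasing order k to pure
coin flips. The training fit keeps improving with k even though the true
source has exactly 1 bit/symbol of entropy, so all the extra model complexity
is spent fitting noise:

```python
import math
import random
from collections import defaultdict

def train_nll(seq, k, start):
    """Average bits/symbol an MLE order-k Markov model assigns to its own
    training data; positions before `start` are skipped so that every k is
    scored on the same symbols."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        counts[seq[i - k:i]][seq[i]] += 1
    total, n = 0.0, 0
    for i in range(start, len(seq)):
        ctx = counts[seq[i - k:i]]
        total -= math.log2(ctx[seq[i]] / sum(ctx.values()))
        n += 1
    return total / n

random.seed(0)
seq = ''.join(random.choice('01') for _ in range(2000))  # fair coin: true entropy 1 bit/symbol
KMAX = 6
for k in range(KMAX + 1):
    # Training fit improves with k even though nothing real is being learned.
    print(k, round(train_nll(seq, k, KMAX), 4))
```

Selecting the model by fit alone would pick the most complex one here; its
apparent advantage is pure noise, which is exactly the "more complex than it
needs to be, without improving predictive power" failure.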

Not quite what you were worried about, but it might add weight to your
call to have uncomputability included in general models of intelligence.

  Will

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
