Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 29/02/2008, Abram Demski [EMAIL PROTECTED] wrote: I'm an undergrad who's been lurking here for about a year. It seems to me that many people on this list take Solomonoff Induction to be the ideal learning technique (for unrestricted computational resources). I'm wondering what justification [...]
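For reference, the definition under discussion: Solomonoff's prior weights every program p that makes a universal prefix machine U print a string beginning with x by two to the minus the length of p, and predicts by conditioning. In standard notation:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
    \qquad
    M(x_{t+1} = 1 \mid x_{1:t}) = \frac{M(x_{1:t}1)}{M(x_{1:t})}

Short programs dominate the sum, which is the formal sense in which the prior favors simple hypotheses.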

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread Jey Kottalam
On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote: Keeping the same general shape of the system (trying to account for all the detail) means we are likely to overfit, due to trying to model systems that are too complex for us to model, whilst trying [...]
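A toy illustration of the overfitting point quoted above (an invented example, not code from the thread): when a model class is rich enough to account for all the detail of noisy data from a system too complex to model exactly, the fit absorbs the noise and generalizes worse than a simpler model. The sine signal, noise level, and polynomial degrees below are arbitrary choices for the sketch.

    # Toy overfitting demonstration: fit polynomials of increasing degree
    # to noisy samples of a sine wave and compare train vs. test error.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 20)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)  # the true underlying signal

    for degree in (1, 3, 15):
        coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")

On a typical run the degree-15 fit pushes training error far below the degree-3 fit's while its test error is larger: the extra detail being modelled is noise.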

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 01/03/2008, Jey Kottalam [EMAIL PROTECTED] wrote: On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote: Keeping the same general shape of the system (trying to account for all the detail) means we are likely to overfit, due to trying to model systems that are [...]

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread daniel radetsky
On Fri, Feb 29, 2008 at 1:37 PM, Abram Demski [EMAIL PROTECTED] wrote: However, Solomonoff induction needs infinite computational resources, so this clearly isn't a justification. See http://www.hutter1.net/ai/paixi.htm : The major drawback of the AIXI model is that it is uncomputable. To [...]
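The drawback is sharp: the Solomonoff mixture inside AIXI is only lower-semicomputable, so any actual algorithm must bound program length and running time, and the undecidability of halting means it can never know how much probability mass the cutoff discards. A minimal toy sketch of such a resource-bounded approximation follows; the two-line interpreter is an invented stand-in for a universal machine, and every name in it is hypothetical.

    # Resource-bounded toy analogue of the Solomonoff mixture M(x).
    from itertools import product

    def run_program(program, n_out, max_steps):
        # Toy stand-in for a universal machine: the bit string 'program'
        # is read cyclically and each bit is emitted as output. Every toy
        # program halts; on a real universal machine some never would,
        # which is what forces the max_steps bound in any approximation.
        out = []
        i = 0
        while len(out) < n_out and i < max_steps:
            out.append(program[i % len(program)])
            i += 1
        return tuple(out)

    def bounded_mixture(x, max_len=12, max_steps=100):
        # Sum 2^{-|p|} over every program p (up to length max_len) whose
        # output begins with x. The real M(x) uses a prefix-free universal
        # machine and cannot be computed, only approximated from below by
        # cutoffs like these.
        total = 0.0
        for length in range(1, max_len + 1):
            for program in product((0, 1), repeat=length):
                if run_program(program, len(x), max_steps) == tuple(x):
                    total += 2.0 ** -length
        return total

    # Predict the continuation of 010101 by conditioning, Solomonoff-style.
    x = (0, 1, 0, 1, 0, 1)
    m0 = bounded_mixture(x + (0,))
    m1 = bounded_mixture(x + (1,))
    print("P(next bit = 0) ~", m0 / (m0 + m1))

With a genuinely universal machine the max_steps bound could never be dropped, and the computed value would only converge to M(x) from below; that is the uncomputability in question.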

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread Abram Demski
On Sat, Mar 1, 2008 at 5:23 PM, daniel radetsky [EMAIL PROTECTED] wrote: [...] My thinking is that a more-universal theoretical prior would be a prior over logically definable models, some of which will be incomputable. I'm not exactly sure what you're talking about, but I assume that [...]
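One way to make a "prior over logically definable models" precise, as a sketch rather than a claim about what was intended: index hypotheses by defining formulas \varphi in a fixed logic and weight each model by its shortest definition,

    P(M) \propto 2^{-\min\{\ell(\varphi) \,:\, \varphi \text{ defines } M\}}.

This genuinely extends the computable case: the halting sequence, for example, is definable by a short formula of first-order arithmetic but is computed by no program, so such a prior covers incomputable models while itself being even further from computable than Solomonoff's.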