On Fri, Oct 31, 2008 at 10:26 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 10/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > The question that worries me is: what does it matter if AIXI *is*
> > optimal, given that it uses infinitely many resources?
>
> Because it puts machine learning research on a firmer theoretical
> foundation. For example, we know from experimental results that the longer
> you train a neural network on a data set, the lower the training error will
> get. But when you test it on a held-out set, there is an optimal amount of
> training, after which results get worse. AIXI explains this
> observation. As the network is trained, it grows in algorithmic complexity.
> The proper stopping point is when it is just complex enough to be consistent
> with the training data, and no more.
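
A minimal sketch of that early-stopping observation, in plain numpy
(everything here, the data, architecture, and hyperparameters, is purely
illustrative, not from any actual OpenCog or Novamente code): training
error keeps falling, validation error bottoms out and then climbs as the
network starts fitting noise, and we stop at the minimum.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: y = sin(x) + noise, split into train/validation.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.3 * rng.normal(size=x.shape)
x_tr, y_tr = x[:100], y[:100]
x_va, y_va = x[100:], y[100:]

# One-hidden-layer tanh network, trained by full-batch gradient descent.
H = 50
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(p, t):
    return float(np.mean((p - t) ** 2))

lr, patience = 0.05, 200
best_val, best_step = float("inf"), 0
for step in range(20000):
    # Plain backprop for squared error.
    h, pred = forward(x_tr)
    err = pred - y_tr
    dW2 = h.T @ err / len(x_tr)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x_tr.T @ dh / len(x_tr)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    val = mse(forward(x_va)[1], y_va)
    if val < best_val:
        best_val, best_step = val, step
    elif step - best_step > patience:
        # Validation error has not improved for `patience` steps: the
        # network is now fitting noise in the training data, so stop.
        print("stop at step %d; best val MSE %.4f at step %d"
              % (step, best_val, best_step))
        break

Stopping at the validation minimum is the "just complex enough" point
described above; whether that behavior is actually a mathematical
consequence of Hutter's AIXI theorems is the point at issue here.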


I agree that Occam heuristics are valuable in machine learning ... and we
use them in OpenCog and Novamente as well ... but I don't agree that this
pragmatic value is mathematically implied by the AIXI theorems Hutter proved
...


ben g


