2008/12/26 Ben Goertzel <[email protected]>:
>
> 3)
> There are theorems stating that if you have a great compressor, then by
> wrapping a little code around it, you can get a system that will be highly
> intelligent according to the algorithmic info. definition.  The catch is
> that this system (as constructed in the theorems) will use insanely,
> infeasibly much computational resource.

Which is rather pointless.

> However, Marcus Hutter, Juergen Schmidhuber and others are working on
> methods of "scaling down" the approaches mentioned in 3 above (AIXItl, the
> Godel Machine, etc.) so as to yield feasible techniques.  So far this has
> led to some nice machine learning algorithms (e.g. the parameter-free
> temporal difference reinforcement learning scheme in part of Legg's thesis,
> and Hutter's new work on Feature Bayesian Networks and so forth), but
> nothing particularly AGI-ish.  But personally I wouldn't be harshly
> dismissive of this research direction, even though it's not the one I've
> chosen.

I'm not dismissive of it either -- once you have algorithms that can
be practically realised, then it's possible for progress to be made.

But I don't think that a small number of clever algorithms will in
itself create intelligence -- if that were possible then the secret to
AI would have been discovered by now. I think some people get seduced
by the beauty and clarity of maths and want to make their programs
like that, but I don't think human intelligence is like that.

Instead it's a series of hacks and kludges that together barely manage
to do the job -- imagine a legacy computer program written in a large
corporation that's been worked on by many people over decades, and has
bits of it written in COBOL, SQL, VB, C#, Java, etc., is mostly
undocumented, many of the original authors have left the company, but
basically works and is too big to replace -- that's what the code
running the human brain looks like. (Actually it's worse than that,
since programs written by humans have to make sense at least to the
person writing them at the time, but evolution doesn't have to
"understand" the programs it writes).

An AGI written by humans would hopefully be a lot more nicely
structured than this, but I think it would still consist of a large
number of modules, none of which would be intelligent in itself. How
big would it be? The human genome is about 750 MB, so intelligence
could presumably be coded in less than that. I'd guess an AGI could be
written in about a tenth of that, say 75 MB. Then it would have to
undergo lots of learning via well-chosen training sets which would
bulk up its code and data to many times that (I'm assuming a model
where an AI stores the results of learning as additions to its source
code).
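The arithmetic behind those figures can be sketched quickly. The 2-bits-per-base
encoding is the standard information-theoretic figure for DNA; the one-tenth
ratio for an AGI is just the guess above, not a measured quantity:

```python
# Back-of-envelope sizing, assuming ~3 billion base pairs in the human
# genome and 4 possible nucleotides (A, C, G, T) -> 2 bits per base.
base_pairs = 3e9
bits_per_base = 2

# bits -> bytes -> megabytes
genome_mb = base_pairs * bits_per_base / 8 / 1e6

# Purely a guess from the text: AGI source at a tenth of the genome size.
agi_estimate_mb = genome_mb / 10

print(f"genome: ~{genome_mb:.0f} MB, AGI guess: ~{agi_estimate_mb:.0f} MB")
```

Of course the genome is highly redundant and much of it doesn't code for
the brain, so 750 MB is an upper bound on the "program size" rather than
a tight estimate.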

By way of comparison, the Linux kernel is about 60 MB; the Copycat
program is about 0.4 MB (not including graphics code).

-- 
Philip Hunt, <[email protected]>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

