The issue of general versus specialized intelligence has been visited on
this list before!

My thoughts on this, as I said when the topic last came up, are roughly
as follows:

1)
No finite system can have truly general intelligence.  There will always be
possible environments too complex for it to adapt to, and possible problems
too hard for it to solve.  (Under reasonable assumptions, one can show this
using algorithmic information theory.)
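
To sketch that argument (a rough version under simplifying assumptions,
not a formal proof): suppose the system S is encodable as a program of N
bits on some fixed universal machine, and let K(x) denote the Kolmogorov
complexity of a string x.  If S correctly predicts every bit of an
environment-history x, then x is reconstructible from S plus O(\log |x|)
bits of bookkeeping, so

    K(x) \le N + O(\log |x|)

But a simple counting argument (there are fewer than 2^{N+1} programs of
length at most N) shows that for every N, almost all strings x violate
this bound; any such x is an environment too complex for S to master.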

2)
Any successful AGI system is going to have some subcomponent C that has
"truly general intelligence capability", in the sense that: for any problem
P, there is some level of resources R such that, given R resources, C could
solve P.  This could be referred to as the system having "the potential for
truly general intelligence, if its hardware were beefed up enough."

3)
Any successful AGI system is also going to have components in two other
categories:

a) specialized-intelligence components that solve particular problems in
ways having little or nothing to do with truly general intelligence
capability

b) specialized-intelligence components that are explicitly built on top of
components having truly general intelligence capability


To me, the weaving-together of components with truly general intelligence
capability, and specialized-intelligence components built on top of these,
is the essence of AGI design.
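
To make that weaving concrete, here is an equally toy sketch (again my own
illustration, reusing run() and solve() from the sketch above) of the three
kinds of components: a hand-built specialized solver (category a), the
general component C, and specialized fast paths synthesized by C and cached
for reuse (category b):

class ToyAGI:
    """Sketch of the component categories above (my own illustration).
    Reuses run() and solve() from the previous sketch."""

    def __init__(self):
        # (a) hand-built specialized solvers; these have nothing to do
        # with the general component.
        self.specialized = {"double": lambda x: x * 2}
        # (b) specialized fast paths synthesized by the general
        # component C and cached for reuse.
        self.synthesized = {}

    def handle(self, name, x, examples=None):
        if name in self.specialized:               # category (a)
            return self.specialized[name](x)
        if name in self.synthesized:               # category (b), built earlier
            return run(self.synthesized[name], x)
        # Otherwise fall back on C, then cache the synthesized program
        # as a new category-(b) component.
        program = solve(examples, max_length=3)
        if program is None:
            raise ValueError("resource level R too small for this problem")
        self.synthesized[name] = program
        return run(program, x)

agi = ToyAGI()
print(agi.handle("double", 21))                           # 42, via (a)
print(agi.handle("2x+1", 10, examples=[(1, 3), (3, 7)]))  # 21, synthesized by C
print(agi.handle("2x+1", 20))                             # 41, via cached (b)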

-- Ben Goertzel



> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of Eliezer S. Yudkowsky
> Sent: Wednesday, February 05, 2003 11:42 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] A thought.
>
>
> James Rogers wrote:
> >>Just as there is
> >>no "general" environment, there is no "general" intelligence.  A mind
> >>must be matched to its environment.
> >
> > Huh?  The point of a generally intelligent mind is that it CAN match
> > itself to its environment.  You don't want to design an intelligence
> > that is matched to a particular environment, you want a general
> > intelligence that will match itself to ANY environment.
>
> You do have to specify that the environment is a low-entropy one.
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
