I agree with your qualitative point that a computationally efficient
intelligence has got to consist of a combination of specialized systems
(operating tightly coupled together in a common framework, and with many
commonalities and overlaps).

However, I don't agree with your quantitative estimate that an AGI has to be
orders of magnitude bigger than any software project ever attempted.

I agree that many people underestimate the problem, but I think you
overestimate it, and also mis-estimate it: you overestimate the bulk of the
problem and underestimate the subtlety of finding the right framework and
the right algorithms.

The brain is a hugely complex tangled mess of structures and processes, but
that doesn't mean that an AGI has to be.  AGI does not mean brain emulation.
Legs are vastly more complex than wheels, yet wheels are good at moving
around too.  (And wheels can't help you invent artificial legs, whereas a
nonhuman AGI can potentially help you figure out how to make a more human
AGI if you want to).

You mention the vast amount of work that's gone into computer vision and
audition.  That is true, but I think that those disciplines would be a lot
more tractable if they were carried out together with AGI cognition, rather
than separately.  Pursuing them "standalone" may make them harder in many
ways, rather than easier.

My guess, not surprisingly, is that the Novamente design is close to the
minimal level of complexity needed ;)  Dozens of node and link types, a few
dozen mental processes, and a couple dozen functionally-specialized units
combining node and link types and processes in appropriate ways.  This is a
lot more complexity than the typical AI program, but a lot less than you
seem to be alluding to.
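
For concreteness, here's a toy sketch (Python, purely illustrative) of the
general shape I mean: typed nodes and links living in one shared store, with
a small set of processes acting on all of them.  Every type name, number,
and process below is an invented stand-in, not actual Novamente code:

from dataclasses import dataclass

@dataclass
class Atom:
    atom_type: str           # e.g. "ConceptNode" or "InheritanceLink" (invented names)
    targets: tuple = ()      # outgoing set; empty for nodes
    strength: float = 0.5    # toy stand-in for a truth value

class AtomSpace:
    """Shared store that every mental process reads and writes."""
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

def revise_strengths(space):
    """One toy 'mental process': pull each link's strength toward the
    mean strength of its targets."""
    for a in space.atoms:
        if a.targets:
            mean = sum(t.strength for t in a.targets) / len(a.targets)
            a.strength = 0.9 * a.strength + 0.1 * mean

space = AtomSpace()
cat = space.add(Atom("ConceptNode", strength=0.8))
animal = space.add(Atom("ConceptNode", strength=0.6))
space.add(Atom("InheritanceLink", targets=(cat, animal)))
revise_strengths(space)

The point being that the complexity lives in how a couple dozen such atom
types and mental processes interact within one framework, not in millions
of hand-coded special cases.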

But of course, none of us *really know*.  Eliezer Yudkowsky has in the past
partially agreed with you, in that he's proposed that the Novamente design
is significantly too simple.

-- Ben



> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of Billy Brown
> Sent: Tuesday, February 18, 2003 2:54 PM
> To: [EMAIL PROTECTED]
> Subject: AGI Complexity (WAS: RE: [agi] "doubling time" watcher.)
>
>
> From recent comments here I can see there are still a lot of people
> out there who think that building an AGI is a relatively modest-size
> project, and the key to success is simply uncovering some new insight
> or technique that has been overlooked thus far. IMHO this is partly a
> matter of necessary optimism (i.e. "we can only afford a 4-man-year
> project, so let's hope that will be enough"), and partly a sort of
> bleedover from the view of human minds that dominated the social
> sciences for most of the 20th century (i.e. "infants are a blank
> slate, and blank slates sound pretty simple, so a newly-written AGI
> must be a relatively simple program"). Unfortunately for AI optimists,
> all the evidence points in the opposite direction.
>
> If we have learned nothing else about the nature of Mind in the last 50
> years, we should at least have learned this: complex adaptive behavior
> requires a complex, specialized implementation. Always. No exceptions, no
> free lunches, no magic connectoplasmic shortcuts.
>
> We know from the biology folks that the human mind contains at least
> dozens, and probably hundreds of specialized subsystems. The ones that
> computer scientists have tried to replicate, like vision and hearing,
> have turned out to contain massive amounts of complexity - computer
> vision alone is apparently the kind of problem that takes a good,
> well-funded team several decades to solve.
>
> Now, it may be that some particular subsystems can be omitted from an
> AGI that isn't intended to be very humanlike. An AGI with no body may
> not need a kinesthetic sense or motor skills, an AGI without cameras
> may not need vision, and so on. But anyone who thinks there is some
> tiny kernel of "pure thought" in there waiting to be duplicated, and
> all the rest can be safely ignored, is just kidding themselves. Every
> part of the mind that we have any understanding of at all has turned
> out to be a tangle of complex algorithms interacting in very complex
> ways. There is no reason to believe the parts we don't understand are
> any different.
>
> What this means for AI research is that any serious attempt to create
> an AGI by duplicating the way human minds work would be a massive
> effort, at least one and probably two orders of magnitude larger than
> any software development effort ever attempted. That makes it much too
> big for current software engineering methods, so the effort would
> almost certainly fail.
>
> For projects that intend to implement a completely novel design, the
> implication is that you can't realistically expect anything like
> human-equivalent performance on unrestricted tasks. Evolution wouldn't
> have given us the equivalent of hundreds of millions of lines of
> specialized software if there were some easy shortcut waiting to be
> found. So, if you're just trying to build a specialized AI, or to
> solve a few of the problems between here and AGI, that's great. But if
> you think your 50 KLOC system is going to somehow bootstrap itself
> into human-equivalence, you need to take a break and go catch up on
> what's been happening in cognitive science in the last 20 years.
>
> In other words, building a human-equivalent AGI is like sending a manned
> mission to Alpha Centauri, and current AI technology is on about the level
> of a V2 rocket. It's a long road from here to there, and we're never going
> to get anywhere until we admit that fact. The next step is the nasty,
> challenging problem of getting into space at all, not the nigh-impossible
> feat of reaching another solar system.
>
> Billy Brown
