Ben Goertzel wrote:
> I like to distinguish two kinds of specialized mechanisms:
>
> 1) those that are autonomous
>
> 2) those that build specialized functionality on a foundation of
> general-intelligence-oriented structures and dynamics
>
> The AI field, so far, has focused mainly on Type 1.  But I think Type 2 is
> more important.

Hmm. Well, using your terminology, I would say that:

1) Type 2 mechanisms are only possible once you have the proper set of type
1 mechanisms (i.e. the ones that implement thought in the first place).
2) Type 2 mechanisms that are not supported by the proper type 1 mechanisms
for a particular problem domain tend to be astronomically inefficient.
3) Achieving a human-like generality of intelligence is likely to require a
human-like assortment of Type 1 mechanisms, except in areas where you can
afford astronomical inefficiency.

An obvious example of point 2 is the world-model problem in robotics. If a
dumb AI doesn't have a specialized mechanism for dealing with physical
objects interacting in 3-D space, it simply gets stuck. A smart AGI might be
able to fake it by reasoning about the same data in a more abstract fashion,
but this is like a human trying to aim a tennis serve with a physics book
and a calculator - slow and error-prone.
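To make the contrast concrete, here's a toy sketch (my own illustration, not
anything from this thread): a dedicated ballistic predictor answers "where
does the ball land?" with one closed-form evaluation, which is the flavor of
specialized mechanism I mean - no chain of abstract inferences required. The
function name and the no-drag assumption are mine.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed_mps: float, launch_angle_deg: float) -> float:
    """Hypothetical 'specialized mechanism': closed-form range of a
    projectile launched and landing at the same height, ignoring drag.
    One trig evaluation stands in for what a general reasoner would
    reconstruct step by step from first principles."""
    theta = math.radians(launch_angle_deg)
    return speed_mps ** 2 * math.sin(2 * theta) / G

# e.g. a 30 m/s hit at 10 degrees lands roughly 31 m away:
# landing_distance(30.0, 10.0)
```

The general-purpose alternative would be to derive the equations of motion
symbolically every time - workable in principle, astronomically slower in
practice, which is the point.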

One interesting prediction of this view is that it should be very easy to
build an AI that seems promising in a domain much broader than those
addressed by expert systems (like "data analysis" or even "logical
reasoning"), and yet fails miserably when you try to introduce it to some
other challenge humans consider routine (like predicting where a tennis ball
will go after it gets hit). In other words, the brittleness problem may be
intractable.

> I think the "general intelligence" mechanisms for vision occur at a much
> lower level than your example suggests.
>
> I think that object recognition and world-model-building, for example, use
> Type 2 specialization, not Type 1

In the case of object recognition, that would be possible but amazingly
inefficient compared to a type 1 approach. For a world model I don't see how
it is possible at all, unless you artificially limit what kinds of facts
about the world you need to work with.

> I think that is what the bulk of academic AI researchers are doing.  The
> folks on this list who are actively working on AI tend to be exceptions,
> with more ambitious goals.

> Again, the contemporary mainstream AI field is really very conservative,
> concerned entirely with taking small steps in a risk-averse way.

> Nearly all contemporary AI researchers are not actively seeking AGI at
> all; by and large, they think it's hundreds of years off, and are working
> on highly specialized algorithms attacking subproblems of intelligence.
> Which seems to be exactly what you think they should be doing!

Not exactly. It isn't that I think we should give up on AGI, but rather that
we should be consciously planning for it to take several decades to get
there. We should still tackle the problems in front of us, instead of giving
up on real AI work altogether. But we need to get past the idea that every
AI project should start from scratch and end up delivering a
human-equivalent AGI, because that isn't going to happen. We just aren't
that close yet.

The way the software industry has solved big challenges in the past is to
break them up into sub-problems, figure out which sub-problems can be solved
right now, solve them as thoroughly as possible, and offer the resulting
solutions as black boxes that can then become inputs into the next round of
problem solving. That's what happened with operating systems, and
development environments, and database systems. If we want to see real
progress in AI, the same thing needs to happen to problems like NLP,
computer vision, memory, attention, etc.
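A minimal sketch of what I mean by black-box composition (the stage names
and interfaces here are hypothetical stand-ins, not real components): each
solved sub-problem hides behind a plain function interface, and the next
round of work composes them without caring about internals - the same
pattern operating systems and databases gave us.

```python
from typing import Callable, List

# Two hypothetical 'solved' sub-problems, exposed as opaque boxes.
def tokenize(text: str) -> List[str]:
    """Black box #1: split raw text into tokens."""
    return text.split()

def keep_capitalized(tokens: List[str]) -> List[str]:
    """Black box #2: crude name-candidate filter."""
    return [t for t in tokens if t[:1].isupper()]

def pipeline(*stages: Callable) -> Callable:
    """Compose black boxes into the next round of problem solving."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

find_names = pipeline(tokenize, keep_capitalized)
# find_names("Ben wrote to Billy about AGI") -> ['Ben', 'Billy', 'AGI']
```

The payoff is that whoever builds the next layer never reopens the boxes;
they just wire outputs to inputs, which is exactly how OS and database
layers accreted.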

Too bad there isn't much of a market for most of those partial solutions...

Billy Brown
