> Do you see another option for simplification?

I am not starting from a foundational concept of "brain emulation", so I'm
not really faced with the problem of simplifying the brain.

> Maybe. Maybe not. To be honest, I think most people in this field have a
> bad habit of using "general intelligence" as a magic wand to gloss over
> hard problems that are going to require specialized mechanisms no matter
> how smart the overall system is.

I like to distinguish two kinds of specialized mechanisms:

1) those that are autonomous

2) those that build specialized functionality on a foundation of
general-intelligence-oriented structures and dynamics

The AI field, so far, has focused mainly on Type 1, but I think Type 2 is
more important.

> For example, in the case of computer vision, just getting from a 2D array
> of pixels to a possible set of object geometries takes a heck of a lot of
> work, and it has to be done by fast, dumb code for performance reasons.
> After that you have to recognize objects (a narrow problem), build a
> useful world-model (another narrow problem), detect and fix visual
> illusions and other data corruptions (yet another narrow problem), and so
> on. Once you have all these mechanisms you might be able to improve the
> results a bit by having the AI think about the output ("Hmm, no, I'm sure
> that can't really be Santa Claus on that rooftop. It must be a Christmas
> display."). But you can't avoid building the specialized mechanisms in
> the first place.

I think the "general intelligence" mechanisms for vision occur at a much
lower level than your example suggests.

I think that object recognition and world-model-building, for example, use
Type 2 specialization, not Type 1.

I agree that edge detection, for example, is pure Type 1 specialization.
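
To make the Type 1 / Type 2 distinction concrete: a Sobel edge detector is
the canonical example of a purely autonomous mechanism. It is nothing but
fixed convolution kernels and fast, dumb arithmetic; no
general-intelligence substrate is involved at any point. Here is a toy
sketch in plain Python (my illustration, not code from any actual system):

```python
def sobel_edges(image):
    # Toy Sobel edge detector: a fast, "dumb" Type 1 mechanism.
    # Two fixed 3x3 kernels estimate the horizontal and vertical
    # intensity gradients; the edge strength is their magnitude.
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * image[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * image[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

# A vertical step edge: the detector responds only along the boundary.
img = [[0, 0, 0, 1, 1, 1] for _ in range(5)]
edges = sobel_edges(img)
```

Nothing in this loop could meaningfully be built "on a foundation of
general-intelligence-oriented structures"; whereas deciding what the
detected edges belong to (object recognition) plausibly can, which is why
I put it in Type 2.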

> But I do think there would be a lot more progress in AI if more people
> were building systems designed merely to solve the next obvious obstacle
> on the path to AGI, or to provide a platform for future work.

I think that is what the bulk of academic AI researchers are doing.  The
folks on this list who are actively working on AI tend to be exceptions,
with more ambitious goals.

> What we have now is
> like a football team where the quarterback won't throw a pass unless the
> receiver is standing next to the goal post. Lots of long shots, little
> progress.

Again, the contemporary mainstream AI field is really very conservative,
concerned entirely with taking small steps in a risk-averse way.

> OTOH, at least Novamente has enough internal complexity to reach territory
> that hasn't already been explored by classical AI research. I don't expect
> it to "wake up", but I expect it will be a lot more productive than those
> "One True Simple Formula For Intelligence"-type projects.

Well, I certainly hope Novamente will be more productive than that type of
project ;)  However, the type of project you cite is more characteristic of
AI of the 60's and 70's than of modern mainstream AI.

Nearly all contemporary AI researchers are not actively seeking AGI at all;
by and large, they think it's hundreds of years off, and are working on
highly specialized algorithms attacking subproblems of intelligence --
which seems to be exactly what you think they should be doing!

-- Ben G
