> The current (November 2002) issue of Johns Hopkins Magazine has
> an article
> about research on computerized language translation, available at
> http://www.jhu.edu/~jhumag/1102web/language.html .


The article contains the quote:

***
"David Yarowsky, associate professor of computer science, co-leads the
Natural Language Processing, or NLP, research group. "A lot of people in
computer science don't worry about whether computers think, or what
qualifies as intelligence," says Yarowsky. "That is a philosophical question
in the realm of Sartre or Kierkegaard, up there with the question of 'What
is the meaning of life?' After a while, what does it matter? If the computer
gets so good at something that it looks like it's intelligence, maybe you
can just call it that."
***

This is a very typical attitude in the academic AI community.  I feel it's
partly right.

Yes, the exact definition of "intelligence" is merely a subject for
philosophical debate.  In fact, it's a fairly pointless philosophical
debate, much more so in my view than the issues pursued by Kierkegaard or
Sartre, who were considering more essential things.

On the other hand, this doesn't mean that making ANY distinction regarding
intelligence is meaningless.

I continue to believe that "degree of generality of scope" is a meaningful
qualifier to apply to intelligent systems, so that we can speak about narrow
AI vs. general AI.

A computer translation program that can do nothing but translate from one
language to another is what I call narrow AI, for sure.

One difference between narrow AI and general AI is this: if a general AI is
good at a lot of things, including

A) building new machines and other things
B) communicating with humans and
C) improving its own intelligence,

then it's going to accelerate its own intelligence vastly and make a lot of
changes in the world.  An AI specialized for any one thing -- including any
of these three things A, B or C -- just won't make as much of an impact...

The other key quote in the article is:

***
"It sort of understands," says Yarowsky. "It partially understands some of
the ambiguities, some of the meanings when words can mean multiple things.
It can understand a lot of the structures of language, but it won't
understand deeper subtleties. Some languages, for example Chinese, don't
distinguish the male and female pronoun. He or she is the same word, so it
can be ambiguous who something refers to. And sometimes there's a subtle
metaphor."
***

So, yeah.  Since we don't have a linguistically-savvy AGI yet, this stuff is
useful.  No argument there.  It's worthwhile work, just like building
databases is, and bioinformatic data analysis programs, and all kinds of
other work.

The novelty of the approach described in this article is that they're using
statistical, machine-learning methods rather than programming in hard
linguistic rules. This is in line with the recent movement in comp. ling.
toward statistical methods.  The bible here is "Foundations of Statistical
Natural Language Processing" by Chris Manning and Hinrich Schütze.
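The flavor of these statistical methods can be shown with a toy word-sense
disambiguation model -- a naive Bayes classifier over context words, in the
general spirit of the techniques Manning and Schütze survey.  Everything here
(the training examples, the ambiguous word "bank", the sense labels) is made
up purely for illustration, not taken from any real corpus or system:

```python
from collections import Counter, defaultdict
import math

# Toy labeled examples: context words -> sense of the ambiguous word "bank".
# Invented data, for illustration only.
TRAIN = [
    (["money", "deposit", "loan"], "finance"),
    (["cash", "account", "loan"], "finance"),
    (["river", "water", "fishing"], "river"),
    (["muddy", "river", "shore"], "river"),
]

def train(examples):
    """Count sense frequencies and per-sense context-word frequencies."""
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for context, sense in examples:
        sense_counts[sense] += 1
        for w in context:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(context, sense_counts, word_counts, vocab):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense)."""
    total = sum(sense_counts.values())
    best, best_score = None, float("-inf")
    for sense, count in sense_counts.items():
        score = math.log(count / total)
        # Add-one smoothing so unseen context words don't zero out a sense.
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best, best_score = sense, score
    return best

model = train(TRAIN)
print(disambiguate(["loan", "account"], *model))  # picks the "finance" sense
print(disambiguate(["water", "shore"], *model))   # picks the "river" sense
```

The point is that nobody writes a rule saying "loan means the financial
sense"; the preference falls out of co-occurrence counts, which is exactly
what makes the approach scale across languages.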

I feel that eventually, once the comp. ling. community beats the statistics
and machine learning approach to death, they'll start to get a little
interested in experiential learning -- i.e. in having language analysis
programs learn through interaction with the world, so as to capture more of
the nuances.  And this will lead them toward interfacing statistical
experiential learning with artificial cognition -- the interaction of
cognition, perception and action.

How long?  10 years or so, I guess.

So I think the mainstream AI community will get to AGI; it will just take
them a while....  The overall body of narrow-AI work is moving in the right
direction, just sloooooowly and meanderingly....

-- Ben G
