I claim to be an intelligent entity (if not a real AI programmer), and one
of my more valuable tools is a common dictionary, whether paper or
electronic.  There are many words I know, learned from the dictionary and/or
from context, that I cannot pronounce since I did not learn them verbally.
This occasionally comes to light when one of my English students here in
Tianjin asks me for an English word and I have to reply, "I know the word,
but I don't know how to pronounce it."

This intelligent entity (me) HAS an essential natural language DB in its
system.  I also WANT to have a better, more complete natural language DB.
It would be great to have the contents of a good dictionary implanted in my
brain if it were possible.

It would seem to me that words are a good starting place for symbols that
represent things.  Why reinvent the wheel?  Especially since we would like
to communicate with the AI entities we create, if only to hear or see
"king's pawn to king four."  Certainly natural language is rather messy with
its synonyms, homonyms, and multiple meanings per word.  The WordNet
mappings to synonym sets seem to be a good way to start approaching the
problem, especially with its grouping of words into categories such as part
of speech.
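
To make that concrete, here is a minimal sketch of a WordNet lookup using
Python and NLTK's WordNet interface (just one convenient way to get at the
data, and the word "king" is an arbitrary example; this is not how EllaZ
does it):

    # Minimal sketch: look up the WordNet synonym sets ("synsets") for a word.
    # Requires NLTK with the WordNet corpus downloaded:
    #   pip install nltk   then, in Python:  nltk.download('wordnet')
    from nltk.corpus import wordnet as wn

    for synset in wn.synsets("king"):
        print(synset.name(),          # e.g. "king.n.01"
              synset.pos(),           # part of speech tag: n, v, a, r, s
              synset.lemma_names(),   # the words grouped under this meaning
              "-",
              synset.definition())    # the gloss, i.e. the dictionary-style definition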

I realize that there is a big conceptual difference between data and the
ability to process it.  But one cannot be useful without the other.
Sometimes data and processing ability can be closely combined.  Perhaps word
definitions can be considered data.  Then grammar and parts of speech that
construct sentences may be considered a type of functionality.  Several
words in a definition map to a single word (or synonym set).  Words are
linked with grammar rules to form a sentence.  One sentence, or perhaps a
few in a paragraph, map to a "thought."  This is a system that works well
for US, so it has potential application to an AGI.
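
And here is a toy version of that last mapping (the words of a definition
leading back to synonym sets), again in Python with NLTK; the crude
whitespace tokenizing and the choice of "pawn" are just for illustration:

    # Minimal sketch: take the gloss (definition) of one sense of "pawn" and
    # map each of its words back to WordNet synonym sets -- a toy version of
    # the "definition words map to synsets" idea above.  The lowercase/split
    # tokenizing is deliberately crude.
    from nltk.corpus import wordnet as wn

    gloss = wn.synsets("pawn")[0].definition()    # first listed sense, whichever that is
    print(gloss)
    for word in gloss.lower().split():
        senses = wn.synsets(word)
        if senses:                                # skip words WordNet does not cover
            print(word, "->", [s.name() for s in senses[:3]])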

Of course, "grounding" the words used to define other words is a big
challenge.  But this difficulty does not mean that natural language DBs
cannot be a very important part of an AGI.  Perhaps the
"grounding" and "real understanding" we instinctively strive for is a bit of
an illusion.  Maybe intertwined but ultimately circular and imperfect
definitions are all we need.  Sensory grounding would be nice, but perhaps
not necessary.  I can certainly understand things with very little sensory
grounding, such as the general theory of relativity (well, a little
understanding anyway).  And does it really matter whether I perceive red
the same way as everyone else, so long as my perception is consistent?

Things like mathematics and chemistry have their own "languages" that at
least to some extent can be approached and explained using the "basic"
natural languages.  While this type of career plan is not to be recommended,
I hope to draw on my varied background as a practical engineer, research
engineer, lawyer, and patent lawyer, and in the import/export business (not
to mention
a variety of hobbies) to guide my thinking about thinking.

We often intelligently use things we do not understand: computers,
automobiles, our brains, quarks, and so on.  Why can't an AGI use words it
does not actually understand, so long as it uses them properly and
accomplishes the desired result?  I have seen expert systems and databases
do truly amazing things in my various experiences.  But nothing so amazing
as seeing EllaZ deal blackjack, or place the entire contents of Kant's
Critique of Pure Reason into a single scrolling browser textbox :-)

Catch you all later . . . Kevin Copple

P.S.  I have one of the reduced Loebner contest versions of the EllaZ system
(a/k/a Kip) running the Oddcast animated head now at www.EllaZ.com.  The
implementation is still a bit rough, but the AT&T Natural Voices TTS is
quite good.
