> What is "meaning" to a computer? Some people would say that no machine
> can know the meaning of text because only humans can understand language.

Nope. I am *NOT* willing to do the Searle thing. Machines will know the
meaning of text (i.e. understand it) when they have a coherent world model
in which they ground their usage of text.
> The terms "meaning" and "understanding" are not well defined for machines.

Then rigorously define them for your purposes and stop complaining. If you
have an effective, coherent world model and you can ground an input in
that model, then you "understand" that input (i.e. that input has "meaning"
relative to your world model).
> Humans are good at predicting text. For example, I think you could guess
> the next word in this _________. If I wanted to test whether you know
> Chinese, I could ask you to predict successive ideographs in a Chinese
> newspaper. Obviously you would have to understand Chinese to pass the
> test. If a machine could predict text with the same accuracy as a
> 7-year-old child, would you say the machine understood the text as well
> as the child?

Absolutely not, UNLESS the machine was doing so based upon having a world
model and "understanding" the text. Prediction does not equal intelligence.
It is, at most, a required subset.
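To make the point concrete: next-word prediction of the sort described above can be done with nothing more than conditional frequency counts over word pairs, with no model of the world at all. A minimal sketch (the corpus, function names, and test word are illustrative assumptions, not anyone's actual system):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Guess the most frequent successor of `word`; None if unseen."""
    successors = model.get(word.lower())
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# Illustrative toy corpus (any text works; more text, better guesses).
corpus = ("i think you could guess the next word in this sentence . "
          "the next word is often just the most frequent word seen "
          "after the previous word .")
model = train_bigrams(corpus)
print(predict_next(model, "next"))  # "word" follows "next" every time here
```

The predictor never represents numbers, sky, or anything else the words refer to; it only tabulates which tokens neighbor which, which is exactly why prediction accuracy alone underdetermines understanding.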
> Most people would say "no, it's not human" and then think up other tests
> until it flunked. But let's be fair. This is not a Turing test. If I
> want to test if you understand numbers, I would give you arithmetic
> problems. Most people would agree this is a reasonable test. But by this
> test, a calculator also understands numbers.

Nope. The calculator "understands" numbers only as far as your text
predictor "understands" text. It can predict the answer, but it doesn't
understand the answer because it doesn't have a coherent model of the real
world of numbers, counting, infinity, etc. Understanding also means that
you can build upon your model. A calculator can't grow.
> Humans understand words by associating them both with other words and
> with nonverbal sensorimotor experience. Some words like "blue" are
> grounded, but other abstract words like "abstract" are only associated
> with other words. But this is still understanding. If you ask a blind
> man if he understands "blue" he might say, "yes, in the same way you
> understand 'ultraviolet'".

Yes, but it's also important to note that *all* words are built into a
coherent model whose foundations are grounded.
> So I would say that an information retrieval system understands words in
> the abstract sense.

And I would say, absolutely *NOT*, because the information retrieval system
lacks the coherent model (like the text predictor and the calculator). This
is why you and I have a fundamental disagreement about how to get to AGI. I
insist that you need this coherent world model for intelligence and
understanding, and I expect that this model is going to need some serious
high-level structuring. I perceive you as thinking that a random uncollated
mass of statistics with no high-level structure or ordering will get you
there.
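For concreteness, the "mass of statistics" at issue here, word association by co-occurrence alone, as in the information retrieval and text classification systems mentioned above, can be sketched in a few lines. Everything below (the sentences, function names, and window size) is an illustrative assumption:

```python
import math
from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a Counter of the words seen within `window` of it."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Toy corpus: "blue" and "grey" occur in near-identical contexts.
sentences = ["the sky is blue today",
             "the sea is blue and deep",
             "the sky is grey today",
             "the sea is grey and cold"]
vecs = cooccurrence_vectors(sentences)
print(cosine(vecs["blue"], vecs["grey"]))  # high similarity, purely from context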
----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, May 01, 2007 7:00 PM
Subject: Re: [agi] rule-based NL system
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> Hmmm. I think there's a problem with your use of the word semantics . . .
> There is a huge difference between labelling an object, which young
> children do quite early, and dealing with concepts (even fairly concrete
> ones). There is an even larger difference between correlating
> co-occurrences, which is all that information retrieval and text
> classification systems do, and actually dealing with meaning.
-- Matt Mahoney, [EMAIL PROTECTED]
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&