Edward,

I think that Storrs Hall's post threw unnecessary confusion onto what was, in fact, a very clear statement that Pei Wang originally made on the matter.

Harnad's original idea had something relatively simple at its core: if you see an AGI system using concepts with names attached to them (e.g. "cat" or "2"), ask the following pair of questions:

1) Did some programmer decide to insert symbols in the system, and to label them with words like "cat" or "2"?

2) Do the results of the system's processing have to be interpreted by a human being who *relies* on the (programmer-chosen) labels when interpreting those results? (Or, to put it another way: is it the case that the system itself cannot *always* interpret the results of its own processing?)

If the answers are "yes" to both of these, the system is not grounded.
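
(If it helps to see the test stated mechanically, here is a deliberately toy Python sketch. Everything in it -- the Symbol class, the two flags, the function name -- is my own invention for the purpose of illustration; Harnad himself gives no such procedure.)

    # Toy restatement of the two questions above. All names here are
    # hypothetical, invented purely for this sketch.
    class Symbol:
        def __init__(self, label, programmer_inserted, system_interprets):
            self.label = label                 # e.g. "cat" or "2"
            self.programmer_inserted = programmer_inserted
            self.system_interprets = system_interprets

    def fails_grounding_test(sym):
        q1 = sym.programmer_inserted      # Q1: label chosen by a programmer?
        q2 = not sym.system_interprets    # Q2: humans must supply the meaning?
        return q1 and q2                  # "yes" to both => not grounded

    # A classic hand-built symbol: label picked by the programmer, meaning
    # supplied entirely by the humans who read the system's output.
    cat = Symbol("cat", programmer_inserted=True, system_interprets=False)
    print(fails_grounding_test(cat))      # True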

What Harnad was trying to point out was that many AI systems had labels attached to their symbols, but the meaningfulness of the systems' behavior was a result of programmers attaching the labels in the first place, and others imposing their interpretation on those symbols later.

If an AI (or AGI) system can build its own symbols, and if its use of those symbols is a result of the building process, NOT purely a result of the imposition of external labels, then we would say that the system is grounded.

But bear in mind, in all of this, that Harnad posed a question: he did not precisely answer it, nor did he specify exactly how it could be answered. It is easier to point to a lack of grounding than it is to say when a system really is grounded.

There are, as yet, no AGI systems that are both proven to be fully functional intelligences AND definitely grounded.

We can ground any system easily enough, you see, just by making sure that the system develops in such a way that its symbols are the result of world-interaction. The problem is not so much getting grounding to happen, it is getting it to happen in a system that is also intelligent.
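
(As a toy illustration of what "symbols as the result of world-interaction" can mean in the very simplest case, here is a sketch in plain Python. The one-dimensional "sensor readings" and the crude online clustering are stand-ins of my own devising for real world-interaction, nothing more.)

    # Toy sketch: "symbols" that emerge from a stream of raw experience
    # rather than from programmer-chosen labels. Purely illustrative.
    import random

    random.seed(0)

    # Simulated stream of 1-D sensor readings with two natural clusters.
    readings = [random.gauss(0.0, 0.3) for _ in range(50)] + \
               [random.gauss(5.0, 0.3) for _ in range(50)]
    random.shuffle(readings)

    # Trivial online clustering: create a new prototype whenever an input
    # is far from every existing prototype; otherwise nudge the nearest
    # prototype toward the input.
    prototypes = []
    THRESHOLD, RATE = 1.0, 0.1

    for x in readings:
        if prototypes:
            i = min(range(len(prototypes)),
                    key=lambda j: abs(prototypes[j] - x))
            if abs(prototypes[i] - x) < THRESHOLD:
                prototypes[i] += RATE * (x - prototypes[i])
                continue
        prototypes.append(x)

    # The system's "symbols" are just indices into these prototypes: no
    # programmer chose them, and no external label is needed to use them.
    print("emergent symbols:", [round(p, 2) for p in prototypes])

The point is only that the prototype indices play the role of symbols, and nothing about them depends on a programmer-chosen label; attaching the word "cat" (or not) comes afterwards and changes nothing about how the system uses them.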

AI programmers, in their haste to get something working, often simply write some code and then label certain symbols as if they are meaningful, when in fact they are just symbols-with-labels.

Richard Loosemore




Edward W. Porter wrote:
In response to your post below, I have responded in all caps to certain
quoted portions of it.

“I'm arguing that its meaning makes an assumption about the nature of semantics that obscures rather than informs some important questions”

WHAT EXACTLY DO YOU MEAN?

“I'd just say that for the 2 in my calculator, the answer is
no, in Harnad's fairly precise sense of grounding. Whereas the calculator
clearly does have the appropriate semantics for arithmetic.”

I JUST READ THE ABSTRACT OF Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346 ON THE WEB, AND IT SEEMS HE IS TALKING ABOUT USING SOMETHING LIKE A GEN/COMP HIERARCHY OF REPRESENTATION, HAVING SIMPLE SENSORY PATTERNS AS ITS BOTTOM LAYER, AS A BASIS OF GROUNDING.

SO HOW DOES THE CALCULATOR HAVE SIGNIFICANTLY MORE OF THIS TYPE OF GROUNDING THAN “10” IN BINARY?

ALTHOUGH THE HARNAD TYPE OF GROUNDING IS THE GENERAL TYPE I SPEND MOST OF MY TIME THINKING ABOUT, I THINK IT IS POSSIBLE FOR A SYSTEM TO BE CREATED, SUCH AS CYC, THAT WOULD HAVE SOME LEVEL (ALTHOUGH A RELATIVELY LOW ONE) OF GROUNDING IN THE SENSE OF SEMANTICS, YET NOT HAVE HARNAD GROUNDING (AS I UNDERSTOOD IT FROM HIS ABSTRACT).

“Typically one assumes that experience means the experience of the person, AI, or whatever that we're talking about...”

IF THAT IS TRUE, MUCH OF MY UNDERSTANDING OF SCIENCE AND AI IS NOT
GROUNDED, SINCE IT HAS BEEN LEARNED LARGELY BY READING, HEARING LECTURES,
AND WATCHING DOCUMENTARIES.  THESE ARE ALL FORMS OF LEARNING WHERE THE
IMPORTANT CONTENT OF THE INFORMATION HAS NOT BEEN SENSED BY ME DIRECTLY.

“I claim that we can talk about a more proximate criterion for semantics, which is that the system forms a model of some phenomenon of interest. It may well be that experience, narrowly or broadly construed, is often the best way of producing such a system (and in fact I believe that it is), but the questions are logically separable.”

THIS MAKES SENSE, BUT IT WOULD COVER A LOT OF SYSTEMS THAT ARE NOT “GROUNDED” IN THE WAY MOST OF US USE THAT WORD.

“It's conceivable to have a system that has the appropriate semantics that was just randomly produced...”

I ASSUME THAT BY RANDOMLY PRODUCED YOU DON’T MEAN THAT THE SYSTEM WOULD BE TOTALLY RANDOM, SINCE IN THAT CASE IT WOULD SEEM THE CONCEPT OF A MODEL WOULD BE MEANINGLESS.

I WOULD PICK, AS A GOOD EXAMPLE OF A SEMANTIC SYSTEM THAT IS SOMEWHAT INDEPENDENT OF PHYSICAL REALITY YET HAS PROVED USEFUL (AT LEAST FOR ENTERTAINMENT), THE HARRY POTTER SERIES, OR SOME OTHER FICTIONAL WORLD THAT CREATES A FICTIONAL REALITY IN WHICH THERE IS A CERTAIN REGULARITY TO THE BEHAVIOR AND CHARACTERISTICS OF THE FICTITIOUS PEOPLE AND PLACES IT DESCRIBES.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Monday, October 15, 2007 11:29 AM
To: agi@v2.listbox.com
Subject: Re: [agi] "symbol grounding" Q&A


On Monday 15 October 2007 10:21:48 am, Edward W. Porter wrote:
Josh,

Also a good post.

Thank you!

You seem to be defining "grounding" as having meaning, in a semantic sense.

Certainly it has meaning, as generally used in the philosophical literature. I'm arguing that its meaning makes an assumption about the nature of semantics that obscures rather than informs some important questions.

If so, why is it a meaningless question to ask if "2" in your calculator has grounding, since you say the calculator has limited but real semantics?  Would not the relationships "2" has to other numbers in the semantics of that system be a limited form of semantics?

Not meaningless -- I'd just say that for the 2 in my calculator, the answer is no, in Harnad's fairly precise sense of grounding. Whereas the calculator clearly does have the appropriate semantics for arithmetic.

And what other source besides experience can grounding come from, either directly or indirectly?  The semantic model of arithmetic in your calculator was presumably derived from years of human experience that found the generalities of arithmetic to be valid and useful in the real world of things like sheep, cows, and money.

I'd claim that this is a fairly elastic use of the term "experience". Typically one assumes that experience means the experience of the person, AI, or whatever that we're talking about, in this case the calculator. The 2 in the calculator clearly does not get its semantics from the calculator's experience.

If we allow an expanded meaning of experience as including the experience of the designer of the system, we more or less have to allow it to mean any feedback in the evolutionary process that produced the low-level semantic mechanisms in our own brains. This strains my concept of the word a bit.

Whether we allow that or not, I claim that we can talk about a more proximate criterion for semantics, which is that the system forms a model of some phenomenon of interest. It may well be that experience, narrowly or broadly construed, is often the best way of producing such a system (and in fact I believe that it is), but the questions are logically separable. It's conceivable to have a system that has the appropriate semantics that was just randomly produced, for example, whereas the reverse, a system based on experience that DOESN'T model the phenomenon, wouldn't have the semantics in my view.

The most common case of a randomly-created semantic model that didn't arise from experience is the creation of social realities by fiat, as in the classic case of money. We (somebody) made up what money is and how it should work, and the reality that the system models followed, because we built the reality to match the system rather than the other way around.

Josh


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=53783032-d771c8
