Josh,  your Tue 10/16/2007 8:58 AM post was a very good one.  I have just
a few comments in all-caps.

“The view I suggest instead is that it's not the symbols per se, but the
machinery that manipulates them, that provides semantics.”

MACHINERY WITHOUT REPRESENTATION TO COMPUTE FROM IS OF AS LITTLE VALUE AS
REPRESENTATION WITHOUT MACHINERY TO COMPUTE FROM IT.

Harnad would say that you understand the words you read and hear because, as a
human body, you have already grounded them in experience or can make use of a
definition in terms that are already grounded, avoiding circular definitions.

I would say that you can understand sentences and arguments you hear because
you have an internal model that can make predictions based on the sentences
and inferences based on the arguments.

YES, I BELIEVE THAT IN HUMANS, AND IN AGI’S THAT ARE BUILT TO UNDERSTAND
US, THE ROLE OF HARNAD GROUNDING IS VERY IMPORTANT.  SUCH GROUNDING
PROBABLY PLAYS A ROLE IN SOME FORM IN MUCH OF OUR THINKING ABOUT EVEN
ABSTRACT REALITIES WITH WHICH WE HAVE NO DIRECT RELATIONSHIP, BUT I THINK
THAT IN SUCH ABSTRACT REASONING, MANY OF THE DOMINANT INFERENCES COME FROM
ASPECTS OF THOSE REALITIES WITH WHICH WE HAVE NO DIRECT EXPERIENCE.

SO, YES, I BELIEVE THAT WE ARE CAPABLE OF CREATING MODELS IN OUR MINDS OF
ABSTRACT REALITIES, SUCH AS QUANTUM BEHAVIOR, WITH WHICH WE HAVE VERY LITTLE
DIRECT EXPERIENCE, AND THAT WE CAN REASON FROM SUCH MODELS.  BUT I THINK
OUR DIRECT EXPERIENTIAL KNOWLEDGE PLAYS SOME ROLE IN EVEN MUCH OF THIS
ABSTRACT THINKING, SUCH AS EXPERIENTIALLY DERIVED KNOWLEDGE OF CONCEPTS
LIKE CAUSE AND EFFECT OR THREE-DIMENSIONAL SPACE.

The only reason the distinction makes much of a difference is that the
grounding issue is used as an argument that an AI must be embodied, having
direct sensory experience. It's part of an effort to understand why classical
AI faltered in the 80's and thus what must be done differently to make it go
again. I give a good overview of the arguments in Beyond AI chapters 5 and 7.

I DON’T THINK POWERFUL AGI’S HAVE TO BE EMBODIED, BUT IF YOU WANT THEM TO
THINK LIKE US AND HAVE THE SAME TYPE OF COMMON SENSE KNOWLEDGE WE HAVE, IT
WOULD BE HELPFUL IF THEY -- OR SOME OF THE SYSTEMS FROM WHICH THEIR
KNOWLEDGE HAS BEEN DERIVED -- HAD HAD EMBODIED EXPERIENCES.

“Richard is right in that if a system formed its own symbols from sensory
experience, they would be grounded in Harnad's sense.”

RICHARD WAS SAYING SOMETHING MORE – THAT ACCORDING TO HARNAD A SYSTEM THAT
COULD INTERPRET ITS OWN SYMBOLS MIGHT BE CONSIDERED GROUNDED.

THIS IS SIMILAR TO YOUR DISCUSSION ABOVE ABOUT AN “INTERNAL MODEL THAT CAN
MAKE PREDICTIONS BASED ON THE SENTENCES AND INFERENCES BASED ON THE
ARGUMENTS.”

“there's plenty of relations specified between symbols in Harnad's
ungrounded dictionary.”

IN THE BOOK “WORDNET: AN ELECTRONIC LEXICAL DATABASE,” EDITED BY
CHRISTIANE FELLBAUM, FIGURE 16.10 AND THE RELATED TEXT DESCRIBE HOW ONE
CAN ACTUALLY DO SOME INTERESTING INFERENCING FROM THE WORDNET DATABASE.
(A RADIAL SEARCH IN A SEMANTIC NET FORMED BY WORDNET’S REPRESENTATION
IMPLIES THAT IF SOMEONE OPENS A REFRIGERATOR, THEY MIGHT BE DOING IT TO
GET FOOD.)  WORDNET IS ARGUABLY A DICTIONARY THAT ALSO PLACES WORDS IN A
GENERALIZATION HIERARCHY.

MUCH OF WORDNET’S KNOWLEDGE IS A FORM OF DISTILLED EXPERIENTIAL KNOWLEDGE.
WHETHER OR NOT IT CONSTITUTES GROUNDING IS A DEFINITIONAL ISSUE.  IN MY
MIND IT PROVIDES A TYPE AND DEGREE OF GROUNDING; IN ANY CASE, IT CLEARLY
PROVIDES SEMANTICS.
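AS A CONCRETE ILLUSTRATION OF THE KIND OF RADIAL SEARCH I MEAN, HERE IS A
MINIMAL SKETCH IN PYTHON USING THE NLTK INTERFACE TO WORDNET.  THE CHOICE
OF NLTK, THE PARTICULAR RELATIONS FOLLOWED, AND THE SEARCH RADIUS ARE MY
OWN ASSUMPTIONS, NOT SOMETHING TAKEN FROM FELLBAUM’S FIGURE 16.10, AND THIS
TOY SEARCH MAY OR MAY NOT REPRODUCE THE REFRIGERATOR/FOOD LINK THE BOOK
DESCRIBES.

# Toy "radial" (breadth-first) search over WordNet relations via NLTK.
# Setup (one time): pip install nltk ; python -c "import nltk; nltk.download('wordnet')"
from collections import deque
from nltk.corpus import wordnet as wn

# Relation-returning Synset methods to follow; this particular set is an assumption.
RELATIONS = ("hypernyms", "hyponyms", "part_meronyms", "part_holonyms",
             "member_holonyms", "substance_meronyms")

def radial_search(start_word, target_word, max_depth=4):
    """Search outward from start_word's synsets, looking for any synset of
    target_word within max_depth relation hops; return the connecting chain."""
    targets = set(wn.synsets(target_word))
    frontier = deque((s, [s]) for s in wn.synsets(start_word))
    seen = {s for s, _ in frontier}
    while frontier:
        synset, path = frontier.popleft()
        if synset in targets:
            return path                      # chain of synsets linking the two words
        if len(path) > max_depth:
            continue                         # outside the search radius; do not expand
        for rel in RELATIONS:
            for neighbor in getattr(synset, rel)():
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, path + [neighbor]))
    return None

path = radial_search("refrigerator", "food")
print(" -> ".join(s.name() for s in path) if path else "no link within this radius")

THE POINT IS SIMPLY THAT ONCE A PROCEDURE LIKE THIS IS RUNNING OVER
WORDNET’S STATIC RELATIONS, THOSE RELATIONS START DOING INFERENTIAL WORK.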

I would distinguish between relations that were merely a static structure,
as in the dictionary, and ones that were part of a mechanism (which could
be had by adding say an inference procedure to the definitions).

AS I SAID ABOVE, KNOWLEDGE WITHOUT SOMETHING TO COMPUTE FROM IT CAN’T DO
ANYTHING.  THE TYPE OF AGI I IMAGINE WOULD HAVE MASSIVE COMPUTATIONAL
POWER AND WOULD MAINTAIN A MASSIVE DYNAMIC STATE THAT WOULD BE CONSTANTLY
CHANGING.  ITS MIND WOULD BE VERY LIVELY.
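TO MAKE THAT CONCRETE, HERE IS A TOY SKETCH (MY OWN CONSTRUCTION, NOT TAKEN
FROM HARNAD OR FROM ANY ACTUAL AGI DESIGN) OF WHAT JOSH CALLS ADDING AN
INFERENCE PROCEDURE TO THE DEFINITIONS: THE TRIPLES BELOW ARE THE STATIC
“DICTIONARY,” AND THE FORWARD-CHAINING PROCEDURE IS THE MACHINERY THAT
ACTUALLY COMPUTES SOMETHING FROM IT.  ALL THE NAMES AND RULES ARE MADE UP
FOR ILLUSTRATION.

# Static relations: by themselves these triples just sit there, like dictionary entries.
TRIPLES = {
    ("refrigerator", "isa", "appliance"),
    ("refrigerator", "used_for", "storing_food"),
    ("appliance", "isa", "artifact"),
}

def forward_chain(triples):
    """Machinery: repeatedly apply two toy rules until no new facts appear.
       1. 'isa' is transitive.
       2. a subkind inherits its kind's 'used_for' relations."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                if b == c and r1 == "isa" and r2 == "isa":
                    new.add((a, "isa", d))        # transitivity of isa
                if b == c and r1 == "isa" and r2 == "used_for":
                    new.add((a, "used_for", d))   # inheritance of used_for
        if not new <= facts:
            facts |= new
            changed = True
    return facts

derived = forward_chain(TRIPLES)
print(("refrigerator", "isa", "artifact") in derived)  # True -- but only once the machinery runs

WITHOUT FORWARD_CHAIN (OR SOMETHING LIKE IT) THE TRIPLES ANSWER NO
QUESTIONS AT ALL, WHICH IS EXACTLY THE POINT ABOUT REPRESENTATION NEEDING
MACHINERY TO COMPUTE FROM IT.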

“And it's the structures that we need to be thinking about, not the symbols.”

AS I SAID ABOVE, I AM THINKING OF LARGE COMPLEX WEBS OF COMPOSITIONAL AND
GENERALIZATIONAL HIERARCHIES, ASSOCIATIONS, EPISODIC EXPERIENCES, ETC., OF
SUFFICIENT COMPLEXITY AND DEPTH TO REPRESENT THE EQUIVALENT OF HUMAN WORLD
KNOWLEDGE.

SO, IS THAT WHAT YOU MEAN BY “STRUCTURES”?
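FOR CONCRETENESS, HERE IS THE SORT OF BARE-BONES NODE STRUCTURE I HAVE IN
MIND (A TOY SKETCH OF MY OWN, NOT A WORKED-OUT DESIGN): EACH PATTERN SITS
IN BOTH A COMPOSITIONAL HIERARCHY AND A GENERALIZATION HIERARCHY, AND
CARRIES WEIGHTED ASSOCIATIONS AND LINKS TO EPISODIC EXPERIENCES.

# A minimal node type for a web of compositional/generalizational hierarchies,
# associations, and episodes.  Everything here is illustrative, not a real design.
from dataclasses import dataclass, field

@dataclass
class PatternNode:
    name: str
    parts: list = field(default_factory=list)             # compositional hierarchy (has-part)
    generalizations: list = field(default_factory=list)   # generalization hierarchy (is-a)
    associations: dict = field(default_factory=dict)      # other node name -> strength
    episodes: list = field(default_factory=list)          # episodic experiences it occurred in

fridge = PatternNode("refrigerator")
fridge.parts.append(PatternNode("door"))                  # composition
fridge.generalizations.append(PatternNode("appliance"))   # generalization
fridge.associations["food"] = 0.9                         # learned association
fridge.episodes.append("opened the refrigerator to get milk")  # toy episode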

“> It seems ridiculous to say that one could have two identical large
> knowledge bases of experiential knowledge each containing millions of
> identically interconnected symbols and patterns in two AGI having
> identical hardware, and claim that the symbols in one were grounded
> but those in the other were not because of the purely historical
> distinction that the sensing to learn such a knowledge was performed
> on only one of the two identical systems.”

“Again, exactly my point. It wouldn't matter if one was copied from the other,
or reverse-engineered, or produced by a random-number generator (as unlikely
as that would be).”

I AM GLAD SOMEONE AGREES WITH ME ON THAT.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 16, 2007 8:58 AM
To: agi@v2.listbox.com
Subject: Re: [agi] "symbol grounding" Q&A


On Monday 15 October 2007 04:45:22 pm, Edward W. Porter wrote:
> I mis-understood you, Josh.  I thought you were saying semantics could
> be a type of grounding.  It appears you were saying that grounding
> requires direct experience, but that grounding is only one (although
> perhaps the
> best) possible way of providing semantic meaning.  Am I correct?

That's right as far as it goes. The term "grounding" is very commonly
associated with "symbol" in such a way as to imply that semantics only arise
from the fact that symbols have referents in the real world (or whatever).

This is the view Harnad espoused with his dictionary example. The view I
suggest instead is that it's not the symbols per se, but the machinery that
manipulates them, that provides semantics. Dictionaries have no machinery.

Turing machines, on the other hand, do -- so the symbols used by a Turing
machine may have meaning in a sense even though there is nothing in the
external world that they map to. (A case in point would be the individual
bits that your calculator manipulates.)

> I would tend to differ with the concept that grounding only relates to
> what you directly experience.  (Of course it appears to be a
> definitional issue, so there is probably no theoretical right or
> wrong.)  I consider what I read, hear in lectures, and see in videos
> about science or other abstract fields such as patent law to be
> experience, even though the operative content in such experiences is
> derived second, third, fourth, or more handed.

Harnad would say that you understand the words you read and hear because, as a
human body, you have already grounded them in experience or can make use of a
definition in terms that are already grounded, avoiding circular definitions.

I would say that you can understand sentences and arguments you hear because
you have an internal model that can make predictions based on the sentences
and inferences based on the arguments.

The only reason the distinction makes much of a difference is that the
grounding issue is used as an argument that an AI must be embodied, having
direct sensory experience. It's part of an effort to understand why classical
AI faltered in the 80's and thus what must be done differently to make it go
again. I give a good overview of the arguments in Beyond AI chapters 5 and 7.

> In Richard Loosemore’s above mentioned informative post he implied
> that according to Harnad a system that could interpret its own symbols
> is grounded.  I think this is more important to my concept of
> grounding than from where the information that lets the system do such
> important interpretation comes.  To me the important distinction is
> are we just dealing with realtively naked symbols, or are we dealing
> with symbols that have a lot of the relations with other symbols and
> patterns, something like those Pei Wang was talking about, that lets
> the system use the symbols in an intelligent way.

Richard is right in that if a system formed its own symbols from sensory
experience, they would be grounded in Harnad's sense. In the case of the
relations between the symbols, it isn't clear -- there's plenty of relations
specified between symbols in Harnad's ungrounded dictionary.

I would distinguish between relations that were merely a static structure, as
in the dictionary, and ones that were part of a mechanism (which could be had
by adding say an inference procedure to the definitions).

> Usually for such relations and patterns to be useful in a world, they
> have to have come directly or indirectly from experience of that
> world.  But again, it is not clear to me that they has to come first
> handed.

Exactly my point. The vast majority of what we learn is second- (or nth-)
hand, mediated by symbol structures. And it's the structures that we need to
be thinking about, not the symbols.

> It seems ridiculous to say that one could have two identical large
> knowledge bases of experiential knowledge each containing millions of
> identically interconnected symbols and patterns in two AGI having
> identical hardware, and claim that the symbols in one were grounded
> but those in the other were not because of the purely historical
> distinction that the sensing to learn such a knowledge was performed
> on only one of the two identical systems.

Again, exactly my point. It wouldn't matter if one was copied from the other,
or reverse-engineered, or produced by a random-number generator (as unlikely
as that would be).

Or imagine that you had a robot who built its own symbols from physical
experience until it was intelligent, and then was cut off from the sensors
and was only connected thru a tty, doing Turing tests. The symbols didn't
lose meaning -- the words of someone blinded in an accident are not suddenly
meaningless! So if we built an AI de novo that had the same program as the
robot, it would be ridiculous to say that its symbols had no meaning, as well.

Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=54275868-6bb1eb
