On Monday 15 October 2007 04:45:22 pm, Edward W. Porter wrote:
> I misunderstood you, Josh.  I thought you were saying semantics could be
> a type of grounding.  It appears you were saying that grounding requires
> direct experience, but that grounding is only one (although perhaps the
> best) possible way of providing semantic meaning.  Am I correct?

That's right as far as it goes. The term "grounding" is very commonly 
associated with "symbol" in such a way as to imply that semantics only arise 
from the fact that symbols have referents in the real world (or whatever). 
This is the view Harnad espoused with his dictionary example. The view I 
suggest instead is that it's not the symbols per se, but the machinery that 
manipulates them, that provides semantics. Dictionaries have no machinery. 
Turing machines, on the other hand, do -- so the symbols used by a Turing 
machine may have meaning in a sense even though there is nothing in the 
external world that they map to. (A case in point would be the individual 
bits that your calculator manipulates.)
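
To make that concrete, here's a toy sketch in Python (my own illustration, 
not anything from the literature): the bit lists below map to nothing 
outside the program, but the adder machinery handles them systematically as 
numbers, and that systematic handling is the only sense in which they "mean" 
anything to the machine.

    # Bits get their role from the machinery that manipulates them,
    # not from any external referent.
    def add_bits(a, b):
        """Add two little-endian lists of bits with ripple carry."""
        out, carry = [], 0
        for x, y in zip(a, b):
            s = x + y + carry
            out.append(s % 2)
            carry = s // 2
        out.append(carry)
        return out

    print(add_bits([1, 0, 1], [1, 1, 0]))  # 5 + 3 -> [0, 0, 0, 1], i.e. 8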

> I would tend to differ with the concept that grounding only relates to
> what you directly experience.  (Of course it appears to be a definitional
> issue, so there is probably no theoretical right or wrong.)  I consider
> what I read, hear in lectures, and see in videos about science or other
> abstract fields such as patent law to be experience, even though the
> operative content in such experiences is derived second, third, fourth, or
> more handed.

Harnad would say that you understand the words you read and hear because, as a 
human body, you have already grounded them in experience or can make use of a 
definition in terms that are already grounded, avoiding circular definitions. 

I would say that you can understand sentences and arguments you hear because 
you have an internal model that can make predictions based on the sentences 
and inferences based on the arguments. 
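
A deliberately tiny sketch of that idea (my own toy Python, not anyone's 
theory of language): sentences update an internal state, and "understanding" 
shows up only as the model's ability to answer a question about what 
follows from them.

    state = {}

    def hear(sentence):
        # "X is in the Y." updates the internal model.
        subj, _, place = sentence.partition(" is in the ")
        state[subj.lower()] = place.rstrip(".")

    def predict(question):
        # "Where is X?" is answered from the model, not from the words alone.
        subj = question.removeprefix("Where is ").rstrip("?")
        return state.get(subj.lower(), "unknown")

    hear("The ball is in the kitchen.")
    hear("The cat is in the garden.")
    print(predict("Where is the ball?"))  # kitchen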

The only reason the distinction makes much of a difference is that the 
grounding issue is used as an argument that an AI must be embodied, having 
direct sensory experience. It's part of an effort to understand why classical 
AI faltered in the 80's and thus what must be done differently to make it go 
again. I give a good overview of the arguments in Beyond AI chapters 5 and 7.

> In Richard Loosemore’s above-mentioned informative post he implied that
> according to Harnad a system that could interpret its own symbols is
> grounded.  I think this is more important to my concept of grounding than
> where the information that lets the system do such important
> interpretation comes from.  To me the important distinction is whether we
> are dealing with relatively naked symbols, or with symbols that have a lot
> of relations with other symbols and patterns, something like those Pei
> Wang was talking about, that let the system use the symbols in an
> intelligent way.

Richard is right in that if a system formed its own symbols from sensory 
experience, they would be grounded in Harnad's sense. In the case of the 
relations between the symbols, it isn't clear -- there are plenty of relations 
specified between symbols in Harnad's ungrounded dictionary. 

I would distinguish between relations that were merely a static structure, as 
in the dictionary, and ones that were part of a mechanism (which could be had 
by adding, say, an inference procedure to the definitions). 
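
Here is a toy contrast in Python (my own sketch; the facts and rules are made 
up): the same relation triples sit inertly in `facts`, like dictionary 
entries, until an inference procedure is added that actually uses them.

    facts = {("canary", "isa", "bird"),
             ("bird", "isa", "animal"),
             ("bird", "has", "wings")}   # static structure, like a dictionary

    def infer(facts):
        """Forward-chain one rule schema: anything that follows an 'isa'
        link inherits the relations of its parent.  This procedure is
        the 'mechanism' the bare triples lack."""
        facts = set(facts)
        while True:
            new = set()
            for (a, r1, b) in facts:
                for (c, r2, d) in facts:
                    if r1 == "isa" and b == c:
                        new.add((a, r2, d))
            if new <= facts:
                return facts
            facts |= new

    print(("canary", "has", "wings") in infer(facts))  # True: derived, not stored

The triples are identical in both cases; only the second version has 
machinery that can do anything with them.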

> Usually for such relations and patterns to be useful in a world, they have
> to have come directly or indirectly from experience of that world.  But
> again, it is not clear to me that they have to come firsthand.

Exactly my point. The vast majority of what we learn is second- (or nth-) 
hand, mediated by symbol structures. And it's the structures that we need to 
be thinking about, not the symbols.
 
> It seems ridiculous to say that one could have two identical large
> knowledge bases of experiential knowledge each containing millions of
> identically interconnected symbols and patterns in two AGIs having
> identical hardware, and claim that the symbols in one were grounded but
> those in the other were not because of the purely historical distinction
> that the sensing to learn such knowledge was performed on only one of
> the two identical systems.

Again, exactly my point. It wouldn't matter if one was copied from the other, 
or reverse-engineered, or produced by a random-number generator (as unlikely 
as that would be).

Or imagine that you had a robot that built its own symbols from physical 
experience until it was intelligent, and then was cut off from the sensors 
and was only connected thru a tty, doing Turing tests. The symbols didn't 
lose meaning -- the words of someone blinded in an accident are not suddenly 
meaningless! So if we built an AI de novo that had the same program as the 
robot, it would be just as ridiculous to say that its symbols had no meaning.
 
Josh
