This is in response to Josh Storrs' Monday, October 15, 2007 3:02 PM post
and Richard Loosemore's Monday, October 15, 2007 1:57 PM post.

I misunderstood you, Josh.  I thought you were saying semantics could be
a type of grounding.  It appears you were saying that grounding requires
direct experience, but that grounding is only one (although perhaps the
best) possible way of providing semantic meaning.  Am I correct?

I would tend to differ with the notion that grounding relates only to
what you directly experience.  (Of course, it appears to be a definitional
issue, so there is probably no theoretical right or wrong.)  I consider
what I read, hear in lectures, and see in videos about science or other
abstract fields such as patent law to be experience, even though the
operative content in such experiences is secondhand, thirdhand,
fourthhand, or even further removed.

In his informative post mentioned above, Richard Loosemore implied that,
according to Harnad, a system that can interpret its own symbols is
grounded.  I think that ability is more important to my concept of
grounding than where the information that lets the system perform such
interpretation comes from.  To me the important distinction is whether we
are dealing with relatively naked symbols, or with symbols that have rich
relations to other symbols and patterns, something like those Pei Wang
was talking about, that let the system use the symbols in an intelligent
way.
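
To make that distinction concrete, here is a minimal sketch (in Python)
of a naked symbol versus one embedded in a web of relations the system
itself can use.  The names and structure are hypothetical illustrations
of my own, not Pei Wang's actual design:

  # A symbol's usability comes from its internal relations, not from
  # any external labeling.  Purely illustrative.
  from collections import defaultdict

  class SymbolNet:
      def __init__(self):
          # symbol -> set of (relation, other_symbol) pairs
          self.relations = defaultdict(set)

      def relate(self, subject, relation, obj):
          self.relations[subject].add((relation, obj))

      def interpret(self, symbol):
          # What the system itself can say about a symbol, using only
          # its internal relations (no outside observer needed).
          return sorted(self.relations[symbol])

  net = SymbolNet()
  net.relate("dog", "is_a", "animal")
  net.relate("dog", "can", "bark")
  net.relate("dog", "part_of", "dog-chases-ball")

  print(net.interpret("dog"))    # rich relations the system can use
  print(net.interpret("blorp"))  # a naked symbol: nothing to interpret

The point is only that interpretability lives in the relational
structure, wherever that structure came from.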

Usually, for such relations and patterns to be useful in a world, they
have to have come directly or indirectly from experience of that world.
But again, it is not clear to me that they have to come firsthand.

Presumably, 10 to 20 years from now, the AGI equivalents of personal
computers may be mass produced by the millions, and they may come out of
the box with significant world knowledge copied bit-for-bit into their
non-volatile memory from knowledge that came from the direct experience
of many learning machines and, indirectly, from massive, sophisticated NL
reading of large bodies of text and visual recognition of large image and
video databases.  I would consider most of the symbols in such a
brand-new personal AGI to be grounded, even though they were not derived
from any experience of that particular personal AGI itself, provided they
had meaning to the personal AGI itself.

It seems ridiculous to say that one could have two identical large
knowledge bases of experiential knowledge, each containing millions of
identically interconnected symbols and patterns, in two AGIs having
identical hardware, and claim that the symbols in one were grounded but
those in the other were not, because of the purely historical distinction
that the sensing used to learn such knowledge was performed on only one
of the two identical systems.
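
As a hedged illustration of that point: if one system's learned knowledge
base is copied bit-for-bit into a second, identical machine, no test run
on the machines themselves can tell the original from the copy.  The data
and names below are hypothetical toy stand-ins:

  # Compare a learned knowledge base with its bit-for-bit copy.
  import hashlib
  import pickle

  # Knowledge "learned" by machine A from direct experience (toy data).
  learned_kb = {"dog": [("is_a", "animal"), ("can", "bark")]}

  # The same knowledge copied, byte for byte, into machine B.
  copied_kb = pickle.loads(pickle.dumps(learned_kb))

  def digest(kb):
      return hashlib.sha256(pickle.dumps(kb)).hexdigest()

  # Identical content, so identical behavior; only the external
  # history of how each machine got the bytes differs.
  assert digest(learned_kb) == digest(copied_kb)

Any grounded/ungrounded distinction between the two would have to rest on
history external to the systems, not on anything in the systems
themselves.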

Of course, going forward, each system would have to be able to do its own
learning from its own experience if it is to respond to the unique
aspects and events of its own environment.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
