Josh,

Also a good post.

You seem to be defining "grounding" as having meaning in a semantic
sense.  If so, why is it a meaningless question to ask whether the "2"
in your calculator has grounding, given that you say the calculator has
a limited but real semantics?  Would not the relationships "2" has to
other numbers within that system be a limited form of semantics?

And what other source besides experience can grounding come from, either
directly or indirectly?  The semantic model of arithmetic in your
calculator was presumably derived from years of human experience that
found the generalizations of arithmetic to be valid and useful in the
real world of things like sheep, cows, and money.  Of course there could
be semantics in an imaginary world, but those would come from
experiences of imagination.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 13, 2007 12:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] "symbol grounding" Q&A


This is a very nice list of questions and makes a good framework for
talking about the issues. Here are my opinions...

On Saturday 13 October 2007 11:29:16 am, Pei Wang wrote:

> *. When is a symbol "grounded"?

"Grounded" is not a good way of approaching what we're trying to get at,
which
is semantics. The term implies that meanings are inherent in words, and
this
obscures the fact that semantics are a property of systems of which words
are
only a part.
Example: is the symbol 2 grounded in my calculator? There's no pointer
from the bit pattern to an actual pair of anything. However, when I type
in 2+2, it tells me 4. There is a system implemented that is a semantic
model of arithmetic, and 2 is connected into that system in such a way
that I get the right answer when I use it. Is 2 grounded? Meaningless
question. Does the calculator have a limited but real semantics of
arithmetic? Definitely.
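
A minimal sketch of that point, in Python (my own illustration, not
anything from the thread; the numeral table and helper are made up).
The token "2" points at nothing in the world, yet the system's
relational rules give it a working semantics of arithmetic:

# A toy "calculator": numerals are opaque tokens whose only meaning
# is their role in the successor relation below (Peano-style).
SUCC = {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5", "5": "6"}
PRED = {v: k for k, v in SUCC.items()}

def add(a: str, b: str) -> str:
    """Add two numerals using nothing but their relational structure."""
    while b != "0":              # peel one successor off b...
        a, b = SUCC[a], PRED[b]  # ...and push it onto a
    return a

print(add("2", "2"))  # -> "4": the right answer, with no "grounding"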

> *. What is wrong in traditional "symbolic AI" on this topic?

These systems didn't come close to implementing a competent semantics of
the parts of the world they were claimed to "understand".

> *. What is the "experience" needed for symbol grounding?

Experience per se isn't strictly necessary, but you have to get the
semantics from somewhere, and experience is a good source. The
scientific method relies heavily on experience in the form of experiment
to validate theories, for example.

> *. For the symbols in an AGI to be grounded, should the experience of
> the system be the same, or very similar, to human sensory experience?

No, as long as it can form coherent predictive models. On the other
hand, some overlap may be necessary to use human language with much
proficiency.

> *. Is vision necessary for symbol grounding in AGI?

No, but much of human modelling is based on spatial metaphors, and thus
the communication issue is particularly salient.

> *. Is vision important in deciding the meaning of human concepts?

Many human concepts are colored with visual connotations, pun intended.
You're clearly missing something if you don't have it; but I would guess
that, with only moderate exceptions, you could capture the essence
without it.

> *. In that case, if an AGI has no vision, how can it still understand
> a human concept?

The same way it can understand anything: it has a model whose semantics
match the semantics of the real domain.
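
As an illustrative sketch (again mine, with a made-up floor plan): a
program with no sensors at all can still answer spatial questions
correctly, so long as its relational model matches the layout of the
real domain.

# No camera, no body: just relations. The program "understands" the
# layout exactly to the extent that this table matches the real one.
NORTH_OF = {"kitchen": "hall", "hall": "study"}  # hypothetical floor plan

def is_north_of(a: str, b: str) -> bool:
    """Follow the north-of chain upward from b, looking for a."""
    room = b
    while room in NORTH_OF:
        room = NORTH_OF[room]
        if room == a:
            return True
    return False

print(is_north_of("study", "kitchen"))  # True, and true of the building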

> *. Can a blind person be intelligent?

Yes.

> *. How can a sensorless system like NARS have grounded symbols?

Forget "grounded". Can it *understand* things? Yes, if it has a model
whose semantics match the semantics of the real domain.

> *. If NARS always uses symbols differently from typical human usage,
> can we still consider it intelligent?

Certainly, if the symbols it uses for communication are close enough to
the usages of whoever it's communicating with to be comprehensible.
Internally it can use whatever symbols it wants, any way it wants.

> *. Are you saying that vision has nothing to do with AGI?

Personally I think that vision is fairly important in a practical sense,
because I think we'll get a lot of insights into what's going on in
there when we try to unify the higher levels of the visual and natural
language interpretive structures. And of course, vision will be of
immense practical use in a robot.

But I think that once we do know what's going on, it will be possible to
build a Turing-test-passing AI without vision.

Josh

