Derek Zahn wrote:
Richard Loosemore writes:
> This becomes a problem because when we say of another person that they
> "meant" something by their use of a particular word (say "cat"), what we
> actually mean is that that person had a huge amount of cognitive
> machinery connected to that word "cat" (reaching all the way down to the
> sensory perception mechanisms that allow the person to recognise an
> instance of a cat, and motor output mechanisms that let them interact
> with a cat).
>
> What Stephen Harnad said in his original paper was "Hang on a second:
> if the AI system does not have all that other machinery inside it when
> it uses a word like "cat", surely it does not really "mean" the same
> thing by "cat" as a person would?"
>
> [...]
Thanks, Richard. That post was a terrific bit of writing.
On a related note, I think those who are uneasy with the idea of grounding symbols in experience of a "virtual" world wonder whether the (currently) thin and skewed "sensory experience" of cats, or of any other concept-friendly regularities in such worlds, is close enough to real sensory experience to provide enough of the "same meaning" for communication with humans using the resulting concepts.
For that matter, even when concepts are grounded in the real world, one wonders whether the resulting concepts and their meanings can be similar enough for communication if the concept-formation machinery is not quite similar to our own... sometimes even individual human conceptualizations are barely similar enough to allow conversation.
That is a very good point, and one to which I don't have a ready answer.
This question will attract a good deal of attention when we get nearer
to the point of being able to test real candidate AGI systems.
It is another reason to stay close to the human design, I believe.
Richard Loosemore