Dennis Gorelik wrote:
Richard,
It seems that by "Real Grounding Problem" you mean "Communication
Problem".
Basically, your goal is to make sure that when two systems communicate
with each other, they understand each other correctly.
Right?
If that's the problem -- I'm ready to give you my solution.
BTW, I had to read your explanation 3 times to get it [if I got it].
:-)
Don't feel bad: my explanation was horribly compressed, and not
necessarily very well articulated, and the actual claim is extremely
abstract and susceptible to misinterpretation (about 95% of the
literature on the SGP is a complete misinterpretation!).
I don't think it is quite a "communication problem", though. The issue
is much more like the error that destroyed NASA's Mars Climate Orbiter
back in 1999: one software module calculated thruster impulse in
pound-force seconds while the module receiving the number expected
newton-seconds, so the results passed from one to the other became
meaningless.
This could be called a communication problem, but it is internal, and in
the AGI case it is not as simple as just miscalculated numbers.
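To make that concrete, here is a toy sketch in Python (the function
names and numbers are mine, invented for illustration; this is not the
actual flight software) of how two modules can each be internally
consistent and still produce garbage when composed:

    # Module A computes thruster impulse in pound-force seconds:
    # its author "meant" imperial units.
    def impulse_lbf_s(thrust_lbf, burn_time_s):
        return thrust_lbf * burn_time_s

    # Module B consumes an impulse, silently assuming newton-seconds:
    # its author "meant" metric units.
    def delta_v(impulse_N_s, mass_kg):
        return impulse_N_s / mass_kg

    # Neither function is wrong in isolation. The error lives in the
    # hand-off, where the number's implicit meaning changes.
    dv = delta_v(impulse_lbf_s(100.0, 10.0), 500.0)  # off by a factor of ~4.45

Each module is perfectly coherent on its own; the number only becomes
meaningless when one module's implicit interpretation meets the other's.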
So here is a revised version of the problem: suppose that a system
stores some numbers internally, but those numbers are *used* by the
system in such a way that their "meaning" is implicit in the entire
design of the system. When the system uses those numbers to do things,
they are fed into the "using" mechanisms in such a way that you can
only tell what the numbers "mean" by looking at the overall way in
which they are used.
Now, with that idea in mind, imagine that programmers came along and
set up the *values* for a whole bunch of those numbers, inside the
machine, ON THE ASSUMPTION that those numbers "meant" something that the
programmers had decided they meant. So the programmers were really
definite and explicit about the meaning of the numbers.
Question: what if those two sets of meanings are in conflict?
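Here is a minimal sketch of that conflict (hypothetical names, and far
cruder than anything in a real system, but it shows the shape of the
problem): everywhere the stored number is *used*, the code treats it as
a log-probability, while the programmer seeded it assuming it was a
plain probability.

    import math

    # The system stores a "belief strength" for each proposition.
    beliefs = {}

    # The using mechanism: every consumer of these numbers treats them
    # as log-probabilities. THAT usage is their implicit meaning.
    def combined_belief(prop_a, prop_b):
        return math.exp(beliefs[prop_a] + beliefs[prop_b])

    # The programmer seeds the values, assuming they "mean" plain
    # probabilities in [0, 1] -- an explicit, programmer-decided meaning.
    beliefs["cats-are-mammals"] = 0.99
    beliefs["cats-have-fur"] = 0.95

    # The two meanings conflict, and the output is nonsense (about 6.96,
    # which is not a probability of anything).
    print(combined_belief("cats-are-mammals", "cats-have-fur"))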
This is effectively what the SGP (symbol grounding problem) is all
about. Some AI folks start out by building a program in which they
decide ahead of time what the "symbols" mean, and they insert a whole
bunch of actual symbols (AND mechanisms that operate on symbols) into
the system on the assumption that their chosen meanings are valid.
This becomes a problem because when we say of another person that they
"meant" something by their use of a particular word (say "cat"), what we
actually mean is that that person had a huge amount of cognitive
machinery connected to that word "cat" (reaching all the way down to the
sensory perception mechanisms that allow the person to recognise an
instance of a cat, and motor output mechanisms that let them interact
with a cat).
What Stevan Harnad said in his original 1990 paper was "Hang on a
second: if the AI system does not have all that other machinery inside
it when it uses a word like 'cat', surely it does not really 'mean' the
same thing by 'cat' as a person would?"
In effect, he was saying that the very limited machinery inside a simple
AI system will have an *implicit* meaning for "cat" which is very crude
because it does not have all that other stuff that we have inside our
heads, connected to the "cat" concept. When you ask the AI "Are cats
fussy?" it will only be able to do something crude like see if it has a
memory item recording a fact about cats and fussiness. A person on the
other hand (if they know cats) will be able to deploy a huge amount of
knowledge about both the [cat] concept and the [fussy] concept, and come
to a sophisticated conclusion. What Harnad would say is that the AI
does not really have the same "meaning" attached to "cat" as people do.
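As a caricature of the crude end of that spectrum, here is roughly
everything such a lookup amounts to (an illustrative sketch; the data
structure and names are invented, not taken from any particular AI
system):

    # The AI's entire "meaning" for cat: a bag of stored fact triples.
    facts = {
        ("cat", "is-a", "mammal"),
        ("cat", "has-property", "fussy"),
    }

    def are_cats_fussy():
        # Check for a literally matching stored item. There is no
        # sensory grounding, no model of what fussiness looks like
        # in an actual cat.
        return ("cat", "has-property", "fussy") in facts

    print(are_cats_fussy())  # True -- but only because someone typed it in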
He then went on to say that the only way to resolve this problem is to
make sure that the system is connected to the real world so it can build
its own symbols, and that only when it has all that real-world
connection machinery, and builds symbols in the way that we do, will the
system really be able to get the meaning of a word like "cat". Harnad
summarized this by saying that AI systems need to have their symbols
"grounded" in the real world.
Now this is where the confusion starts. Lots of people heard him
suggest this, and then thought: "No problem: we'll attach some video
cameras and robot arms to our AI and then it will be grounded!"
This is a disastrous misunderstanding of the problem. If the AI system
starts out with a design in which symbols are designed and stocked by
programmers, this part of the machine has ONE implicit meaning for its
symbols ..... but then if a bunch of peripheral machinery is stapled
onto the back end of the system, enabling it to see the world and use
robot arms, the processing and "symbol" building that goes on in that
part of the system will have ANOTHER implicit meaning for the symbols.
There is no reason why these two sets of symbols should have the same
meaning! In fact, it turns out (when you think about it a little
longer) that all of the problem has to do with the programmers going in
and building any symbols using THEIR idea of what the symbols should
mean: the system has to be allowed to build its own symbols from the
ground up, without us necessarily being able to interpret those symbols
completely at all. We might never be able to go in and look at a
system-built symbol and say "That means [x]", because the real meaning
of that symbol will be implicit in the way the system uses it.
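To give just a hint of what "building its own symbols" might look like
at the very bottom (my illustration, not Harnad's proposal, and a real
system would be vastly more elaborate): symbols can emerge as opaque
labels for regularities the system finds in its own sensory stream, with
nothing guaranteeing that any of them lines up with an English word.

    import random

    # Toy sensory stream: 2-D feature vectors from two kinds of "thing".
    random.seed(0)
    stream = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] +
              [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)])

    # The system invents symbols by grouping its own experience around
    # two reference points (the assignment step of a crude k-means).
    centers = [stream[0], stream[-1]]
    def nearest(v):
        return min(range(len(centers)),
                   key=lambda i: (v[0] - centers[i][0]) ** 2
                               + (v[1] - centers[i][1]) ** 2)

    symbols = ["sym_%d" % nearest(v) for v in stream]

    # "sym_0" and "sym_1" mean whatever the system's later *use* of them
    # makes them mean; no programmer ever decided that sym_0 = "cat".
    print(set(symbols))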
In summary: the symbol grounding problem is that systems need to have
only one interpretation of their symbols, and it needs to be the one
built by the system itself as a result of a connection to the external
world.
Does that make more sense?
Richard Loosemore