Edward W. Porter wrote:
RICHARD LOOSEMORE WROTE IN HIS Tue 10/16/2007 9:25 AM POST.
“So if someone tries to talk about what the grounding problem is by
defining it in terms of semantics, I start to wonder what they're
putting on their cornflakes in the morning. The trivial sense of
"semantics" don't apply, and the deeper senses are so vague that they
are almost synonymous with grounding.”
I AM PERHAPS NOT AS SCHOOLED IN THE MEANING OF “SEMANTICS” AS YOU, BUT I
THINK OF IT AS MEANING “MEANING”, AND I THINK MEANING COMES LARGELY FROM
ASSOCIATIONS, IN HUMANS LARGELY DERIVED FROM EXPERIENCE: OUR OWN DIRECT
EXPERIENCES; EXPERIENCES OF THINGS WE HAVE READ OR BEEN TOLD; AND
EXPERIENCE THAT HAS BEEN DISTILLED BY EVOLUTION FROM OUR ANIMAL
ANCESTORS (SUCH AS THE ASSOCIATIONS THAT MAKE SEX SEEM SO IMPORTANT TO
MANY OF US).
YES, IT IS A VAGUE TERM. BUT THAT DOES NOT MEAN IT IS WITHOUT USE. IF,
AS YOU SUGGEST, THE DEEPER SENSES OF “SEMANTIC” ARE ALMOST SYNONYMOUS
WITH GROUNDING, WHY WOULD THAT MAKE THE TERM ANY LESS USEFUL THAN
“GROUNDING”?
Edward,
To understand what is going on here, it might help to know that
underneath the surface there is a philosophical divide between two views
of what "semantics" means.
Some people, who have a deep desire to make the world as orderly and
crystalline as possible, have decided that the meaning of any symbol
that is located inside an intelligent system should be something to do
with an ideal, mathematical relationship between the symbol and the
thing in the world.
This is subtle, but what they are trying to do is eliminate any
vagueness and (especially) eliminate the idea that meaning might depend
on the system that is doing the understanding. They are extremely
antagonistic to the idea that there might not be one unified meaning
underneath the surface, which is the "perfect" or "canonical" meaning of
a symbol. They would readily agree that actual understanding systems -
humans or AI systems - will never completely reach this ideal meaning,
but their stance is that there IS an ideal meaning underneath
everything, and that real-world understanding systems just do some kind
of approximation to that real meaning.
One way this group has tried to pursue its agenda is through an idea
due to Montague and others, in which meanings of terms are related to
something called "possible worlds". They imagine infinite numbers of
possible worlds, in which all the possible variations of every
conceivable parameter are allowed to vary, and then they define the
meanings of actual things in our world in terms of functions across
those possible worlds. Such an idea is, of course, not usable in any
computer program, since it requires unthinkably large infinities [sic!],
but the idea is that the "real" definition of meaning gets pinned down
precisely, and then they can talk about the imperfect kind of meaning
that ordinary brains and computers (with their non-infinite resources)
have to use, as a mere approximation or limited form of the more
perfect, ideal, mathematical sense of meaning.
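For readers who want the flavor of this construction, here is a toy sketch in Python. Everything in it (the miniature set of worlds, the individuals, the predicates) is invented for illustration; the real formalism requires the unthinkably large infinities mentioned above, so a finite version can only show the shape of the idea: a term's "meaning" (its intension) is modeled as a function from possible worlds to extensions.

```python
# Toy illustration of the possible-worlds picture of meaning.
# Assumption: a tiny FINITE set of worlds stands in for the
# infinite space the formalism actually demands.

# Each "world" assigns properties to a few individuals.
worlds = {
    "w1": {"rex": {"dog"}, "tom": {"cat"}},
    "w2": {"rex": {"dog", "pet"}, "tom": {"dog"}},
    "w3": {"rex": {"cat"}, "tom": {"cat", "pet"}},
}

def intension(predicate):
    """The 'meaning' of a predicate: a function from each world
    to the set of individuals satisfying it in that world."""
    return {w: {ind for ind, props in facts.items() if predicate in props}
            for w, facts in worlds.items()}

# The extension of "dog" varies from world to world; on this view
# the whole mapping, not any one extension, is the meaning.
print(intension("dog"))
# {'w1': {'rex'}, 'w2': {'rex', 'tom'}, 'w3': set()}
```

The sketch makes the objection in the text concrete: replace the three toy worlds with all possible variations of every conceivable parameter and the function becomes uncomputable, which is exactly the point being criticized.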
This idea has not been fully worked out, but, as with many mathematical
formalisms, the fact that it might not actually be consistent with
reality or usable in any meaningful way has not stopped mathematical
logicians from running with it.
In particular, the idea begs all kinds of questions that the
mathematical logicians ignore or evade: for example, the fact that
human beings use concepts that do not appear to have sharp boundaries at
all, is, to them, just evidence that human beings are sloppy
implementations of the ideal. The possibility that, on the contrary,
humans may be the best possible implementation of an understanding
system, and their ideal mathematisation of "meaning" might be useless,
or have no relevance to actual understanding systems, or just be
downright wrong, is not a possibility that they even consider.
The bizarre thing is that AI researchers (like my good old favorites,
Russell and Norvig) talk as if the idea of "semantics" has some deep
mathematics to back it up, referring to the possible-worlds
interpretation, but don't tell you that the possible-worlds idea buys you
absolutely nothing except some excuses to kid yourself that you are
using a meaningful word when you say "semantics".
What is the alternative to this clean, idealistic interpretation of
semantics?
The alternative is simply that the correspondence between symbols inside
an understander and the "things" in the outside world is DEFINED by what
the understander does. The understanding system does not try to
approximate to some ideal, it IMPOSES its own technique for breaking up
the world, and shares that technique with other understanding systems
that are built along the same lines.
In this conception, it makes no sense to insist on the "absolute" or
ideal meaning of a symbol, because every understanding system can do it
in a different way. The fact that a community of similarly-constructed
understanders does it the same way is only an indication of the fact
that .... well, that they are similarly constructed!
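A toy sketch of this alternative view (all names here are hypothetical; the point is only the contrast with the possible-worlds picture): each system's symbols mean whatever its own categorization mechanism makes them mean, and agreement between systems arises purely from shared construction.

```python
# Toy illustration of usage-defined "semantics": each understander
# IMPOSES its own way of carving up the same raw data, and the
# symbol-to-world correspondence is just whatever carving it does.

def make_understander(boundary):
    """Build an 'understander' that sorts observations into two
    symbols, 'small' and 'large', at its own chosen boundary."""
    def categorize(x):
        return "small" if x < boundary else "large"
    return categorize

data = [1, 4, 6, 9]

alice = make_understander(boundary=5)  # two similarly built systems...
bob   = make_understander(boundary=5)  # ...agree because they share a design
carol = make_understander(boundary=7)  # a differently built system carves
                                       # the same world differently

print([alice(x) for x in data])  # ['small', 'small', 'large', 'large']
print([bob(x) for x in data])    # identical to alice: same construction
print([carol(x) for x in data])  # ['small', 'small', 'small', 'large']
```

Nothing in the sketch appeals to an ideal boundary that alice and carol are both approximating; each carving is simply what that system does, and alice and bob agree only because they are similarly constructed.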
In light of this second view of "semantics", it would make sense to say
that a properly grounded system had a proper semantics, because when it
is properly grounded it builds its symbols in a consistent, usable way -
and the very fact that it can build symbols in such a way as to use them
intelligently is BY ITSELF a statement that it has a viable "semantics".
Looked at that way, the grounding issue and the question of whether it
has a proper (well-formed, viable, self-consistent....?) semantics, are
closely related and interdependent. Not identical, but very close.
But by the same token, if someone says that "grounding" is really a
vague and inappropriate term, and that we should really be talking about
whether the system has a viable semantics, that distinction is somewhat
nonsensical: "semantics" is no better defined than "grounding".
I have only been able to sketch the situation here. What I was
previously referring to was just a reflection of this deeper division
among those who consider the question of what symbols in a symbol system
"mean".
In my view, the only consistent resolution of the question of what
"meaning" is (or what "semantics" is) is "Meaning is what minds just
happen to build when they apply their internal mechanisms to the job of
organizing their internal representation of the world". Why is this the
only consistent resolution? Because you cannot ask the question "What
is the meaning of 'meaning'?" without begging the question itself: you
have to *implicitly* answer the question the very moment that you open
your mouth to start answering the question, because of course you cannot
say what the meaning of anything is unless we already agree what
"meaning" entails. The only way to resolve this paradox is to use a sui
generis approach: "Meaning is a consensus of what all minds do when
they construct models of the world" allows you to go straight ahead and
define "meaning" in more detail because the definition is
self-consistent, and it is designed to converge on the consensus.
That means that possible-worlds semantics are nonsensical (because they
do not say WHY that interpretation of meaning is better than any other),
but the other view of semantics is perfectly consistent with itself and
with our experience.
Richard Loosemore.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email