Edward,

I can only make a blanket response.

The differences between the two sets of ideas that I described run deeper than you are taking them, alas. This is not easy to discuss, because any attempt to describe the situation in just one short post is doomed to oversimplify. I hope you understand.

It is not a matter of condemning School 1 out of hand, or being dogmatic: there are very real consequences of taking the two different approaches, and if I criticise School 1 it is because of long experience of thinking about these issues and their ramifications.

The core issue is this. What does the symbol "cup" denote in the world? And ditto for the symbol "bowl"? If I make the statement "A cup is a drinking vessel with a handle", is this statement true in the real world, or false, or somewhere in between?

To someone in School 1, the "things in the world" that correspond to "cup" and "bowl" really do relate to functions across possible worlds that capture these symbols. To (some of) them, the truth of the statement "A cup is a drinking vessel with a handle" can really be evaluated (or, at least, a probability of its truth, or a range of probabilities).

The problem with all of this is that School 1 says that when real people show signs of not agreeing with each other on the truth of that statement (as they do), this is because people are imperfect understanders of the "correct" truth. As if there really is a "correct" truth. As you can imagine, those who criticise School 1 ask: how do you tell the difference between a situation in which there is a "real" truth about such statements, but nobody can EVER build a system that expresses that real truth, and a situation in which the idea of a "real" truth underlying the workings of intelligent systems is just bunkum?

The position of the School 1 folks can be made to unravel and become pure speculative fantasy, if you push at it hard enough.

I think I had better stop: the depth in these issues is huge, and it would take an entire book to do them justice.


Richard Loosemore





Edward W. Porter wrote:
RESPONSE TO RICHARD LOOSEMORE'S POST OF Wed 10/17/2007 10:25 AM, WITH MY COMMENTS IN ALL CAPS SANS THE “>” THANG.

FIRST, LET ME SAY I FOUND YOUR POST VERY INTERESTING. I AGREE WITH MUCH OF WHAT IT IS SAYING.

Edward W. Porter wrote:
 RICHARD LOOSEMORE WROTE IN HIS Tue 10/16/2007 9:25 AM POST.

 “So if someone tries to talk about what the grounding problem is by
 defining it in terms of semantics, I start to wonder what they're
 putting on their cornflakes in the morning.  The trivial sense of
 "semantics" don't apply, and the deeper senses are so vague that they
 are almost synonymous with grounding.”

 I AM PERHAPS NOT AS SCHOOLED IN THE MEANING OF “SEMANTICS” AS YOU, BUT
 I
 THINK OF IT AS MEANING “MEANING”, AND I THINK MEANING COMES LARGELY FROM
 ASSOCIATIONS, IN HUMANS LARGELY DERIVED FROM EXPERIENCE: OUR OWN DIRECT
 EXPERIENCES; EXPERIENCES OF THINGS WE HAVE READ OR BEEN TOLD; AND
 EXPERIENCE THAT HAS BEEN DISTILLED BY EVOLUTION FROM OUR ANIMAL
 ANCESTORS (SUCH AS THE ASSOCIATIONS THAT MAKE SEX SEEM SO IMPORTANT TO
 MANY OF US).

YES, IT IS A VAGUE TERM. BUT THAT DOES NOT MEAN IT IS WITHOUT USE. IF,
 AS YOU SUGGEST, THE DEEPER SENSES OF “SEMANTICS” ARE ALMOST SYNONYMOUS
 WITH GROUNDING, WHY WOULD THAT MAKE THE TERM ANY LESS USEFUL THAN
 “GROUNDING”?

Edward,

To understand what is going on here, it might help to know that
underneath the surface there is a philosophical divide between two views
of what "semantics" means.

Some people, who have a deep desire to make the world as orderly and
crystalline as possible, have decided that the meaning of any symbol
that is located inside an intelligent system should be something to do
with an ideal, mathematical relationship between the symbol and the
thing in the world.

SO FAR, EXCEPT POSSIBLY FOR THE WORD “IDEAL”, THIS DOES NOT CONFLICT WITH THE SECOND SCHOOL YOU DESCRIBE BELOW. PRESUMABLY A WELL-GROUNDED AGI WILL HAVE MATHEMATICAL (DEFINED BROADLY) RELATIONSHIPS BETWEEN ITS INTERNAL REPRESENTATION AND THINGS IN THE WORLD. IF BY “IDEAL” YOU MEAN “OPTIMAL”, THEN IT WOULD NOT APPEAR AT ALL WRONG FOR THE DESIGNER OF AN AGI TO STRIVE TO HAVE HIS SYSTEM AT LEAST HAVE “SOMETHING TO DO WITH” SUCH AN IDEAL. IF BY “IDEAL” YOU ARE TALKING PLATONIC IDEALS, YES, THAT IS DUMB (BUT FOR A GUY LIVING 2,500 YEARS AGO, HIS “SHADOWS IN THE CAVE” THING WAS PRETTY COOL).

This is subtle, but what they are trying to do is eliminate any
vagueness and (especially) eliminate the idea that meaning might depend
on the system that is doing the understanding.  They are extremely
antagonistic to the idea that there might not be one unified meaning
underneath the surface, which is the "perfect" or "canonical" meaning of
a symbol.  They would readily agree that actual understanding systems -
humans or AI systems - will never completely reach this ideal meaning,
but their stance is that there IS an ideal meaning underneath
everything, and that real-world understanding systems just do some kind
of approximation to that real meaning.

BUT I BOTH AGREE AND DISAGREE WITH THIS FIRST SCHOOL AS SO DESCRIBED. MY INTUITION, DERIVED FROM MY OWN GROUNDING, TELLS ME THAT THERE IS AN EXTERNAL WORLD THAT HAS ITS OWN “TRUTH” INDEPENDENT OF HUMAN OBSERVERS. (SUCH AS THE REALITIES ON THE MOONS OF THE OTHER PLANETS IN OUR SOLAR SYSTEM, WHICH I BELIEVE HAD MUCH THE SAME COMPLEX DYNAMICS BEFORE OUR SPACE PROBES RECENTLY OBSERVED THEM AS THEY DO NOW.) SO IF ONE WANTS TO ASCRIBE “MEANING” TO SUCH EXTERNAL TRUTHS, I DON’T THINK THAT IS PER SE STUPID. (UNLESS, AS A DEFINITIONAL MATTER, THERE IS A CLEAR CONSENSUS THAT SUCH USAGE WOULD BE CONTRARY TO THE CLEAR MEANING OF “MEANING.” AND FROM YOUR POST THAT DOES NOT SEEM TO BE THE CASE.)

ONE CAN THINK OF EXTERNAL REALITY AS A SHANNON INFORMATION SOURCE FROM WHICH WE RECEIVE SIGNALS. OVER TIME WE BUILD A CODE BOOK FROM PARTS OF SUCH SIGNALS, AND WE ADD TO THE CODE BOOK PATTERNS BETWEEN SUCH INITIAL CODES, AND SO ON. WE CONDUCT EXPERIMENTS TESTING WHICH CODES PRODUCE OUTPUTS FROM OUR OWN BODIES THAT TEND TO CAUSE THE EXTERNAL SOURCE TO GENERATE SIGNALS WE WANT. BY THIS ANALOGY, WHY IS IT THAT I AM TO ASSUME THAT "MEANING" ONLY EXISTS IN ME THE DECODER AND IN MY CODEBOOKS? WHY CAN IT NOT ALSO EXIST, ALTHOUGH DIRECTLY UNKNOWABLE, IN THE SOURCE THAT SENDS ME THE SIGNALS I AM DECODING AND THAT ALSO RESPONDS TO THE SIGNALS I SEND IT?
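THE CODE-BOOK ANALOGY CAN BE SKETCHED IN A FEW LINES (A TOY OF MY OWN, ESSENTIALLY PAIR-BASED RECODING, NOT A CLAIM ABOUT HOW BRAINS OR AGIS DO IT): TREAT THE SOURCE AS A STREAM OF SYMBOLS, REPEATEDLY PROMOTE THE MOST FREQUENT ADJACENT PAIR TO A NEW CODE, AND THEN BUILD CODES OF CODES.

```python
# Toy codebook builder: learn codes for recurring adjacent pairs of
# symbols, then for pairs of those codes, and so on. All names here are
# my own invention for illustration.
from collections import Counter

def build_codebook(signal, passes=2):
    """Repeatedly replace the most common adjacent pair with a new code."""
    codebook = {}
    seq = list(signal)
    for _ in range(passes):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # nothing recurs, so there is nothing worth encoding
        code = f"<{a}{b}>"
        codebook[code] = (a, b)
        # Rewrite the sequence using the new code (greedy, left to right).
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(code)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return codebook, seq

codebook, compressed = build_codebook("abababcd")
print(codebook)         # codes learned for recurring pairs
print(len(compressed))  # fewer symbols than the original 8
```

THE SECOND PASS BUILDS A CODE OUT OF TWO FIRST-PASS CODES, WHICH IS THE "PATTERNS BETWEEN SUCH INITIAL CODES" STEP IN THE ANALOGY.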

I DEEPLY BELIEVE THERE ARE VERY REAL LIMITS TO HOW MUCH, OR HOW WELL, I OR ANY HUMAN OR MACHINE INTELLIGENCE WILL BE ABLE TO KNOW THAT EXTERNAL TRUTH. EVEN THE UNIVERSE ITSELF, IF VIEWED AS A COMPUTING ENTITY HAS LIMITATIONS AS TO WHAT CAN BE KNOWN BY WHICH OF ITS PARTS AT WHICH TIMES (UNLESS THERE IS SOME DEEP-DISH EVERY-THING-IS-ENTANGLED QUANTUM COMPUTING JU-JU GOING ON THAT IS WAY BEYOND MY CURRENT UNDERSTANDING).

One way this group have tried to pursue their agenda is through an idea
due to Montague and others, in which meanings of terms are related to
something called "possible worlds".  They imagine infinite numbers of
possible worlds, in which all the possible variations of every
conceivable parameter are allowed to vary, and then they define the
meanings of actual things in our world in terms of functions across
those possible worlds.  Such an idea is, of course, not usable in any
computer program, since it requires unthinkably large infinities [sic!],
but the idea is that the "real" definition of meaning gets pinned down
precisely, and then they can talk about the imperfect kind of meaning
that ordinary brains and computers (with their non-infinite resources)
have to use, as a mere approximation or limited form of the more
perfect, ideal, mathematical sense of meaning.
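To see the shape of the idea in miniature (a toy of my own construction, not Montague's actual formalism, and only workable because the world-space here is finite): the "intension" of a sentence is a function from worlds to truth values, and its "proposition" is the set of worlds where it comes out true.

```python
# Toy possible-worlds semantics over a finite space of worlds.
# "facts", "worlds", and the cup sentence are all illustrative inventions.
from itertools import product

# A "possible world" is an assignment of truth values to a few facts.
facts = ["has_handle", "used_for_drinking"]
worlds = [dict(zip(facts, values)) for values in product([True, False], repeat=2)]

# The intension of "a cup is a drinking vessel with a handle":
# a function mapping each world to a truth value.
def cup_sentence(world):
    return world["has_handle"] and world["used_for_drinking"]

# Its proposition: the set of worlds in which the sentence is true.
true_worlds = [w for w in worlds if cup_sentence(w)]
print(len(worlds), len(true_worlds))  # 4 worlds; the sentence holds in 1
```

The real proposal quantifies over unthinkably large infinities of worlds, which is exactly why nothing like this scales beyond a toy.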

I AM NOT AN EXPERT ON PROBABILITIES, BUT IT SEEMS TO ME THAT THINKING IN TERMS OF MULTIPLE WORLDS CAN BE USEFUL, AT LEAST FOR HELPING ONE TO UNDERSTAND CERTAIN PROBABILISTIC CONCEPTS. OF COURSE, ATTEMPTS TO FULLY ENUMERATE THE MULTIPLE WORLDS OF ANYTHING BUT QUITE SIMPLE DOMAINS ARE DOOMED TO FAILURE (BARRING SOMETHING LIKE THE TYPE OF JU-JU MENTIONED ABOVE).

This idea has not been fully worked out, but like all mathematical
formalisms, the fact that it might not actually be consistent with
reality or usable in any meaningful way has not stopped mathematical
logicians from running with it.

In particular, the idea begs all kinds of questions that the
mathematical logicians ignore or evade:  for example, the fact that
human beings use concepts that do not appear to have sharp boundaries at
all, is, to them, just evidence that human beings are sloppy
implementations of the ideal.  The possibility that, on the contrary,
humans may be the best possible implementation of an understanding
system, and their ideal mathematisation of "meaning" might be useless,
or have no relevance to actual understanding systems, or just be
downright wrong, is not a possibility that they even consider.

The bizarre thing is that AI researchers (like my good old favorites,
Russell and Norvig) talk as if the idea of "semantics" has some deep
mathematics to back it up, referring to the possible-worlds
interpretation, but don't tell you that possible worlds idea buys you
absolutely nothing except some excuses to kid yourself that you are
using a meaningful word when you say "semantics".

A FEW YEARS AGO I ATTENDED A VERY GOOD LECTURE AT MIT BY A MACHINE VISION GUY. HIS MAJOR THESIS WENT WAY BEYOND MACHINE VISION. IT WAS THAT MANY OF THE MATHEMATICAL APPROACHES THAT HAVE BEEN TAKEN IN AI -- THE ONES THAT HAVE CAUSED PEOPLE TO SAY FOR DECADES, “I HAVE NO IDEA HOW A COMPUTER COULD EVER SOLVE THESE PROBLEMS, NO MATTER HOW BIG IT IS” -- ARE THEORETICALLY CORRECT, BUT INCREDIBLY INEFFICIENT. AND THAT MANY OF THE MAJOR ADVANCES IN RECENT YEARS IN MANY FIELDS OF AI HAVE BEEN BASED ON APPLYING SOMETHING VERY ROUGHLY LIKE AN 80-20 RULE TO ACCOMPLISH THE MOST IMPORTANT PART OF WHAT THE EARLIER APPROACHES WERE TRYING TO DO, OFTEN WITH MANY MILLIONS OF TIMES LESS COMPUTATION.

HOW IS THIS RELEVANT? A DUMB CREATE-A-SPACE-REPRESENTING-ALL-POSSIBLE-STATES-OF-ALL-POSSIBLE-WORLDS APPROACH TO MULTIPLE-WORLDS REASONING IS OBVIOUSLY NOT GOING TO GET YOU VERY FAR. THE PERCENTAGE OF SUCH A POSSIBLE STATE SPACE THAT IS LIKELY TO BE RELEVANT AND IMPORTANT IS INFINITESIMAL, I.E., IT APPROACHES ZERO AS A LIMIT, AND DOES SO WITHIN MILLISECONDS. TO DEAL IN THE HYPER-DIMENSIONAL SPACE DEMANDED BY THE SEMANTICS WE HUMANS WANT AND NEED, THE REPRESENTATIONS REQUIRED FORM AN AMAZINGLY SPARSE SUBSET OF SUCH A HYPER-DIMENSIONAL MULTIPLE-WORLDS STATE SPACE -- YET SUCH A SPARSE REPRESENTATION COULD STILL BE VIEWED AS A SUBSET OF SUCH A SPACE. THANKS TO A MENTION IN PETER VOSS’S PAPER ON KURZWEIL’S WEB SITE A FEW YEARS AGO, I READ A PAPER, “GROWING NEURAL GAS, EXPERIMENTS WITH GNG, GNG WITH UTILITY AND SUPERVISED GNG”, BY JIM HOLMSTROM. I THOUGHT IT WAS A REALLY COOL PAPER BECAUSE IT PRESENTED A REALLY SIMPLE ALGORITHM FOR PLACING REPRESENTATIONS IN A POSSIBLY VERY HIGH-DIMENSIONAL SPACE ONLY WHERE THERE HAD BEEN OBSERVED DATA (AND, WITH THE RIGHT PARAMETER SETTINGS AND DATA, ONLY IN NEIGHBORHOODS WHERE THERE HAD BEEN MULTIPLE OBSERVATIONS OR IN WHICH THE OBSERVATIONS WERE SOMEHOW DEEMED IMPORTANT). YES, I HAD PREVIOUSLY BEEN THINKING OF A SYSTEM THAT WOULD DO SOMETHING QUITE SIMILAR, IF DESCRIBED AT SUCH A HIGH LEVEL. BUT I HAD NOT CREATED SUCH A SIMPLE AND ELEGANT DEMONSTRATION OF THE CONCEPT.
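FOR THE CURIOUS, HERE IS A MINIMAL SKETCH OF GROWING NEURAL GAS IN THE SPIRIT OF FRITZKE'S ORIGINAL 1995 ALGORITHM (MY OWN SIMPLIFICATION, NOT HOLMSTROM'S CODE, AND THE PARAMETER VALUES ARE ILLUSTRATIVE, NOT TUNED): NODES ARE CREATED ONLY WHERE DATA HAS ACTUALLY BEEN OBSERVED, SO THE LEARNED GRAPH OCCUPIES A SPARSE SUBSET OF THE INPUT SPACE.

```python
import random

class GNG:
    """Minimal Growing Neural Gas sketch (after Fritzke's algorithm)."""

    def __init__(self, eps_b=0.2, eps_n=0.006, max_age=50,
                 insert_every=100, alpha=0.5, decay=0.995):
        self.p = dict(eps_b=eps_b, eps_n=eps_n, max_age=max_age,
                      insert_every=insert_every, alpha=alpha, decay=decay)
        self.next_id = 0
        self.nodes = {}   # id -> {"w": position vector, "err": float}
        self.edges = {}   # frozenset({id, id}) -> age
        self.steps = 0

    def _add_node(self, w, err=0.0):
        nid = self.next_id
        self.next_id += 1
        self.nodes[nid] = {"w": list(w), "err": err}
        return nid

    def _dist2(self, a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def fit_one(self, x):
        if len(self.nodes) < 2:          # seed with the first two inputs
            self._add_node(x)
            return
        self.steps += 1
        # 1. Find the two nodes nearest to the input.
        (s1, d1), (s2, _) = sorted(
            ((nid, self._dist2(n["w"], x)) for nid, n in self.nodes.items()),
            key=lambda t: t[1])[:2]
        # 2. Accumulate the winner's error; move it (strongly) and its
        #    topological neighbours (weakly) toward the input; age its edges.
        self.nodes[s1]["err"] += d1
        w1 = self.nodes[s1]["w"]
        for i in range(len(w1)):
            w1[i] += self.p["eps_b"] * (x[i] - w1[i])
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1
                (nb,) = e - {s1}
                wn = self.nodes[nb]["w"]
                for i in range(len(wn)):
                    wn[i] += self.p["eps_n"] * (x[i] - wn[i])
        # 3. Refresh the edge between the two winners.
        self.edges[frozenset((s1, s2))] = 0
        # 4. Drop stale edges, and any node left with no edges at all.
        self.edges = {e: a for e, a in self.edges.items()
                      if a <= self.p["max_age"]}
        linked = set().union(*self.edges) if self.edges else set()
        self.nodes = {nid: n for nid, n in self.nodes.items() if nid in linked}
        # 5. Periodically grow: insert a node where accumulated error is
        #    highest, i.e. only where observed data has actually landed.
        if self.steps % self.p["insert_every"] == 0 and self.nodes:
            q = max(self.nodes, key=lambda nid: self.nodes[nid]["err"])
            nbs = [next(iter(e - {q})) for e in self.edges if q in e]
            if nbs:
                f = max(nbs, key=lambda nid: self.nodes[nid]["err"])
                w_new = [(a + b) / 2 for a, b in
                         zip(self.nodes[q]["w"], self.nodes[f]["w"])]
                for nid in (q, f):
                    self.nodes[nid]["err"] *= self.p["alpha"]
                r = self._add_node(w_new, self.nodes[q]["err"])
                self.edges.pop(frozenset((q, f)), None)
                self.edges[frozenset((q, r))] = 0
                self.edges[frozenset((f, r))] = 0
        # 6. Decay all accumulated errors.
        for n in self.nodes.values():
            n["err"] *= self.p["decay"]

# Train on points drawn from two well-separated clusters: the network
# grows nodes only in the occupied slivers of the input space.
random.seed(0)
gng = GNG()
for _ in range(2000):
    cx = random.choice([0.0, 10.0])
    gng.fit_one([cx + random.gauss(0, 0.5), random.gauss(0, 0.5)])
print(len(gng.nodes))
```

NOTE HOW STEP 5 IS THE WHOLE POINT FOR THE SPARSENESS ARGUMENT ABOVE: GROWTH IS DRIVEN BY ACCUMULATED ERROR AT EXISTING NODES, SO NO REPRESENTATION IS EVER SPENT ON EMPTY REGIONS OF THE STATE SPACE.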

I BELIEVE THE HUMAN BRAIN DOES SOMETHING VERY ROUGHLY SIMILAR TO GROWING NEURAL GAS WHEN IT BUILDS PATTERNS BASED ON EXPERIENCES, AND HAS PATTERNS COMPETE WITH EACH OTHER FOR CONTINUED EXISTENCE. CERTAINLY THE EVIDENCE ON BRAIN PLASTICITY SUPPORTS SUCH A BELIEF. FROM SUCH PATTERNS, WHICH WE HAVE OBTAINED ABOUT MULTIPLE POSSIBLE OUTCOMES IN THE WORLD, WE ARE ABLE TO COMPUTE SOME SUBSPACE OF THE MULTIPLE-WORLDS APPROACH. I THINK WE DO SOMETHING SIMILAR TO THIS ALL THE TIME WHEN WE DO PROBABILISTIC INFERENCING. BEING ABLE TO APPROPRIATELY LIMIT THAT SUBSPACE, IN A CONTEXT-APPROPRIATE WAY, IS ONE OF THE BIG CHALLENGES REQUIRED TO GET AGI WORKING WELL.

SO WHAT I AM SAYING IS THAT IT IS OFTEN VALUABLE TO UNDERSTAND THE FORMALISMS THAT WOULD BE THE RIGHT WAY TO COMPUTE IF YOU HAD UNLIMITED COMPUTING POWER, SO THAT THEN YOU CAN TRY TO EFFICIENTLY APPROXIMATE THEM WITH MUCH LESS COMPUTATION. AND I THINK COMPUTING WITH SOME ASPECT OF MULTIPLE WORLDS IS VALUABLE (AND I THINK GUYS IN SCHOOL TWO ARE ACTUALLY DOING SUCH COMPUTING WHETHER THEY RECOGNIZE IT OR NOT.)

BUT OBVIOUSLY A MULTIPLE-WORLDS APPROACH TO AGI, WITHOUT THE WILLINGNESS TO USE SUBSTANTIAL SIMPLIFYING TECHNIQUES SOMEWHAT LIKE THOSE ALMOST CERTAINLY USED IN THE HUMAN MIND, IS BRAIN DEAD.

What is the alternative to this clean, idealistic interpretation of
semantics?

I HAVE CALLED THIS SCHOOL TWO.

The alternative is simply that the correspondence between symbols inside
an understander and the "things" in the outside world is DEFINED by what
the understander does.  The understanding system does not try to
approximate to some ideal, it IMPOSES its own technique for breaking up
the world, and shares that technique with other understanding systems
that are built along the same lines.

WE CAN, AND OFTEN DO, TRY TO APPROXIMATE SOME IDEAL, BUT OF COURSE, EXCEPT IN SIMPLE LIMITED DOMAINS SUCH AS INTEGER ARITHMETIC, OUR APPROXIMATIONS WILL USUALLY BE EXTREMELY PARTIAL.

In this conception, it makes no sense to insist on the "absolute" or
ideal meaning of a symbol, because every understanding system can do it
in a different way.  The fact that a community of similarly-constructed
understanders does it the same way is only an indication of the fact
that .... well, that they are similarly constructed!

In light of this second view of "semantics", it would make sense to say
that a properly grounded system had a proper semantics, because when it
is properly grounded it builds its symbols in a consistent, usable way -
and the very fact that it can build symbols in such a way as to use them
intelligently is BY ITSELF a statement that it has a viable "semantics".

Looked at that way, the grounding issue and the question of whether it
has a proper (well-formed, viable, self consistent....?) semantics, are
closely related and interdependent.  Not identical, but very close.

But by the same token, if someone says that "grounding" is really a
vague and inappropriate term, and that we should really be talking about
whether the system has a viable semantics, that distinction is somewhat
nonsensical:  "semantics" is no better defined than "grounding".

I SHOULDN’T BE PUTTING WORDS IN JOSH’S MOUTH, BUT WHAT I THOUGHT HE WAS SAYING WAS THAT INTELLIGENCE AND MEANING CAN BE BASED ON SOMETHING OTHER THAN HARNAD GROUNDING, AT LEAST IN ITS MORE NARROW INTERPRETATION OF MEANING DERIVED FROM DIRECT SENSORY EXPERIENCE. I SOMETIMES FIND IT USEFUL TO THINK OF INTELLIGENCE IN A BROAD SENSE THAT INCLUDES ANYTHING THAT COMPUTES, INCLUDING PHYSICAL REALITY (EVEN INCLUDING MY FIRST IMSAI 8080). THUS, IT IS NOT SURPRISING THAT IT MAKES SENSE FOR ME TO THINK OF PHYSICAL REALITY AS HAVING “MEANING” IN SOME SENSE OF THE WORD.

I have only been able to sketch the situation here.  What I was
previously referring to was just a reflection of this deeper division
among those who consider the question of what symbols in a symbol system
"mean".


In my view, the only consistent resolution of the question of what
"meaning" is (or what "semantics" is) is "Meaning is what minds just
happen to build when they apply their internal mechanisms to the job of
organizing their internal representation of the world".

AND THE JOB OF TRYING TO SURVIVE AND THRIVE IN THE REAL WORLD.

Why is this the
only consistent resolution?  Because you cannot ask the question "What
is the meaning of 'meaning'?" without begging the question itself:  you
have to *implicitly* answer the question the very moment that you open
your mouth to start answering the question, because of course you cannot
say what the meaning of anything is unless we already agree what
"meaning" entails.  The only way to resolve this paradox is to use a sui
generis approach:  "Meaning is a consensus of what all minds do when
they construct models of the world" allows you to go straight ahead and
define "meaning" in more detail because the definition is
self-consistent, and it is designed to converge on the consensus.

That means that possible-worlds semantics are nonsensical (because they
do not say WHY that interpretation of meaning is better than any other),
but the other view of semantics is perfectly consistent with itself and
with our experience.

I AGREE WITH MUCH OF WHAT YOU SAY, BUT I WOULD BE LESS CONDEMNING OF SCHOOL ONE. I TRY TO AVOID BEING DOGMATIC. I AM MORE INTERESTED IN BEING OPEN TO WHAT IS POTENTIALLY VALUABLE, AND WHAT IS POTENTIALLY LIMITING, IN VARIOUS APPROACHES RATHER THAN IN CONDEMNING THEM.

ALSO, I WONDER HOW MANY PEOPLE TODAY ACTUALLY STRICTLY HOLD TO THE NOTION THAT WE CAN EVER HOPE TO COMPUTE PURE OR COMPLETE LOGIC OR TRUTH. (ALTHOUGH, SURPRISINGLY, IN THE PAST I HAVE RUN INTO A FEW WHO HAVE ARGUED AS MUCH.)

IT SEEMS TO ME THAT ANY SORT OF PROBABILISTIC THINKING HAS A POSSIBLE-WORLDS ASPECT TO IT. IF YOU FLIP A COIN, YOUR MIND TENDS TO THINK OF THE TWO POSSIBLE OUTCOMES AND THEIR PROBABILITIES; THOSE ARE OPPOSING POSSIBLE WORLDS. YES, IT IS STUPID TO EVEN BEGIN TO TRY TO COMPUTE ALL OF ANY BUT RELATIVELY SIMPLE MULTIPLE WORLDS, BUT THERE IS VALUE -- SURVIVAL VALUE, MONETARY VALUE, SATISFACTION-GENERATING VALUE -- IN HAVING OUR MODELS BETTER APPROXIMATE THE IMPORTANT ASPECTS OF EXTERNAL REALITY. AND THIS INCLUDES MODELING MULTIPLE POSSIBLE OUTCOMES OF MULTIPLE POSSIBLE EVENTS.
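THE COIN-FLIP POINT CAN BE MADE CONCRETE (A TRIVIAL SKETCH OF MINE): ENUMERATE THE POSSIBLE WORLDS, WEIGH EACH BY ITS PROBABILITY, AND A PROPOSITION'S PROBABILITY IS JUST THE TOTAL WEIGHT OF THE WORLDS WHERE IT HOLDS -- TRACTABLE HERE PRECISELY BECAUSE THE SPACE IS TINY.

```python
# Possible-worlds view of a simple probability: every outcome of three
# fair coin flips is a "world", weighted uniformly.
from itertools import product
from fractions import Fraction

worlds = list(product("HT", repeat=3))      # 8 worlds: HHH, HHT, ...
p = Fraction(1, len(worlds))                # uniform weight per world

def prob(proposition):
    """Total weight of the worlds in which the proposition holds."""
    return sum((p for w in worlds if proposition(w)), Fraction(0))

print(prob(lambda w: w.count("H") >= 2))    # P(at least two heads) = 1/2
```

WITH THREE FLIPS THERE ARE 8 WORLDS; WITH ANYTHING LIKE A REAL DOMAIN THE COUNT EXPLODES, WHICH IS EXACTLY WHY THE SPARSE, CONTEXT-LIMITED SUBSPACE ARGUED FOR ABOVE IS NEEDED.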

Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=54756694-d641d8
