RICHARD LOOSEMORE WROTE IN HIS Tue 10/16/2007 9:25 AM POST:

So if someone tries to talk about what the grounding problem is by defining it in terms of semantics, I start to wonder what they're putting on their cornflakes in the morning. The trivial senses of "semantics" don't apply, and the deeper senses are so vague that they are almost synonymous with grounding.
I AM PERHAPS NOT AS SCHOOLED IN THE MEANING OF SEMANTICS AS YOU, BUT I THINK OF IT AS MEANING MEANING, AND I THINK MEANING COMES LARGELY FROM ASSOCIATIONS -- IN HUMANS, LARGELY DERIVED FROM EXPERIENCE: OUR OWN DIRECT EXPERIENCES; EXPERIENCES OF THINGS WE HAVE READ OR BEEN TOLD; AND EXPERIENCE THAT HAS BEEN DISTILLED BY EVOLUTION FROM OUR ANIMAL ANCESTORS (SUCH AS THE ASSOCIATIONS THAT MAKE SEX SEEM SO IMPORTANT TO MANY OF US). YES, IT IS A VAGUE TERM, BUT THAT DOES NOT MEAN IT IS WITHOUT USE. IF, AS YOU SUGGEST, THE DEEPER SENSES OF SEMANTICS ARE ALMOST SYNONYMOUS WITH GROUNDING, WHY WOULD THAT MAKE THE TERM ANY LESS USEFUL THAN GROUNDING?

SPEAKING OF VAGUENESS: DO GROUNDING, MEANING, AND SEMANTICS EACH HAVE RELEVANCE ONLY TO THINGS THAT ARE REPRESENTED IN ONE OR MORE MINDS, OR DO THINGS IN THE PHYSICAL WORLD HAVE MEANING BECAUSE OF THEIR ASSOCIATIONS, PROPERTIES, AND CAUSES AND EFFECTS -- INDEPENDENT OF ANY MIND KNOWING OF THEM? DOES THE MEANING OF THE WORD TRIGGER, AS IN THE TRIGGER OF A GUN, RELATE TO THE ACTUAL FACT THAT PULLING IT CAN CAUSE A GUN TO FIRE AND PERHAPS INFLICT GREAT PAIN, DISABILITY, OR DEATH, OR ONLY TO THE UNDERSTANDING OF SUCH FACTS BY A HUMAN OR, PERHAPS, A WELL-EDUCATED PRIMATE? THIS IS A DEFINITIONAL ISSUE, BUT IS IT ONE THAT HAS BEEN DECIDED WITH ANY CLARITY BY THE SCHOLARLY COMMUNITY?

Moving on to what you say below, your comment about AI systems that have been cloned is, I think, exactly correct. If something gets grounded symbols as a result of the right kind of interaction with the world, there is nothing to stop another system from also having grounded symbols, provided it takes its knowledge structures AND knowledge acquisition mechanisms from the first system. Just because System 2 did not acquire its own knowledge from its own personal experience would not be good grounds [sorry] for saying it is not grounded.

I AM GLAD THAT YOU, LIKE JOSH, AGREE WITH ME ON THIS.
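TO MAKE THE CLONING POINT CONCRETE, HERE IS A TOY SKETCH IN PYTHON. IT IS PURELY ILLUSTRATIVE -- THE KnowledgeBase CLASS AND ITS METHODS ARE INVENTED FOR THIS EXAMPLE, AND TREATING MEANING AS A WEB OF WEIGHTED ASSOCIATIONS IS A DELIBERATE OVERSIMPLIFICATION OF WHAT I ARGUE ABOVE:

```python
# Toy sketch only: all class and method names here are invented for
# illustration, not taken from any real AGI system. "Meaning" is modeled,
# very crudely, as a symbol's web of weighted associations to other symbols.

import copy

class KnowledgeBase:
    def __init__(self):
        # symbol -> {associated symbol: association strength}
        self.associations = {}

    def learn(self, symbol, associate, strength=1.0):
        """Record an association, as if acquired from direct experience."""
        self.associations.setdefault(symbol, {})[associate] = strength

    def meaning(self, symbol):
        """A symbol's 'meaning' = its associations (a gross simplification)."""
        return self.associations.get(symbol, {})

# System 1 acquires its knowledge "firsthand".
system1 = KnowledgeBase()
system1.learn("trigger", "gun", 0.9)
system1.learn("trigger", "fire", 0.8)
system1.learn("gun", "pain", 0.7)

# System 2 gets a bit-for-bit copy of System 1's knowledge structures.
system2 = copy.deepcopy(system1)

# The two systems' symbols now have identical association structure, so
# there is no structural basis for calling one grounded and not the other.
assert system1.meaning("trigger") == system2.meaning("trigger")
```

IF THE ASSOCIATION STRUCTURE IS WHAT CARRIES MEANING, THEN A BIT-FOR-BIT COPY CARRIES EXACTLY THE SAME MEANING, REGARDLESS OF WHICH SYSTEM DID THE ORIGINAL SENSING.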
IT MEANS THAT GROUNDING SHOULD NOT BE LIMITED TO INFORMATION OBTAINED BY THE EXPERIENCES OF THE INTELLIGENCE THAT IS USING IT.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 16, 2007 9:25 AM
To: [email protected]
Subject: Re: [agi] "symbol grounding" Q&A

Edward W. Porter wrote:
> This is in response to Josh Storrs' Monday, October 15, 2007 3:02 PM
> post and Richard Loosemore's Mon 10/15/2007 1:57 PM post.
>
> I misunderstood you, Josh. I thought you were saying semantics could
> be a type of grounding. It appears you were saying that grounding
> requires direct experience, but that grounding is only one (although
> perhaps the best) possible way of providing semantic meaning. Am I
> correct?

If I may interject: a lot of confusion in this field occurs when the term "semantics" is introduced in a way that implies that it has a clear meaning [sic].

Some in the AI community do indeed talk about "semantics" as if the definition is sharply defined, but the more you probe it, the more problems surface, until eventually you can get to the point where you are chasing your own tail.

So if someone tries to talk about what the grounding problem is by defining it in terms of semantics, I start to wonder what they're putting on their cornflakes in the morning. The trivial senses of "semantics" don't apply, and the deeper senses are so vague that they are almost synonymous with grounding.

Moving on to what you say below, your comment about AI systems that have been cloned is, I think, exactly correct. If something gets grounded symbols as a result of the right kind of interaction with the world, there is nothing to stop another system from also having grounded symbols, provided it takes its knowledge structures AND knowledge acquisition mechanisms from the first system.
Just because System 2 did not acquire its own knowledge from its own personal experience would not be good grounds [sorry] for saying it is not grounded.

Richard Loosemore

> I would tend to differ with the concept that grounding only relates to
> what you directly experience. (Of course it appears to be a
> definitional issue, so there is probably no theoretical right or
> wrong.) I consider what I read, hear in lectures, and see in videos
> about science or other abstract fields such as patent law to be
> experience, even though the operative content in such experiences is
> derived second, third, fourth, or more handed.
>
> In Richard Loosemore's above-mentioned informative post he implied
> that, according to Harnad, a system that could interpret its own
> symbols is grounded. I think this is more important to my concept of
> grounding than where the information that lets the system do such
> important interpretation comes from. To me the important distinction
> is: are we just dealing with relatively naked symbols, or are we
> dealing with symbols that have a lot of the relations with other
> symbols and patterns, something like those Pei Wang was talking about,
> that let the system use the symbols in an intelligent way?
>
> Usually, for such relations and patterns to be useful in a world, they
> have to have come directly or indirectly from experience of that world.
> But again, it is not clear to me that they have to come firsthand.
>
> Presumably, if the AGI equivalents of personal computers are being mass
> produced by the millions 10 to 20 years from now, and if they come out
> of the box with significant world knowledge that has been copied into
> their non-volatile memory bit-for-bit from world knowledge that came
> from the direct experience of many learning machines and indirectly
> from massive sophisticated NL readings of large bodies of text and
> visual recognition of large image and video databases,
> I would consider most of the symbols in such a brand-new personal AGI
> to be grounded -- even though they have not been derived from any
> experience of the particular personal AGI itself -- if they had
> meaning to the personal AGI itself.
>
> It seems ridiculous to say that one could have two identical large
> knowledge bases of experiential knowledge, each containing millions of
> identically interconnected symbols and patterns, in two AGIs having
> identical hardware, and claim that the symbols in one were grounded
> but those in the other were not because of the purely historical
> distinction that the sensing to learn such knowledge was performed on
> only one of the two identical systems.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&
