No, a symbol is simply anything abstract that stands for an object - word sounds, alphabetic words, numbers, logical variables etc. The earliest proto-symbols may well have been emotions.

My point is that Harnad clearly talks of two intermediate visual/sensory levels of processing - the iconic and the still-more-schematic "categorical representations" - neither of which I can remember seeing in the ideas of anyone here for their AGIs. But I may have forgotten something/someone. Have I?

Richard:

You may want to check out the background material on this issue. Harnad invented the idea that there is a 'symbol grounding problem', which is why I quoted him. His usage of the word 'symbol' is the one that is widespread in cognitive science, but it appears that you are missing this, and are instead giving the word 'symbol' an idiosyncratic meaning of your own. You can see this most clearly when you write that the symbols are things like "H-O-R-S-E" and "C-A-T": those look like strings of letters, so if you think that a symbol, by definition, must involve a string of letters (or phonemes), then you are misunderstanding Harnad's (and everyone else's) meaning by rather a wide margin. That probably explains your puzzlement in this case.


Richard Loosemore



Mike Tintner wrote:
I'm not quite sure why Richard would want to quote Harnad. Harnad's idea of how the brain works depends on it first processing our immediate sensory images as "iconic representations" - not a million miles from Lakoff's image schemas. He sees the brain as first developing some kind of horse graphics for the horses we see.

Then there is an additional and very confusing level of "categorical representations", which pick out the "invariant features" of horses - and are still nonsymbolic. But Harnad doesn't give any examples of what these features are. They are necessary, he claims, to be able to distinguish between horses and similar animals.

(If anyone has further light to shed here, I'd be v. interested).

And only after those two levels of processing does the brain come to symbols - to "H-O-R-S-E" and "C-A-T" etc. - although, of course, if you're thinking evolutionarily, it's arguable that the brain doesn't actually need these symbols at all - our ancestors survived happily without language.
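(To make those three levels concrete, here's a toy sketch in Python - the data structures and names are entirely mine, not Harnad's, and the feature extractor is a pure placeholder, since he gives no examples of the invariant features:)

    import numpy as np

    # Level 1: an "iconic representation" -- an analog of the raw sensory
    # projection itself, e.g. the retinal image of one particular horse.
    iconic = np.random.rand(64, 64)   # stand-in for a horse image

    # Level 2: a "categorical representation" -- whatever invariant
    # features pick horses out from similar animals (placeholder maths).
    def invariant_features(image):
        return np.array([image.mean(), image.std()])

    categorical = invariant_features(iconic)

    # Level 3: the symbol proper -- an arbitrary token whose meaning
    # comes from its link to level 2, not from the letters in its name.
    symbol = ("HORSE", categorical)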

So Harnad depicts symbols as not so much simply grounded as deeply rooted in a tree of imagistic processing - and I'm not aware of any AGI-er using imagistic processing (or have I got someone, like Ben, wrong?)

Richard:
Derek Zahn wrote:
Richard Loosemore:

> My god, Mark: I had to listen to people having a general discussion of
> "grounding" (the supposed theme of that workshop) without a single person
> showing the slightest sign that they had more than an amateur's
> perspective on what that concept actually means.
I was not at that workshop and am no expert on that topic, though I have seen the word used in several different ways. Could you point at a book or article that explains the concept, or at least uses it heavily in a correct way? I would like to improve my understanding of the "grounding" concept. Note: sometimes written words do not convey intentions very well -- I am not being sarcastic; I am asking for information to help improve the quality of discussion that you have found lacking in the past.

I still think it is best to go back to Stevan Harnad's two main papers on the topic. He originated the issue, then revisited it with some frustration when people started stretching it to mean anything under the sun.

So:

http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad90.sgproblem.html

and

http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad93.cogsci.html

are both useful.

I do not completely concur with Harnad, but I certainly agree with him that there is a real issue here.

However...

The core confusion about the symbol grounding problem (SGP) is so basic that you will find it difficult to locate one source that explains it. Here it is in Harnad's own words (from the second paper above):

"The goal of symbol grounding is not to guarantee uniqueness but to ensure that the connection between the symbols and the objects they are systematically interpretable as being about does not depend exclusively on an interpretation projected onto the symbols by an interpreter outside the system."

The crucial part is to guarantee that the meaning of the symbols does not depend on interpreter-applied meanings. This is a subtle issue, because the interpreter (i.e. the programmer or system designer) can project their own interpretations onto the symbols in all sorts of ways. For example, they can grab a symbol and label it "cat" (this being the most egregious example of failure to ground), or they can stick parameters into all of the symbols and insist that each parameter "means" something like the probability that the symbol is true, or real. If the programmer does anything to interpret the meaning of system components, then there is at least a DANGER that the symbol system has been compromised, and is therefore not grounded.
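Here is a minimal sketch of what I mean, in Python (the class and its fields are my own invention, purely for illustration, not taken from any real system):

    # A hypothetical AGI "symbol", invented for illustration only.
    class Symbol:
        def __init__(self, label, truth_probability):
            # Both fields carry interpreter-projected meaning: the
            # designer has DECIDED that `label` names a worldly category,
            # and that the parameter means "probability of being true".
            self.label = label
            self.truth_probability = truth_probability

    # The most egregious case: grab a symbol and label it "cat". Nothing
    # in the running system connects this object to actual cats; that
    # connection exists only in the head of whoever reads the source.
    cat = Symbol(label="cat", truth_probability=0.9)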

You see, when a programmer makes some kind of design choice, they very often insert some *implicit* interpretation of what symbols mean. But if that same programmer then goes to the trouble of connecting that AGI to some mechanisms that build and use symbols, the build-and-use mechanisms will also *implicitly* impose a meaning on those symbols. Under almost all circumstances (and especially if there is ANY SUSPICION OF COMPLEXITY IN THE SYSTEM), these two sets of implicit meanings will diverge. There is simply no reason why they should stay in sync with one another, so they don't. If there is any conflict, then the grounding of the system has been compromised. Ideally, the programmer gets out of the way completely and leaves it to the system to ground its own symbols. (That, of course, almost never happens.)
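A contrived example of that divergence (again, entirely my own invention):

    class Symbol:
        def __init__(self, label):
            self.label = label
            self.weight = 0.5   # designer's reading: "probability of truth"

    def activate(sym, fired):
        # The build-and-use mechanism implicitly treats `weight` as a
        # recency-weighted activation frequency -- a different meaning.
        sym.weight = 0.9 * sym.weight + 0.1 * (1.0 if fired else 0.0)

    horse = Symbol("horse")
    for _ in range(50):
        activate(horse, fired=True)

    print(horse.weight)   # ~0.997

The mechanism's "fires all the time" and the designer's "almost certainly true" have silently come apart, because nothing in the design keeps the two interpretations in sync.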

But now, what happens in practice when people talk about symbol grounding? They usually take an extremely naive approach and assume that IF a system has some kind of connection to the outside world THEN it must have grounded symbols! This is crazy. The fact is that having an outside connection is a good first step to getting grounded symbols, but it does not even begin to address all the ways that the grounding can get compromised. Yet, people who do not really understand the idea of grounding, but know that it is a cool buzzword, tend to use the buzzword as if it just meant "connecting your AGI to the outside world".
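To caricature the naive view in code (invented names once more):

    # The "connection to the outside world": a stand-in for a camera.
    def camera_frame():
        return [[0.0] * 8 for _ in range(8)]

    # But the label set, and the rule mapping sensor data to labels,
    # were both fixed by the programmer -- so even with genuine sensory
    # input, the symbols' meanings still rest on an interpretation
    # projected onto them from outside the system.
    LABELS = ["cat", "horse", "chair"]

    def classify(frame):
        brightness = sum(sum(row) for row in frame)
        return LABELS[0] if brightness < 32.0 else LABELS[1]  # arbitrary

    symbol = classify(camera_frame())   # "grounded"? not even close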

This certainly happens on this list, but it was also present in many of the AGI 2006 papers.

At the 2006 workshop (whose theme was something like "Grounding symbols in the real world") I became more and more frustrated to see that grounding was being mentioned in this trivial way, and that nobody was stopping to point out that this was just downright wrong. Remember, this was supposed to be the *theme* of the workshop! How can that be the theme, and then everyone (including the workshop convener) not understand that this usage was trivial and worthless?

This idiotic situation went on and on until the penultimate session of the workshop, at which point I remember that I stood up in the discussion period just before the final coffee break and explained that we were not using "grounding" in a sensible way. Since I had only a few moments to talk, I said that I was looking forward to the roundtable discussion after the break, because the topic of that roundtable was "Symbol Grounding", so we would have an opportunity to get down to some real meat and sort the problem out.

Then, when we came back from the break, Ben Goertzel announced that the roundtable on symbol grounding was cancelled, to make room for some other discussion on a topic like "the future of AGI", or some such. I was outraged by this. The subsequent discussion was a pathetic waste of time, during which we just listened to a bunch of people making vacuous speculations and jokes about artificial intelligence.

In the end, I decided that the reason this happened was that the title was chosen in ignorance when the workshop was being planned: Ben never even intended to talk about the real issue of grounding symbols, but just needed a plausible-sounding buzzword for a theme, and so he intended the workshop to be about a vacuous notion like connecting AGI systems to the real world.

I hope that clarifies the issue a little. I have also written about the grounding issue on these lists, but I don't remember where those posts are.



Richard Loosemore
