Richard, it seems that by "Real Grounding Problem" you mean "Communication Problem".
Basically, your goal is to make sure that when two systems communicate with each other, they understand each other correctly. Right? If that's the problem, I'm ready to give you my solution. BTW, I had to read your explanation 3 times to get it [if I got it]. :-)

Tuesday, December 4, 2007, 6:29:11 PM, you wrote:

> The grounding problem has to do with exactly who is doing the interpreting of the AGI's internal symbols.
>
> If the system is built in such a way that it builds its own symbols as part of the process of using them, then by definition it is grounded, because it was the one that made the symbols.
>
> But if we write down a bunch of symbols - deciding the format in which the symbols are represented, and stuffing at least some of them with content - then there is a very big question about whether the mechanisms that browse on those symbols will actually be *using* them as if their meaning was the same as the meaning we originally intended. Meaning, you see, is implicit in the way the symbols are used, so there is no particular reason why the way the symbols are actually used by the system should match up with the originally intended meaning that we impose when we look at the symbols.
>
> The way this most often manifests itself is when the AI system delivers results in natural language that are simply an expression of our imposed meanings.
>
> Main difficulty: this entire problem is extremely subtle, and most people simply don't get what the problem is, so they think it is about connecting the AGI to its environment in some way. It takes a fair bit of effort to get your head around the real problem (I have only sketched a pale shadow of it in this post, for example).
>
> Hope that makes enough sense.
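
To make that point concrete, here is a minimal sketch in Python (all names and facts are hypothetical, mine rather than Richard's). The label "loves" means something to us, but the inference routine only manipulates tokens, so nothing in the system ties its use of the symbol to the meaning we imposed:

    facts = {
        ("john", "loves", "mary"),
        ("mary", "loves", "pizza"),
    }

    def infer(kb):
        # Naive rule: from (a, R, b) and (b, R, c) conclude (a, R, c).
        # The rule fires for any relation token; "loves" is treated
        # exactly as an arbitrary string like "xyzzy" would be.
        derived = set()
        for (a, r1, b) in kb:
            for (b2, r2, c) in kb:
                if b == b2 and r1 == r2:
                    derived.add((a, r1, c))
        return derived

    print(infer(facts))
    # -> {('john', 'loves', 'pizza')}: a conclusion we would not endorse,
    #    because the intended meaning of "loves" never entered the system;
    #    only our label did.

The same code would run unchanged if the middle token were renamed "r17"; the system's behavior, and therefore whatever meaning the symbol actually has for it, comes only from how the token is used.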
