Dennis Gorelik wrote:
Richard,

1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).

Could you describe what the *real* grounding problem is?

It would be nice to consider an example.

Say we are trying to build an AGI for the purpose of running an intelligent
chat-bot.

What would be the grounding problem in this case?

I'll do my best.

The grounding problem has to do with exactly who is doing the interpreting of the AGI's internal symbols.

If the system is built in such a way that it builds its own symbols as part of the process of using them, then by definition it is grounded because it was the one that made the symbols.
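To make that concrete, here is a toy sketch (my own illustration, deliberately trivial, with invented names) of what "building its own symbols as part of the process of using them" could look like in miniature:

```python
# Toy sketch: a system that creates its own "symbols" as a side effect of
# processing its observations. Purely illustrative -- the categorisation
# rule is crude on purpose.

observations = [0.1, 0.15, 0.12, 5.0, 5.2, 4.9]

# The system partitions its experience into categories it invented itself:
symbols = {}
for x in observations:
    key = round(x)          # the system's own categorisation rule
    symbols.setdefault(key, []).append(x)

# Each key is a symbol whose meaning IS its role in the system's own
# processing; no outside interpreter assigned that meaning to it.
print(sorted(symbols.keys()))  # -> [0, 5]
```

The point is only that the symbols here emerged from, and are defined by, the system's own use of them, so there is no gap between their meaning and their meaning-to-us that someone has to bridge.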

But if we write down a bunch of symbols ourselves - deciding the format in which the symbols are represented, and stuffing at least some of them with content - then there is a very big question about whether the mechanisms that browse on those symbols will actually be *using* them as if their meaning were the same as the meaning we originally intended. Meaning, you see, is implicit in the way the symbols are used, so there is no particular reason why the way the system actually uses the symbols should match up with the meaning that we impose on them when we look at them.

The way this most often manifests itself is when the AI system delivers results in natural language that are really just an echo of our imposed meanings: the output looks intelligent to us only because we are the ones supplying the interpretation.

Main difficulty: this entire problem is extremely subtle, and most people simply don't get what the problem is, so they think it is about connecting the AGI to its environment in some way. It takes a fair bit of effort to get your head around the real problem (I have only sketched a pale shadow of it in this post, for example).

Hope that makes enough sense.



Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72122415-b8477a