Ben Goertzel wrote:
Hi,

The real "grounding problem" is the awkward and annoying fact that if
you presume a KR format, you can't reverse engineer a learning mechanism
that reliably fills that KR with knowledge.

Sure...

To go back to the source, in

http://www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

it is written

"
Suppose you had to learn Chinese as a first language and the only
source of information you had was a Chinese/Chinese dictionary![8]
This is more like the actual task faced by a purely symbolic model of
the mind: How can you ever get off the symbol/symbol merry-go-round?
How is symbol meaning to be grounded in something other than just more
meaningless symbols?[9] This is the symbol grounding problem
...
The standard reply of the symbolist (e.g., Fodor 1980, 1985) is that
the meaning of the symbols comes from connecting the symbol system to
the world "in the right way." But it seems apparent that the problem
of connecting up with the world in the right way is virtually
coextensive with the problem of cognition itself.
"

I suppose this is basically another way of saying the same thing that
you said above...

But, what I would say in response to you is: If you presume a **bad**
KR format, you can't match it with a learning mechanism that reliably
fills the knowledge repository with knowledge...

If you presume a sufficiently and appropriately flexible KR format
(which is then really more of a meta-format), then it can reformat
itself adaptively based on the knowledge that comes in, as part of the
learning process ;-)

My conjecture is that a probabilistic weighted, labeled hypergraph --
with an appropriate collection of node/link types -- is a sufficiently
and appropriately flexible KR format, which can be made to adapt
itself based on the data within it, via coupling it with a careful
combination of evolutionary and inferential learning mechanisms...
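For concreteness, here is a minimal sketch of the kind of structure being conjectured: typed nodes and typed links (where a link may span any number of atoms, making the graph a hypergraph), each carrying a probabilistic truth value. All the names, and the two-component (strength, confidence) truth value, are my own illustrative assumptions rather than a description of any actual system:

```python
import itertools

class Atom:
    """A node or link in a weighted, labeled hypergraph (illustrative only)."""
    _ids = itertools.count()

    def __init__(self, atom_type, name=None, targets=(), strength=1.0, confidence=1.0):
        self.id = next(Atom._ids)
        self.atom_type = atom_type      # label, e.g. "Concept" or "Inheritance"
        self.name = name                # optional symbolic name (nodes only)
        self.targets = tuple(targets)   # a link may span any number of atoms
        self.strength = strength        # probabilistic truth value in [0, 1]
        self.confidence = confidence    # weight of evidence behind the strength

class Hypergraph:
    def __init__(self):
        self.atoms = []

    def add_node(self, atom_type, name, strength=1.0, confidence=1.0):
        node = Atom(atom_type, name=name, strength=strength, confidence=confidence)
        self.atoms.append(node)
        return node

    def add_link(self, atom_type, targets, strength, confidence):
        link = Atom(atom_type, targets=targets, strength=strength, confidence=confidence)
        self.atoms.append(link)
        return link

# Hypothetical usage: encode "cats are animals" with an uncertain truth value
g = Hypergraph()
cat = g.add_node("Concept", "cat")
animal = g.add_node("Concept", "animal")
inh = g.add_link("Inheritance", [cat, animal], strength=0.95, confidence=0.8)
```

Note that even this toy version already bakes in commitments — a fixed inventory of type labels, a particular truth-value representation — which is exactly the point made below about how much such a format locks down.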

I think you are precisely correct to say that one needs "a sufficiently and appropriately flexible KR format (which is then really more of a meta-format)". But when you go on to say that "a probabilistic weighted, labeled hypergraph [etc]..." is a good way to get that flexible KR format, I would object that you are underestimating the level at which the blowback is going to happen.

What I mean by that is that the hypergraph idea already locks down many KR assumptions: the nodes are not open to multiple choices of internal active structure; they interact with other nodes in one particular choice of interaction space; relationships between nodes are encoded as relatively simple probabilistic clusters with direct, high-level semantics (IIRC); and so on. As far as flexible formats are concerned, this is a thoroughly collapsed wave function. The remaining flexibility is minimal.

If there were any proof-of-concept systems out there that demonstrated the pickup of even halfway sophisticated concepts purely as a result of learning mechanisms and real-world sensory data, using the class of KRs into which the hypergraph fits, I would be less pessimistic about it. As it is, the hypergraphs are as much a leap of faith as anything else.

To frame it in the terms that Russell Wallace just used:

> You
> can't just take a generic KR format that was designed without the idea
> that it has to primarily deal with a 4D world, and plug in 4D sensors
> and expect it to reformat itself. That won't work, not well enough
> anyway. The 4D thing has to be part of the design from the ground up.

... such basic assumptions as the dimensionality of the space in which the system operates could have a deep impact on the learning mechanisms that will work. And that is *still* a relatively high level type of issue.

However, I don't think that this is reason for despair.

I think it is possible to define classes of cognitive systems that make relatively few assumptions, and that seem consistent with what we know of the human system, and then go on, without prejudging them, to investigate their developmental behavior to *see* what kind of KR formats they like to develop. Then, once we see what KR formats they develop, use them (a last, and fairly trivial, step in the process).

This was what I was saying in my AGIRI workshop presentation.


Richard

-----
This list is sponsored by AGIRI: http://www.agiri.org/email