J. Storrs Hall, PhD. wrote:
> On Monday 25 September 2006 16:48, Ben Goertzel wrote:
> > My own view is that symbol grounding is not a waste of time ... but,
> > **exclusive reliance** on symbol grounding is a waste of time.
> It's certainly not a waste of time in the general sense, especially if
> you're going to be building a robot! But I just don't think it's on
> the critical path.
> > Novamente utilizes a combination of grounding of symbols in
> > simulated-embodied experience with ingestion of information from
> > existing databases. I believe this sort of combination is optimal,
> > rather than purely relying on data sources with no attention to
> > embodied experience....
This discussion of "symbol grounding" is starting to go the same way
that all the talk of symbol grounding went at the AGIRI workshop ... in
other words, it is not about SG at all.
The original symbol grounding problem, as I understood it, was not just
about connecting an AI to the world with sensors that were rich enough;
it was also about the blowback effect that such a connection might have
on the original representations used to encode the incoming knowledge.
The story goes something like this:
STEP 1) Early AI pioneer designs a fabulous knowledge representation
formalism, but discovers that it just doesn't work for some reason, or
it doesn't scale up.
STEP 2) Along come Winograd and Flores (etc.), who points out that you
need to connect the thing to real world input AND get the system to pick
up its own knowledge (no hand-crafting by the programmer, please).
STEP 3) Early pioneer tacks sensors etc. onto her previously favored
knowledge representation, then tries to get a mechanism working that
will allow the system to do autonomous knowledge building using that
input .... and to her dismay discovers a <You Can't Get Here From
There> effect: given the assumptions about the KR, it is difficult to
devise a methodology that leads her to a learning mechanism that picks
up and delivers knowledge in that assumed KR format (the toy sketch
below makes the gap concrete).
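To see the shape of the difficulty, here is a deliberately toy sketch
in Python. Every name in it is hypothetical and it is nobody's actual
system: a designer presumes a triple-style KR; a generic unsupervised
learner (plain k-means) digests a raw sensor stream; and the only way
to get Facts out at the end is to hand-map the learner's anonymous
clusters onto symbols, the very hand-crafting that was supposed to
disappear.

    import random
    from collections import namedtuple

    # The presumed KR format (STEP 1): crisp predicate triples.
    # The designer expects the system to fill itself with facts such as
    # Fact("apple-3", "has-color", "red").
    Fact = namedtuple("Fact", ["subject", "predicate", "object"])

    def sense(n=200):
        """Simulate a raw sensor stream: unlabeled 2-D points."""
        random.seed(0)
        return [(random.gauss(mu, 0.5), random.gauss(mu, 0.5))
                for mu in random.choices([0.0, 3.0, 6.0], k=n)]

    def learn(points, k=3, iters=20):
        """A generic learner (plain k-means). Note what it returns:
        anonymous cluster centers, not Facts in the presumed KR."""
        centers = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k),
                              key=lambda j: (p[0] - centers[j][0]) ** 2
                                          + (p[1] - centers[j][1]) ** 2)
                groups[nearest].append(p)
            centers = [(sum(x for x, _ in g) / len(g),
                        sum(y for _, y in g) / len(g)) if g else centers[i]
                       for i, g in enumerate(groups)]
        return centers

    centers = learn(sense())

    # The blowback (STEP 3): the learner hands back nameless
    # statistical structure. Turning it into Facts means the programmer
    # hand-maps clusters onto symbols, exactly the hand-crafting that
    # autonomous knowledge building was supposed to eliminate.
    facts = [Fact("cluster-%d" % i, "has-center", "(%.1f, %.1f)" % (x, y))
             for i, (x, y) in enumerate(centers)]
    print(facts)

The sketch is unfair to any particular project, of course; the point is
only that the learner's output vocabulary is fixed by the learning
algorithm, not by the KR the designer wrote down first.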
The real "grounding problem" is the awkward and annoying fact that if
you presume a KR format, you cannot reverse-engineer a learning
mechanism that reliably fills that KR with knowledge. At least some
people (by which I mean my colleagues and me at Warwick, at least; we
just assumed that other people understood it the same way ... sorry,
folks, no references) always talked about the grounding problem in this
more sophisticated sense. Hooking up sensors was trivial: the problem
was that nobody knew how to get learning algorithms to autonomously
pick up knowledge from the world without those algorithms having
blowback effects on the presumed knowledge representation.
Which makes it extremely non-trivial and potentially a showstopper.
And some claim that this *is* why the show is stopped.
Richard Loosemore