Harry Chesley <[EMAIL PROTECTED]> wrote:
> One of the classic current AI issues is grounding, the argument being that a
> dictionary cannot be complete because it is only self-referential, and *has*
> to be grounded at some point to be truly meaningful. This argument is used
> to claim that abstract AI can never succeed, and that there must be a
> physical component of the AI that connects it to reality.
>
> I have never bought this line of reasoning. It seems to me that meaning is a
> layered thing, and that you can do perfectly good reasoning at one (or two
> or three) levels in the layering, without having to go "all the way down."

I mostly agree, and I have a number of reasons why.  First of all, an
AI program can only learn through its input-output data environment;
the idea that grounding would somehow make it all more real (to a
computer program) is philosophically beyond the limits of reason.  Let
me emphasize that I believe that if an AI program were actually
capable of high-level reasoning, then there is no doubt that grounding
would have a dramatic effect on its level of insight.  My point,
though, is that at the current level of AI research an exaggerated
emphasis on grounding won't produce the higher-level reasoning that we
are thinking about in AGI.  Of course, if all the reasoning of an AI
program were based on a minimal amount of data, then we could not
really expect much out of it.  But with language we can provide a
working AI program with as many details as it could handle, if only it
were capable of higher-level reasoning to begin with.

I do think circularity and (what I call) networkity are serious
problems.  However, they are not caused by the choice of a rich
natural language domain; they are caused by inadequate reasoning and
an inadequate IO data environment.

First we have to figure out a way to get our computers to do some
higher-level reasoning in an interactive data environment that has
the right stuff; then they will undoubtedly show dramatic
improvements when they are provided with a greater range of IO
interactions that give them more grounding.

But first we have to figure it out, because there is not a robot in
the world that will be able to figure it out before we do.

Jim Bromer


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/