Terren Suydam wrote:
Harry,
--- On Wed, 8/6/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> I'll take a stab at both of these...
>
> The Chinese Room to me simply states that understanding cannot be
> decomposed into sub-understanding pieces. I don't see it as
> addressing grounding, unless you believe that understanding can
> only come from the outside world, and must become part of the
> system as atomic pieces of understanding. I don't see any reason to
> think that, but proving it is another matter -- proving negatives
> is always difficult.
The argument is only implicitly about the nature of understanding. It
is explicit about the agent of understanding. It says that something
that moves symbols around according to predetermined rules - if
that's all it's doing - has no understanding. Implicitly, the
assumption is that understanding must be grounded in experience, and
a computer cannot be said to be experiencing anything.
But it's a preaching-to-the-choir argument: is there anything more to
it than the intuition that automatic manipulation cannot create
understanding? I think automatic manipulation can create
understanding, though I have yet to show it.
Take it from another perspective: Is it possible to make a beer can out
of atoms? An aluminum atom is in no way a beer can. It doesn't look like
one. It can't hold beer. You can't drink from it. Perhaps the key aspect
of a beer can is "containment." An atom has no containment. So clearly
no collection of atoms can exhibit containment.
It really helps here to understand what a computer is doing when it
executes code, and the Chinese Room is an analogy that makes a
computer's operation expressible in terms of human experience -
specifically, the experience of incomprehensible symbols like Chinese
ideograms. All a computer really does is apply rules determined in
advance to manipulate patterns of 1's and 0's. No comprehension is
necessary, and invoking comprehension at any point is a mistake.
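To make the rule-following character of the Room concrete, here is a minimal sketch in Python. The rulebook entries are hypothetical, chosen only for illustration; the point is that the program maps symbols to symbols by predetermined rules, with no comprehension anywhere in the loop:

```python
# Toy Chinese Room: the operator applies predetermined rules
# (a lookup table) to symbols it does not comprehend.
RULEBOOK = {            # hypothetical rules; meaningless to the operator
    "你好吗": "我很好",
    "什么是水": "水是H2O",
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates for the input symbols."""
    # Pure pattern matching: no meaning is consulted, only the table.
    return RULEBOOK.get(symbols, "不懂")  # fixed default for unknown input

print(chinese_room("你好吗"))  # the rule fires; nothing is understood
```

Whether a much larger table (or any rule system) could constitute understanding is exactly what the thread is debating; the sketch only shows the mechanics being argued about.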
I totally agree with all but the last sentence. The Chinese Room does
provide a simple but accurate analogy to what a computer does. As such,
it's excellent for helping non-computer types understand this issue in
AI/philosophy. But I know of no definition of "comprehension" that is
impossible to create using a program or a Chinese Room -- of course, I
don't know /any/ complete definition of "comprehension," and maybe when
I do, it will have the feature you believe it has.
Fortunately, that does not rule out embodied AI designs in which the
agent is simulated. The processor still has no understanding - it
just facilitates the simulation.
That sounds like agreement with my point (we might be arguing two
aspects of the same side): If the processor has no understanding, but
the simulation does, then it must be possible to compose understanding
using a non-understanding processor.
> As to philosophy, I tend to think of its relationship to AI as
> somewhat the same as alchemy's relationship to chemistry. That is,
> it's one of the origins of the field, and has some valid ideas, but
> it lacks the hard science and engineering needed to get things
> actually working. This is admittedly perhaps a naive view, and
> reflects the traditional engineering distrust of the humanities. I
> state it not to be critical of philosophy, but to give you an idea
> how some of us think of the area.
As an engineer who builds things every day (in software), I can
appreciate the *limits* of philosophy. Spending too much time in that
domain can lead to all sorts of excesses of thought, castles in the
sky, etc. However, any good engineer will tell you how important
theory is in the sense of creating and validating design. And while
the theory behind rocket science involves physics, chemistry, and
fluid dynamics (and others no doubt), the theory of AI involves
information theory, computer science, and philosophy of mind &
knowledge, like it or not. If you want to be a good AI engineer, you'd
better be comfortable with all of the above.
Yes, I don't mean to dismiss philosophy. In some areas of AI, there is
far more understanding within philosophy than within computer science.
But there are also lots of angels dancing on pins, so it can take a lot
of time to find the good parts. In some ways it's like having a domain expert, always a
good thing when writing a program.
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/