Harry,
--- On Wed, 8/6/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> But it's a preaching-to-the-choir argument: Is there anything more to
> the argument than the intuition that automatic manipulation cannot
> create understanding? I think it can, though I have yet to show it.
The burden is on you, or anyone pursuing purely logical approaches, to show how
you can cross the chasm from syntax to semantics - from form to meaning. How
does your intuition of "automatic understanding" inform a design that does
nothing but manipulate symbols? At what point does your design cross the
boundary from simply manipulating data automatically to understanding it? To
me, the real problem here is projecting your own understanding onto a machine
that appears to be doing something intelligent.
If your intuition is correct, then it's not a big leap to say that today's
chess programs comprehend chess. Do you agree?
> Take it from another perspective: Is it possible to make a beer can
> out of atoms? An aluminum atom is in no way a beer can. It doesn't
> look like one. It can't hold beer. You can't drink from it. Perhaps
> the key aspect of a beer can is "containment." An atom has no
> containment. So clearly no collection of atoms can invoke containment.
I think here you're a lot closer to my arguments about emergence. You can hold
this view and still accept the Chinese Room argument.
> > All a computer really does is apply rules determined in advance to
> > manipulate patterns of 1's and 0's. No comprehension is necessary,
> > and invoking that at any time is a mistake.
>
> I totally agree with all but the last sentence. The Chinese Room does
> provide a simple but accurate analogy to what a computer does. As
> such, it's excellent for helping non-computer types understand this
> issue in AI/philosophy. But I know of no definition of
> "comprehension" that is impossible to create using a program or a
> Chinese Room -- of course, I don't know /any/ complete definition of
> "comprehension," and maybe when I do, it will have the feature you
> believe it has.
I think your problems here are due to a lack of clarity about what it means for
some kind of agent to understand something. For starters, understanding is done
by something - it doesn't exist in a vacuum. What is the nature of that
something?
> That sounds like agreement with my point (we might be arguing two
> aspects of the same side): If the processor has no understanding, but
> the simulation does, then it must be possible to compose
> understanding using a non-understanding processor.
Agreed, but I don't see how you got there from your earlier points.
> Yes, I don't mean to dismiss philosophy. In some areas of AI, there
> is far more understanding within philosophy than within computer
> science. But there's also lots of angels dancing on pins, so it can
> take a lot of time to find it. In some ways it's like having a domain
> expert, always a good thing when writing a program.
Totally agree! But having your beliefs challenged is so valuable that we
should not rely on others to do the heavy lifting.
Terren
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/