Harry,

--- On Wed, 8/6/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> I'll take a stab at both of these...
> 
> The Chinese Room to me simply states that understanding cannot be
> decomposed into sub-understanding pieces. I don't see it as addressing
> grounding, unless you believe that understanding can only come from
> the outside world, and must become part of the system as atomic pieces
> of understanding. I don't see any reason to think that, but proving it
> is another matter -- proving negatives is always difficult.

The argument is only implicitly about the nature of understanding. It is 
explicit about the agent of understanding. It says that something that moves 
symbols around according to predetermined rules - if that's all it's doing - 
has no understanding. Implicitly, the assumption is that understanding must be 
grounded in experience, and a computer cannot be said to be experiencing 
anything.

It really helps here to understand what a computer is doing when it executes 
code. The Chinese Room is an analogy that makes a computer's operation 
expressible in terms of human experience - specifically, the experience of 
manipulating incomprehensible symbols like Chinese ideograms. All a computer 
really does is apply rules, determined in advance, to manipulate patterns of 
1s and 0s. No comprehension is necessary, and invoking comprehension at any 
point in that process is a mistake.

Fortunately, that does not rule out embodied AI designs in which the agent is 
simulated. The processor still has no understanding - it just facilitates the 
simulation.

> As to philosophy, I tend to think of its relationship to AI as somewhat
> the same as alchemy's relationship to chemistry. That is, it's one of
> the origins of the field, and has some valid ideas, but it lacks the
> hard science and engineering needed to get things actually working.
> This is admittedly perhaps a naive view, and reflects the traditional
> engineering distrust of the humanities. I state it not to be critical
> of philosophy, but to give you an idea how some of us think of the area.

As an engineer who builds things every day (in software), I can appreciate the 
*limits* of philosophy. Spending too much time in that domain can lead to all 
sorts of excesses of thought, castles in the sky, etc. However, any good 
engineer will tell you how important theory is to creating and validating a 
design. And while the theory behind rocket science involves physics, 
chemistry, and fluid dynamics (among others, no doubt), the theory of AI 
involves information theory, computer science, and the philosophy of mind & 
knowledge, like it or not. If you want to be a good AI engineer, you'd better 
be comfortable with all of the above.

Terren

> Terren Suydam wrote:
> > Abram,
> >
> > If that's your response then we don't actually agree.
> >
> > I agree that the Chinese Room does not disprove strong AI, but I
> > think it is a valid critique of purely logical or non-grounded
> > approaches. Why do you think the critique fails on that level?
> > Anyone else who rejects the Chinese Room care to explain why?
> >
> > (I know this has been discussed ad nauseam, but that should only
> > make it easier to point to references that clearly demolish the
> > arguments. It should be noted, however, that relatively recent
> > advances regarding complexity and emergence aren't quite as well
> > hashed out with respect to the Chinese Room. In the document you
> > linked to, mention of emergence didn't come until a 2002 reference
> > attributed to Kurzweil.)
> >
> > If you can't explain your dismissal of the Chinese Room, it only
> > reinforces my earlier point that some of you who are working on AI
> > aren't doing your homework with the philosophy. It's ok to reject
> > the Chinese Room, so long as you have arguments to do it (and if
> > you do, I'm all ears!) But if you don't think the philosophy is
> > important, then you're more than likely doing Cargo Cult AI.
> >
> > (http://en.wikipedia.org/wiki/Cargo_cult)
> >
> > Terren
> >
> > --- On Tue, 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> >
> >> From: Abram Demski <[EMAIL PROTECTED]>
> >> Subject: Re: [agi] Groundless reasoning --> Chinese Room
> >> To: [email protected]
> >> Date: Tuesday, August 5, 2008, 9:49 PM
> >>
> >> Terren,
> >>
> >> I agree. Searle's responses are inadequate, and the whole thought
> >> experiment fails to prove his point. I think it also fails to prove
> >> your point, for the same reason.
> >>
> >> --Abram
> >
> > -------------------------------------------
> > agi
> > Archives:
> https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/
> > Modify Your Subscription:
> https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >