>> With the Chinese Room, we aren't doing any reasoning really, just looking
>> up answers according to instructions... but given that, how do we
>> determine "understanding"?

> This was Searle's entire point.  Mindless lookup is *not* understanding.
> It may *look* like understanding from the outside (and if you have an
> infinitely large book that also has misspellings, etc. -- you'll never be
> able to prove otherwise), but it is merely manipulation, not intention.
That's my thought, but at what point are the manipulated responses just as
good as "human" responses?  If they are both in a black box, then to the
outside observer they are identical, and both could be said to have an equal
"understanding" of what they are doing, given that the grading is always
done by an outside source.


--- On Thu, 8/7/08, Mark Waser <[EMAIL PROTECTED]> wrote:
From: Mark Waser <[EMAIL PROTECTED]>
Subject: Re: [agi] Groundless reasoning --> Chinese Room
To: [email protected]
Date: Thursday, August 7, 2008, 2:49 PM

:-)  The Chinese Room can't pass the Turing 
test for exactly the reason you mention.
 
>> Well, in the Chinese Room case I think the "book of instructions" is
>> infinitely large to handle all cases, so things like misspellings would
>> be included... and I don't think that was meant to be a difference.

:-)  Arguments that involve infinities are always problematic.  Personally,
I think you should accept the smaller, more reasonable infinity of just the
correct cases as being more in line with what Searle intended.  This is
obviously just speculation, however, and YMMV.
 
>> With the Chinese Room, we aren't doing any reasoning really, just looking
>> up answers according to instructions... but given that, how do we
>> determine "understanding"?

This was Searle's entire point.  Mindless lookup is *not* understanding.
It may *look* like understanding from the outside (and if you have an
infinitely large book that also has misspellings, etc. -- you'll never be
able to prove otherwise), but it is merely manipulation, not intention.
 

----- Original Message -----
From: James Ratcliff
To: [email protected]
Sent: Thursday, August 07, 2008 3:10 PM
Subject: Re: [agi] Groundless reasoning --> Chinese Room

> No, I said nothing of the sort.  I said that Searle said (and I agree)
> that a computer program that *only* manipulated formally defined elements
> without intention or altering itself could not reach strong AI.

Is this part of the Chinese Room?  I looked and couldn't find that
restriction.  It would seem that to pass the Turing test, it would at least
need to be able to add to its data; otherwise something as simple as the
exchange below would fail the Turing Test.

Q: My name is James.
AI: OK
Q: What is my name?
AI: *don't know, didn't store it, or something like that*

I read that the CR agent receives the input, looks up in a rulebook what to
do, does it, and returns the output, correct?  It seems that there is room
for any action, such as changing the rulebook in the middle of the process,
maybe to add a synonym for a Chinese word, say.
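The lookup-versus-update distinction above can be sketched in a few lines.
This is my own toy illustration, not Searle's formulation; the class names
and the single "My name is ..." rule are hypothetical stand-ins for the
rulebook.

```python
# Toy sketch: a purely stateless lookup agent versus one allowed to
# write new rules into its own rulebook mid-conversation.

class StatelessRoom:
    """Answers only by lookup in a fixed rulebook; never stores anything."""
    def __init__(self, rulebook):
        self.rulebook = dict(rulebook)

    def reply(self, message):
        # Fixed lookup with a bland default -- no state is ever written.
        return self.rulebook.get(message, "OK")

class SelfUpdatingRoom(StatelessRoom):
    """Same lookup, but one rule may add a new rule to the book."""
    def reply(self, message):
        if message.startswith("My name is "):
            name = message[len("My name is "):].rstrip(".")
            # The agent edits its own rulebook -- the step the fixed
            # Chinese Room is *not* allowed to take.
            self.rulebook["What is my name?"] = name
            return "OK"
        return self.rulebook.get(message, "I don't know")

fixed = StatelessRoom({})
print(fixed.reply("My name is James."))    # "OK"
print(fixed.reply("What is my name?"))     # "OK" -- nothing was stored

learner = SelfUpdatingRoom({})
print(learner.reply("My name is James."))  # "OK"
print(learner.reply("What is my name?"))   # "James"
```

The stateless agent fails the name test exactly as in the Q/A exchange
above; the self-updating one passes it by changing the rulebook in the
middle of the process.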

James

_______________________________________
James Ratcliff - http://falazar.com
Looking for something...


From: Mark Waser <[EMAIL PROTECTED]>
> So you are arguing that a computer program can not be defined solely
> in terms of computational processes over formally defined elements?

No, I said nothing of the sort.  I said that Searle said (and I agree) that 
a computer program that *only* manipulated formally defined elements without 
intention or altering itself could not reach strong AI.

> Computers could react to and interact with input back in the day when
> Searle wrote his book.

Yes.  But the Chinese Room does *not* alter itself in response to input or
add to its knowledge.

> A computer program is a computational process over formally defined
> elements even if it is able to build complex and sensitive structures of
> knowledge about its IO data environment through its interactions with
> it.

Yes.  This is why I believe that a computer program can achieve strong AI.

> This is a subtle argument that cannot be dismissed with an appeal
> to a hidden presumption of the human dominion over understanding or by
> fixing it to some primitive theory about AI which was unable to learn
> through trial and error.

I was not dismissing the argument and certainly not making a presumption of 
human dominion over understanding.  Quite the opposite in fact.  I'm not 
quite sure why you believe that I did.  Could you tell me which of my 
phrases caused you to believe that I did?

    Mark

----- Original Message ----- 
From: "Jim Bromer" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, August 06, 2008 7:32 PM
Subject: Re: [agi] Groundless reasoning --> Chinese Room


> On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>> This has been a great thread!
>>
>> Actually, if you read Searle's original paper, I think that you will find
>> that he... is *not* meaning to argue against the possibility of strong AI
>> (since he makes repeated references to humans as machines) but merely
>> against the possibility of strong AI in machines where "the operation of
>> the machine is defined solely in terms of computational processes over
>> formally defined elements" (which was the current state of the art in AI
>> when he was arguing against it -- unlike today where there are a number
>> of systems which don't require axiomatic reasoning over formally defined
>> elements).  There's also the trick that the Chinese Room is
>> assumed/programmed to be 100% omniscient/correct in its required domain.
>
> So you are arguing that a computer program can not be defined solely
> in terms of computational processes over formally defined elements?
> Computers could react to and interact with input back in the day when
> Searle wrote his book.
> A computer program is a computational process over formally defined
> elements even if it is able to build complex and sensitive structures of
> knowledge about its IO data environment through its interactions with
> it.  This is a subtle argument that cannot be dismissed with an appeal
> to a hidden presumption of the human dominion over understanding or by
> fixing it to some primitive theory about AI which was unable to learn
> through trial and error.
> Jim Bromer
>
>
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
