>> Please Answer: Now how can we really say how this is different from human 
>> understanding?

>> I receive a question, I rack my brain for stored facts, if relevant, and 
>> any experiences I have had if relevant, and respond, either with words or an 
>> action.

The difference comes about when presented with a novel situation.  The Chinese 
Room may be able to handle a *very* closely related situation, yet a single 
small difference may throw it (like a single misspelled word in a book-sized 
block of text -- please don't harass me about Chinese and pictographs :-)  A 
human being will not be thrown by minor differences, since they "understand" 
that space around their known solutions as well as the exact solutions.
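That brittleness can be made concrete with a toy sketch (the rule book and questions below are hypothetical, and the Room is deliberately modeled in the most naive way, as an exact-match lookup table): a single changed letter defeats exact lookup, while a matcher that covers the "space around" its known inputs still answers.

```python
# Toy model: the Chinese Room as an exact-match lookup table,
# versus a matcher that tolerates small perturbations of known inputs.
# (All questions and answers here are hypothetical illustrations.)
from difflib import get_close_matches

RULE_BOOK = {
    "what color is the sky": "blue",
    "what sound does a cat make": "meow",
}

def room_reply(question: str) -> str:
    """Exact lookup: any novel variation falls outside the rule book."""
    return RULE_BOOK.get(question, "<no rule found>")

def humanlike_reply(question: str) -> str:
    """Fuzzy lookup: also answers near-misses of known questions."""
    hits = get_close_matches(question, RULE_BOOK, n=1, cutoff=0.8)
    return RULE_BOOK[hits[0]] if hits else "<no rule found>"

print(room_reply("what colour is the sky"))       # exact match is thrown by one letter
print(humanlike_reply("what colour is the sky"))  # close match still answers
```

The fuzzy matcher is of course not a claim about how humans generalize; it just illustrates that robustness to minor differences requires something beyond the exact rules themselves.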

  ----- Original Message ----- 
  From: James Ratcliff 
  To: [email protected] 
  Sent: Thursday, August 07, 2008 2:13 PM
  Subject: Re: [agi] Groundless reasoning --> Chinese Room


        Back on the problem of "understanding"

        more below

        _______________________________________
        James Ratcliff - http://falazar.com
        Looking for something...

        --- On Wed, 8/6/08, Terren Suydam <[EMAIL PROTECTED]> wrote:

          From: Terren Suydam <[EMAIL PROTECTED]>
          Subject: Re: [agi] Groundless reasoning --> Chinese Room
          To: [email protected]
          Date: Wednesday, August 6, 2008, 1:50 PM


Abram,

I think a simulated, grounded, embodied approach is the one exception to the 
otherwise correct Chinese Room (CR) argument. It is the keyhole through which 
we must pass to achieve strong AI. The Novamente example I gave may qualify 
as such an exception (although the hybrid nature of grounded and ungrounded 
knowledge used in the design is a question mark for me), and does not invalidate 
the arguments against ungrounded approaches.

The CR argument works for ungrounded approaches because, without grounding, 
the symbols to be manipulated have no meaning, except within an external 
context that is totally independent of and inaccessible to the processing 
engine.

--> Meaning and understanding here, I don't believe, are just a true/false 
value. In this instance the agent WOULD have some level of meaning known: if 
given a database of facts about cats, it would be able to answer some questions 
about cats, and would understand cats to a certain extent.

I believe for this to be further constructive, you have to show either 1) how 
an ungrounded symbolic approach does not apply to the CR argument, or 2) why, 
specifically, the argument fails to show that ungrounded approaches cannot 
achieve comprehension.

Unfortunately, I have to take a break from the list (why are people 
cheering??).  I will answer any further posts addressed to me in due time, but 
I have other commitments for the time being.

Terren

-----------------------------------------------------------
James Reply

1. Given a Chinese Room vs. an AI in a box, the agent replying to the Chinese 
questions has no "understanding" of Chinese.  To all intents and purposes it is 
replying in a coherent way to all questions, and by the Turing test its 
behavior is indistinguishable from a human's.  That meets my burden of being an 
AGI, if it always replies in a reasonable manner. Whether it understands 
anything or not seems to be a totally different question.

2. Understanding, using any of the definitions, seems to be judgeable on a 
scale -- emphasis on judgeable -- in that there is no measure of understanding 
that can be done in a vacuum. So to ask "does the AGI understand?" is 
nonsensical without that context. In school, we determine understanding by 
testing, asking questions, and performing tasks, so it would seem an AGI would 
need to be handled in a similar fashion.

An ungrounded AGI without a body, when quizzed about certain items, would show 
a certain level of understanding depending on the depth and correctness of its 
knowledge bases and routines. Is it truly "understanding" the concept any 
further than reading it and answering the question?

A grounded AGI may perform better because it is able to interact and gather 
more and better details about the topics. But in the end the grounded AGI 
simply has a larger lookup database of experiences it can use. When handed a 
question on a sheet of paper, it looks it up in the larger DB.

An embodied robot AGI would have the added ability of interacting physically 
with the objects, so when handed a cup it could look up what to do with it, 
"understand" that it could fill it up with a liquid, and follow a plan for 
that. In this sense it would be able to "prove" to an outsider that it 
understood what a cup was.

Please Answer: Now how can we really say how this is different from human 
understanding? I receive a question, I rack my brain for stored facts, if 
relevant, and any experiences I have had if relevant, and respond, either with 
words or an action.
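James's point that understanding is "judgeable on a scale" by testing can be sketched with a toy scorer (the knowledge base and quiz below are hypothetical, and this is a deliberately simple model): an agent's "understanding" of a topic is just the fraction of probe questions its knowledge base answers correctly, i.e. a graded, context-dependent measurement rather than a true/false property.

```python
# Toy model: "understanding" as a graded quiz score over an agent's
# knowledge base (all facts and questions here are hypothetical).

KNOWLEDGE_BASE = {
    "is a cat a mammal": "yes",
    "does a cat have fur": "yes",
    "what does a cat drink": "milk",
}

QUIZ = [
    ("is a cat a mammal", "yes"),
    ("does a cat have fur", "yes"),
    ("what does a cat drink", "milk"),
    ("can a cat open a door", "no"),  # probe outside the knowledge base
]

def understanding_score(kb: dict, quiz: list) -> float:
    """Fraction of probe questions the knowledge base answers correctly."""
    correct = sum(1 for question, answer in quiz if kb.get(question) == answer)
    return correct / len(quiz)

print(understanding_score(KNOWLEDGE_BASE, QUIZ))  # 0.75
```

On this model, a "grounded" agent with a larger experience database would simply score higher on more quizzes; the score says nothing about whether anything deserving the name understanding is happening inside.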



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
