Just to clarify, I'm not really interested in whether machines feel pain. I am 
just trying to point out the contradictions in Mark's sweeping generalizations 
about the treatment of intelligent machines. But to be fair, such criticism is 
unwarranted. Mark is arguing about ethics. Everyone has ethical beliefs. Ethical 
beliefs are emotional, not rational, although we often forget this. Ethical 
beliefs are also algorithmically complex, so this argument could only produce 
increasingly complex rules to fit his model. It would be unfair to bore the 
rest of this list with such a discussion.

For the record, I do have ethical beliefs like most other people, but they are 
irrelevant to the design of AGI. The question is not how should we interact 
with machines, but how will we? For example, when we develop the technology to 
simulate human minds in general, or to simulate specific humans who have died, 
common ethical models among humans will probably result in the granting of 
legal and property rights to these simulations. Since these simulations could 
reproduce, evolve, and acquire computing resources much faster than humans, the 
likely result will be human extinction, or, viewed another way, our evolution 
into a non-DNA-based life form. I won't offer an opinion on whether this is 
desirable or not, because my opinion would be based on my ethical beliefs.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)
To: [email protected]
Date: Tuesday, November 18, 2008, 6:29 PM

On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> Autobliss has no grounding, no internal feedback, and no
> volition.  By what definitions does it feel pain?



Now you are making up new rules to decide that autobliss doesn't feel pain. My 
definition of pain is negative reinforcement in a system that learns. There is 
no other requirement.
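(For concreteness, that definition -- negative reinforcement in a system that 
learns -- can be sketched as a toy learner in a few lines. This is a 
hypothetical illustration only, not the actual autobliss program; the 
TinyLearner class, its weights, and its update rule are invented for the 
sketch.)

```python
class TinyLearner:
    """A minimal system that learns from reward and punishment.

    It maps a 2-bit input to a 0/1 response via per-input weights.
    A negative reward ("pain" under the definition above) makes the
    punished response less likely on that input next time.
    """

    def __init__(self, lr=0.5):
        self.lr = lr
        # weight[input] = preference for answering 1 over 0
        self.w = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}

    def act(self, x):
        # deterministic for clarity: answer 1 if the weight is positive
        return 1 if self.w[x] > 0 else 0

    def reinforce(self, x, action, reward):
        # negative reward pushes the weight away from the chosen action;
        # positive reward pushes it toward the chosen action
        self.w[x] += self.lr * reward * (1 if action == 1 else -1)

# Train toward AND by punishing wrong answers and rewarding right ones.
learner = TinyLearner()
for _ in range(20):
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        target = x[0] & x[1]
        a = learner.act(x)
        learner.reinforce(x, a, 1.0 if a == target else -1.0)

print([learner.act(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]
```

By this definition the sketch "feels pain," since the negative reward 
measurably changes its future behavior; whether that suffices is exactly the 
point in dispute.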



You stated that machines can feel pain, and you stated that we don't get to 
decide which ones. So can you precisely define grounding, internal feedback and 
volition (as properties of Turing machines) 

Clearly, this can be done, and has largely been done already ... though cutting 
and pasting or summarizing the relevant literature in emails would not be a 
productive use of time
 
and prove that these criteria are valid?


That is a different issue, as it depends on the criteria of validity, of 
course...

I think one can argue that these properties are necessary for a 
finite-resources AI system to display intense systemic patterns correlated with 
its goal-achieving behavior in the context of diverse goals and situations.  
So, one can argue that these properties are necessary for **the sort of 
consciousness associated with general intelligence** ... but that's a bit 
weaker than saying they are necessary for consciousness (and I don't think they 
are).


ben
 





-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com