>> I am just trying to point out the contradictions in Mark's sweeping 
>> generalizations about the treatment of intelligent machines

Huh?  That's what you're trying to do?  Normally people do that by pointing to 
two different statements and arguing that they contradict each other.  Not by 
creating new, really silly definitions and then trying to posit a universe 
where blue equals red so everybody is confused.

>> But to be fair, such criticism is unwarranted. 

So exactly why are you persisting?

>> Ethical beliefs are emotional, not rational,

Ethical beliefs are subconscious and deliberately obscured from the conscious 
mind so that defections can be explained away without triggering other 
primates' lie-detecting senses.  However, contrary to your antiquated beliefs, 
they are *purely* a survival trait with a very solid grounding.

>> Ethical beliefs are also algorithmically complex

Absolutely not.  Ethical beliefs are actually pretty darn simple as far as the 
subconscious is concerned.  It's only when the conscious "rational" mind gets 
involved that ethics are twisted beyond recognition (just like all your 
arguments).

>> so the result of this argument could only result in increasingly complex 
>> rules to fit his model

Again, absolutely not.  You have no clue as to what my argument is, yet you 
fantasize that you can predict its results.  BAH!

>> For the record, I do have ethical beliefs like most other people

Yet you persist in arguing otherwise.  *Most* people would call that dishonest, 
deceitful, and time-wasting. 

>> The question is not how should we interact with machines, but how will we? 

No, it isn't.  Study the research on how ethical behavior changes when people are 
convinced that they don't have free will.

= = = = = 

BAH!  I should have quit answering you long ago.  No more.


  ----- Original Message ----- 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 7:58 PM
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)


        Just to clarify, I'm not really interested in whether machines feel 
pain. I am just trying to point out the contradictions in Mark's sweeping 
generalizations about the treatment of intelligent machines. But to be fair, 
such criticism is unwarranted. Mark is arguing about ethics. Everyone has 
ethical beliefs. Ethical beliefs are emotional, not rational, although we often 
forget this. Ethical beliefs are also algorithmically complex, so the result of 
this argument could only result in increasingly complex rules to fit his model. 
It would be unfair to bore the rest of this list with such a discussion.

        For the record, I do have ethical beliefs like most other people, but 
they are irrelevant to the design of AGI. The question is not how should we 
interact with machines, but how will we? For example, when we develop the 
technology to simulate human minds in general, or to simulate specific humans 
who have died, common ethical models among humans will probably result in the 
granting of legal and property rights to these simulations. Since these 
simulations could reproduce, evolve, and acquire computing resources much 
faster than humans, the likely result will be human extinction, or viewed 
another way, our evolution into a non-DNA based life form. I won't offer an 
opinion on whether this is desirable or not, because my opinion would be based 
on my ethical beliefs.

        -- Matt Mahoney, [EMAIL PROTECTED]

        --- On Tue, 11/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

          From: Ben Goertzel <[EMAIL PROTECTED]>
          Subject: Re: Definition of pain (was Re: FW: [agi] A paper that 
actually does solve the problem of consciousness--correction)
          To: agi@v2.listbox.com
          Date: Tuesday, November 18, 2008, 6:29 PM





          On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> 
wrote:

            --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:

            > Autobliss has no grounding, no internal feedback, and no
            > volition.  By what definitions does it feel pain?

            Now you are making up new rules to decide that autobliss doesn't 
feel pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.

            You stated that machines can feel pain, and you stated that we 
don't get to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) 

          Clearly, this can be done, and has largely been done already ... 
though cutting and pasting or summarizing the relevant literature in emails 
would not be a productive use of time.
           
            and prove that these criteria are valid?


          That is a different issue, as it depends on the criteria of validity, 
of course...

          I think one can argue that these properties are necessary for a 
finite-resources AI system to display intense systemic patterns correlated with 
its goal-achieving behavior in the context of diverse goals and situations.  
So, one can argue that these properties are necessary for **the sort of 
consciousness associated with general intelligence** ... but that's a bit 
weaker than saying they are necessary for consciousness (and I don't think they 
are)

          ben
           






