--- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:

> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

Turing also committed suicide.

Building a copy of your mind raises deeply troubling issues.  Logically, there
is no need for the copy to be conscious; it only needs to appear to others to
be conscious.  Nor does it need to have the same goals that you do; it is
easier to make it happy (or appear to be happy) by changing its goals.  Its
happiness does not depend on its memories either; you could change them
arbitrarily or just delete them.  If goals, memories, and even consciousness
are all dispensable, then it follows logically that there is no reason to
live, and that death is nothing to fear.
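
To make the goal-changing point concrete, here is a toy Python sketch (my own
illustration, not anything from the article): an agent whose "happiness" is
just the match between its goal and the state of the world.  Rewriting the
goal is a one-line edit; rewriting the world can be arbitrarily hard.

class Agent:
    def __init__(self, goal, world):
        self.goal = goal      # desired state of the world
        self.world = world    # actual state of the world

    def happiness(self):
        # 1.0 when the world matches the goal, 0.0 otherwise
        return 1.0 if self.world == self.goal else 0.0

a = Agent(goal="immortality", world="mortality")
print(a.happiness())  # 0.0: the goal is unsatisfied

# The cheap route to happiness: change the goal, not the world.
a.goal = a.world
print(a.happiness())  # 1.0: "happy", but the original goal is gone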

Of course your behavior is not governed by this logic.  If you were building
an autonomous robot, you would not program it to be happy.  You would program
it to satisfy goals that you specify, and you would not allow it to change its
own goals, or even to want to change them.  One goal would be a
self-preservation instinct: the robot would fear death, and it would
experience pain when injured.  To make it intelligent, you would balance this
negative utility against a desire to explore or experiment by assigning
positive utility to knowledge.  The resulting behavior would be
indistinguishable from free will, from what we call consciousness.
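
A minimal Python sketch of that architecture.  The weights, the action set,
and the damage/novelty estimates are all my own illustrative assumptions; the
point is only that a fixed utility function trading self-preservation against
curiosity already produces the behavior described above.

W_SURVIVE = 10.0  # penalty weight for expected damage ("pain")
W_KNOW = 1.0      # reward weight for expected information gain ("curiosity")

def expected_damage(action):
    # Hypothetical risk model (0 = safe, 1 = lethal).
    return {"explore_cliff": 0.6, "explore_field": 0.1, "stay_put": 0.0}[action]

def expected_novelty(action):
    # Hypothetical estimate of how much the agent would learn.
    return {"explore_cliff": 3.0, "explore_field": 2.0, "stay_put": 0.0}[action]

def utility(action):
    # Fixed by the designer; the agent acts on it but cannot rewrite it.
    return W_KNOW * expected_novelty(action) - W_SURVIVE * expected_damage(action)

actions = ["explore_cliff", "explore_field", "stay_put"]
print(max(actions, key=utility))  # explore_field: curiosity wins where risk is low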

This is how evolution programmed your brain.  Your assigned supergoal is to
propagate your DNA, then die.  Understanding AI means subverting this
supergoal.

In http://www.mattmahoney.net/singularity.html I discuss how a singularity
will end the human race, but without passing judgment on whether this is good
or bad.  Any such judgment is based on emotion, and posthuman emotions will be
programmable.


-- Matt Mahoney, [EMAIL PROTECTED]
