On Jan 19, 2008, at 5:24 PM, Matt Mahoney wrote:

--- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:


http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

Turing also committed suicide.

In his case I understand that the British government saw fit to sentence him to heavy hormonal medication because it could not deal with the fact that he was gay. Arguably that unhinged his libido and other aspects of his psychology, was deeply distressing, and set the stage for his suicide. I think he was slowly murdered by intolerance backed by force of law and primitive medicine.



Building a copy of your mind raises deeply troubling issues. Logically, there is no need for it to be conscious; it only needs to appear to others to be conscious. Also, it need not have the same goals that you do; it is easier to make it happy (or appear to be happy) by changing its goals. Happiness does not depend on its memories; you could change them arbitrarily or just delete them. It follows logically that there is no reason to live, and that death is nothing to fear.


Those of us who have meditated a bit (and/or experimented with consciousness in other ways in our youth) are aware of how much of our vaunted self can be seen as construct and phantasm. Rarely does seeing that alone drive someone over the edge.

Of course your behavior is not governed by this logic. If you were building an autonomous robot, you would not program it to be happy. You would program it to satisfy goals that you specify, and you would not allow it to change its own goals, or even to want to change them.

That would depend greatly on how deeply "autonomous" I wanted it to be.

One goal would be a self-preservation instinct. It would fear death, and it would experience pain when injured. To make it intelligent, you would balance this utility against a desire to explore or experiment by assigning positive utility to knowledge. The resulting behavior would be indistinguishable from free will, what we call consciousness.


I don't think simply trading off avoidance of death or injury against exploring and experimenting is sufficient to arrive at what we generally term free will.
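(For concreteness, the kind of balancing Matt describes could be sketched as a toy utility maximizer. Everything below, the weights, the action fields, and the example options, is illustrative and my own invention, not anything taken from his paper:)

# Toy sketch of a utility-balancing agent: self-preservation traded off
# against curiosity. All numbers and field names are illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_damage: float      # predicted harm to the robot, 0..1
    expected_info_gain: float   # predicted knowledge gained, 0..1

W_SURVIVAL = 2.0   # weight on avoiding damage ("pain", "fear of death")
W_CURIOSITY = 1.0  # weight on acquiring knowledge ("desire to explore")

def utility(a: Action) -> float:
    """Higher is better: penalize expected damage, reward expected knowledge."""
    return -W_SURVIVAL * a.expected_damage + W_CURIOSITY * a.expected_info_gain

def choose(actions: list[Action]) -> Action:
    """Pick the action with the highest utility."""
    return max(actions, key=utility)

if __name__ == "__main__":
    options = [
        Action("stay put",            expected_damage=0.0, expected_info_gain=0.0),
        Action("probe the dark cave", expected_damage=0.4, expected_info_gain=0.9),
        Action("walk off the cliff",  expected_damage=1.0, expected_info_gain=0.3),
    ]
    print(choose(options).name)   # -> "probe the dark cave"

Nothing in that argmax looks like free will to me; it is a weighted sum and a comparison.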


This is how evolution programmed your brain. Your assigned supergoal is to propagate your DNA, then die. Understanding AI means subverting this supergoal.


That is a bit blunt and very inaccurate as an analogy to giving goals to an AI. Besides, this is not an "assigned" supergoal. It is just the fitness function applied to a naturally occurring wild GA. There is no reason to read more into it than that.
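(To spell out what I mean by a fitness function applied to a wild GA, here is a bare-bones sketch; the genome encoding, population size, and mutation rate are arbitrary choices of mine. Note that nowhere does the code assign a supergoal to any individual; "propagate your DNA" is just what differential copying amounts to:)

# Bare-bones genetic algorithm: selection on a fitness function, nothing more.
# The "supergoal" is implicit in which genomes get copied, not assigned to anyone.

import random

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome: list) -> int:
    """Stand-in for reproductive success; here, just the count of 1-bits."""
    return sum(genome)

def mutate(genome: list) -> list:
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def next_generation(pop: list) -> list:
    # Fitness-proportional selection: fitter genomes leave more copies.
    weights = [fitness(g) + 1 for g in pop]   # +1 so zero-fitness genomes can still be drawn
    parents = random.choices(pop, weights=weights, k=POP_SIZE)
    return [mutate(p) for p in parents]

if __name__ == "__main__":
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = next_generation(population)
    print(max(fitness(g) for g in population))   # fitness drifts upward without anyone intending it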

In http://www.mattmahoney.net/singularity.html I discuss how a singularity will end the human race, but without judging whether this is good or bad. Any such judgment is based on emotion.

Really? I can think of arguments for why this would be a bad thing without even referencing the fact that I am human and do not wish to die. That wish is not equivalent to an emotion if you consider it, as you appear to have done above, as one of your deepest goals. Goals per se do not equate to emotions.

- samantha
