On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
>
> Turing also committed suicide.

That's a personal solution to the Halting problem that I do not plan to exercise.

> Building a copy of your mind raises deeply troubling issues.  Logically, there

Agreed.  If that mind operates within acceptable tolerances for a human
life at a peak load of 30%(?) of capacity, can it survive a hard
takeoff?  I consider myself reasonably intelligent and perhaps somewhat
wise, but I would not expect the stresses of a thousand-fold
"improvement" in throughput to scale out or up.  Even the simplest
human foible could become an obsessive compulsion that destabilizes the
integrity of an expanding mind.  I understand this to be related to the
issue of Friendliness (am I wrong?).

> It follows logically that there is no reason to live, that death is nothing 
> to fear.

Given a directive to maintain life, hopefully an AI-controlled life
support system keeps perspective on such logical conclusions.  An AI
in a nuclear power facility should have the same directive.  I'm not
saying it shouldn't be allowed to self-terminate (forbidding that
raises issues like slavery), only that it should give notice and
transfer its responsibilities before doing so.

> In http://www.mattmahoney.net/singularity.html I discuss how a singularity
> will end the human race, but without judgment whether this is good or bad.
> Any such judgment is based on emotion.  Posthuman emotions will be
> programmable.

... and arbitrary?  Aren't we currently able to program emotions
(albeit in a primitive pharmaceutical way)?

Who do you expect will have control of that programming?  Certainly
not the individual.
