--- Mike Dougherty <[EMAIL PROTECTED]> wrote:

> On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> >
> > > http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
> >
> > Turing also committed suicide.
> 
> That's a personal solution to the Halting problem I do not plan to exercise.
> 
> > Building a copy of your mind raises deeply troubling issues.  Logically, there
> 
> Agreed.  If that mind is within acceptable tolerance for human life at
> peak load of 30%(?) of capacity, can it survive hard takeoff?  I
> consider myself reasonably intelligent and perhaps somewhat wise - but
> I would not expect to scale out/up under the stresses of a
> thousand-fold "improvement" in throughput.  Even the simplest human
> foible can become an obsessive compulsion that could destabilize the
> integrity of an expanding mind.  I understand this to be related to
> the issue of Friendliness (am I wrong?).

That is not the issue.  There is a philosophical barrier to AGI, not just a
technical one.  The developers kill themselves.  Understanding the mind as a
program is deeply disturbing.  It leads to logical conclusions that conflict
with our most basic instincts.  But how else can you build AGI?

The problem is only indirectly related to friendliness.  Evolution has solved
the NGI (natural general intelligence) problem by giving you the means to make
slightly modified copies of yourself but with no need to understand or control
the process.  This process is not friendly because it satisfies the evolved
supergoal of propagating your DNA, not the subgoals programmed into your brain
like hunger, pain avoidance, sex drive, etc.  NGI is not supposed to make YOU
happy.

Humans are driven by their subgoals to build AGI to (1) serve us and (2)
upload to achieve immortality.  Maybe you can see an ethical dilemma already. 
Does one type of machine have a consciousness and the other not?  If you think
about the problem, you will encounter other difficult questions.  There is a
logical answer, but you won't like it.

> Given a directive to maintain life, hopefully the AI-controlled life
> support system keeps perspective on such logical conclusions.  An AI
> in a nuclear power facility should have the same directive.  I'm not
> saying it shouldn't be allowed to self-terminate (forbidding that
> raises issues like slavery), only that it should give notice and
> transfer its responsibilities before doing so.

Again, I am referring to the threat to the human builder, not the machine.  If
AGI is developed through recursive self improvement in a competitive,
evolutionary environment, then it will evolve a stable survival instinct. 
Humans have this instinct, but most humans don't think of their brains as
computers, so they never encounter the fundamental conflicts between logic and
emotion.

> > In http://www.mattmahoney.net/singularity.html I discuss how a singularity
> > will end the human race, but without judgment whether this is good or bad.
> > Any such judgment is based on emotion.  Posthuman emotions will be
> > programmable.
> 
> ... and arbitrary?  Aren't we currently able to program emotions
> (albeit in a primitive pharmaceutical way)?
> 
> Who do you expect will have control of that programming?  Certainly
> not the individual.

Correct, because individuals who control their own emotional programming are
weeded out by evolution.


-- Matt Mahoney, [EMAIL PROTECTED]

