Dear Matt,

Thank you for your reply. I see your points; it might go the way you
say. 

This would mean that the AI does NOT evolve its value system into stage
6, social compassion. Enslavement or destruction means value system 3 or
4 at most, whereas many people, especially in wealthy nations, are at
stages 5-7. Meaning that in terms of values, the AI would not have
surpassed us at all, only in intelligence.

So I wonder, what do you propose we do to avoid our downfall? 

Best regards,
Arthur

-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 13, 2007 0:17
To: [email protected]
Subject: Re: [singularity] war against the machines & spiral dynamics
- anyone?


--- postbus <[EMAIL PROTECTED]> wrote:

> Dear fellow minds, 
>  
> After editing the book "Nanotechnology, towards a molecular
> construction kit" (1998), I have become a believer in strong AI. As a
> result, I still worry about an upcoming "war against the machines"
> leading to our destruction or enslavement. Robots will simply evolve
> beyond us. Until a few days ago, I believed this war and outcome to be
> inevitable.

It doesn't work that way.  There will be no war because you won't know
you are enslaved.  The AI could just reprogram your brain so you want to
do its bidding.

> However, there may be a way out. What thoughts do any of you have
> concerning the following line of reasoning:
>
> First, human values have evolved along the model of Clare Graves.
> Maybe you have heard of his work in terms of "Spiral Dynamics". Please
> look into it if you haven't. To me, it has been an eye opener.
> Second, a few days ago it dawned on me that intelligent robots might
> follow the same spiral evolution of values: 
>  
> 1. The most intelligent robots today are struggling for their survival
> in the lab (survival). Next, they would develop a sense of: 
> 2. a tribe
> 3. glory & kingdom (here comes the war...)
> 4. order (the religious robots in Battlestar Galactica, which triggered
> this idea in the first place)
> 5. discovery and entrepreneurship (materialism)
> 6. social compassion ("robot hippies")
> 7. systemic thinking
> 8. holism. 
>  
> In other words, if we guide robots/AI quickly and safely into the
> value system of order (4) and help them evolve further, they might not
> kill us but become our companions in the universe. N.B. This is quite
> different from installing Asimov's laws: the robots need to be able to
> develop their own set of values.
>  
> Anyone? 

If AI follows the same evolutionary path as humans have followed, it
does not follow that the AI will be compassionate toward humans any more
than humans are compassionate toward lower animals.  Evolution is a
competitive algorithm.  Animals eat animals of other species.  AI would
not be compassionate toward humans unless it increased their fitness.
But when AI becomes vastly more intelligent, we will be of no use to
them.


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;


