I think an animal rights analogy can help us answer important questions about 
AGI design.  

1. Should a superhuman AI (SAI) decide what is best for us?   Or should we 
decide? 

In the case of humans and animals, humans are smarter, and humans decide.  We 
keep hamsters in a cage because setting them free would put them at risk from 
predators.  We remove the sex organs of dogs and cats and vaccinate them to 
protect them.  We cull thousands of cattle or chickens to stop the spread of 
diseases.  So by analogy, if an SAI is smarter than humans, it should decide
what is best for us.

2. What is human?

- If you make an exact copy of a human and kill the original, is it murder?
- What if you copy only the brain and put it in a different body?
- What if you put the copy in a robot body?
- What if you copy only the information in the brain and run it in a simulation?
- What if you put the memory in archival storage but don't otherwise use it?
- What if you only copy part of the memory?  How much do you need?
- What if you copy none of it, but reconstruct a plausible substitute based on 
what you know about the person?

The analogous animal rights question is: which animals are "close enough to human" to be
protected?  Is it moral to kill the fleas on a dog?  Is it moral for a dog to 
eat meat?


-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Anna Taylor <[EMAIL PROTECTED]>
To: singularity@v2.listbox.com
Sent: Friday, October 27, 2006 1:23:41 PM
Subject: Re: [singularity] Convincing non-techie skeptics that the Singularity 
isn't total bunk

Josh Cowan wrote:
>Issues associated with animal rights are better known than the coming
>Singularity.

Issues associated with animal rights are easy to understand, and they make
you feel good when you help. The general public can pick up a phone,
donate money, and feel rewarded for helping a cause. If there is
no cause, no warm feeling of helping others, chances are the general
public won't be interested. The Singularity is complicated, with issues
that the general public can't even begin to grasp. I think the
Singularity needs to be reframed in simpler terms if the scientific world wants
the general public to believe in, contribute to, or be part of the
Singularity.

Anna:)



On 10/26/06, Josh Cowan <[EMAIL PROTECTED]> wrote:
>
> Chris Norwood wrote:
>
> >  When talking about use, it is easy to explain by
> > giving examples. When talking about safety, I always
> > bring in disembodied AGI vs. embodied and the normal
> > "range of possible minds" debate. If they are still
> > wary, I talk about the possible inevitability of AGI.
> > I relate it to the making of the atom bomb during
> > WWII. Do we want someone who is aware of the danger,
> > motivated to make it as safe as possible, and following
> > standard practice guidelines? Or would we rather have
> > someone with bad intent and recklessness make the
> > attempt?
> >
> >
>
> Assuming memes in the general culture have some, if only very indirect,
> effect on the future, perhaps a backup approach to FAI, and one more
> relevant to the culture at large, would be encouraging animal rights.
> Issues associated with animal rights are better known than the coming
> Singularity.  Besides, if the AI is so completely in control and
> inevitable, and if my children or I shall be nothing more than
> insects (de Garis's description) or goldfish, I want the general ethos
> to value the dignity of pets. Next time you see that collection-can at
> the grocery store, look at that cute puppy and give generously.   :)
>
>



