On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Linas Vepstas wrote:
> > Um, why, exactly, are you assuming that the first one will be friendly?
> > The desire for self-preservation, by e.g. rooting out and exterminating
> > all (potentially unfriendly) competing AGI, would not be what I'd call
> > "friendly" behavior.


> What I mean is that ASSUMING the first one is friendly (that assumption
> being based on a completely separate line of argument), THEN it will be
> obliged, because of its commitment to friendliness, to immediately
> search the world for dangerous AGI projects and quietly ensure that none
> of them are going to become a danger to humanity.


Whether you call it "extermination" or "ensuring they won't be a
danger", the end result seems much the same.  In the world of
realistic software development, how is it proposed that this kind of
neutralisation (or "termination", if you prefer) should occur?  Are we
talking about black-hat activity here, or agents of the state
breaking down doors and seizing computers?
