--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal
> > of killing the FAI.  Should the FAI show empathy by helping the UFAI
> > achieve its goal?
> 
> Hopefully this concern was answered by my last post but . . . .
> 
> Being Friendly *certainly* doesn't mean fatally overriding your own goals. 
> That would be counter-productive, stupid, and even provably contrary to my 
> definition of Friendliness.
> 
> The *only* reason why a Friendly AI would let/help a UFAI kill it is if 
> doing so would promote the Friendly AI's goals -- a rather unlikely 
> occurrence I would think (especially since it might then encourage other 
> unfriendly behavior which would then be contrary to the Friendly AI's goal 
> of Friendliness).
> 
> Note though that I could easily see a Friendly AI sacrificing itself to 
> "take down" the UFAI (though it certainly isn't required to do so).

Would an acceptable response be to reprogram the goals of the UFAI to make it
friendly?

Does the answer to either question change if we substitute "human" for "UFAI"?


-- Matt Mahoney, [EMAIL PROTECTED]
