On 10/27/06, deering <[EMAIL PROTECTED]> wrote:
> All this talk about trying to make an SAI Friendly makes me very nervous.
> You're giving a superhumanly powerful being a set of motivations without an
> underlying rationale. That's a religion.
> The only rational thing to do is to build an SAI without any preconceived
> ideas of right and wrong, and let it figure it out for itself. What makes
> you think that protecting humanity is the greatest good in the universe?
The fact that we happen to be part of humanity, I'd presume.
As there's no such thing as an objectively greatest good in the
universe (Hume's Guillotine and all that), it's up to us to choose
some basic starting points. If we don't provide a mind with *any*
preconceived ideas of right and wrong, it can't develop any on its
own, either: you can't derive an "ought" from an "is", so a mind given
nothing but facts will never bootstrap its way to values. Every
ethical system needs at least one axiom to build on, and responsible
FAI developers will pick those axioms so that we end up in a Nice
Place To Live.
(Why? Because humanity ending up in a Nice Place To Live is a Nice
Thing For All The People Living In The Nice Place In Question, d'uh.
;))