On 9/10/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Shane Legg posted an interesting and controversial blog entry
http://www.vetta.org
entitled "Friendly AI is Bunk" which seems to me worthy of discussion.

It seems Shane Legg is not very accustomed to thinking about ethical
issues, as he makes such elementary mistakes as this:

"I suspect that the only stable long term possibility is a super AGI
that is primarily interested in its own self preservation. All other
possibilities are critically unstable due to evolutionary pressure."

This sounds absurd to those of us who have often considered our own
existence to hold only (or almost only) derived value, and who see
clearly that such a feature doesn't necessarily constitute an
evolutionary disadvantage.

Several people commented extensively on this and other mistakes in the
comments on the blog post. For example, here is Nick Hay's elaboration
on the mistake mentioned above:

"Self-preservation is a derived goal from almost any goal. More
generally, what matters is preservation of a system that implements
the AI's optimisation criterion, whether it counts as "self" or not.
What do you mean by "primarily interested in its own self
preservation", and what are the "evolutionary pressures" working
against systems which are interested in self preservation for derived
reasons? (Leaving aside the anthropomorphism in terms such as
"interested" and "self".)"


This mistake, considering a primary (non-derived) interest in
self-preservation to be a necessary feature, is surprisingly common,
and it is also among the more dangerous mistakes that can be made in
designing AGIs. The prevalence of such mistakes among AGI researchers
who are considered quite bright (as they may well be in many areas)
shows how important it is to have groups, such as the SIAI, that
concentrate their research on these ethical issues.
(The Future of Humanity Institute at Oxford, run by Nick Bostrom, also
seems to be an institution willing to fund such researchers, and its
leadership is knowledgeable enough in these matters that the
researchers they choose to fund might indeed be of high quality.)

--
Aleksei Riikonen - http://www.iki.fi/aleksei
