> It seems Shane Legg is not very accustomed to thinking about ethical
> issues, as he makes elementary mistakes such as this:
>
> "I suspect that the only stable long-term possibility is a super AGI
> that is primarily interested in its own self-preservation. All other
> possibilities are critically unstable due to evolutionary pressure."
>
> This sounds absurd to those of us who have often considered our own
> existence to hold only (or almost only) derived value, and who see
> clearly that such a feature does not necessarily constitute an
> evolutionary disadvantage.
Sorry, but you're going to have to explain this to me more explicitly.
> This mistake of treating a primary (non-derived) interest in
> self-preservation as a necessary feature is surprisingly common, and
> it is also among the most dangerous mistakes that can be made when
> designing AGIs.
I did not claim that a primary interest in self-preservation is a
necessary feature when designing an AGI. I claimed only that the
greater an AGI's emphasis on self-preservation, the more likely it is
to survive.
Shane