On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote:

Which got me thinking.  It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc... (use whatever measure you
prefer) than a mouse.

So, would killing a super intelligent machine (assuming it was possible)
be worse than killing a human?

If a machine was more intelligent/complex/conscious/...etc... than
all of humanity combined, would killing it be worse than killing all of
humanity?


Before you consider whether killing the machine would be bad, you have to
consider whether the machine minds being killed, and how much. You can't
actually prove that death is bad as a mathematical theorem; the badness of
death is something that has to be specifically programmed in, which in the
case of living things was done by evolution.


--
Stathis Papaioannou

