On 28/05/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:

>> Before you consider whether killing the machine would be bad, you have to
>> consider whether the machine minds being killed, and how much it minds being
>> killed. You can't actually prove that death is bad as a mathematical
>> theorem; it is something that has to be specifically programmed, in the case
>> of living things by evolution.


> Is killing something brilliant, but indifferent to whether it lives or
> dies, any less heinous? If I kill someone who is depressed enough not to
> really care, or who even welcomes death, should I be charged with a lesser
> crime?


Talking about humans who want to die - the whole issue of euthanasia - adds
an extra emotional dimension. We might say that humans who are suicidal are
by definition sick and suffering, and would much rather be physically and
mentally well so that they *didn't* want to die. That is not the same as
killing something that genuinely never had the slightest notion of death as
harm, despite understanding what death is, and even despite having a
dispassionate preference for life over death. This doesn't mean it wouldn't
be a tragedy to destroy a brilliant machine, just as it would be a tragedy
to destroy a beautiful building or work of art. People might mourn the loss
of an unfeeling object as much as the loss of a person, and might even call
its destruction an immoral act, but that doesn't mean they consider the
object to have been "hurt" in the way a person is hurt or killed.

> Any being, of whatever origin, that does not care at all about its own
> survival seems unlikely to survive very long. I doubt we would build the
> seed of a true AGI without instilling in our "mind child" that it is
> important to continue to exist.


Sure, it would at least have a preference for survival. But this could be a
dispassionate preference, in the way that a car "prefers" to go when the
accelerator pedal is pressed but doesn't mind stopping when the brake is
applied. Would it serve any purpose to program an intelligent car to become
anxious at the thought of stopping, or at the thought of being scrapped?
Might an intelligent car somehow decide on its own that stopping is bad, or
that being scrapped is bad, without being explicitly designed that way?


--
Stathis Papaioannou

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8