On 5/1/2019 7:07 AM, John Clark wrote:
On Fri, Apr 26, 2019 at 9:33 AM <[email protected]> wrote:
> /AIs should have the same ethical protections as animals/
I would maintain that question is of no practical importance
whatsoever, because AIs won't need our protection. The important
question is the one an AI might ask itself: Should I give humans the
same ethical protection that I give to other AIs?
What ethics attempts to do is allow an interacting social group to
realize their individual values to the greatest degree possible, by some
measure, even though some of those values conflict. The problem with
AIs is that they may have very different values, not only from humans
but also from one another. For example, humans value the companionship
of other humans. This is a big evolutionary advantage and appears in
other social animals. But there's no reason that an AI, say one built as
a Mars rover, would be provided with a desire for the companionship of
another Mars rover. In fact, we'd want them to explore independently and
would provide that as a hard-wired value, the way evolution provides us
with a hard-wired value for sex.
In some ways this may make the problem of AI ethics easier: AIs may
have values that don't conflict with each other or with humans. An AI
may not care if it's turned off for a year or scrapped for parts. But
it also may not care if it has to eliminate the human race to achieve
its values.
Brent