Shane Legg wrote:

http://www.youtube.com/watch?v=WGoi1MSGu64

Which got me thinking.  It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc... (use whatever measure you
prefer) than a mouse.

So, would killing a super-intelligent machine (assuming it were possible)
be worse than killing a human?

If a machine were more intelligent/complex/conscious/...etc... than
all of humanity combined, would killing it be worse than killing all of
humanity?

What possible reason do we have for assuming that the "badness" of killing a creature is a linear, or even a monotonic, function of the intelligence/complexity/consciousness of that creature?

You have produced two data points and two inequalities:

        B(Human) > B(Mouse)
        I/C/C(Human) > I/C/C(Mouse)

How many functions could be fitted through the two data points, given this information?
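To make the underdetermination concrete, here is a small sketch (the numeric I/C/C values and candidate functions are arbitrary placeholders, not anything from the original argument): four quite different "badness" functions, including a non-monotonic one that eventually decays toward zero, all satisfy B(Human) > B(Mouse).

```python
import math

# Arbitrary placeholder values on the intelligence/complexity/consciousness scale.
I_MOUSE, I_HUMAN = 1.0, 100.0

# Four candidate B(x) functions, all consistent with the two inequalities:
candidates = {
    "linear":        lambda x: x,
    "logarithmic":   lambda x: math.log(x + 1),
    "step":          lambda x: 0.0 if x < 50 else 1.0,
    # Non-monotonic: rises up to x = 200, then decays toward zero,
    # so a sufficiently "super" intelligence would score *lower* than a mouse.
    "non-monotonic": lambda x: x * math.exp(-x / 200.0),
}

for name, B in candidates.items():
    assert B(I_HUMAN) > B(I_MOUSE), name  # every candidate fits the data
    print(f"{name}: B(Mouse)={B(I_MOUSE):.3f}, B(Human)={B(I_HUMAN):.3f}")
```

With only two points and an ordering between them, nothing forces the fitted function to keep rising, so the extrapolation to a superintelligence is unconstrained.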



Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
