Keith Elis wrote:
Shane Legg wrote:

--------------------
If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of
humanity?
--------------------

You're asking a rhetorical question but let's just get the correct
answer out there first: If it comes down to killing me or a machine, I
want that machine dead. If you're going to navel-gaze over some
hair-splitting ethical conundrum concerning who it makes more objective
sense to terminate, I'll kill it myself while you're pondering. And
since you're not sure whether killing machines is worse or better than
killing me and the people I care about, I'm probably going to have to do
something about you, too, since you're the guy trying to build the damn
things.

I have been of a mind for years to start a public website about 'Scary
AI Researchers' where people can look up the scariest things said by the
various AI researchers and learn more about them. I haven't done this
because I don't want to put anyone at risk. But someone will come up
with this website eventually. And then everything you ever wrote on the
topic *anywhere* will be taken completely out of context and it will
take an Act of Congress to set the record straight.

Keith

Keith,

Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary".

While I think Shane's comments were silly, they are, in my opinion, so far removed from any situation in which they could make a difference in the real world that your threatening remarks are viscerally disgusting.

I happen to be expert enough in the AI field to know that there are good reasons to believe that his comments cannot *ever*, in the entire history of the universe, have any effect on the behavior of a real AI. In fact, almost all of the "scary" things said about the impact of artificial intelligence are wild speculations that are in the same category: virtually or completely impossible in the real world.

In that larger context, anyone who promoted attacks on AI researchers because they judged those researchers to be saying "scary" things would be no better than a medieval witch-hunter.


Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email