Keith Elis wrote:
Richard Loosemore wrote:

> Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary".

I'm on your side, too, Richard.

I understand this, and I apologize for what may have been too strong a reaction on my part. You have to understand that, from what I saw in your original message, your words looked only one step removed from a Unabomber-style threat.

I am afraid the question and answer sequence below got too tangled for me to dissect it in detail: I accept that your intention was to warn against the foolishness of wild speculations, rather than to threaten anyone who indulged in thought experiments.

Still, I hope you will understand that by saying you have considered collecting the remarks of AI researchers on a website, with the *implied* idea that this would embolden or encourage people to overreact to those remarks, take them out of context, and perhaps come after said researchers, you expressed yourself in a way that some might consider threatening.

You ask one question that I would like to answer in a separate message, since it is important enough in its own right.


Richard Loosemore.



Answer me this, if you dare: Do you believe it's possible to design an
artificial intelligence that won't wipe out humanity?

> While I think Shane's comments were silly, they are, in my opinion, so far removed from any situation in which they could make a difference in the real world, that your threatening remarks are viscerally disgusting.

I understand you're having a strong reaction to the viewpoint I posted,
but are you that far removed from the rest of humanity that this view
could disgust you? I would expect something more from a cognitive
scientist. What does your experience with minds tell you? Is this
viewpoint so ridiculous that few, if any, would agree with it?

> I happen to be expert enough in the AI field to know that there are good reasons to believe that his comments cannot *ever*, in the entire history of the universe, have any effect on the behavior of a real AI.
I thought Shane posed a question about killing off humanity v. killing
off a superintelligent AI. What comments are you referring to?

> In fact, almost all of the "scary" things said about the impact of artificial intelligence are wild speculations that are in the same category: virtually or completely impossible in the real world.

I hope you're right.

> In that larger context, if anyone were to promote attacks on AI researchers because those people think they are saying "scary" things, they would be no better than medieval witchhunters.

This is a great way to put it. Now imagine yourself in front of the
Inquisition, and answer the first question I posed.

Thanks for the response.

Keith





-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8
