Richard Loosemore wrote:

> Your email could be taken as threatening to set up a website to promote
> violence against AI researchers who speculate on ideas that, in your
> judgment, could be considered "scary".
I'm on your side, too, Richard. Answer me this, if you dare: Do you believe it's possible to design an artificial intelligence that won't wipe out humanity?

> While I think Shane's comments were silly, they are, in my opinion, so
> far removed from any situation in which they could make a difference in
> the real world, that your threatening remarks are viscerally disgusting.

I understand you're having a strong reaction to the viewpoint I posted, but are you that far removed from the rest of humanity that this view could disgust you? I would expect something more from a cognitive scientist. What does your experience with minds tell you? Is this viewpoint so ridiculous that few, if any, would agree with it?

> I happen to be expert enough in the AI field to know that there are good
> reasons to believe that his comments cannot *ever*, in the entire
> history of the universe, have any effect on the behavior of a real AI.

I thought Shane posed a question about killing off humanity vs. killing off a superintelligent AI. What comments are you referring to?

> In fact, almost all of the "scary" things said about the impact of
> artificial intelligence are wild speculations that are in the same
> category: virtually or completely impossible in the real world.

I hope you're right.

> In that larger context, if anyone were to promote attacks on AI
> researchers because those people think they are saying "scary" things,
> they would be no better than medieval witchhunters.

This is a great way to put it. Now imagine yourself in front of the Inquisition, and answer the first question I posed.

Thanks for the response.

Keith
