We have to have a real success to get people emotionally geared toward AI threat remediation. It's not SKYNET that threatens us; it's ourselves. Plus, the ruling elites, whose politics are now Progressive, are not motivated to deal with such a problem. We don't have international unrest protesting any technical issue. Last month, as predicted, it was the Israel-Gaza war; next month, (maybe) the Ukraine war (no protests here!). To wit, rulers or ruled, we do not have a mindset for problem resolution, worldwide. It was ever thus, but now it is hurting our species, or starting to.
Many thinkers have an inherent reluctance to involve politics in technical discussions, which I sympathize with. However, because of human nature, it's difficult to separate the two. This is not Planet Vulcan, and humans are not always rational actors. Radar, the jet engine, the rocket, satellites, nuclear power: all came from war, a very emotional process indeed!

-----Original Message-----
From: Pierz <pier...@gmail.com>
To: everything-list <everything-list@googlegroups.com>
Sent: Tue, Sep 2, 2014 7:22 am
Subject: Re: AI Dooms Us

I have to say I find the whole thing amusing. Tegmark even suggested we should be spending one percent of GDP trying to research this terrible threat to humanity and wondered why we weren't doing it. Why not? Because, unlike global warming and nuclear weapons, there is absolutely no sign of the threat materializing. It's an entirely theoretical risk based on a wild extrapolation. To me the whole idea of researching defences against a future robot attack is like building weapons to defend ourselves against aliens.

So far, the major threat from computers is their stupidity, not their super-intelligence. It's the risk that they will blindly carry out some mechanical instruction (think of semi-autonomous military drones) without any human judgement. Some of you may know the story of the Russian commander who prevented World War III by overriding protocol when his systems told him the USSR was under missile attack. The computer systems f%^*ed up; he used his judgement and saved the world. The risk of computers will always be their mindless rigidity, not their turning into HAL 9000.

Someone on the thread said something about Google face recognition software exhibiting behaviour its programmers didn't understand and hadn't told it to do. Yeah. My programs do that all the time. It's called a bug. When software reaches a certain level of complexity, you simply lose track of what it's doing. Singularity, shmigularity.
On Tuesday, August 26, 2014 5:05:04 AM UTC+10, Brent wrote:

Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages. I think ISIS is working hard to produce a pause. Repeating, the fault lies not in AI, but in ourselves, Horatio.

Brent

On 8/25/2014 10:27 AM:
Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor
http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.