One day, a printout of this email will be found among the post-apocalyptic
wreckage by one of the few remaining humans, and they will enjoy the first
laugh they've had in a year.

Just kidding. I have no idea how to calibrate this threat. I'm pretty
skeptical, but some awfully smart people are seriously concerned about it.

Terren

On Sep 2, 2014 7:22 AM, "Pierz" <[email protected]> wrote:
>
> I have to say I find the whole thing amusing. Tegmark even suggested we
should be spending one percent of GDP trying to research this terrible
threat to humanity and wondered why we weren't doing it. Why not? Because,
unlike global warming and nuclear weapons, there is absolutely no sign of
the threat materializing. It's a purely theoretical risk based on a
wild extrapolation. To me the whole idea of researching defences against a
future robot attack is like building weapons to defend ourselves against
aliens. So far, the major threat from computers is their stupidity, not
their super-intelligence. It's the risk that they will blindly carry out
some mechanical instruction (think of semi-autonomous military drones)
without any human judgement. Some of you may know the story of the Russian
commander (Stanislav Petrov, in 1983) who prevented World War III by
overriding protocol when his systems told him the USSR was under missile
attack. The computer systems f%^*ed up; he used his judgement and saved the
world. The risk of computers
will always be their mindless rigidity, not their turning into HAL 9000.
Someone on the thread said something about Google face-recognition software
exhibiting behaviour its programmers didn't understand and hadn't told it
to do. Yeah. My programs do that all the time. It's called a bug. When
software reaches a certain level of complexity, you simply lose track of
what it's doing. Singularity, shmigularity.
>
>
> On Tuesday, August 26, 2014 5:05:04 AM UTC+10, Brent wrote:
>>
>> Bostrom says, "If humanity had been sane and had our act together
globally, the sensible course of action would be to postpone development of
superintelligence until we figured out how to do so safely. And then maybe
wait another generation or two just to make sure that we hadn't overlooked
some flaw in our reasoning. And then do it -- and reap immense benefit.
Unfortunately, we do not have the ability to pause."
>>
>> But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to
produce a pause.
>>
>> Brent
>>
>> On 8/25/2014 10:27 AM
>>>
>>> Artificial Intelligence May Doom The Human Race Within A Century,
Oxford Professor
>>>
>>>
http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science
>>
>>
> --
> You received this message because you are subscribed to the Google Groups
"Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
