From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of Stephen Paul King

 

Are our fears of AI running amok and killing random people based on unfounded 
assumptions?

 

Perhaps, and I see your point. 

However, I am going to try to make the following case: 

If we take AI as some emergent networked meta-system, arising in a non-linear, 
fuzzy, non-demarcated manner from pre-existing (and increasingly networked) 
proto-AI smart systems and vast repositories, such as already exist… and then 
drill down through the code layers, through the logic (the DNA) embedded within 
and characterizing all those sub-systems, and factor in the many conscious 
and unconscious human assumptions and biases that run throughout these deeply 
layered systems… then I would argue that what could emerge (and, given the 
trajectory, will emerge fairly soon, I think) will have our human fingerprints 
sewn all the way through its source code, its repositories, and its injected 
values. At least initially.

I am concerned by the kinds of “values” that are being encoded in sub-system 
after sub-system, when the driving motivation behind these layered, complex, 
self-navigating, increasingly autonomous systems is to create untended killer 
robots, as well as social data-mining smart agents that penetrate social 
networks and identify targets. If this becomes the major part of the code base 
from which AI emerges, isn’t that a fairly good reason to be concerned about 
the software DNA of what could emerge? If the code base is driven by the desire 
to establish and maintain a system characterized by highly centralized, 
vertical social control and deep data mining, defended by an army increasingly 
composed of autonomous mobile warbots… isn’t that cause for concern?

But then -- admittedly -- who really knows how an emergent, machine-based 
(and probably highly networked) self-aware intelligence might evolve? My 
concern is with the initial conditions (the algorithms, etc.) that we are 
embedding in the source code from which an AI would emerge.



On Monday, August 25, 2014 3:20:24 PM UTC-4, cdemorsella wrote:

AI is being developed and funded primarily by agencies such as DARPA, the NSA, 
and the DOD (plus MIC contractors). After all, smart drones with independent, 
untended warfighting capabilities offer a significant military advantage to the 
side that possesses them. That is a guarantee that the wrong kind of 
super-intelligence will come out of the process... a super-intelligent machine 
devoted to the killing of "enemy" human beings (and, I suppose, opposing drones 
as well).

 

This does not bode well for a benign super-intelligence outcome, does it?

  _____  

From: meekerdb <meek...@verizon.net>
To: 
Sent: Monday, August 25, 2014 12:04 PM
Subject: Re: AI Dooms Us

 

Bostrom says, "If humanity had been sane and had our act together globally, the 
sensible course of action would be to postpone development of superintelligence 
until we figured out how to do so safely. And then maybe wait another 
generation or two just to make sure that we hadn't overlooked some flaw in our 
reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not 
have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to 
produce a pause.

Brent

On 8/25/2014 10:27 AM 


Artificial Intelligence May Doom The Human Race Within A Century, Oxford 
Professor Says


 

http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science

 


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
