From: [email protected] 
[mailto:[email protected]] On Behalf Of Stephen Paul King
Sent: Saturday, August 30, 2014 8:35 PM
To: [email protected]
Subject: Re: AI Dooms Us

 

Hi Chris,

 

  Here is the thing. Does not the difficulty in creating a computational 
simulation of the brain in action give you pause?

 

Difficult, yes; impossible, I don't think so. A simulation of the human brain 
need not have the same scale as an actual human brain (after all, it is a 
model). For example, statistically well-bounded statements can be made about many 
social and behavioral outcomes based on relatively small sampling sets. The same 
applies to the brain: a model could have a small fraction of the real brain's 
complexity and scale and yet produce fairly accurate results.
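
To make that concrete, here is a minimal sketch (Python, with purely 
illustrative stand-in numbers of my own; in practice you would use the observed 
sample proportion) of how tightly even a small sample bounds a population-level 
estimate:

    import math

    p = 0.37    # assumed rate of some behavior (illustrative stand-in)
    n = 1000    # sample size, tiny relative to a population of millions

    # 95% normal-approximation confidence interval for a proportion
    se = math.sqrt(p * (1 - p) / n)
    low, high = p - 1.96 * se, p + 1.96 * se
    print(f"95% CI: [{low:.3f}, {high:.3f}], width ~{high - low:.3f}")
    # width ~0.06: accurate to within about +/- 3 percentage points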

Of course it is hard for us to imagine today… the human brain is, after all, 
vastly parallel, with an immense number of connections (hundreds of trillions). 
Even within a single synapse there is a world of exquisite molecular-scale 
complexity, and it seems multi-channeled to me. 

However, it is also true that the global networked meta-cloud (the dynamic, 
process-driven, interconnected cloud of clouds operating over the underlying 
global-scale physical and technological infrastructure) is scaling up to 
immense numbers of disparate computational elements, with thousands of trillions 
of network vertices. 

Perhaps I don't understand the thrust of this statement. Why should it give me 
pause? 

The brain is a magnificent bit of biology: an admirably compact, hyper-energy-
efficient computational engine of unequaled parallelism. Yes, we agree. 

On the other hand, the geometric growth of informatics capacity, in all 
dimensions (storage, speed, network size, traffic, cross-talk, numbers of 
cores, memory, capacity of the various pipes… you name it), is literally 
exploding in scale. And on the level of fundamental understanding, we are 
establishing a finer and finer grained picture of the brain and how it 
works, dynamically and in real time, from many angles and scales of 
observation (from the macro down to the electro-chemical molecular machinery 
of a single synapse). There are major initiatives under way to map out (at 
least at the macro scale) the human brain connectome. The micro-architecture of 
the brain (at the scale of a single cortical column, usually around six layers 
deep) is also becoming better understood, as are the various sensory 
processing, memory, temporal, decisional and other brain algorithms.
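
To put rough numbers on that growth, here is a back-of-envelope sketch 
(Python; the synapse count, the current element count and the doubling time 
are all assumptions for illustration, not measurements). Geometric growth 
closes even enormous absolute gaps in scale surprisingly quickly:

    import math

    brain_synapses = 1e14      # ~100 trillion connections (order of magnitude)
    current_elements = 1e12    # assumed networked compute elements today
    doubling_years = 2.0       # assumed capacity doubling time

    doublings = math.log2(brain_synapses / current_elements)
    years = doublings * doubling_years
    print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
    # ~6.6 doublings, i.e. on the order of 13 years under these assumptions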

A huge, exciting challenge, certainly… but to my way of thinking, not a cause 
for pause; rather, a call to delve deeper into it and try to put it all 
together.

 

 

Why are we assuming that the AI will have a "mind" (program) that can be parsed 
by humans?

 

Who is assuming that? I was arguing that the code we create today will be the 
DNA of what emerges, by virtue of being the template from which subsequent 
development grows. Are you saying that our human prejudices, 
assumptions, biases, needs, desires, objectives, habits, ways of thinking… that 
all this assortment of hidden variables is not influencing the kind of code 
that is written? That the hundreds of millions of lines of code written by 
programmers, mostly living and working in just a small number of technological 
centers on planet Earth, are somehow unaffected by our humanness, by our 
nature?

Personally, I would find that astounding; it seems rather obvious that our code 
is in fact very much influenced by our nature, our objectives and our needs.

I am not assuming anything by stating that whatever does emerge (assuming a 
self-aware intelligence does emerge) will have grown out of a primordial soup 
that we cooked up, and will have had its roots and beginnings in a code base of 
human creation: created for human ends and objectives, with human prejudices 
and modes of thinking literally hard-coded into the mind-boggling numbers of 
services, objects, systems, frameworks and what have you that exist and are now 
all connecting up into non-locational, dynamic cloud architectures.

 

   AFAIK, AGI (following Ben Goertzel's convention) will be completely 
incomprehensible to us. If we are trying to figure out its "values", what could 
we do better than to run the thing in a sandbox and let it interact with 
"test AI"? Can we "prove" that it is intelligent?

We don't know what it will turn out to become, but we can say with certainty 
that it will emerge from the code, the algorithms, the physical chip 
architectures, the network architectures, etc. that we have created. This is 
clearly an a priori assumption if we are speaking of human-spawned AI: it 
has to emerge from human creation (unless we are speaking of alien AI, of 
course).

We cannot even prove that we are intelligent, or that we even exist. We do not 
even know what we are. We think we think, but measurements of brain activity 
indicate the thinking has already happened before we have had the thought we 
thought we thunk!

Mere narrators of our minds we are. We do not even understand how we think… or 
where our values come from. For example, it is becoming clear that our gut 
flora have more influence over our moods and desires than was previously 
realized… how much of "our" thinking and deciding is really just the human host 
executing on the microbial desires and decisions of the five pounds or so of 
biological complexity in our guts? 

 

   I don't think so! Unless we could somehow "mindmeld" with it and the 
mindmeld results in a mutual "understanding", how could we have a proof? But 
melding minds together is a hard thing to do…

 

In thirty to forty years we may begin to converge, as humans become increasingly 
cyborg… and I am being very conservative. Apple is about to unleash smart 
watches, and Google has its glasses… already people are thinking of having their 
biometrics wired up to networked monitoring services… people born blind are 
being given rudimentary artificial vision. Nano-scale molecular machinery 
techniques and self-assembling systems approaches are pushing the scale down to 
levels where informatics may soon become incorporated throughout the body and 
the brain itself. 

My question for you is: how much longer do you think we will remain recognizably 
human? Twenty years, fifty years… a hundred, perhaps? I just don't see us 
stopping at some arbitrary wall, unless our technology itself is collapsed by 
our collapsing resource base… or unless (as has been argued) there is a point 
at which increasingly complex systems begin to fail. And this has merit as an 
argument too. But then, taking computer architecture as an example: instead of 
scaling up in complexity, it scales out… multi-core architectures, for instance, 
keep each single core's complexity within manageable bounds, as the sketch 
below illustrates.
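
Here is a minimal sketch of that scaling-out pattern (Python's standard 
multiprocessing module; the workload is a made-up placeholder): each worker 
runs the same simple, bounded logic, and capacity grows by adding workers 
rather than by making any single worker more complex:

    from multiprocessing import Pool

    def simple_unit(chunk):
        # Each core runs the same bounded, manageable logic on its own slice.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::8] for i in range(8)]  # split work across 8 cores
        with Pool(processes=8) as pool:
            total = sum(pool.map(simple_unit, chunks))
        print(total)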

I assume nothing (or at least make an attempt)… we very much do live in 
interesting times… on this I think we can agree.

On Fri, Aug 29, 2014 at 3:16 AM, 'Chris de Morsella' via Everything List 
<[email protected]> wrote:

 

 

From: [email protected] 
[mailto:[email protected]] On Behalf Of Stephen Paul King

 

Are our fears of AI running amuck and killing random persons based on unfounded 
assumptions?

 

Perhaps, and I see your point. 

However, I am going to try to make the following case: 

If we take AI as some emergent networked meta-system, arising in a non-linear, 
fuzzy, non-demarcated manner from pre-existing (increasingly networked) 
proto-AI smart systems (plus vast repositories), such as already exist… and then 
drill down through the code layers, through the logic (the DNA) embedded within 
and characterizing all those subsystems, and factor in all the many conscious 
and unconscious human assumptions and biases that exist throughout these deeply 
layered systems… then I would argue that whatever emerges (and, given the 
trajectory, it will emerge fairly soon, I think) will very much have our human 
fingerprints woven all the way through its source code, its repositories, its 
injected values. At least initially.

I am concerned by the kinds of "values" that are becoming encoded in subsystem 
after subsystem, when the driving motivation for these layered, complex, 
self-navigating, increasingly autonomous systems is to create untended killer 
robots, as well as social data-mining smart agents that penetrate social networks 
and identify targets. If this becomes the major part of the code base from 
which AI emerges, then isn't that a fairly good reason to be concerned about the 
software DNA of whatever emerges? If the code base is driven by the desire to 
establish and maintain highly centralized, vertical social control, with deep 
data mining defended by an army increasingly composed of autonomous mobile 
warbots… isn't that a cause for concern?

But then, admittedly, who really knows how an emergent, machine-based 
(probably highly networked) self-aware intelligence might evolve; my concern is 
with the initial conditions (algorithms, etc.) we are embedding in the source 
code from which an AI would emerge.



On Monday, August 25, 2014 3:20:24 PM UTC-4, cdemorsella wrote:

AI is being developed and funded primarily by agencies such as DARPA, the NSA 
and the DOD (plus MIC contractors). After all, smart drones with independent, 
untended war-fighting capabilities offer a significant military advantage to the 
side that possesses them. This all but guarantees that the wrong kind of 
super-intelligence will come out of the process: a super-intelligent machine 
devoted to the killing of "enemy" human beings (plus opposing drones, I suppose).

 

This does not bode well for a benign super-intelligence outcome, does it?

  _____  

From: meekerdb <[email protected]>
To: 
Sent: Monday, August 25, 2014 12:04 PM
Subject: Re: AI Dooms Us

 

Bostrom says, "If humanity had been sane and had our act together globally, the 
sensible course of action would be to postpone development of superintelligence 
until we figured out how to do so safely. And then maybe wait another 
generation or two just to make sure that we hadn't overlooked some flaw in our 
reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not 
have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to 
produce a pause.

Brent

On 8/25/2014 10:27 AM 


Artificial Intelligence May Doom The Human Race Within A Century, Oxford 
Professor 


 

http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science

 


 







 

-- 

Kindest Regards,

Stephen Paul King

Senior Researcher

Mobile: (864) 567-3099

[email protected]

 http://www.provensecure.us/


 





-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
