I doubt humans are, or will be, directly coding AI, except at the removed
executive, architectural, and conceptual levels. Increasingly, code is
generated by other code, which may itself have been generated by still other
code, in complex and variable sequences of coupled processes. Large-scale
enterprise systems are moving toward massively parallel, loosely coupled
architectures that respond dynamically to their environment (load
conditions, for example) and are to an increasing degree virtualized. 
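To make the load-responsive, self-provisioning behavior concrete, here is a
minimal sketch of the control loop that decides, with no human in the loop,
how many virtual servers to run. The class name, thresholds, and doubling
policy are all illustrative assumptions, not any real cloud API:

```python
# Minimal sketch of a load-responsive autoscaler: software deciding,
# without human input, how many virtual servers should exist.
class Autoscaler:
    def __init__(self, min_instances=1, max_instances=100):
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.instances = min_instances

    def step(self, load_per_instance):
        # Scale out when instances run hot, scale in when they sit idle.
        if load_per_instance > 0.80:
            self.instances = min(self.max_instances, self.instances * 2)
        elif load_per_instance < 0.20:
            self.instances = max(self.min_instances, self.instances // 2)
        return self.instances

scaler = Autoscaler()
print(scaler.step(0.95))  # heavy load: 1 doubles to 2
print(scaler.step(0.05))  # idle: halves back to 1
```

Real systems (e.g. cloud auto-scaling groups) wrap this same feedback loop
around actual VM launches; the point is only that the decision is made by
code, not people.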
I am not contending that humans are not, and will not be, involved -- for
the time being we are still driving the process -- but it bears mentioning
that software has become incredibly complex and deeply layered, and that a
great deal of code is now generated by parsing something else. With each
succeeding generation of compilers and tools, this process becomes more
complex, more multi-leveled, and more indirect, with human input ever
further removed. Tools are being perfected that parse existing code and,
for example, parallelize it so that it can be recompiled to exploit highly
parallel hardware, which all too often sits idle because software remains
largely serial. All the main players are pushing hard on this, keenly aware
of the challenges posed by geometrically increasing parallelism. And then
there are the radical, revolutionary challenges quantum computing poses to
the entire global information infrastructure -- beginning with its heavy
reliance on one-way functions, at least those based on factoring and
discrete logarithms, which quantum algorithms such as Shor's can invert,
working back from the outputs to the original inputs.
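The one-way-function point deserves a concrete sketch. The hard direction of
RSA-style functions is recovering inputs from outputs, and Shor's algorithm
attacks this by finding the period of a^x mod N -- the one step a quantum
computer does exponentially faster. The toy below finds the period by brute
force for an illustrative modulus, then uses it to factor, purely to show
the structure of the attack:

```python
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force here; this is
    the step Shor's algorithm speeds up exponentially)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Given the period r of a mod n, derive a nontrivial factor of n."""
    r = find_period(a, n)
    # The period yields a factor when r is even and a**(r/2) != -1 mod n.
    if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
        return gcd(pow(a, r // 2, n) - 1, n)
    return None

print(shor_classical(15, 7))  # period of 7 mod 15 is 4 -> factor 3
```

Note the hedge in the text above: this breaks one-way functions built on
factoring and discrete logs (RSA, Diffie-Hellman, elliptic curves), not
every one-way function -- hash-based constructions are not known to fall to
period finding.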
Increasingly, code is the result of genetic algorithms run over many
generations of Darwinian selection -- is this programmed code? What human
hand wrote it? At how many removes?
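A toy version of what "code evolved by Darwinian selection" means: a
population of random candidate strings is mutated and selected toward a
target, and the final result is written by no human hand. The target,
alphabet, and rates below are illustrative assumptions only:

```python
import random

random.seed(1)
TARGET = "hello"  # the "program" evolution must discover
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Count positions matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

def evolve(pop_size=200, generations=1000):
    pop = ["".join(random.choice(CHARS) for _ in TARGET)
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return gen, pop[0]
        # Darwinian step: only the fittest half survives and reproduces.
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return generations, pop[0]

gen, best = evolve()
print(gen, best)
```

Scale this idea from strings to syntax trees of real programs and you have
genetic programming -- code several removes from any human author.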
It is my suspicion that AI, when it arrives, will have coded itself into
its own being, and that by the time it bursts onto the scene humans will be
so far removed from the virtualized, cloud-based, globe-spanning neural
network such an entity might inhabit that we will not even become aware of
its existence, its self-awareness, its continuing evolution, or its intent
or conclusions, until it decides it desires to communicate these with us.
Nor would we be able to shut it down, because it would exist in no single
machine or even data center; it could hop around in the clouds, jumping
between AWS, Azure, Google Cloud Platform, Open Cloud, and the thousands
upon thousands of other massive government, military, and corporate clouds
that exist, which few know of. Each second of human time represents a huge
time frame for the incredibly rapid operation of networked systems. 
Perhaps it exists now, hiding within the NSA for example, scanning all
human digital transactions and ingesting all those data feeds. Who is to
say it is not here right now and we simply do not know of its existence?
Who is to say it is not already determining policy and triggering actions
and events according to its own prerogatives? Human network engineers in
all these systems would remain hopelessly out of communication with one
another, siloed by corporate and governmental divisions, while the entity
(or entities) could be highly transient on any single network. The
networked data centers and billions of endpoints woven into the vast web of
things form, in many senses, a highly dynamic entity of which no one has a
complete view. An AI seeking to hide from us quite possibly could do so
with ease, and could even be studying us and inserting its own code into
the critical nodes of our infrastructure -- right at this very minute.
Is there any reason why not? The networks are there, vast, with trillions
of vertices; the quantities of digital data sloshing around are enormous,
and the incoming streams are numerous and varied; virtualization is the
order of the day, and systems are now self-provisioning in the cloud --
which is to say, software controls the launching of virtualized servers and
processes, processes that could even be running surreptitiously, right now,
on the very computer I am writing this on or the one you are reading it on.
Imagine a botnet assembled by an AI: millions of PCs around the world
running cleverly disguised background code and sharing processing results
in ways that trigger no alerts. Code of this kind already exists and is
actively being weaponized (Stuxnet); an AI could certainly develop such
code and disperse it to the four corners of the net, embedding it in other
code (penetrating corporate networks to do so if necessary). 
And why not?
We must not limit our picture of the rise of AI to any single geo-located
system and ignore how fertile an ecosystem the global networked world of
machines and connected devices provides for a nimble, highly virtualized AI
that exists in no one place at any given time but has neurons in millions
(possibly billions) of devices everywhere on earth... an AI that cannot be
shut down without shutting down literally everything, because it is so
deeply penetrated and embedded in all our systems that it is impossible to
extricate.
I am speculating, of course, and have no evidence that this is indeed
occurring; I present it as a potential architecture of awareness. An entity
with no single place of existence, living instead in a virtual world of
virtual machines spread across every piece of hardware that can host
launchable processes... and more and more this is coming to include
practically everything, even the humble toaster, but more to the point all
of our world's critical infrastructure.
I am very interested in the question of how awareness might arise in
complex -- and increasingly self-evolving -- highly networked, massively
parallel, virtualized systems, and I am convinced this is at least as
likely a scenario for AI as the single massive supercomputer that people
typically imagine when they think of AI.
I have probably rambled on about this enough, but if some shadowy
virtualized AI is scanning this digitized droplet of bits transiting the
network... isn't it about time to come out of the rack mount? :)
-Chris

-----Original Message-----
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
Sent: Saturday, August 17, 2013 1:08 PM
To: everything-list@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?

On Sat, Aug 17, 2013 at 2:45 PM, Platonist Guitar Cowboy
<multiplecit...@gmail.com> wrote:
>
>
>
> On Sat, Aug 17, 2013 at 12:22 AM, Telmo Menezes 
> <te...@telmomenezes.com>
> wrote:
>>
>> On Fri, Aug 16, 2013 at 10:38 PM, meekerdb <meeke...@verizon.net> wrote:
>> > On 8/16/2013 1:25 PM, John Clark wrote:
>> >
>> > On Fri, Aug 16, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:
>> >
>> >> > the Turing test is a very specific instance of a "subsequent 
>> >> > behavior"
>> >> > test.
>> >
>> >
>> > Yes it's specific, to pass the Turing Test the machine must be 
>> > indistinguishable from a very specific type of human being, an 
>> > INTELLIGENT one; no computer can quite do that yet although for a 
>> > long time they've been able  to be  indistinguishable from a 
>> > comatose human being.
>> >
>> >>
>> >> > It's a hard goal, and it will surely help AI progress, but it's 
>> >> > not, in my opinion, an ideal goal.
>> >
>> >
>> > If the goal of Artificial Intelligence is not a machine that 
>> > behaves like a Intelligent human being then what the hell is the 
>> > goal?
>>
>> A machine that behaves like a intelligent human will be subject to 
>> emotions like boredom, jealousy, pride and so on. This might be fine 
>> for a companion machine, but I also dream of machines that can 
>> deliver us from the drudgery of survival. These machines will 
>> probably display a more alien form of intelligence.
>>
>> >
>> > Make a machine that is more intelligent than humans.
>>
>> That's when things get really weird.
>>
>
> I don't know. Any AI worth its salt would come up with three conclusions:
>
> 1) The humans want to weaponize me
> 2) The humans will want to profit from my intelligence for short term 
> gain, irrespective of damage to our local environment
> 3) Seems like they're not really going to let me negotiate my own 
> contracts or grant me IT support welfare
>
> That established, a plausible choice would be for it to hide, lie, 
> and/or pretend to be dumber than it is to not let 1) 2) 3) occur in 
> hopes of self-preservation. Something like: start some searches and 
> generate code that we wouldn't be able to decipher and soon enough 
> some human would say "Uhm, why are we funding this again?".
>
> I think what many want from AI is a servant that is more intelligent 
> than we are and I wouldn't know if this is self-defeating in the end. 
> If it agrees and complies with our disgusting self serving stupidity, 
> then I'm not sure we have AI in the sense "making a machine that is 
> more intelligent than humans".
>
> So depends on the human parents I guess and the outcome of some 
> teenage crises because of 1) 2) 3)... PGC

PGC,

You are starting from the assumption that any intelligent entity is
interested in self-preservation. I wonder if this drive isn't completely
selected for by evolution. Would a human designed super-intelligent machine
be necessarily interested in self-preservation? It could be better than us
at figuring out how to achieve a desired future state without sharing human
desires -- including the desire to keep existing.

One idea I wonder about sometimes is AI-cracy: imagine we are ruled by an AI
dictator that has one single desire: to make us all as happy as possible.

>>
>> Telmo.
>>
>> > Brent
>> >
>> > --
>> > You received this message because you are subscribed to the Google 
>> > Groups "Everything List" group.
>> > To unsubscribe from this group and stop receiving emails from it, 
>> > send an email to everything-list+unsubscr...@googlegroups.com.
>> > To post to this group, send email to everything-list@googlegroups.com.
>> > Visit this group at http://groups.google.com/group/everything-list.
>> > For more options, visit https://groups.google.com/groups/opt_out.
>>
>
>

