Re: The Real AI Threat?

2020-12-11 Thread Allen McKinley Kitchen (gmail)
Unobserved, a small capacitor on an insignificant board near the top of a 
highly secure electronics cabinet in the Group Six radio communications system 
emits a puff of smoke...

(This is a paraphrase from memory, as I couldn’t locate Burdick's book 
quickly.)

..Allen

> On Dec 11, 2020, at 15:45, b...@theworld.com wrote:
> 
> 
> Slow Friday...
> 
> One pressing problem of "AI", and it might be a useful analogy, is that
> we're (everyone w/ the money) deploying it, for some value of "it",
> into weapons systems.
> 
> The problem is that decisions made by, for example, an attack drone
> might have to be made in milliseconds, incorporating many real-time
> facts, much faster than a human can. Particularly if one considers
> such weapons "dog fighting," where both sides have them.
> 
> Some decisions we're probably comfortable enough with: can I get a
> clear shot at a moving target, etc. A human presumably already
> identified the target, so that's just execution.
> 
> But some amount to policy.
> 
> Such as an armed response where there was no armed conflict a few
> milliseconds ago, because the software decided a slight variation in
> the flight pattern of that hypersonic cruise missile -- Russia claims
> to be deploying these, some nuclear-powered so they can stay aloft
> essentially forever -- is threatening and not just another go-around.
> 
> Etc.
> 
> The point being that it's not only the decision/policy matrix; it's also
> that when we put that into real-time systems, the element of time
> becomes a factor.
> 
> One can, for example, imagine similar issues regarding identifying and
> responding to cyberattacks in real time. An attempt to bring down the
> country's cyberdefenses? Or just another cat photo? You have 10ms to
> decide whether to cut off all traffic from the source (or whatever,
> counter-attack) before your lights (might) go out. And what are the
> implications?
> 
> I'm sure there are better examples but I hope you get the general
> idea.
> 
> -- 
> -Barry Shein
> 
> Software Tool & Die| b...@theworld.com | http://www.TheWorld.com
> Purveyors to the Trade | Voice: +1 617-STD-WRLD   | 800-THE-WRLD
> The World: Since 1989  | A Public Information Utility | *oo*


Re: The Real AI Threat?

2020-12-11 Thread bzs


Slow Friday...

One pressing problem of "AI", and it might be a useful analogy, is that
we're (everyone w/ the money) deploying it, for some value of "it",
into weapons systems.

The problem is that decisions made by, for example, an attack drone
might have to be made in milliseconds, incorporating many real-time
facts, much faster than a human can. Particularly if one considers
such weapons "dog fighting," where both sides have them.

Some decisions we're probably comfortable enough with: can I get a
clear shot at a moving target, etc. A human presumably already
identified the target, so that's just execution.

But some amount to policy.

Such as an armed response where there was no armed conflict a few
milliseconds ago, because the software decided a slight variation in
the flight pattern of that hypersonic cruise missile -- Russia claims
to be deploying these, some nuclear-powered so they can stay aloft
essentially forever -- is threatening and not just another go-around.

Etc.

The point being that it's not only the decision/policy matrix; it's also
that when we put that into real-time systems, the element of time
becomes a factor.

One can, for example, imagine similar issues regarding identifying and
responding to cyberattacks in real time. An attempt to bring down the
country's cyberdefenses? Or just another cat photo? You have 10ms to
decide whether to cut off all traffic from the source (or whatever,
counter-attack) before your lights (might) go out. And what are the
implications?

I'm sure there are better examples but I hope you get the general
idea.
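
A rough sketch of that 10ms bind, in Python (every name here is
hypothetical; a sketch, not anyone's actual system). The point is that
whatever the code falls back to when the budget is blown is itself the
policy decision:

    import time

    DEADLINE_S = 0.010  # the 10ms budget from the example above

    def classify(traffic) -> float:
        """Stand-in for a real detector; returns P(hostile)."""
        return 0.5

    def decide(traffic) -> str:
        start = time.monotonic()
        score = classify(traffic)
        if time.monotonic() - start > DEADLINE_S:
            # Budget blown. Whatever we return here *is* the policy:
            # fail open and the lights might go out; fail closed and
            # we've cut off the cat photos. Neither default is neutral.
            return "fail-open"
        return "block" if score > 0.9 else "allow"

    print(decide({"src": "192.0.2.1"}))  # -> 'allow' with the stub score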

-- 
-Barry Shein

Software Tool & Die| b...@theworld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: +1 617-STD-WRLD   | 800-THE-WRLD
The World: Since 1989  | A Public Information Utility | *oo*


Re: The Real AI Threat?

2020-12-11 Thread bzs


"Don't anthropomorphize computers, it just pisses them off." -- some wag

-- 
-Barry Shein

Software Tool & Die| b...@theworld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: +1 617-STD-WRLD   | 800-THE-WRLD
The World: Since 1989  | A Public Information Utility | *oo*


Re: The Real AI Threat?

2020-12-11 Thread Lady Benjamin PD Cannon
Exactly, it’s going to be bad code on the power grid resetting generator sync 
devices - not “AI” that eats us.
—L.B.

Lady Benjamin PD Cannon, ASCE
6x7 Networks & 6x7 Telecom, LLC 
CEO 
b...@6by7.net 
"The only fully end-to-end encrypted global telecommunications company in the 
world.”
FCC License KJ6FJJ



> On Dec 11, 2020, at 9:26 AM, Miles Fidelman  
> wrote:
> 
> Valdis,
> 
> Thank you for a prime example of the REAL threat of software eating the 
> world.  (Well that, and "rm -f *" typed by the wrong users at the wrong place 
> in an increasingly global file hierarchy).  Meanwhile, folks are busy 
> watching AI scenarios on tv.
> 
> Miles
> 
> Valdis Klētnieks wrote:
>> On Thu, 10 Dec 2020 18:56:04 -0500, Max Harmony via NANOG said:
>>> Programs have never done what you *want* them to do, only what you
>>> *tell* them to do.
>> Amen to that - there was the time many moons ago when we launched a copy of a
>> vendor's network monitoring system, and told it to auto-discover the network.
>> It found all the on-campus subnets and most of the machines, and didn't seem to
>> be doing anything else, so we all headed home.
>> 
>> Come in the next morning, and discover that our 56k leased line to Nysernet
>> (yes, *that* many moons ago) was clogged with the monitoring system trying to
>> do SNMP probes against a significant fraction of the Internet in the 
>> Northeast.
>> 
>> Things apparently went particularly pear-shaped when it discovered the 
>> MIT/Boston
>> routing swamp...
>> 
>> And of course, we *told* it "discover the network", when we *meant* "discover
>> the network in this one /16.".  Fortunately, it didn't support "discover the
>> network and perform security scans on machines" - but I'm sure there's at 
>> least
>> one security-scanning package out there that makes this same whoopsie all too
>> easy to do, 3+ decades later...
>> 
> 
> 
> -- 
> In theory, there is no difference between theory and practice.
> In practice, there is.   Yogi Berra
> 
> Theory is when you know everything but nothing works. 
> Practice is when everything works but no one knows why. 
> In our lab, theory and practice are combined: 
> nothing works and no one knows why.  ... unknown



Re: The Real AI Threat?

2020-12-11 Thread Lady Benjamin PD Cannon
You know what happens in early Slackware or RHEL if you type “killall” with no 
args, as root? 

I do :)

It does: exactly what you tell it to do...
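
A minimal sketch of the safeguard those early builds lacked, in Python
against Linux /proc (illustrative only, not the actual killall source):
no name is an error rather than "signal everything," and dry run is the
default:

    import os
    import signal

    def killall(name, sig=signal.SIGTERM, dry_run=True):
        """Signal every process whose comm matches name."""
        if not name:
            raise ValueError("refusing to signal every process: no name given")
        for pid in (p for p in os.listdir("/proc") if p.isdigit()):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    comm = f.read().strip()
            except OSError:
                continue  # process exited while we were scanning
            if comm == name:
                if dry_run:
                    print(f"would send {sig.name} to pid {pid}")
                else:
                    os.kill(int(pid), sig)

    killall("sleep")  # dry run by default; pass dry_run=False to mean it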
—L.B.

Lady Benjamin PD Cannon, ASCE
6x7 Networks & 6x7 Telecom, LLC 
CEO 
b...@6by7.net 
"The only fully end-to-end encrypted global telecommunications company in the 
world.”
FCC License KJ6FJJ



> On Dec 10, 2020, at 4:53 PM, Valdis Klētnieks  wrote:
> 
> On Thu, 10 Dec 2020 18:56:04 -0500, Max Harmony via NANOG said:
>> Programs have never done what you *want* them to do, only what you
>> *tell* them to do.
> 
> Amen to that - there was the time many moons ago when we launched a copy of a
> vendor's network monitoring system, and told it to auto-discover the network.
> It found all the on-campus subnets and most of the machines, and didn't seem to
> be doing anything else, so we all headed home.
> 
> Come in the next morning, and discover that our 56k leased line to Nysernet
> (yes, *that* many moons ago) was clogged with the monitoring system trying to
> do SNMP probes against a significant fraction of the Internet in the 
> Northeast.
> 
> Things apparently went particularly pear-shaped when it discovered the 
> MIT/Boston
> routing swamp...
> 
> And of course, we *told* it "discover the network", when we *meant* "discover
> the network in this one /16.".  Fortunately, it didn't support "discover the
> network and perform security scans on machines" - but I'm sure there's at 
> least
> one security-scanning package out there that makes this same whoopsie all too
> easy to do, 3+ decades later...
> 



Re: The Real AI Threat?

2020-12-11 Thread Brandon Svec
On Fri, Dec 11, 2020 at 9:25 AM Miles Fidelman  wrote:

>
>
> (The point being:  We don't have to wait for "real" AI to see many of the
> dangers that folks fictionalize about - we are already seeing those dangers
> from mundane software - and it's only going to get worse while people are
> looking elsewhere.)
>
> Miles Fidelman
>
>
Well put. No matter what you call it, algorithms are already dangerous and
can be unpredictable. People have a tendency to not want to make hard
choices and will often defer to computations or calculations.

Recommended reading on the topic:
https://smile.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815


Re: The Real AI Threat?

2020-12-11 Thread Miles Fidelman

Valdis,

Thank you for a prime example of the REAL threat of software eating the 
world.  (Well that, and "rm -f *" typed by the wrong users at the wrong 
place in an increasingly global file hierarchy).  Meanwhile, folks are 
busy watching AI scenarios on tv.


Miles

Valdis Klētnieks wrote:

On Thu, 10 Dec 2020 18:56:04 -0500, Max Harmony via NANOG said:

Programs have never done what you *want* them to do, only what you
*tell* them to do.

Amen to that - there was the time many moons ago when we launched a copy of a
vendor's network monitoring system, and told it to auto-discover the network.
It found all the on-campus subnets and most of the machines, and didn't seem to
be doing anything else, so we all headed home.

Come in the next morning, and discover that our 56k leased line to Nysernet
(yes, *that* many moons ago) was clogged with the monitoring system trying to
do SNMP probes against a significant fraction of the Internet in the Northeast.

Things apparently went particularly pear-shaped when it discovered the 
MIT/Boston
routing swamp...

And of course, we *told* it "discover the network", when we *meant* "discover
the network in this one /16.".  Fortunately, it didn't support "discover the
network and perform security scans on machines" - but I'm sure there's at least
one security-scanning package out there that makes this same whoopsie all too
easy to do, 3+ decades later...




--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



Re: The Real AI Threat?

2020-12-11 Thread Miles Fidelman
Um... there are long-standing techniques for programs to tune themselves 
& their algorithms - with languages that are particularly good for 
treating code as data (e.g., LISP - the granddaddy of AI languages - 
for whatever definition of AI you want to use).
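
A minimal sketch of the code-as-data idea, in Python rather than LISP
(purely illustrative): the decision rules live in an ordinary data
structure that the program can rewrite at runtime.

    import operator

    # The rules are ordinary data: (metric, comparison, threshold, action).
    RULES = [
        ("cpu",  operator.gt, 0.90, "scale_up"),
        ("errs", operator.gt, 100,  "page_human"),
    ]

    def evaluate(metrics):
        return [act for m, cmp, thr, act in RULES if cmp(metrics[m], thr)]

    def loosen(metric, factor=1.1):
        """The self-tuning part: the program edits its own rule table."""
        for i, (m, cmp, thr, act) in enumerate(RULES):
            if m == metric:
                RULES[i] = (m, cmp, thr * factor, act)

    print(evaluate({"cpu": 0.95, "errs": 3}))  # ['scale_up']
    loosen("cpu")                              # threshold drifts to 0.99
    print(evaluate({"cpu": 0.95, "errs": 3}))  # [] -- behavior changed at runtime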


And... a common complaint with current machine learning algorithms, is 
that they often "learn" to make decisions that can't be understood, 
after-the-fact.  We already have examples of "racist bots," and there 
are lots of legal issues regarding things like liability for injuries 
caused by self-driving cars.


And then there are "spelling correctors" and digital "assistants" - when 
has Siri EVER done only what you want "her" to do?


The REAL problem is programs that blindly go off and do what you think 
you told them to do, and get it woefully wrong.  The more leeway we 
allow our programs to adapt, or learn, or self-tune, or 
whatever-you-want-to-call-it - the more trouble we're in.


(The point being:  We don't have to wait for "real" AI to see many of 
the dangers that folks fictionalize about - we are already seeing those 
dangers from mundane software - and it's only going to get worse while 
people are looking elsewhere.)


Miles Fidelman

J. Hellenthal wrote:
Let me know when a program will rewrite itself and add its own 
features ... then we may have a problem... otherwise they only do what 
you want them to do.


--
 J. Hellenthal

The fact that there's a highway to Hell but only a stairway to Heaven 
says a lot about anticipated traffic volume.



On Dec 10, 2020, at 12:41, Mel Beckman  wrote:


Jeez... some guys seem to take a joke literally - while ignoring a 
real and present danger - which was the point.


Miles,

With all due respect, you didn’t present this as a joke. You 
presented "AI self-healing systems gone wild” as a genuine risk. 
Which it isn’t. In fact, AI fear mongering is a seriously 
debilitating factor in technology policy, where policymakers and 
pundits — who also don’t get “the joke” — lobby for silly laws and 
make ridiculous predictions, such as Elon Musk's claim that, by 2025, 
“AI will be where AI conscious and vastly smarter than humans.”


That’s the kind of ignorance that will waste billions of dollars. No 
joke.


 -mel



On Dec 10, 2020, at 8:47 AM, Miles Fidelman <mfidel...@meetinghouse.net> wrote:


Ahh invasive spambots, running on OpenStack ... "the telephone 
bell is tolling... "


Miles

adamv0025@netconsultings.com wrote:
> Automated resource discovery + automated resource allocation = recipe for disaster

That is literally how OpenStack works.
For now, don’t worry about AI taking away your freedom on its own, 
rather worry about how people using it might…

adam
From: NANOG  On Behalf Of Miles Fidelman
Sent: Thursday, December 10, 2020 2:44 PM
To: 'NANOG' 
Subject: Re: The Real AI Threat?

adamv0...@netconsultings.com wrote:


> Put them together, and the nightmare scenario is:
> - machine learning algorithm detects need for more resources
All good so far

> - machine learning algorithm makes use of vulnerability analysis library
> to find other systems with resources to spare, and starts attaching
> those resources

Right, so a company would build, train, and fine-tune an AI,
or would have bought such a product and implemented it as part
of its NMS/DDoS mitigation suite, to do the above?
What is the probability of anyone thinking that to be a good idea?
To me that sounds like an AI-based virus rather than a tool
one would want to develop or buy from a third party and then
integrate into day-to-day operations.
You can’t take, for instance, AlphaZero or GPT-3 and make it do
the above. You’d have to train it to do so over millions of
examples and trials.
Oh, and also, these won’t “wake up” one day and “think” to
themselves, “I’m fed up with Atari games; I’m going to learn
myself some chess and then do some reading on the wiki about the
chess rules.”


Jeez... some guys seem to take a joke literally - while ignoring a 
real and present danger - which was the point.


Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation 
suite might well have failure modes that just keep eating up 
resources until systems start crashing all over the place.  Heck, 
spinning off processes until all available resources have been 
exhausted has been a failure mode of systems for years.  Automated 
resource discovery + automated resource allocation = recipe for 
disaster.  (No need for AIs eating the world.)


Miles





--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra
Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined: 
nothing works and no one knows why.  ... unknown

Re: The Real AI Threat?

2020-12-10 Thread Valdis Klētnieks
On Thu, 10 Dec 2020 18:56:04 -0500, Max Harmony via NANOG said:
> Programs have never done what you *want* them to do, only what you
> *tell* them to do.

Amen to that - there was the time many moons ago when we launched a copy of a
vendor's network monitoring system, and told it to auto-discover the network.
It found all the on-campus subnets and most of the machines, and didn't seem to
be doing anything else, so we all headed home.

Come in the next morning, and discover that our 56k leased line to Nysernet
(yes, *that* many moons ago) was clogged with the monitoring system trying to
do SNMP probes against a significant fraction of the Internet in the Northeast.

Things apparently went particularly pear-shaped when it discovered the 
MIT/Boston
routing swamp...

And of course, we *told* it "discover the network", when we *meant* "discover
the network in this one /16.".  Fortunately, it didn't support "discover the
network and perform security scans on machines" - but I'm sure there's at least
one security-scanning package out there that makes this same whoopsie all too
easy to do, 3+ decades later...
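
A minimal sketch of the missing guardrail, in Python (the probe function
is a stub; names are hypothetical): make the scope an explicit, mandatory
argument instead of "the network".

    import ipaddress

    def discover(scope, probe):
        """Probe only addresses inside an explicit scope, e.g. "10.1.0.0/16".
        There is no "everything reachable" default, and anything wider
        than a /16 is refused outright."""
        net = ipaddress.ip_network(scope)
        if net.num_addresses > 2 ** 16:
            raise ValueError(f"{scope} is wider than a /16; refusing")
        return [str(ip) for ip in net.hosts() if probe(str(ip))]

    # probe would be an SNMP GET with a short timeout; stubbed out here:
    print(discover("192.0.2.0/24", probe=lambda ip: False))  # -> []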





Re: The Real AI Threat?

2020-12-10 Thread Max Harmony via NANOG

> On 10 Dec 2020, at 18.11, J. Hellenthal via NANOG  wrote:
> 
> Let me know when a program will rewrite itself and add its own features ... 
> then we may have a problem... otherwise they only do what you want them to do.

Programs have never done what you *want* them to do, only what you *tell* them 
to do.




Re: The Real AI Threat?

2020-12-10 Thread J. Hellenthal via NANOG
Let me know when a program will rewrite itself and add its own features ... 
then we may have a problem... otherwise they only do what you want them to do.

-- 
 J. Hellenthal

The fact that there's a highway to Hell but only a stairway to Heaven says a 
lot about anticipated traffic volume.

> On Dec 10, 2020, at 12:41, Mel Beckman  wrote:
> 
> 
>> 
>>> Jeez... some guys seem to take a joke literally - while ignoring a real and 
>>> present danger - which was the point.
> 
> Miles,
> 
> With all due respect, you didn’t present this as a joke. You presented "AI 
> self-healing systems gone wild” as a genuine risk. Which it isn’t. In fact, 
> AI fear mongering is a seriously debilitating factor in technology policy, 
> where policymakers and pundits — who also don’t get “the joke” — lobby for 
> silly laws and make ridiculous predictions, such as Elon Musk's claim that, by 
> 2025, “AI will be where AI conscious and vastly smarter than humans.”
> 
> That’s the kind of ignorance that will waste billions of dollars. No joke.
> 
>  -mel
> 
> 
> 
>>> On Dec 10, 2020, at 8:47 AM, Miles Fidelman  
>>> wrote:
>>> 
>>> Ahh invasive spambots, running on OpenStack ... "the telephone bell is 
>>> tolling... "
>>> 
>>> Miles
>>> 
>>> adamv0...@netconsultings.com wrote:
>>> > Automated resource discovery + automated resource allocation = recipe for 
>>> > disaster
>>> That is literally how OpenStack works. 
>>>  
>>> For now, don’t worry about AI taking away your freedom on its own, rather 
>>> worry about how people using it might…
>>>  
>>>  
>>> adam
>>>  
>>> From: NANOG  On 
>>> Behalf Of Miles Fidelman
>>> Sent: Thursday, December 10, 2020 2:44 PM
>>> To: 'NANOG' 
>>> Subject: Re: The Real AI Threat?
>>>  
>>> adamv0...@netconsultings.com wrote:
>>> > Put them together, and the nightmare scenario is:
>>> > - machine learning algorithm detects need for more resources
>>> All good so far
>>>  
>>> > - machine learning algorithm makes use of vulnerability analysis library 
>>> > to find other systems with resources to spare, and starts attaching
>>> > those resources
>>> Right, so a company would build, train, and fine-tune an AI, or would have 
>>> bought such a product and implemented it as part of its NMS/DDoS mitigation 
>>> suite, to do the above? 
>>> What is the probability of anyone thinking that to be a good idea?
>>> To me that sounds like an AI-based virus rather than a tool one would 
>>> want to develop or buy from a third party and then integrate into 
>>> day-to-day operations.
>>>  
>>> You can’t take, for instance, AlphaZero or GPT-3 and make it do the above. 
>>> You’d have to train it to do so over millions of examples and trials. 
>>> Oh, and also, these won’t “wake up” one day and “think” to themselves, “I’m 
>>> fed up with Atari games; I’m going to learn myself some chess and then do 
>>> some reading on the wiki about the chess rules.” 
>>> 
>>> Jeez... some guys seem to take a joke literally - while ignoring a real and 
>>> present danger - which was the point.
>>> 
>>> Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation suite 
>>> might well have failure modes that just keep eating up resources until 
>>> systems start crashing all over the place.  Heck, spinning off processes 
>>> until all available resources have been exhausted has been a failure mode 
>>> of systems for years.  Automated resource discovery + automated resource 
>>> allocation = recipe for disaster.  (No need for AIs eating the world.)
>>> 
>>> Miles
>>> 
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> In theory, there is no difference between theory and practice.
>>> In practice, there is.   Yogi Berra
>>>  
>>> Theory is when you know everything but nothing works. 
>>> Practice is when everything works but no one knows why. 
>>> In our lab, theory and practice are combined: 
>>> nothing works and no one knows why.  ... unknown
>> 
>> 
>> -- 
>> In theory, there is no difference between theory and practice.
>> In practice, there is.   Yogi Berra
>> 
>> Theory is when you know everything but nothing works. 
>> Practice is when everything works but no one knows why. 
>> In our lab, theory and practice are combined: 
>> nothing works and no one knows why.  ... unknown
> 


Re: The Real AI Threat?

2020-12-10 Thread Mel Beckman
Jeez... some guys seem to take a joke literally - while ignoring a real and 
present danger - which was the point.

Miles,

With all due respect, you didn’t present this as a joke. You presented "AI 
self-healing systems gone wild” as a genuine risk. Which it isn’t. In fact, AI 
fear mongering is a seriously debilitating factor in technology policy, where 
policymakers and pundits — who also don’t get “the joke” — lobby for silly laws 
and make ridiculous predictions, such as Elon Musk's claim that, by 2025, “AI 
will be where AI conscious and vastly smarter than humans.”

That’s the kind of ignorance that will waste billions of dollars. No joke.

 -mel



On Dec 10, 2020, at 8:47 AM, Miles Fidelman <mfidel...@meetinghouse.net> wrote:

Ahh invasive spambots, running on OpenStack ... "the telephone bell is 
tolling... "

Miles

adamv0...@netconsultings.com wrote:
> Automated resource discovery + automated resource allocation = recipe for 
> disaster
That is literally how OpenStack works.

For now, don’t worry about AI taking away your freedom on its own, rather worry 
about how people using it might…


adam

From: NANOG <nanog-bounces+adamv0025=netconsultings@nanog.org> On Behalf Of Miles Fidelman
Sent: Thursday, December 10, 2020 2:44 PM
To: 'NANOG' <nanog@nanog.org>
Subject: Re: The Real AI Threat?

adamv0...@netconsultings.com wrote:

> Put them together, and the nightmare scenario is:

> - machine learning algorithm detects need for more resources

All good so far



> - machine learning algorithm makes use of vulnerability analysis library

> to find other systems with resources to spare, and starts attaching

> those resources

Right, so a company would build, train, and fine-tune an AI, or would have 
bought such a product and implemented it as part of its NMS/DDoS mitigation 
suite, to do the above?
What is the probability of anyone thinking that to be a good idea?
To me that sounds like an AI-based virus rather than a tool one would want 
to develop or buy from a third party and then integrate into day-to-day 
operations.

You can’t take, for instance, AlphaZero or GPT-3 and make it do the above. You’d 
have to train it to do so over millions of examples and trials.
Oh, and also, these won’t “wake up” one day and “think” to themselves, “I’m fed 
up with Atari games; I’m going to learn myself some chess and then do some 
reading on the wiki about the chess rules.”

Jeez... some guys seem to take a joke literally - while ignoring a real and 
present danger - which was the point.

Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation suite might 
well have failure modes that just keep eating up resources until systems start 
crashing all over the place.  Heck, spinning off processes until all available 
resources have been exhausted has been a failure mode of systems for years.  
Automated resource discovery + automated resource allocation = recipe for 
disaster.  (No need for AIs eating the world.)

Miles






--

In theory, there is no difference between theory and practice.

In practice, there is.   Yogi Berra



Theory is when you know everything but nothing works.

Practice is when everything works but no one knows why.

In our lab, theory and practice are combined:

nothing works and no one knows why.  ... unknown



--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



Re: The Real AI Threat?

2020-12-10 Thread Miles Fidelman
Ahh invasive spambots, running on OpenStack ... "the telephone bell 
is tolling... "


Miles

adamv0...@netconsultings.com wrote:


> Automated resource discovery + automated resource allocation = recipe
> for disaster


That is literally how OpenStack works.

For now, don’t worry about AI taking away your freedom on its own, 
rather worry about how people using it might…


adam

From: NANOG  On Behalf Of Miles Fidelman
Sent: Thursday, December 10, 2020 2:44 PM
To: 'NANOG' 
Subject: Re: The Real AI Threat?

adamv0...@netconsultings.com wrote:

> Put them together, and the nightmare scenario is:
> - machine learning algorithm detects need for more resources
All good so far

> - machine learning algorithm makes use of vulnerability analysis library
> to find other systems with resources to spare, and starts attaching
> those resources

Right, so a company would build, train, and fine-tune an AI, or
would have bought such a product and implemented it as part of its
NMS/DDoS mitigation suite, to do the above?

What is the probability of anyone thinking that to be a good idea?

To me that sounds like an AI-based virus rather than a tool
one would want to develop or buy from a third party and then
integrate into day-to-day operations.

You can’t take, for instance, AlphaZero or GPT-3 and make it do the
above. You’d have to train it to do so over millions of examples
and trials.

Oh, and also, these won’t “wake up” one day and “think” to
themselves, “I’m fed up with Atari games; I’m going to learn
myself some chess and then do some reading on the wiki about the chess
rules.”


Jeez... some guys seem to take a joke literally - while ignoring a 
real and present danger - which was the point.


Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation suite 
might well have failure modes that just keep eating up resources until 
systems start crashing all over the place.  Heck, spinning off 
processes until all available resources have been exhausted has been a 
failure mode of systems for years.  Automated resource discovery + 
automated resource allocation = recipe for disaster.  (No need for AIs 
eating the world.)


Miles





--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra
Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



RE: The Real AI Threat?

2020-12-10 Thread adamv0025
> Automated resource discovery + automated resource allocation = recipe for 
> disaster

That is literally how OpenStack works. 

 

For now, don’t worry about AI taking away your freedom on its own, rather worry 
about how people using it might…

 

 

adam

 

From: NANOG  On Behalf Of 
Miles Fidelman
Sent: Thursday, December 10, 2020 2:44 PM
To: 'NANOG' 
Subject: Re: The Real AI Threat?

 

adamv0...@netconsultings.com wrote:

> Put them together, and the nightmare scenario is:
> - machine learning algorithm detects need for more resources
All good so far
 
> - machine learning algorithm makes use of vulnerability analysis library 
> to find other systems with resources to spare, and starts attaching
> those resources

Right, so a company would build, train, and fine-tune an AI, or would have 
bought such a product and implemented it as part of its NMS/DDoS mitigation 
suite, to do the above?

What is the probability of anyone thinking that to be a good idea?

To me that sounds like an AI-based virus rather than a tool one would want 
to develop or buy from a third party and then integrate into day-to-day 
operations.

You can’t take, for instance, AlphaZero or GPT-3 and make it do the above. You’d 
have to train it to do so over millions of examples and trials.

Oh, and also, these won’t “wake up” one day and “think” to themselves, “I’m fed 
up with Atari games; I’m going to learn myself some chess and then do some 
reading on the wiki about the chess rules.”


Jeez... some guys seem to take a joke literally - while ignoring a real and 
present danger - which was the point.

Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation suite might 
well have failure modes that just keep eating up resources until systems start 
crashing all over the place.  Heck, spinning off processes until all available 
resources have been exhausted has been a failure mode of systems for years.  
Automated resource discovery + automated resource allocation = recipe for 
disaster.  (No need for AIs eating the world.)

Miles







-- 
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra
 
Theory is when you know everything but nothing works. 
Practice is when everything works but no one knows why. 
In our lab, theory and practice are combined: 
nothing works and no one knows why.  ... unknown


Re: The Real AI Threat?

2020-12-10 Thread Miles Fidelman

adamv0...@netconsultings.com wrote:

> Put them together, and the nightmare scenario is:
> - machine learning algorithm detects need for more resources
All good so far
> - machine learning algorithm makes use of vulnerability analysis library 
> to find other systems with resources to spare, and starts attaching

> those resources

Right, so a company would build, train, and fine-tune an AI, or would 
have bought such a product and implemented it as part of its NMS/DDoS 
mitigation suite, to do the above?

What is the probability of anyone thinking that to be a good idea?

To me that sounds like an AI-based virus rather than a tool one 
would want to develop or buy from a third party and then integrate 
into day-to-day operations.

You can’t take, for instance, AlphaZero or GPT-3 and make it do the above. 
You’d have to train it to do so over millions of examples and trials.

Oh, and also, these won’t “wake up” one day and “think” to themselves, “I’m 
fed up with Atari games; I’m going to learn myself some chess and 
then do some reading on the wiki about the chess rules.”




Jeez... some guys seem to take a joke literally - while ignoring a real 
and present danger - which was the point.


Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation suite 
might well have failure modes that just keep eating up resources until 
systems start crashing all over the place.  Heck, spinning off processes 
until all available resources have been exhausted has been a failure 
mode of systems for years.  Automated resource discovery + automated 
resource allocation = recipe for disaster.  (No need for AIs eating the 
world.)


Miles




--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



RE: The Real AI Threat?

2020-12-10 Thread adamv0025
> Put them together, and the nightmare scenario is:
> - machine learning algorithm detects need for more resources
All good so far
 
> - machine learning algorithm makes use of vulnerability analysis library 
> to find other systems with resources to spare, and starts attaching
> those resources

Right, so a company would build, train, and fine-tune an AI, or would have 
bought such a product and implemented it as part of its NMS/DDoS mitigation 
suite, to do the above?

What is the probability of anyone thinking that to be a good idea?

To me that sounds like an AI-based virus rather than a tool one would want 
to develop or buy from a third party and then integrate into day-to-day 
operations.

You can’t take, for instance, AlphaZero or GPT-3 and make it do the above. You’d 
have to train it to do so over millions of examples and trials.

Oh, and also, these won’t “wake up” one day and “think” to themselves, “I’m fed 
up with Atari games; I’m going to learn myself some chess and then do some 
reading on the wiki about the chess rules.”

 



adam

 

 

From: NANOG  On Behalf Of 
Miles Fidelman
Sent: Wednesday, December 9, 2020 7:07 PM
To: NANOG 
Subject: The Real AI Threat?

 

Hi Folks,

It occurs to me that network & systems admins are the folks who really have 
to worry about AI threats.
 
After watching yet another AI takes over the world show - you know, the 
same general theme, AI wipes out humans to preserve its existence - it 
occurred to me:
 
Perhaps the real AI threat is "self-healing systems" gone wild. Consider:
 
- automated system management
- automated load management
- automated resource management - spin up more instances of  
as necessary
- automated threat detection & response
- automated vulnerability analysis & response
 
Put them together, and the nightmare scenario is:
- machine learning algorithm detects need for more resources
- machine learning algorithm makes use of vulnerability analysis library 
to find other systems with resources to spare, and starts attaching 
those resources
- unbounded demand for more resources
 
Kind of what spambots have done to the global email system.
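
A minimal sketch of that feedback loop in Python (names hypothetical; a
sketch, not anyone's actual NMS). Nothing in it is intelligent, and one
bounds check is all that separates it from the nightmare:

    MAX_INSTANCES = 64  # the safeguard the nightmare scenario omits

    def autoscale(total_load, instances):
        """Naive control loop: add capacity while per-instance load is high.
        Spambot-style demand keeps the condition true forever, so the bound
        is the only thing standing between this and unbounded demand."""
        while total_load / instances > 0.8:
            if instances >= MAX_INSTANCES:
                break  # delete this guard and the loop grabs whatever it can
            instances += 1  # in the nightmare version, this step *finds* them
        return instances

    print(autoscale(total_load=1000.0, instances=1))  # -> 64: the cap, not demand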
 
"For Homo Sapiens, the telephone bell had tolled."
(Dial F for Frankenstein, Arthur C. Clarke)
 
I think I need to start putting whisky in my morning coffee.  And maybe not 
thinking 
about NOT replacing third shift with AI tools.
 
Miles Fidelman
-- 
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra
 
Theory is when you know everything but nothing works. 
Practice is when everything works but no one knows why. 
In our lab, theory and practice are combined: 
nothing works and no one knows why.  ... unknown


Re: The Real AI Threat?

2020-12-10 Thread Rich Kulawiec
On Thu, Dec 10, 2020 at 12:34:33AM +, Mel Beckman wrote:
> So don’t be fooled by Siri and Google voice response. There is no
> intellect there, only pattern matching. Which we’ve been doing with
> machines since the Jacquard Loom.

On this particular point: many years ago, some of us at Purdue discussed
this at great length and eventually coined the term "ad-hockery" --
which found its way into the New Hacker's Dictionary.  The gist of
the idea is that it's possible to craft a sufficient number of ad hoc
rules, plug them into a pattern matcher, and present a modestly plausible
appearance of "intelligence"...  when in fact nothing resembling actual
intelligence is involved.  Such systems are brittle and demonstrate
it when presented with input not covered by those ad hoc rules, which
is why they're often made progressively less so by repeated tweaking.
(Also known as "release 3.0" and accompanied by prose touting it as
an innovative upgrade.)

But to borrow Mel's phrasing, even a very large collection of ad hoc
rules that performs its task tolerably well is no more intelligent than
the loom.
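
A minimal sketch of ad-hockery in Python (toy rules, purely illustrative):
a handful of patterns looks plausible right up until the input falls
outside them, and the usual fix is one more rule.

    import re

    # "Release 1.0": a few ad hoc rules that look smart on expected input.
    RULES = [
        (re.compile(r"\bhello\b", re.I),   "Hi there!"),
        (re.compile(r"\bweather\b", re.I), "Looks clear to me."),
        (re.compile(r"\bname\b", re.I),    "I'm a very clever machine."),
    ]

    def reply(utterance):
        for pattern, canned in RULES:
            if pattern.search(utterance):
                return canned
        # Brittleness on display: anything the rules don't cover lands here.
        return "Interesting. Tell me more."

    print(reply("hello!"))                 # looks intelligent
    print(reply("is my BGP session up?"))  # the loom shows through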


---rsk


Re: The Real AI Threat?

2020-12-09 Thread Mel Beckman
Miles,

My point is that there is no reason to wrap this engineering problem in the 
mythology of AI. There are no new problems here. The only way automated threat 
detection & response combined with automatic resource allocation 
will lead to a disaster is if an incompetent (sometimes called “agile” :) 
software engineer fails to design in appropriate safeguards.

The problem isn’t the technology. It’s the lack of competent engineering. You 
can’t equate  "AI fighting for survival" with "bad engineering”, anymore than 
you can equate time travel with bad engineering.

Incidentally, Alan Turing’s test, which posits that a computer can be said to 
possess human intelligence if it can fool a human with its responses, was 
debunked long ago by an undergraduate CS student, who innocently asked, at an 
AI conference attended by CS luminaries “Does it follow, then, that if a 
computer can fool a dog with its responses, that it possesses dog-level 
intelligence?” ROTFL!

So don’t be fooled by Siri and Google voice response. There is no intellect 
there, only pattern matching. Which we’ve been doing with machines since the 
Jacquard Loom.

 -mel

On Dec 9, 2020, at 2:32 PM, Miles Fidelman <mfidel...@meetinghouse.net> wrote:

Mel Beckman wrote:

Miles,

You realize that “AI” as general artificial intelligence is science fiction, 
right? There is no general AI, and even ML is not actually learning in the 
sense that humans or animals learn. “Neural networks”, likewise, have nothing 
to do at all with the way biological neurons work in cognition (which science 
doesn’t understand). That’s all mythology, amplified by science fiction and TV 
fantasies like Star Trek’s character “Data”. It’s just anthropomorphizing 
technology.

Well, duh.  I'm old enough to remember the old aphorism "it's AI until we solve 
it, then it's engineering."


We create unnecessary risk when we anthropomorphize technology. The truth is, 
any kind of automation incurs risk. There is nothing related to intelligence, 
AI or otherwise. It’s all just automation to varying degrees. ML, for example, 
simply builds data structures based on prior input, and uses those structures 
to guide future actions. But that’s not general behavior — it all has to be 
purpose-designed for specific tasks.

We create unnecessary risk when we deploy technology with positive feedback 
loops.
Machine learning (not AI) + automated threat detection & response + automatic 
resource allocation = a recipe for disaster.
Call it "AI fighting for survival" or bad engineering - either way, it will 
kill us a lot sooner than any of the more fictional varieties of AI.

Since the academics’  promised general intelligence of AI never materialized, 
they had to dumb-down their terminology, and came up with “narrow AI”. Or “not 
AI”, as I prefer to say. But narrow AI is mathematically indistinguishable from 
any other kind of automation, and it has nothing whatsoever to do with 
intelligence, which science doesn’t remotely yet understand. It’s all 
automation, all the time.


Then again, Google's "AI" has gotten awfully good at answering relatively 
free-form questions.  And allowing for really, really dumb people, Siri comes 
pretty close to passing the classic Turing Test.


All automated systems require safeguards. If you don’t put safeguards in, 
things blow up: rockets on launchpads, guns on ships, Ansible on steroids. When 
things blow up, it’s never because systems unilaterally exploited general 
intelligence to “hook up” and become self-smarted. It’s because you were stupid.


Yup.  And folks are looking in the wrong place for things to protect against.

Miles


For a nice, rational look at why general AI is fiction, and what “narrow AI”, 
such as ML, can actually do, get Meredith Broussard’s excellent book 
"Artificial Unintelligence - How computers misunderstand the world".

https://www.amazon.com/Artificial-Unintelligence-Computers-Misunderstand-World/dp/026253701X

Or if you prefer a video summary, she has a quick talk on YouTube, "ERROR – The 
Art of Imperfection Conference: The Fragile”:

https://www.youtube.com/watch?v=OuDFhSUwOAQ

At 2:20 into the video, she puts the kibosh on the mythology of general AI.

 -mel




On Dec 9, 2020, at 11:07 AM, Miles Fidelman 
 wrote:

Hi Folks,
It occurs to me that network & systems admins are the folks who really have 
to worry about AI threats.

After watching yet another AI takes over the world show - you know, the
same general theme, AI wipes out humans to preserve its existence - it
occurred to me:

Perhaps the real AI threat is "self-healing systems" gone wild. Consider:

- automated system management
- automated load management
- automated resource management - spin up more instances of 
as necessary
- automated threat detection & response
- automated vulnerability analysis & response

Put them together, and the nightmare scenario is:
- machine learning algorithm detects need for more resources
- machine learning algorithm makes use of vulnerability analysis library
to find other systems with resources to spare, and starts attaching
those resources
- unbounded demand for more resources

Re: The Real AI Threat?

2020-12-09 Thread Miles Fidelman

Ben Cannon wrote:
To follow - Siri couldn’t figure out how to add an entry to my 
calendar today.  I am yet to be afraid.


Although the google bot that placed a call to book a haircut was 
impressive.
"Siri, book dinner with my wife, on our anniversary."  Be afraid, VERY 
afraid. :-)


Miles



Ms. Lady Benjamin PD Cannon, ASCE
6x7 Networks & 6x7 Telecom, LLC
CEO
b...@6by7.net 
"The only fully end-to-end encrypted global telecommunications company 
in the world.”


FCC License KJ6FJJ

Sent from my iPhone via RFC1149.


On Dec 9, 2020, at 12:16 PM, Mel Beckman  wrote:

Miles,

You realize that “AI” as general artificial intelligence is science 
fiction, right? There is no general AI, and even ML is not actually 
learning in the sense that humans or animals learn. “Neural 
networks”, likewise, have nothing to do at all with the way 
biological neurons work in cognition (which science doesn’t 
understand). That’s all mythology, amplified by science fiction and 
TV fantasies like Star Trek’s character “Data”. It’s just 
anthropomorphizing technology.


We create unnecessary risk when we anthropomorphize technology. The 
truth is, any kind of automation incurs risk. There is nothing 
related to intelligence, AI or otherwise. It’s all just automation to 
varying degrees. ML, for example, simply builds data structures based 
on prior input, and uses those structures to guide future actions. 
But that’s not general behavior — it all has to be purpose-designed 
for specific tasks.


The Musk-stoked fear that if we build automated systems and then “put 
them together” in the same network, or whatever, that they will 
somehow gain new capabilities not originally designed and go on a 
rampage is just plain silly. Mongering that fear, however, is quite 
lucrative. It’s up to us, the real technologists, to smack down the 
fear mongers and tell truth, not hype.


Since the academics’  promised general intelligence of AI never 
materialized, they had to dumb-down their terminology, and came up 
with “narrow AI”. Or “not AI”, as I prefer to say. But narrow AI is 
mathematically indistinguishable from any other kind of automation, 
and it has nothing whatsoever to do with intelligence, which science 
doesn’t remotely yet understand. It’s all automation, all the time.


All automated systems require safeguards. If you don’t put safeguards 
in, things blow up: rockets on launchpads, guns on ships, Ansible on 
steroids. When things blow up, it’s never because systems 
unilaterally exploited general intelligence to “hook up” and become 
self-smarted. It’s because you were stupid.


For a nice, rational look at why general AI is fiction, and what 
“narrow AI”, such as ML, can actually do, get Meredith Broussard’s 
excellent book "Artificial Unintelligence - How computers 
misunderstand the world".


https://www.amazon.com/Artificial-Unintelligence-Computers-Misunderstand-World/dp/026253701X

Or if you prefer a video summary, she has a quick talk on YouTube, 
"ERROR – The Art of Imperfection Conference: The Fragile”:


https://www.youtube.com/watch?v=OuDFhSUwOAQ

At 2:20 into the video, she puts the kibosh on the mythology of 
general AI.


-mel


On Dec 9, 2020, at 11:07 AM, Miles Fidelman 
 wrote:


Hi Folks,
It occurs to me that network & systems admins are the folks who 
really have to worry about AI threats.


After watching yet another AI takes over the world show - you know, the
same general theme, AI wipes out humans to preserve its existence - it
occurred to me:

Perhaps the real AI threat is "self-healing systems" gone wild. 
Consider:


- automated system management
- automated load management
- automated resource management - spin up more instances of 
as necessary
- automated threat detection & response
- automated vulnerability analysis & response

Put them together, and the nightmare scenario is:
- machine learning algorithm detects need for more resources
- machine learning algorithm makes use of vulnerability analysis 
library

to find other systems with resources to spare, and starts attaching
those resources
- unbounded demand for more resources

Kind of what spambots have done to the global email system.

"For Homo Sapiens, the telephone bell had tolled."
(Dial F for Frankenstein, Arthur C. Clarke)

I think I need to start putting whisky in my morning coffee.  And 
maybe not thinking

about NOT replacing third shift with AI tools.

Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown





--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown

Re: The Real AI Threat?

2020-12-09 Thread Miles Fidelman

Mel Beckman wrote:

Miles,

You realize that “AI” as general artificial intelligence is science fiction, 
right? There is no general AI, and even ML is not actually learning in the 
sense that humans or animals learn. “Neural networks”, likewise, have nothing 
to do at all with the way biological neurons work in cognition (which science 
doesn’t understand). That’s all mythology, amplified by science fiction and TV 
fantasies like Star Trek’s character “Data”. It’s just anthropomorphizing 
technology.
Well, duh.  I'm old enough to remember the old aphorism "it's AI until 
we solve it, then it's engineering."


We create unnecessary risk when we anthropomorphize technology. The truth is, 
any kind of automation incurs risk. There is nothing related to intelligence, 
AI or otherwise. It’s all just automation to varying degrees. ML, for example, 
simply builds data structures based on prior input, and uses those structures 
to guide future actions. But that’s not general behavior — it all has to be 
purpose-designed for specific tasks.


We create unnecessary risk when we deploy technology with positive 
feedback loops.
Machine learning (not AI) + automated threat detection & response + 
automatic resource allocation = a recipe for disaster.
Call it "AI fighting for survival" or bad engineering - either way, it 
will kill us a lot sooner than any of the more fictional varieties of AI.

Since the academics’  promised general intelligence of AI never materialized, 
they had to dumb-down their terminology, and came up with “narrow AI”. Or “not 
AI”, as I prefer to say. But narrow AI is mathematically indistinguishable from 
any other kind of automation, and it has nothing whatsoever to do with 
intelligence, which science doesn’t remotely yet understand. It’s all 
automation, all the time.
Then again, Google's "AI" has gotten awfully good at answering 
relatively free-form questions.  And allowing for really, really dumb 
people, Siri comes pretty close to passing the classic Turing Test.



All automated systems require safeguards. If you don’t put safeguards in, 
things blow up: rockets on launchpads, guns on ships, Ansible on steroids. When 
things blow up, it’s never because systems unilaterally exploited general 
intelligence to “hook up” and become self-smarted. It’s because you were stupid.
Yup.  And folks are looking in the wrong place for things to protect 
against.


Miles


For a nice, rational look at why general AI is fiction, and what “narrow AI”, such as ML, 
can actually do, get Meredith Broussard’s excellent book "Artificial Unintelligence 
- How computers misunderstand the world".

https://www.amazon.com/Artificial-Unintelligence-Computers-Misunderstand-World/dp/026253701X

Or if you prefer a video summary, she has a quick talk on YouTube, "ERROR – The 
Art of Imperfection Conference: The Fragile”:

https://www.youtube.com/watch?v=OuDFhSUwOAQ

At 2:20 into the video, she puts the kibosh on the mythology of general AI.

  -mel



On Dec 9, 2020, at 11:07 AM, Miles Fidelman  wrote:

Hi Folks,
It occurs to me that network & systems admins are the folks who really have 
to worry about AI threats.

After watching yet another AI takes over the world show - you know, the
same general theme, AI wipes out humans to preserve its existence - it
occurred to me:

Perhaps the real AI threat is "self-healing systems" gone wild. Consider:

- automated system management
- automated load management
- automated resource management - spin up more instances of 
as necessary
- automated threat detection & response
- automated vulnerability analysis & response

Put them together, and the nightmare scenario is:
- machine learning algorithm detects need for more resources
- machine learning algorithm makes use of vulnerability analysis library
to find other systems with resources to spare, and starts attaching
those resources
- unbounded demand for more resources

Kind of what spambots have done to the global email system.

"For Homo Sapiens, the telephone bell had tolled."
(Dial F for Frankenstein, Arthur C. Clarke)

I think I need to start putting whisky in my morning coffee.  And maybe not 
thinking
about NOT replacing third shift with AI tools.

Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



Re: The Real AI Threat?

2020-12-09 Thread Ben Cannon
To follow - Siri couldn’t figure out how to add an entry to my calendar today.  
I am yet to be afraid.

Although the google bot that placed a call to book a haircut was impressive.

Ms. Lady Benjamin PD Cannon, ASCE
6x7 Networks & 6x7 Telecom, LLC 
CEO 
b...@6by7.net
"The only fully end-to-end encrypted global telecommunications company in the 
world.”

FCC License KJ6FJJ

Sent from my iPhone via RFC1149.

> On Dec 9, 2020, at 12:16 PM, Mel Beckman  wrote:
> 
> Miles,
> 
> You realize that “AI” as general artificial intelligence is science fiction, 
> right? There is no general AI, and even ML is not actually learning in the 
> sense that humans or animals learn. “Neural networks”, likewise, have nothing 
> to do at all with the way biological neurons work in cognition (which science 
> doesn’t understand). That’s all mythology, amplified by science fiction and 
> TV fantasies like Star Trek’s character “Data”. It’s just anthropomorphizing 
> technology. 
> 
> We create unnecessary risk when we anthropomorphize technology. The truth is, 
> any kind of automation incurs risk. There is nothing related to intelligence, 
> AI or otherwise. It’s all just automation to varying degrees. ML, for 
> example, simply builds data structures based on prior input, and uses those 
> structures to guide future actions. But that’s not general behavior — it all 
> has to be purpose-designed for specific tasks.
> 
> The Musk-stoked fear that if we build automated systems and then “put them 
> together” in the same network, or whatever, that they will somehow gain new 
> capabilities not originally designed and go on a rampage is just plain silly. 
> Mongering that fear, however, is quite lucrative. It’s up to us, the real 
> technologists, to smack down the fear mongers and tell truth, not hype. 
> 
> Since the academics’  promised general intelligence of AI never materialized, 
> they had to dumb-down their terminology, and came up with “narrow AI”. Or 
> “not AI”, as I prefer to say. But narrow AI is mathematically 
> indistinguishable from any other kind of automation, and it has nothing 
> whatsoever to do with intelligence, which science doesn’t remotely yet 
> understand. It’s all automation, all the time.
> 
> All automated systems require safeguards. If you don’t put safeguards in, 
> things blow up: rockets on launchpads, guns on ships, Ansible on steroids. 
> When things blow up, it’s never because systems unilaterally exploited 
> general intelligence to “hook up” and become self-smarted. It’s because you 
> were stupid.
> 
> For a nice, rational look at why general AI is fiction, and what “narrow AI”, 
> such as ML, can actually do, get Meredith Broussard’s excellent book 
> "Artificial Unintelligence - How computers misunderstand the world". 
> 
> https://www.amazon.com/Artificial-Unintelligence-Computers-Misunderstand-World/dp/026253701X
> 
> Or if you prefer a video summary, she has a quick talk on YouTube, "ERROR – 
> The Art of Imperfection Conference: The Fragile”:
> 
> https://www.youtube.com/watch?v=OuDFhSUwOAQ
> 
> At 2:20 into the video, she puts the kibosh on the mythology of general AI.
> 
> -mel
> 
> 
>> On Dec 9, 2020, at 11:07 AM, Miles Fidelman  
>> wrote:
>> 
>> Hi Folks,
>> It occurs to me that network & systems admins are the folks who really 
>> have to worry about AI threats.
>> 
>> After watching yet another AI takes over the world show - you know, the 
>> same general theme, AI wipes out humans to preserve its existence - it 
>> occurred to me:
>> 
>> Perhaps the real AI threat is "self-healing systems" gone wild. Consider:
>> 
>> - automated system management
>> - automated load management
>> - automated resource management - spin up more instances of  
>> as necessary
>> - automated threat detection & response
>> - automated vulnerability analysis & response
>> 
>> Put them together, and the nightmare scenario is:
>> - machine learning algorithm detects need for more resources
>> - machine learning algorithm makes use of vulnerability analysis library 
>> to find other systems with resources to spare, and starts attaching 
>> those resources
>> - unbounded demand for more resources
>> 
>> Kind of what spambots have done to the global email system.
>> 
>> "For Homo Sapiens, the telephone bell had tolled."
>> (Dial F for Frankenstein, Arthur C. Clarke)
>> 
>> I think I need to start putting whisky in my morning coffee.  And maybe not 
>> thinking 
>> about NOT replacing third shift with AI tools.
>> 
>> Miles Fidelman
>> -- 
>> In theory, there is no difference between theory and practice.
>> In practice, there is.   Yogi Berra
>> 
>> Theory is when you know everything but nothing works. 
>> Practice is when everything works but no one knows why. 
>> In our lab, theory and practice are combined: 
>> nothing works and no one knows why.  ... unknown
> 


Re: The Real AI Threat?

2020-12-09 Thread Mel Beckman
Miles,

You realize that “AI” as general artificial intelligence is science fiction, 
right? There is no general AI, and even ML is not actually learning in the 
sense that humans or animals learn. “Neural networks”, likewise, have nothing 
to do at all with the way biological neurons work in cognition (which science 
doesn’t understand). That’s all mythology, amplified by science fiction and TV 
fantasies like Star Trek’s character “Data”. It’s just anthropomorphizing 
technology. 

We create unnecessary risk when we anthropomorphize technology. The truth is, 
any kind of automation incurs risk. There is nothing related to intelligence, 
AI or otherwise. It’s all just automation to varying degrees. ML, for example, 
simply builds data structures based on prior input, and uses those structures 
to guide future actions. But that’s not general behavior — it all has to be 
purpose-designed for specific tasks.

The Musk-stoked fear that if we build automated systems and then “put them 
together” in the same network, or whatever, that they will somehow gain new 
capabilities not originally designed and go on a rampage is just plain silly. 
Mongering that fear, however, is quite lucrative. It’s up to us, the real 
technologists, to smack down the fear mongers and tell truth, not hype. 

Since the academics’  promised general intelligence of AI never materialized, 
they had to dumb-down their terminology, and came up with “narrow AI”. Or “not 
AI”, as I prefer to say. But narrow AI is mathematically indistinguishable from 
any other kind of automation, and it has nothing whatsoever to do with 
intelligence, which science doesn’t remotely yet understand. It’s all 
automation, all the time.

All automated systems require safeguards. If you don’t put safeguards in, 
things blow up: rockets on launchpads, guns on ships, Ansible on steroids. When 
things blow up, it’s never because systems unilaterally exploited general 
intelligence to “hook up” and become self-smarted. It’s because you were stupid.

For a nice, rational look at why general AI is fiction, and what “narrow AI”, 
such as ML, can actually do, get Meredith Broussard’s excellent book 
"Artificial Unintelligence - How computers misunderstand the world". 

https://www.amazon.com/Artificial-Unintelligence-Computers-Misunderstand-World/dp/026253701X

Or if you prefer a video summary, she has a quick talk on YouTube, "ERROR – The 
Art of Imperfection Conference: The Fragile”:

https://www.youtube.com/watch?v=OuDFhSUwOAQ

At 2:20 into the video, she puts the kibosh on the mythology of general AI.

 -mel


> On Dec 9, 2020, at 11:07 AM, Miles Fidelman  
> wrote:
> 
> Hi Folks,
> It occurs to me that network & systems admins are the folks who really 
> have to worry about AI threats.
> 
> After watching yet another AI takes over the world show - you know, the 
> same general theme, AI wipes out humans to preserve its existence - it 
> occurred to me:
> 
> Perhaps the real AI threat is "self-healing systems" gone wild. Consider:
> 
> - automated system management
> - automated load management
> - automated resource management - spin up more instances of  
> as necessary
> - automated threat detection & response
> - automated vulnerability analysis & response
> 
> Put them together, and the nightmare scenario is:
> - machine learning algorithm detects need for more resources
> - machine learning algorithm makes use of vulnerability analysis library 
> to find other systems with resources to spare, and starts attaching 
> those resources
> - unbounded demand for more resources
> 
> Kind of what spambots have done to the global email system.
> 
> "For Homo Sapiens, the telephone bell had tolled."
> (Dial F for Frankenstein, Arthur C. Clarke)
> 
> I think I need to start putting whisky in my morning coffee.  And maybe not 
> thinking 
> about NOT replacing third shift with AI tools.
> 
> Miles Fidelman
> -- 
> In theory, there is no difference between theory and practice.
> In practice, there is.   Yogi Berra
> 
> Theory is when you know everything but nothing works. 
> Practice is when everything works but no one knows why. 
> In our lab, theory and practice are combined: 
> nothing works and no one knows why.  ... unknown