Dear FRIAM, 

 

I apologize for this REPLY TO ALL error.  I was actually reaching out to Owen 
about an old private argument concerning what was appropriate for FRIAM.  I 
hope you all will forgive me.  

 

Nick 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Nick Thompson [mailto:[email protected]] 
Sent: Tuesday, August 11, 2015 9:58 PM
To: 'The Friday Morning Applied Complexity Coffee Group' <[email protected]>
Subject: RE: [FRIAM] [EXTERNAL] Re: unikernels?

 

Hi Owen, 

 

How’s your summer? 

 

I note that not only can glen and company participate in a conversation with me 
that bores the living crap out of you, but they can also participate in a 
conversation with you that bores the living crap out of me.  But I am not 
threatening to pick up my marbles and go home.  

 

I think it’s in the nature of things.  They are multitalented bores.  
Polybores, we might call them.  I guess being a polybore is the other side of 
being a polymath.  

 

Nick 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[email protected]] On Behalf Of Owen Densmore
Sent: Tuesday, August 11, 2015 7:41 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] [EXTERNAL] Re: unikernels?

 

Thanks! Fascinating.

 

   -- Owen

 

On Tue, Aug 11, 2015 at 4:37 PM, Parks, Raymond <[email protected]> wrote:

  The original articles/blogs are from the U of Cambridge Xen folks and a somewhat buzzword-lovin' sysadmin.  The current trend in using someone else's computer (SEC, more commonly called cloud) is Linux containers and Docker.  The articles predict that the future is unikernels.  A unikernel is application-specific, like containers, but in the form of a monolithic VM that includes the specific application and the kernel services necessary for that app.  At least two of the current unikernel projects use functional languages - OCaml and Haskell.  The Xen model is for a developer to specify the kernel services they need, develop the application code, develop the configuration code, and then the whole thing gets turned into a monolithic VM that runs in the Xen hypervisor.  In theory, this makes for greater efficiency and less chance of the tail wagging the dog.  By the latter, I mean that one of the major issues in securing computer systems of systems is that one gets all of a system one includes (e.g., BIND for DNS) even though one uses only one small feature.  That means one gets all of its vulnerabilities as well as all of the features that are not used.
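The build model Ray describes - declare only the kernel services you need, and the image contains the app plus exactly those services - can be sketched as a toy, in Python rather than OCaml; the service names below are invented for illustration:

```python
# Toy model of the unikernel build step: the developer declares only the
# kernel services the app needs, and the "build" produces a single image
# containing the app plus exactly those services -- nothing else.
FULL_KERNEL = {"net", "block", "fs", "dns", "usb", "sound", "printing"}

def build_unikernel(app_name, required_services):
    """Return the set of components baked into the monolithic image."""
    unknown = required_services - FULL_KERNEL
    if unknown:
        raise ValueError(f"no such kernel service: {unknown}")
    return {app_name} | required_services

image = build_unikernel("web-server", {"net", "fs"})
print(sorted(image))          # ['fs', 'net', 'web-server']
print(len(FULL_KERNEL - image))   # 5 services (and their bugs) left out
```

The tail-wagging-the-dog problem is the complement: everything in FULL_KERNEL but not in the image is code - and vulnerabilities - you never ship.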

 

  As I said in a previous post, this is a reinvention (for hypervisors) of IBM VM and CMS - the latter being a minimalist kernel with, usually, a single application.

 

  The downside of monolithic VMs is that any change requires a complete rebuild of the VM - even minor configuration changes that are the equivalent of environment variables.  In a SEC environment, for example, adding a static host or CDN to the list of sources for a web server will require a rebuild.  Alternatively, of course, one could simply allow the web-server unikernel to invoke scripts from any web-site recursively - but then an attacker simply inserts an advertisement that invokes malware and we're no better off than before.
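The rebuild-on-any-change point can be made concrete with a toy sketch: if the image's identity covers both application code and configuration, then even a one-line config change yields a different image that must be rebuilt (the app code and config values below are stand-ins):

```python
import hashlib

# Toy illustration: a unikernel image's identity covers app code AND
# configuration, so a config tweak that would be an environment variable
# elsewhere instead produces a different image requiring a full rebuild.
def image_id(app_code: str, config: dict) -> str:
    blob = app_code + repr(sorted(config.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

app = "serve(){ ... }"                      # stand-in for the application
v1 = image_id(app, {"sources": ["self"]})
v2 = image_id(app, {"sources": ["self", "cdn.example"]})  # add a CDN source
print(v1 != v2)   # True: the config change forces a full rebuild
```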

 

The idea of unikernels is neither bad nor new - but the benefits will probably not be as great as the current promises.  The UX will not be different for the end-user, although it might be somewhat better for the content provider.

 

  It's not clear to me that the visionaries have thought about this outside of the WWW.  For example, I recently read an article about how Netflix worked hard to be able to provide streaming video with SSL encryption.  They started with their standard server and added SSL - the performance hit made that impractical.  Eventually, they found a configuration of VMs and infrastructure that made the performance hit acceptable.  A unikernel that only served SSL-encrypted video would be more efficient than their current VMs running a general-purpose OS plus video-streaming software.  But configuration changes (newly added caching locations, links that are down, et cetera) would be the bane of a unikernel Netflix.  Each time BGP reports a change, either the video-streaming unikernel would need to be rebuilt, or there would need to be another layer of unikernel that dispatches requests to the video-streaming unikernel VMs.  But that dispatcher would either need to be reconfigured itself, or there would need to be yet another unikernel that tracks network-connectivity changes and informs the dispatcher - and now we still have configuration changes, plus a complex system of unikernels that exists just to make them possible.
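The layering Ray describes can be sketched as a toy: the streaming unikernels stay immutable, a dispatcher holds the mutable routing table, and a watcher layer pushes connectivity changes into the dispatcher instead of rebuilding VMs (regions and VM names below are invented for illustration):

```python
# Toy sketch of the layered-unikernel architecture: mutability is pushed
# up the stack so the streaming unikernels themselves never need a rebuild.
class Dispatcher:
    def __init__(self):
        self.table = {}                  # region -> streaming-unikernel VM
    def route(self, region):
        return self.table[region]

class TopologyWatcher:
    """Stands in for the unikernel that tracks BGP/connectivity changes."""
    def __init__(self, dispatcher):
        self.dispatcher = dispatcher
    def on_change(self, region, vm_addr):
        self.dispatcher.table[region] = vm_addr   # reconfigure, not rebuild

d = Dispatcher()
w = TopologyWatcher(d)
w.on_change("us-west", "vm-17")
print(d.route("us-west"))    # vm-17
w.on_change("us-west", "vm-42")   # a link went down; new cache location
print(d.route("us-west"))    # vm-42
```

Note that the configuration problem hasn't disappeared - it has just moved into the dispatcher and watcher, which is exactly Ray's complaint.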

 

  The Internet is a dynamic system that constantly changes - and any system that uses the Internet needs to adapt to constant change.  The two ways to do that with unikernels are to have the meta-on-meta layers I imagine in the previous paragraph, or to change the VM unikernels on the fly so that the user eventually gets directed to a correctly configured unikernel.  The latter means there will be performance hits - how bad those will be is TBD.

 

Ray Parks
Consilient Heuristician/IDART Old-Timer
V: 505-844-4024   M: 505-238-9359   P: 505-951-6084

 

On Aug 11, 2015, at 3:25 PM, Owen Densmore wrote:

 

I'm so outta this conversation!

 

Could one of us give a brief explanation of unikernels and the related tech 
being discussed?

 

On Tue, Aug 11, 2015 at 2:49 PM, glen ep ropella <[email protected]> wrote:


OK.  But what I'm still missing is this:  if unikernels are more domain- and/or 
task-specific, then the dev environment will branch, perhaps quite a bit.  I.e. 
one dev environment for many deployables.  We end up with not just the original 
(false?) dichotomy between config and compiled, but meta-config and, perhaps, 
meta-compiled.  And that may even have multiple layers, meta-meta.

So, while I agree pwning the devops role allows the attacker to infect the deployables, the attacks have to be sophisticated enough to survive that branching to eventually achieve the attacker's objective.  I.e. "closeness" in terms of automation doesn't necessarily mean "closeness" in terms of total cost of attack.

It just seems that the more objective-specific the deployable(s), the less 
likely a hacked devops process will give the desired result.  I'd expect a lot 
more failed integration/deployment attempts if my devops process was modified.

But this is all too abstract, of course.  The devil is in the particulars.


On 08/11/2015 01:13 PM, Parks, Raymond wrote:

   I would expect the dev environment to be closer, if not actually in the same hypervisor.  It's almost like the web-site we once attacked by using the Java compiler on the web-site's computer system to modify the code in the web-site.  Right now, with devops, the dev environment is probably not in the cloud/hypervisor.  And, for unikernels, that may also be true.  But to deploy quickly in both devops and unikernels, there has to be a direct channel from dev to cloud.

   In more traditional environments, there's a dev server in a separate space, a testing server in its own environment (sometimes shared with production but not touching production data), and a production server.  While traditional environments don't always follow the process well, in theory the flow is this: developers develop on a development network with the dev server; they roll their system into the testing server, which runs alongside the production server with a copy of the production data, processing real or test transactions; and finally they install the tested version on the production server.  From my standpoint, that means I can attack either the production server directly or the development server on a separate network.  There has to be connectivity between them, but it's likely to be filtered, if only to prevent the development server from affecting the production environment.

   In devops and in future unikernel systems, the pace of change is, of 
necessity, much faster and the three roles are collapsed into one VM.  The VM 
image is modified in place, given a new name so that rollback is possible, and 
the management software is told to use the new image instead of the old.
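The modify-in-place-with-rollback scheme can be sketched as a toy: each change produces a newly named image, "deploy" just repoints the manager at it, and rollback repoints it back (image names and contents below are invented for illustration):

```python
# Toy sketch of deploy-with-rollback: the management software tracks which
# named image is live; deploying adds a new name, rollback forgets it.
class ImageManager:
    def __init__(self):
        self.images = {}        # name -> image contents
        self.history = []       # deployment order, newest last
    def deploy(self, name, contents):
        self.images[name] = contents
        self.history.append(name)
    def current(self):
        return self.history[-1]
    def rollback(self):
        self.history.pop()      # forget the newest; previous becomes live
        return self.current()

m = ImageManager()
m.deploy("web-v1", "app + config A")
m.deploy("web-v2", "app + config B")
print(m.current())    # web-v2
print(m.rollback())   # web-v1
```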

   One of the principles of cyberwarfare (as formulated in our paper of that 
name) is that some entity, somewhere, has the privileges to do whatever the 
attacker wants to do and the attacker's goal is to become that entity.  In the 
case of devops and unikernel, that entity is the developer(s) who make(s) the 
changes to the VM.  In traditional environments, the attacker might need to 
assume the privileges of several entities.
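The difference in attacker cost can be put in toy form: roughly, one compromise per pipeline stage whose privileges gate the next, so collapsing the stages collapses the cost (roles and counts below are invented for illustration, not a real metric):

```python
# Toy sketch of the principle above: the attacker's cost is modeled as the
# number of distinct entities whose privileges must be assumed to reach
# the production image.
TRADITIONAL = [{"developer"}, {"tester"}, {"prod-admin"}]   # separate roles
DEVOPS      = [{"developer", "tester", "prod-admin"}]       # collapsed role

def entities_to_compromise(pipeline):
    """One compromise per pipeline stage, since each stage gates the next."""
    return len(pipeline)

print(entities_to_compromise(TRADITIONAL))  # 3
print(entities_to_compromise(DEVOPS))       # 1
```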

 

-- 
glen ep ropella -- 971-255-2847

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

 
