At the dawn of serious thinking about programming there was the notion of
information hiding. The received wisdom was that all you were supposed to know
about a programming interface was what each entry point did. In particular,
you were to be explicitly precluded from knowing how each entry did whatever it
did. A tenet of good API design held that this information was to be hidden
from the user of the interface; thus, information hiding was regarded as a Good
Thing. The primary justification for this API design rule was that the provider
of the API could change the how without disturbing the what, and hence without
rippling changes through the code that uses the API.
In the event, very few hows were ever actually changed, and information that
users needed but could not find in the documentation of the entry points leaked
out anyway. Consider, for example, a square root entry point. The documentation
says you give it a positive number and it gives you back a number that, when
multiplied by itself, is almost the number you gave it. Now, suppose I want to
add up a bunch of square roots. I care whether the approximation is equally
likely to be too big as too small, so that my sum stays close to the actual sum
of the square roots. If the approximation is always too low or always too high,
my sum drifts away from the actual sum. The information I need is hidden from
me by good API design, so I conduct experiments against the entry point. Once I
figure out a property of the how, I want to use what I’ve discovered, but now I
don’t want the designer to change the how.
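To make the drift concrete, here is a toy sketch in Python. The error size EPS
and the whole setup are made up for illustration, not taken from any real
square-root implementation: an approximation that is always a hair too low
pulls the running sum off by roughly N*EPS, while an error that is equally
likely to be high or low mostly cancels.

    import math
    import random

    N = 1_000_000
    EPS = 1e-6  # pretend the entry point is only accurate to about this much

    true_sum = biased_sum = unbiased_sum = 0.0
    for i in range(1, N + 1):
        r = math.sqrt(i)
        true_sum += r
        biased_sum += r - EPS                                 # always a hair too low
        unbiased_sum += r + EPS * random.choice((-1.0, 1.0))  # equally likely high or low

    print("drift, always too low :", true_sum - biased_sum)    # about N * EPS = 1.0
    print("drift, symmetric error:", true_sum - unbiased_sum)  # about sqrt(N) * EPS, roughly a thousand times smaller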
You can see how this leads to all sorts of excess code in the interest of
correctness. I don’t know what I need to know about what’s down there, so I
have to constantly check to make sure it’s behaving the way I need it to
behave. Or I just do it myself and don’t use the API. I’d be willing to bet
that at least ... pick a number ... 1/3 of the code in large systems (think an
ObamaCare connector) is unnecessary. This bloat can be at least partially
attributed to information hiding.
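As a toy illustration of that kind of defensive bloat (the wrapper, its name,
and its tolerance are all hypothetical, just to make the point concrete):
because the documentation never promises how the routine behaves, the caller
re-checks the contract on every call.

    import math

    def careful_sqrt(x, library_sqrt=math.sqrt, tol=1e-12):
        # Wrap an opaque sqrt entry point with checks the documentation never
        # promised, because the how is hidden and might change underneath me.
        if x < 0:
            raise ValueError("negative input")
        r = library_sqrt(x)
        if abs(r * r - x) > tol * max(x, 1.0):
            raise ArithmeticError("entry point no longer behaves the way I rely on")
        return r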
The point of all of this is to agree with Dan’s “yes” intuition. Set aside
crypto. Set aside buffer overruns. Set aside all the other little programming
gotchas we worry about for the moment. How do we put code together? My
programming experience (which started in the mid-1950s) tells me that the
composition of N>1 known-secure components is more likely than not to be less
secure than the least secure of the lot, because they are all running,
wittingly or not, on unjustified assumptions about what the others are doing.
I don’t have a silver bullet. (The Lone Ranger stole it.) But I do believe
that insufficient LANGSEC attention is given to the impact on end-to-end
security of the way code is composed.
There is a reason that Knuth wrote V1.0 of TeX as one monolithic program. It
wasn’t for security. It was for understanding.
IMHO, as always.
Cheers, Scott
-----Original Message-----
From: d...@geer.org
Sent: Friday, January 08, 2016 9:19 PM
To: langsec-discuss@mail.langsec.org
Subject: [langsec-discuss] composability
So far as I know, security is not composable, which is to say
that there is no reason to expect that the connection of N>1
known-secure components is itself secure in the aggregate.
But as an honest question, could or would the broad deployment
of LANGSEC diligence help with that problem of composability?
My intuition is "yes, it could or would help" but it is only
intuition, not a deduction.
Were it possible to persuasively show that diligent LANGSEC
work would help with composability, then the demand for that
diligence might grow quite strong.
Thinking out loud,
--dan
_______________________________________________
langsec-discuss mailing list
langsec-discuss@mail.langsec.org
https://mail.langsec.org/cgi-bin/mailman/listinfo/langsec-discuss