For a long time I thought that software product liability would
eventually be forced onto developers in response to their long-term
failure to take responsibility for their shoddy code. I was mistaken.
The pool of producers (i.e., the software industry) is probably too
small for such blunt economic policy to work.
Keep in mind that economics does have a tendency to balance out risk and
reward, and to fairly allocate liability. But it takes time. We're
only about 50 years into the life of the software industry, and we're
just starting to see regulatory notice that computers even exist.
It appears, now, that producers will not be regulated, but rather users
and consumers. SOX, HIPAA, BASEL II, etc. are all about regulating
already well-established business practices that just happen to be
incorporating more software into their operations.
But as with other serious security policy formulations - the
technology is irrelevant. The policies, whether SOX or Multi-level
Security, are intended to protect information of vital importance to the
organization. If technical controls are adequate to enforce them -
fine. If not, that in no way absolves the enterprise of the need to
provide adequate controls.
The computer software industry has lost its way. It appears to be
satisfied with prodding and encouraging software developers to develop
some modicum of shame for the shoddy quality of their output. Feed the
beast, and support rampant featurism - it's what's made so many people
rich, after all.
In the long run, though, featurism without quality is not sustainable.
That is certainly true, and I applaud efforts to encourage developers to
rise up from their primordial ooze and embrace the next steps in sane
programming (we HAVE largely stamped out self-modifying code, but
strcpy() is still a problem...).
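To illustrate the strcpy() point: the function copies until the source's NUL terminator with no bounds check, so a long source silently writes past the end of the destination - the classic buffer overflow. A minimal sketch of one common safer pattern, using snprintf() to bound the copy (the function name and buffer sizes here are mine, for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Copy src into dst, never writing more than dst_size bytes.
   snprintf() truncates to the destination size and always
   NUL-terminates, unlike strcpy(), which trusts the caller. */
void copy_bounded(char *dst, size_t dst_size, const char *src) {
    /* strcpy(dst, src);  -- unsafe: no bounds check at all */
    snprintf(dst, dst_size, "%s", src);
}
```

With an 8-byte destination and an over-long source, the result is a truncated, NUL-terminated string rather than corrupted memory - a defect reduced, though, as the next paragraph argues, not yet security.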
But that's not security. It's just reducing irresponsible defects.
For computer security to have any meaning, someone, somewhere, has to
say what is supposed to happen and what is not supposed to happen with
regard to access to information and resources of the system. In other
words, there has to be a security policy.
If there's no way to articulate how the security policy can be enforced
by a system, i.e., no security model, then there's no real way to even
have a discussion about whether a system, much less individual
components of the system, contribute to or get in the way of enforcing
the security policy.
What's most disappointing to me is the near-total lack of discussion
about security policies and models in the whole computer security field.
We're at about the 19th-century level of sophistication in the practice
of medicine - we have a germ theory (bugs make you sick), but we're
still trying to get the doctors and nurses to wash their hands between
surgeries ("Doctor! It HURTS when I do that!" "Then stop DOING that!").
Better languages, better language skills, and better transparency
(disclosure) are all areas of important improvement.
The question I raise is this - will we return to a serious discussion
about whether and how computers can be used to secure the vital
information of enterprises before our industry reaches its first century
(say, by 2055)? Should a computer be expected to apply controls you
can bet your life on, or not? If so, when will that
discussion get started again? It seems like the analytical approaches
that brought us Bell-LaPadula and similar models are considered
off-topic, today, but I haven't seen anything replace them as the basis
for a rational computer security discussion.
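The appeal of models like Bell-LaPadula is that their rules can be stated precisely enough to check mechanically. A minimal sketch of its two mandatory access rules, assuming plain integer clearance levels (a deliberate simplification - real Bell-LaPadula labels also carry compartment sets, and the names here are mine):

```c
#include <assert.h>
#include <stdbool.h>

/* A security label reduced to a single sensitivity level;
   higher means more sensitive. */
typedef struct { int level; } label;

/* Simple security property ("no read up"): a subject may read an
   object only if the subject's clearance dominates the object's
   classification. */
bool may_read(label subject, label object) {
    return subject.level >= object.level;
}

/* *-property ("no write down"): a subject may write an object only
   if the object's classification dominates the subject's clearance,
   so high-level information cannot leak into low-level objects. */
bool may_write(label subject, label object) {
    return subject.level <= object.level;
}
```

The point is not these twenty lines of C, but that once a model exists, one can ask of any component whether it upholds or undermines these properties - exactly the discussion the post says has gone missing.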
If engineering is the practice of applying the logic and proofs provided
by science to real world situations, software engineering and computer
science seem simply to have closed their eyes to the question of system
security and internal controls.
Perhaps economics will reinvigorate the discussion in the coming decades.
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.