On Nov 30, 2007 8:15 AM, Kenneth Van Wyk <[EMAIL PROTECTED]> wrote:
> But the real problem with it, as I said, is metrics.  Should it be
> based on (say) defect density per thousand lines of code as reported
> by (say) 3 independent static code analyzers?  What about design
> weaknesses that go blissfully unnoticed by code scanners?  (At least
> the field experience concept could begin to address these over time,
> perhaps.)

You, sir, are on the right track!  Secure design inspection and
[secure] code review results can give these metrics.  I think that
software whose design inspection comes back largely CAPEC-free and
whose code review comes back largely CWE-free should be given a higher
software security assurance rating than software whose inspections and
reviews turn up many CAPEC/CWE issues.
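
As a rough illustration of what I mean by "high-percent CWE-free",
here is a minimal Python sketch.  The finding counts, check counts,
and the flat percentage are my own invention, not anything defined by
the CWE/CAPEC programs:

    # weakness_density.py - hypothetical "percent weakness-free" figures
    def percent_free(findings, checks):
        """Share of inspected checks that turned up no weakness."""
        return 100.0 * (1.0 - float(findings) / max(checks, 1))

    design_score = percent_free(findings=3, checks=120)  # CAPEC issues in design
    code_score   = percent_free(findings=7, checks=400)  # CWE issues in code
    print("%.1f%% CAPEC-free, %.1f%% CWE-free" % (design_score, code_score))

A real rating would obviously have to weight findings by severity;
this just shows the kind of number a rating could be anchored to.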

Extending the criteria above would involve designs/code that are
formally specified or verified (using formal methods/logic).  Think
Orange Book (TCSEC, ITSEC, Common Criteria, et al) and you'll know
what I mean by this.  The interesting thing about Orange Book is that
even the most assured/functional division, A1, only requires that the
formal top-level specification (i.e. TCB design aka security policy
model aka access-control matrix) be formally verified.  Criteria that
go beyond A1 would formally specify and/or verify the actual source
code.
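
As a concrete (if simplified) example of what such a formal top-level
specification looks like: the A1-era security policy models were, as I
recall, typically in the Bell-LaPadula style, which boils down to two
properties over subjects s, objects o, and sensitivity levels L (in
LaTeX notation):

    \forall s,o:\; \mathrm{read}(s,o)  \Rightarrow L(s) \ge L(o) \quad \text{(simple security: no read up)}
    \forall s,o:\; \mathrm{write}(s,o) \Rightarrow L(o) \ge L(s) \quad \text{(*-property: no write down)}

Formally showing that every TCB state transition preserves properties
like these is roughly what the A1 evaluation demanded of the
specification.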

The NHTSA NCAP five-star rating system used in automobile safety (see
Chapter 2 of Geekonomics) uses crash-test dummies in a similar way.

When you say three independent scanners, I really like what you are
getting at.  I often see this as more of a three-step process:
first start with secure design inspection, move to binary/bytecode
analysis, and then move to manual code review augmented by static
source code analysis.  Each step of the process can feed information
(e.g. generated test cases) into the next step to improve the reviews.
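
To make the hand-off concrete, here is a tiny Python sketch of that
three-step flow.  The function names and the finding records are
invented placeholders; the only point is that each step consumes the
previous step's output as test-case seeds:

    # pipeline.py - hypothetical three-step review pipeline
    def design_inspection(design_doc):
        """Step 1: suspected attack patterns become test-case seeds."""
        return [{"area": "auth", "capec": "CAPEC-112"}]        # placeholder

    def binary_analysis(binary, seeds):
        """Step 2: binary/bytecode analysis, focused by design findings."""
        return seeds + [{"area": "parser", "cwe": "CWE-120"}]  # placeholder

    def code_review(source_tree, seeds):
        """Step 3: manual review plus static analysis, seeded by steps 1-2."""
        return [dict(s, reviewed=True) for s in seeds]

    findings = code_review("src/",
                   binary_analysis("app.jar",
                       design_inspection("design.doc")))
    print(findings)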

I always thought that a software security assurance five-star rating
system would work in a similar way to how the Food Safety Network
works - by using samples to identify "diseased/vulnerable" product.
This would mean taking the most core, critical components in any
application (the ones that require the most security), then sampling
that code (choosing the pieces that "smell the worst / look the most
diseased") by combining code coverage with cyclomatic complexity.
There is a project called Crap4J that combines these two metrics.
Code that has already been covered by testing (and that has shown the
highest "cluster" of bugs) can be included.  Untested areas of code
can also be included.  Cyclomatic complexity metrics can slightly
augment the process, although from my perspective, less complex code
can contain vulnerabilities just the same.
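
For reference, the Crap4J score (as I understand it) for a method m
with cyclomatic complexity comp(m) and test coverage cov(m), expressed
as a percentage, is:

    CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

And a short Python sketch of using it to pick sampling candidates; the
method names, numbers, and the "take the worst first" ordering are my
own invention:

    # crap_sample.py - rank methods by a Crap4J-style score
    def crap(complexity, coverage_pct):
        """CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)"""
        return complexity ** 2 * (1 - coverage_pct / 100.0) ** 3 + complexity

    methods = [                      # (name, cyclomatic complexity, coverage %)
        ("PaymentParser.parse",  17,  10),
        ("SessionManager.login",  9,  85),
        ("ReportWriter.render",  25,   0),
    ]

    for name, comp, cov in sorted(methods, key=lambda m: -crap(m[1], m[2])):
        print("%-22s CRAP = %.1f" % (name, crap(comp, cov)))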

Let's say that Cigital (on track to be CWE-Compatible/Effective)
performs a secure design inspection (note: I wonder if there is going
to be a CAPEC-Compatible program?) and hands their report to Veracode
(currently CWE-Compatible).  Veracode can then perform binary/bytecode
analysis and hand their report back to Cigital for the secure code
review.  Using a mix of CWE-Compatible tools, Cigital inspectors can
then build the last of three reports.  All three reports could be used
to score an application using a standardized five-star rating system.
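
A minimal sketch of that last scoring step, with the weighting and the
star cut-offs being purely my own invention (a real scheme would have
to weight by severity and by phase):

    # combine_reports.py - roll three review reports into one star rating
    def star_rating(reports):
        """Each report is a dict like {"checks": 200, "findings": 4}."""
        checks   = sum(r["checks"] for r in reports)
        findings = sum(r["findings"] for r in reports)
        clean = 1.0 - float(findings) / max(checks, 1)
        for stars, floor in ((5, 0.99), (4, 0.97), (3, 0.93), (2, 0.85)):
            if clean >= floor:
                return stars
        return 1

    design_report = {"checks": 150, "findings": 2}  # secure design inspection
    binary_report = {"checks": 400, "findings": 9}  # binary/bytecode analysis
    review_report = {"checks": 600, "findings": 5}  # manual + static code review
    print(star_rating([design_report, binary_report, review_report]))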

While applications should be tested at every revision, some
applications may only be tested on some sort of scheduled basis.
Companies that refuse to provide timely designs and code samples should
be penalized with heavy fines.  I'm not sure who is going to enforce
this, but I see it as a government function.  However, this isn't a
regulation - it's simply assurance level reporting, made available to
the public (think equivalents to safercar.gov, Consumer Reports, and
automobile price stickers).  Imagine flipping through pages of a tech
magazine and seeing secure software assurance five-star ratings listed
at the top section of product reviews, right next to or below the name
of the product.

Of course, open-source software has readily available code, but there
should be penalties for OSS developers who do not provide timely
software designs.  Clearly, these cannot be directly monetary - but
instead the rating board can fine software vendors who use the
open-source software as a third-party component in their applications
(or shipped along with their products).  This will likely provide the
necessary pushback on open-source developers to provide software
designs.

Note that the five-star rating system I'm suggesting is only half of a
secure software initiative - the assurance part.  According to the
Orange Book, there have to be functional measures that address the
inherent problems with trusting the TCB: object reuse and covert
channels.  The Orange Book specifies that the TCB requires a security
kernel, security perimeter, and trusted paths to/from the users and
the TCB (input validation is referred to as "secure attention keys").

Not only does the access control matrix have to be well designed and
implemented, but there also have to be safeguards (today we sometimes
call these exploitation countermeasures) to protect against problems
with object reuse or covert channels.

Access control systems can be programmatic, discretionary, mandatory,
role-based, or declarative (in order of sophistication and assurance).
Transitioning between states is possibly the hardest thing to get
right in terms of application security - fortunately it is something
usually taken seriously by developers (i.e. guarding against vertical
and horizontal privilege escalation), but strangely less so by
security professionals (who often see semantic bugs like buffer
overflows, parser bugs, and input/output issues as the most
interesting).
This is probably why the TCB is the only part in the Orange Book
criteria that requires formal specification and/or verification to get
to the highest division (A1).
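
Since state transitions are the part I claim is hardest to get right,
here is a minimal Python sketch of handling them explicitly.  The
roles, the transition table, and the "via" mechanism are invented for
illustration; the point is that any transition not explicitly listed
fails closed:

    # transitions.py - whitelist legal role/state transitions
    ALLOWED = {
        ("anonymous", "user"):  "login",
        ("user", "admin"):      "re_authenticate",
        ("admin", "user"):      "drop_privileges",
    }

    def transition(session, new_role, via):
        """Refuse any role change not explicitly whitelisted."""
        if ALLOWED.get((session["role"], new_role)) != via:
            raise PermissionError("illegal transition %s -> %s via %s"
                                  % (session["role"], new_role, via))
        session["role"] = new_role

    s = {"user_id": 42, "role": "anonymous"}
    transition(s, "user", "login")        # legal
    # transition(s, "admin", "login")     # vertical escalation: raises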

Object reuse and covert channels provide the landscape for security
weaknesses to happen.  Object reuse means that because memory is
shared, things like buffer overflows (stack or heap), integer
vulnerabilities, and format string vulnerabilities are possible.  When
a disk or memory is shared, it can be written to or read by anyone
with virtual access to it (depending on the nature of the filesystem
and how the TCB grants access to it).  Covert channels provide access
to IPC (inter-process communication), both inside and outside of the
system via TCB access rights.  Additional problems of storage and
timing channels also fall into this category.
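
As a small, concrete example of an object-reuse safeguard (the buffer
pool below is invented for illustration): scrub shared storage before
it is handed to the next holder, so no residue leaks across the reuse
boundary:

    # object_reuse.py - clear a shared buffer before it is reused
    class BufferPool:
        def __init__(self, size, count):
            self._free = [bytearray(size) for _ in range(count)]

        def acquire(self):
            buf = self._free.pop()
            for i in range(len(buf)):   # object-reuse safeguard: zeroize
                buf[i] = 0
            return buf

        def release(self, buf):
            self._free.append(buf)

    pool = BufferPool(size=64, count=4)
    b = pool.acquire()
    b[:6] = b"secret"
    pool.release(b)
    assert pool.acquire()[:6] == bytearray(6)   # residue was scrubbed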

Of course, the proper way to prevent misuse of objects or channels is
to assure the TCB design and the source code.  However, by using
security perimeters and trusted paths (along with a reference monitor,
or security kernel), we can provide the "functionality" necessary to
further protect these needed elements.  The same is true of
automobiles, in the form of seat-belts and airbags, as well as optional
components such as harnesses, roll cages, helmets, driving gloves,
sunglasses, etc.
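
To make the reference-monitor idea concrete, here is a minimal Python
sketch of mediating every access against an access-control matrix.
The subjects, objects, and rights are invented, and open_object() is a
hypothetical stand-in for whatever actually services the request:

    # reference_monitor.py - default-deny mediation against an ACM
    ACM = {
        ("alice", "payroll.db"): {"read"},
        ("bob",   "payroll.db"): {"read", "write"},
    }

    def open_object(obj, right):
        return "handle:%s:%s" % (obj, right)    # hypothetical back-end

    def access(subject, obj, right):
        """Grant only rights explicitly present in the matrix."""
        if right not in ACM.get((subject, obj), set()):
            raise PermissionError("%s may not %s %s" % (subject, right, obj))
        return open_object(obj, right)

    print(access("alice", "payroll.db", "read"))   # granted
    # access("alice", "payroll.db", "write")       # default deny: raises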

Modern-day exploitation countermeasures and vulnerability management
are these security functions: software update is your seat-belt, ASLR
your airbag, and optional components include GPG, SSL, anti-virus,
patch management, firewalls, host or network-based IPS, NAC, DLP,
honeypots, etc.  I like to compare the optional components to the
highway system and traffic lights when provided externally (e.g. from
your IT department or ISP).

It may appear that I'm suggesting an end to penetration testing (which
probably isn't popular on this mailing-list anyway), but that's not
entirely true.  Automobiles still require the occasional
brake/wiper/tire replacement, tune-up, oil change, and even weekly
checks such as tire pressure.  Security testing in the operations or
maintenance phase will probably remain popular for quite some time
(especially the free security testing from the vulnerability-obsessed
security community).

> I do think that software developers who produce bad (security) code
> should be penalized, but at least for now, I still think the best way
> of doing this is market pressure.  I don't think we're ready for more,
> on the whole, FWIW.  But _consumers_ wield more power than they
> probably realize in most cases.

Developers shouldn't be able to successfully build code that has
inherent security weaknesses.  This is best enforced by a build server
that errors on CWEs.  However, it is better accomplished by providing
developers with a vuln-IDE.  If a developer has CWE-Compatible tools
to guide his/her day-to-day programming efforts, this may prove to
reduce CWEs more effectively than punishing developers who either a)
don't know any better, or b) are being pushed to skip over
warnings/errors because of deadlines.
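
A sketch of what that build-server enforcement could look like.  The
findings-file format, the JSON layout, and the set of build-breaking
CWEs are all my own assumptions about how a CWE-Compatible scanner
might export its results:

    # cwe_gate.py - fail the build when findings map to blocked CWEs
    import json, sys

    BLOCKING = {"CWE-89", "CWE-79", "CWE-120"}   # illustrative policy

    def gate(findings_path):
        with open(findings_path) as f:
            findings = json.load(f)  # e.g. [{"cwe": "CWE-89", "file": "a.c"}]
        blockers = [x for x in findings if x.get("cwe") in BLOCKING]
        for x in blockers:
            print("ERROR %s in %s" % (x["cwe"], x.get("file", "?")))
        return 1 if blockers else 0

    if __name__ == "__main__":
        sys.exit(gate(sys.argv[1]))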

If a developer can work at the same pace and have their software built
without CWE-enforced errors, then they should be rewarded, not
punished.  Developers who slip behind and continually check in
security weaknesses should be trained.  Unfortunately, we can't just
fire or punish developers for failure, because this would stifle
innovation and production.

Cheers,
Andre