I'd like to follow up on Brian Chess' comments...

Brian Chess (brian at fortifysoftware dot com) said:
False positives:
Nobody likes dealing with a pile of false positives, and we work hard to
reduce false positives without giving up potentially exploitable
vulnerabilities.

I think everyone agrees that there are "way too many false positives"
in the sense that "there are so many it's annoying and it costs money
to check them out" in most of today's tools.

But before you say "tools are useless" you have to ask, "compared to what?"
Manual review can find all sorts of things, but manual review is likely
to miss many serious problems too, ESPECIALLY if there are only a
few manual reviewers for a large codebase (an all-too-common situation).

Today's tools have a very large set of problems.  But if you look
at the trendlines of the amount of software that people are using,
you'll notice that it is increasing exponentially.   That is unsustainable
for purely manual review approaches, at least as the ONLY approach.
We can either drastically cut the amount of software
(easing review) or use tools -- those are really our only choices.
Reducing the amount of software that needs review is MUCH better
security-wise; if you can do that, DO THAT.  But I think that's
unlikely to occur (or be enough) in many circumstances,
so we need an alternative to crossing our fingers.

I think a sense of perspective is important.  Yes, tools aren't perfect,
but are they better than your alternatives?  Also, tools will
get better over time: I expect they will be refined as they are
used in the field (or lose out to better tools).

In some sense, this is where security tools get the raw end of the deal.  If
you're performing static analysis in order to find general quality problems,
you can get away with dropping a potential issue on the floor as soon as you
get a hint that your analysis might be off.  You can't do that if you are
really focused on security....

To compensate, many tools use "risk levels" to try to give an
approximate sense of what to look at first.  But the problem is still
the same: tools often cannot be CERTAIN that a construct is a vulnerability,
yet if you throw it away, you might have thrown away reporting on
the most important vulnerability.
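
To make the "risk level" idea concrete, here's a tiny sketch of sorting
reports so the riskiest get looked at first.  It's purely illustrative
(mine, not any particular tool's format); the struct, the 1-5 scale, and
the sample findings are all made up:

  #include <stdio.h>
  #include <stdlib.h>

  /* Illustrative only: a tool-reported finding with an approximate risk level. */
  struct finding {
      int risk;               /* hypothetical scale: 1 = low ... 5 = high */
      const char *file;
      int line;
      const char *message;
  };

  /* Sort descending by risk so the riskiest reports come first. */
  static int by_risk_desc(const void *a, const void *b)
  {
      const struct finding *fa = a, *fb = b;
      return fb->risk - fa->risk;
  }

  int main(void)
  {
      struct finding findings[] = {
          { 2, "util.c", 120, "format string may be user-controlled" },
          { 5, "net.c",   88, "strcpy into fixed-size buffer" },
          { 1, "log.c",   42, "unchecked return value" },
      };
      size_t n = sizeof(findings) / sizeof(findings[0]);

      qsort(findings, n, sizeof(findings[0]), by_risk_desc);

      for (size_t i = 0; i < n; i++)
          printf("[risk %d] %s:%d: %s\n", findings[i].risk,
                 findings[i].file, findings[i].line, findings[i].message);
      return 0;
  }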

Compounding the problem is that, when the static analysis tool does point
you at an exploitable vulnerability, it's often not a very memorable
occasion.  It's just a little goof-up in the code...


Yes. I'll add that often people aren't even certain it IS a
security vulnerability; the analysis to determine whether something is
really a vulnerability may take longer than simply "cleaning up" the code.

Although it's old, the paper on ITS4 is still interesting
(it won the "best paper" award at the time):
 http://www.acsac.org/2000/papers/78.pdf
ITS4 is about as simple/naive a tool as it's possible to usefully implement
(the same is true for RATS and flawfinder, which use the same lexical
technique; a rough sketch of the idea appears after the list below).
But I think the following statements about tools are still true, even
for the more sophisticated tools:
* it still takes time to do analysis (though tools reduce it)
* tools still require expertise to use (particularly in understanding
  the answers and determining if it indicates a real problem)
* tools CAN be helpful in finding real security vulnerabilities.
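
To show just how simple that lexical technique is, here's a rough sketch of
the idea.  This is my own illustration, NOT the actual ITS4/RATS/flawfinder
code, and the list of risky calls is drastically abbreviated:

  #include <stdio.h>
  #include <string.h>

  /* A tiny, abbreviated "ruleset": calls that often indicate buffer trouble.
     Real tools have much larger databases and attach risk levels and
     suggested fixes to each entry. */
  static const char *risky[] = { "strcpy(", "strcat(", "sprintf(", "gets(" };

  int main(int argc, char *argv[])
  {
      if (argc < 2) {
          fprintf(stderr, "usage: %s file.c\n", argv[0]);
          return 1;
      }
      FILE *f = fopen(argv[1], "r");
      if (!f) { perror(argv[1]); return 1; }

      char line[4096];
      int lineno = 0;
      while (fgets(line, sizeof line, f)) {
          lineno++;
          for (size_t i = 0; i < sizeof risky / sizeof risky[0]; i++)
              if (strstr(line, risky[i]))
                  printf("%s:%d: potentially risky call: %s...)\n",
                         argv[1], lineno, risky[i]);
      }
      fclose(f);
      return 0;
  }

Note that even this toy version will happily flag a strcpy( that appears in
a comment or a string literal -- which is exactly where many of those false
positives come from.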

IIRC, ITS4 once found a vulnerability, the researchers said
"that can't happen," and they later discovered it COULD happen.
I don't remember where I saw that.  The OpenBSD folks have this right,
I think: it is often better to change code to be CERTAIN that it
doesn't have a vulnerability, instead of wasting lengthy efforts
to determine if there's a code path that can be exploited.
It's easy to miss a surprising code path, and even if it's impossible
today, a "trivial" maintenance change might open it up.

So EVEN IF a tool reports a false positive, sometimes it's because
there's so much excess cleverness that a later maintenance change
would turn it into a serious problem.  In those cases, it's better
to fix it now, so that the software is more resilient to maintenance changes.
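
As an illustration (my example, not one from the ITS4 paper or from OpenBSD):
rather than spending a long time proving that a flagged strcpy() can never
overflow because of length checks elsewhere, it's usually cheaper and more
future-proof to just rewrite it as an explicitly bounded copy:

  #include <stdio.h>
  #include <string.h>

  #define NAME_MAX_LEN 64

  /* Before: "safe" only because every current caller happens to check the
     length first.  A tool will flag it, and a later maintenance change that
     adds an unchecked caller turns it into a real buffer overflow. */
  void set_name_unsafe(char dest[NAME_MAX_LEN], const char *src)
  {
      strcpy(dest, src);
  }

  /* After: safe no matter what callers do; the tool report goes away and no
     exploitability analysis is needed.  OpenBSD would typically use
     strlcpy() here; snprintf() is the portable way to get the same
     bounded, always-NUL-terminated behavior. */
  void set_name_safe(char dest[NAME_MAX_LEN], const char *src)
  {
      snprintf(dest, NAME_MAX_LEN, "%s", src);
  }

  int main(void)
  {
      char name[NAME_MAX_LEN];
      set_name_safe(name, "some input that could be arbitrarily long...");
      printf("%s\n", name);
      return 0;
  }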

Also, a student and I did some measurements and found an unsurprising
trend: software with a high density of tool-reported potential
vulnerabilities was more likely to HAVE real vulnerabilities.
We looked at several
source-available programs, and ran several vulnerability analysis
tools on them. The trend was fairly obvious in our limited sample, and
it also wasn't very sensitive to WHICH tool was used.
We expected this, actually.  After all, if a programmer
tends to play with fire, sooner or later they
will get burned.   So if you're getting THAT many false positives,
that may indicate that you should change your approach to be "safer".
I also speculate that this would be true of modules INSIDE a software
system: if one module has an unusually high density of vulnerability
reports, even if they're all false positives, I would start looking
at that module more closely.
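
Here's the kind of per-module density comparison I have in mind, as a small
sketch.  The module names, hit counts, sizes, and the "twice the average"
threshold are purely illustrative, not numbers from our measurements:

  #include <stdio.h>

  /* Purely illustrative data: tool reports ("hits") and size per module. */
  struct module { const char *name; int hits; int kloc; };

  int main(void)
  {
      struct module mods[] = {
          { "net",   42,  9 },
          { "parse", 12, 14 },
          { "ui",     3, 20 },
      };
      size_t n = sizeof mods / sizeof mods[0];
      double total_hits = 0, total_kloc = 0;

      for (size_t i = 0; i < n; i++) {
          total_hits += mods[i].hits;
          total_kloc += mods[i].kloc;
      }
      double avg = total_hits / total_kloc;   /* overall hits per KLOC */

      for (size_t i = 0; i < n; i++) {
          double density = (double)mods[i].hits / mods[i].kloc;
          printf("%-6s %5.1f hits/KLOC%s\n", mods[i].name, density,
                 density > 2 * avg ? "  <-- unusually dense; review closely" : "");
      }
      return 0;
  }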

--- David A. Wheeler


