Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-29 Thread Florian Weimer
* Kenneth Van Wyk:

> 1) the original author of the defect thought that s/he was doing
> things correctly in using strncpy (vs. strcpy).
>
> 2) the original author had apparently been doing static source
> analysis using David Wheeler's Flawfinder tool, as we can tell from
> the comments.

This is not a first, BTW.  The Real folks have always been a bit
overzealous when adding those "Flawfinder: ignore" annotations:

http://archive.cert.uni-stuttgart.de/vulnwatch/2005/03/msg0.html
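
For anyone who hasn't seen the pattern, here is a minimal illustration
(mine, not the code from the advisory) of how strncpy can look safe
while still leaving the destination unterminated:

    #include <stdio.h>
    #include <string.h>

    /* Minimal illustration -- NOT the code from the advisory.
     * strncpy() does not NUL-terminate the destination when the
     * source is sizeof(dst) bytes or longer, so later string
     * operations read past the end of the buffer. */
    void copy_field(const char *src)
    {
        char dst[8];

        strncpy(dst, src, sizeof(dst));   /* Flawfinder: ignore */
        printf("%s\n", dst);              /* may read past dst[7] */
    }

    /* A common correct variant: leave room for, and write, the NUL. */
    void copy_field_fixed(const char *src)
    {
        char dst[8];

        strncpy(dst, src, sizeof(dst) - 1);
        dst[sizeof(dst) - 1] = '\0';
        printf("%s\n", dst);
    }

    int main(void)
    {
        copy_field_fixed("2007-06-26");   /* prints "2007-06" */
        return 0;
    }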


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread David A. Wheeler
In this discussion:
> | This is a perfect example of how a source code analysis tool failed,
> | because you let a developer tell it to NOT scan it. :) I wonder if
> | there are flags like that in Fortify?
>
> There are flags like that in *every* source code scanner I know of.  The
> state of the art is just not at a point where you don't need a way to
> turn off warnings for false positives.

That's exactly right, unfortunately.  To compensate for the problem of 
people inserting bad ignore directives, many scanning tools _also_ 
include an "ignore the ignores" option.  For example, flawfinder has a 
--neverignore (-n) flag that makes it report hits even where an ignore 
directive is present.  I believe that such an option is critically 
necessary for any tool that has ignore directives, to address this 
very problem.
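
To make the mechanics concrete, here's what the directive looks like
in C source (an illustrative fragment, not from any particular
project):

    #include <string.h>

    void example(char *dst, size_t dstlen, const char *src)
    {
        /* "flawfinder file.c" stays quiet about this line because of
         * the trailing annotation; "flawfinder --neverignore file.c"
         * reports the strncpy hit despite it. */
        strncpy(dst, src, dstlen);  /* Flawfinder: ignore */
    }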

If you couldn't insert ignore directives, many people wouldn't use 
such tools at all, and would release code with vulnerabilities that 
WOULD be found by such tools.

--- David A. Wheeler




Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread J. M. Seitz
 
Hey there,
 
> If you couldn't insert ignore directives, many people
> wouldn't use such tools at all, and would release code with
> vulnerabilities that WOULD be found by such tools.

Of course, much like an IDS, you have to find the baseline and adjust your
ruleset according to the norm; if it is constantly firing on someone
accessing /index.html of your website, then it's working against you.

I am not disagreeing that static source analysis is a good thing; I am
just saying that this is a case where it failed (or maybe the
user/developer of it failed or misunderstood its use). Fair enough that on
this particular list you are going to defend source analysis over any other
method (it is about secure coding, after all), but I definitely still
strongly disagree that other methods wouldn't have found this bug.

Shall we take a look at the customer lists of the big source-analyzer
companies, and then cross-map that to the number of vulnerabilities
released? Why are we still finding bugs in software developed under the
SDL? Why are we still finding bugs in software that has been analyzed
before the compiler has run? Why are companies like Fortify charging an
arm and a leg for such a technology when the bughunters are still beating
the snot out of this stuff? You guys all have much more experience on
that end, so I am looking forward to your responses!

Cheers! 

JS



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread Leichter, Jerry
On Thu, 28 Jun 2007, J. M. Seitz wrote:
| Hey there,
|  
| > If you couldn't insert ignore directives, many people
| > wouldn't use such tools at all, and would release code with
| > vulnerabilities that WOULD be found by such tools.
| 
| Of course, much like an IDS, you have to find the baseline and adjust
| your ruleset according to the norm; if it is constantly firing on
| someone accessing /index.html of your website, then it's working
| against you.
| 
| I am not disagreeing that static source analysis is a good thing; I
| am just saying that this is a case where it failed (or maybe the
| user/developer of it failed or misunderstood its use). Fair enough
| that on this particular list you are going to defend source analysis
| over any other method (it is about secure coding, after all), but I
| definitely still strongly disagree that other methods wouldn't have
| found this bug.
| 
| Shall we take a look at the customer lists of the big source analyzer
| companies, and then cross-map that to the number of vulnerabilities
| released? 
It would be great to have that information.  Even better would be to
classify the vulnerabilities in two buckets: those that the analyzer
was expected to find, and those that it's not ready for.

You would have to allow for the time it takes from when the analyzer
starts being used to when software that has actually gone through not
just the analyzer but the remediation process hits the streets.  For
large companies and widely-used commercial products, that can be a
long time.

However ... I think the chances of getting that kind of data for
commercial projects are just about nil.  Companies generally consider the details of
their software development processes proprietary, and at most will give
you broad generalities about the kinds of tools and processes they use.
(Given the cost of these analyzers, you'd think that potential customers
would want some data about actual payoffs.  However, I think they
recognize that no such data exists at this point.  A prudent customer
might well collect such data for himself to help in deciding whether the
contract for the analyzer is worth renewing - though even this kind of
careful analysis is very rare in the industry.)

An OSS project might be a better starting place for this kind of study.
I know that Coverity has analyzed a couple of significant pieces of OSS
for free (including, I believe, the Linux and NetBSD kernels).  It's
likely that other analyzer vendors have done something similar, though I
haven't heard about it.  A study showing how using an analyzer led to an
x% decrease in reported bugs would make for great sales copy.  (Of
course, there's always the risk that the analyzer doesn't actually help
make the software more secure!)

|   Why are we still finding bugs in software developed under the SDL?
| Why are we still finding bugs in software that has been analyzed
| before the compiler has run? Why are companies like Fortify
| charging an arm and a leg for such a technology when the bughunters
| are still beating the snot out of this stuff? 
I'm not sure that's a fair comparison.  The defenders have to plug *all*
the holes, including those using techniques that weren't known at the
time the software was produced.  The attackers only have to find one
hole.

|   You guys all have much
| more experience on that end, so I am looking forward to your
| responses!
As with almost everything in software engineering, sad to say, there is
very little objective evidence.  It's hard and expensive to gather, and
those who are in a position to pay for the effort rarely see a major
payoff from making it.  Which would you rather do:  Put up $1,000,000 to
do a study which *might* show your software/methodology/whatever helps,
or pay $1,000,000 to hire a bunch of feet on the street to sell more
copies?  So we have to go by gut feel and engineering judgement.  Those
certainly say, for most people, that static analyzers will help.  Then
again, I'm sure most people on this list will argue that strong static
typing is essential for secure, reliable software.  Everyone has known
that for 20 years or more.  Except that ... if you look around, you'll
find many arguments these days that the run-time typing characteristic
of languages like Python or Ruby is just as good and lets you produce
software much faster.  Which argument is true?  You'd think that after
50 years of software development, we'd at least know how to frame a
test ... but even that seems beyond the state of the art.

-- Jerry


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread David A. Wheeler
On the comment:
 > | I am not disagreeing that static source analysis is a good thing; I
 > | am just saying that this is a case where it failed (or maybe the
 > | user/developer of it failed or misunderstood its use). Fair enough
 > | that on this particular list you are going to defend source analysis
 > | over any other method (it is about secure coding, after all), but I
 > | definitely still strongly disagree that other methods wouldn't have
 > | found this bug.

Actually, I am _not_ of the opinion that analysis tools are always 
better than any other method.  I don't really believe in a silver 
bullet, but if I had to pick one, developer education would be my 
silver bullet, not analysis tools.  (Basically, a fool with a tool is 
still a fool.)  I believe that for secure software you need a SET of 
methods, and tool use is just a part of it.

That said, I think tools that search for vulnerabilities usually need to 
be PART of the answer for secure software in today's world.  Customers 
are generally _unwilling_ to reduce the amount of functionality they 
want to something we can easily prove correct, and formally proving 
programs correct has not scaled well yet (though I commend the work to 
overcome this).   No language can prevent all vulnerabilities from being 
written in the first place.  Human review is _great_, but it's costly in 
many circumstances and it often misses things that tools _can_ pick up. 
So we end up needing analysis tools as part of the process, even though 
current tools have a HUGE list of problems, because NOT using them is 
often worse.  Other methods may have found the bug, but other methods 
typically don't scale well.

> As with almost everything in software engineering, sad to say, there is
> very little objective evidence.  It's hard and expensive to gather, and
> those who are in a position to pay for the effort rarely see a major
> payoff from making it.

This is a serious problem with software development as a whole; there's 
almost no science behind it at all.   Science requires repeatable 
experiments with measurable results, but most decisions in software 
development are based on guesses and fads, not hard scientific data.

I'm not saying educated guesses are always bad; when you don't have 
data, and you need to do something NOW, educated guesses are often the 
best you can do... and we'll never know EVERYTHING.  The problem is that 
we rarely perform or publish the experiments to get better, so we don't 
make real PROGRESS.  And we don't do the experiments in part because few 
organizations fund actual, publishable scientific work to determine 
which software development approaches work best.  (There are 
exceptions, but it sure isn't the norm.)  We have some good 
science on very low-level stuff like big-O/complexity theory, syntactic 
models, and so on - hey, we can prove trivial programs correct! But 
we're hopeless if you want to use science to make decisions about 50 
million line programs.  Which design approaches or processes are best, and 
for what purpose - and can you show me the experimental results to 
justify the claim?  There IS some, but not much.  We lack the scientific 
information necessary to make decisions about many real-world (big) 
applications, and what's worse, we lack a societal process to grow that 
pool of information.  I've no idea how to fix that.

--- David A. Wheeler




Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Steven M. Christey

On Tue, 26 Jun 2007, Kenneth Van Wyk wrote:

> Mind you, the overrun can only be exploited when specific characters
> are used as input to the loop in the code.  Thus, I'm inclined to
> think that this is an interesting example of a bug that would have
> been extraordinarily difficult to find using black box testing, even
> fuzzing.

I would assume that smart fuzzing could try lots of manipulations of
the HH:mm:ss.f format (the intended format mentioned in the advisory), so
this might be findable using black box testing, although I don't know how
many fuzzers actually know how to muck with time strings.  Because the
programmer told flawfinder to ignore the strncpy() that it had flagged, it
also shows a limitation of manual testing.
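
Just as a sketch of what that mucking might look like (purely
illustrative; parse_time() mentioned below is a hypothetical stand-in
for the real parser, which I haven't seen):

    #include <stdio.h>

    /* Illustrative only: malformed variants of the HH:mm:ss.f format
     * of the kind a smart fuzzer might feed the parser. */
    static const char *cases[] = {
        "12:34:56.7",                      /* well-formed baseline */
        "99:99:99.9",                      /* out-of-range fields  */
        "-1:00:00.0",                      /* negative field       */
        "12:34:56.77777777777777777777",   /* oversized fraction   */
        "12:34",                           /* missing fields       */
        "::.",                             /* separators only      */
        "12:34:56.7AAAAAAAAAAAAAAAAAAAA",  /* trailing garbage     */
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
            printf("feeding: %s\n", cases[i]); /* would call parse_time() */
        return 0;
    }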

In CVE anyway, I've seen a number of overflows involving strncpy, and
they're not all off-by-one errors.  They're hard to enumerate because we
don't usually track which function was used, but here are some (a sketch
of the negative-length pattern follows the list):

CVE-2007-2489 - negative length

CVE-2006-4431 - empty input causes crash involving strncpy

CVE-2006-0720 - incorrect strncpy call

CVE-2004-0500 - another bad strncpy

CVE-2003-0465 - interesting API interaction
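
To sketch just the "negative length" pattern (illustrative code, not
taken from any of the CVEs above):

    #include <string.h>

    /* Sketch of the negative-length pattern.  If header_len exceeds
     * total_len, the signed subtraction goes negative; converted to
     * size_t at the call it becomes a huge value, and strncpy()
     * writes far past dst. */
    void extract_body(char *dst, const char *pkt,
                      int total_len, int header_len)
    {
        int body_len = total_len - header_len;    /* can be negative */

        strncpy(dst, pkt + header_len, body_len); /* -1 -> SIZE_MAX  */
    }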


- Steve


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Paco Hope
On 6/26/07 4:25 PM, Wall, Kevin [EMAIL PROTECTED] wrote:

> I mean, was the fix really rocket science that it had to take THAT LONG???
> IMHO, no excuse for taking that long.

8 months seems awfully long, but it doesn't surprise me that a big organization 
takes a really long time to get things like this out the door. I have worked 
with a lot of software companies over the years, and few have their entire test 
suite (unit, integration, system, regression) fully automated. It just doesn't 
work that way. People on this list would be just as condemning of a company 
that rushed a fix out the door with inadequate testing and managed to ship a 
new buffer overflow in the fix for an old one. Furthermore, it's not like the 
source code had been sitting idle with no modifications to it. They were surely 
in the middle of a dev cycle where they were adding lots of features and 
testing and so on. They have business priorities to address, since those 
features (presumably) are what bring revenue in the door and keep competitors 
off their market turf.

So, if everyone dropped everything they were doing and focused solely on fixing 
this one issue and doing a full battery of tests until it was release-worthy, 
it would have gone out a lot faster. But a company that did that on every bug 
that was reported would get no features released and go out of business. They 
have to weigh the impact of missing product goals versus the risk of exploits 
of a buffer overflow. I'm not sure we can categorically say (none of us being 
RealNetworks people) that they made the wrong decision. We don't have the 
information.

Paco
--
Paco Hope, CISSP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread J. M. Seitz
Hey all,

> 1) the original author of the defect thought that s/he was
> doing things correctly in using strncpy (vs. strcpy).
> 2) the original author had apparently been doing static
> source analysis using David Wheeler's Flawfinder tool, as we
> can tell from the comments.

This is humorous; suppose they put it there intentionally and created the
flawfinder tag so that bughunters wouldn't see it during a quick code scan :)
Conspiracy theory++! But on the other hand, if you can make big bucks
selling 0-days, and can write code, why wouldn't you try to sneak a few into
an open source app?

> Mind you, the overrun can only be exploited when specific
> characters are used as input to the loop in the code.  Thus,
> I'm inclined to think that this is an interesting example of
> a bug that would have been extraordinarily difficult to find
> using black box testing, even fuzzing.  The iDefense team
> doesn't say how the (anonymous) person who reported it found
> it, but I for one would be really curious to hear that story.

*sigh* I disagree, and do we include reverse engineering as black-box
testing? For example, maybe a straight-up RFC-style fuzzer wouldn't have hit
this one immediately, but there is the possibility that it could have
eventually found that code path; even a dumb fuzzer *could* have. Now let's
take something like DeMott's EFS system, which uses code coverage and a
genetic algorithm to hammer further and further into the code. As it hit
this basic block of assembly, it may have found that the necessary
characters to continue through this code path had to be mutated or included
in a recombination for the next generation (its fitness score would be
higher). It's not unreasonable... I have seen it do it myself!
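
To sketch what I mean (a toy loop of my own, not EFS itself; the
coverage function is a stub standing in for real instrumentation):

    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define POP  32   /* population size        */
    #define LEN  16   /* bytes per input        */
    #define GENS 100  /* generations to evolve  */

    /* Stub for "run the target, count newly covered basic blocks".
     * A real harness instruments the binary; this toy just rewards
     * bytes the time-string parser would accept, so inputs carrying
     * them earn a higher fitness score. */
    static int fitness_of(const unsigned char *in, size_t len)
    {
        int score = 0;
        for (size_t i = 0; i < len; i++)
            if (in[i] == ':' || in[i] == '.' ||
                (in[i] >= '0' && in[i] <= '9'))
                score++;
        return score;
    }

    static void mutate(unsigned char *in, size_t len)
    {
        in[rand() % len] = (unsigned char)(rand() % 256);
    }

    int main(void)
    {
        unsigned char pop[POP][LEN];
        int fit[POP];

        srand((unsigned)time(NULL));
        for (int i = 0; i < POP; i++)          /* random seed inputs */
            for (int j = 0; j < LEN; j++)
                pop[i][j] = (unsigned char)(rand() % 256);

        for (int g = 0; g < GENS; g++) {
            int best = 0;
            for (int i = 0; i < POP; i++) {
                fit[i] = fitness_of(pop[i], LEN);
                if (fit[i] > fit[best])
                    best = i;
            }
            /* Recombine: everyone becomes a mutated copy of the
             * fittest, so bytes that unlock deeper paths survive. */
            for (int i = 0; i < POP; i++) {
                if (i == best)
                    continue;
                memcpy(pop[i], pop[best], LEN);
                mutate(pop[i], LEN);
            }
        }
        return 0;
    }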

Now if an RE guy had looked at this (and some of us prefer
disassembled binaries over C source), it's VERY plausible that they would
have found that path, and found the way to exploit it. Take a look at my
blog posting on
http://www.openrce.org/blog/view/775/Tracing_Naughty_Functions where I drop
some subtle hints on how to quickly find these dangerous functions, and
begin determining the best path towards them. Definitely not a new
approach...

This is a perfect example of how a source code analysis tool failed, because
you let a developer tell it to NOT scan it. :) I wonder if there are flags
like that in Fortify?

JS



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Steven M. Christey

> On 6/26/07 4:25 PM, Wall, Kevin [EMAIL PROTECTED] wrote:
>
>> I mean, was the fix really rocket science that it had to take THAT
>> LONG??? IMHO, no excuse for taking that long.

Some major vendor organizations, most notably Oracle and Microsoft, have
frequently stated that they can't always fix even simple vulnerabilities
instantly, because they have batteries of tests and platforms to verify
that the fix won't damage anything else.  I can see why this would be the
case, although I rarely hear vendors talk about what they're doing to make
their response time faster.  Open source vendors likely have similar
challenges, though maybe not on such a large scale.

I'd be interested to hear from the SDLC/CMM consultant types who work with
vendors on process, about *why* this is the case.

And in terms of future challenges: how can the lifecycle process be
changed so that developers can quickly and correctly fix show-stopping
issues (including/especially vulnerabilities)?  It would seem to me that
one way that vendors can compete, but don't, is in how quickly and
smoothly they fix issues in existing functionality, which might be a large
part of the operational expenses for an IT consumer.

- Steve
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___