[SC-L] Comparing Software Vendors

2007-06-28 Thread McGovern, James F (HTSC, IT)
 Jerry Leichter commented on flaws in scanning tools, but I have a
different question. Lots of folks love to attack MS while letting other
vendors off the hook. Is there merit in comparing vendor offerings
within a particular product line? For example, is EMC's Documentum
product more secure than an open source ECM offering such as Alfresco?

Industry analysts tend not to actually touch the tools themselves and
instead rely on others. There would be real value in quantifying which
products are more secure than others, so shouldn't we as a community
help them figure this out?


*
This communication, including attachments, is
for the exclusive use of addressee and may contain proprietary,
confidential and/or privileged information.  If you are not the intended
recipient, any use, copying, disclosure, dissemination or distribution is
strictly prohibited.  If you are not the intended recipient, please notify
the sender immediately by return e-mail, delete this communication and
destroy all copies.
*


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] The Next Frontier

2007-06-28 Thread McGovern, James F (HTSC, IT)
Would Fortify consider making their schema open source and donating it
to OWASP? Likewise, would Ounce Labs, Coverity, and others be willing to
adapt their products to it?


 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Paco Hope
Sent: Wednesday, June 27, 2007 4:38 PM
To: Secure Coding
Subject: Re: [SC-L] The Next Frontier

On 6/26/07 5:00 PM, "McGovern, James F (HTSC, IT)"
<[EMAIL PROTECTED]> wrote:

Would there be value in defining an XML schema to which all tools
could emit audit information?

You might want to take a look at what the Fortify guys already do. Their
"FVDL" (Fortify Vulnerability Description Language) is XML written to a
specific schema. Here's a snippet:


http://www.w3.org/2001/XMLSchema-instance" version="1.5"
xsi:type="FVDL">
curl-7.11.1
42
23572
 
/Users/paco/Documents/Fortify/curl-7.11.1/lib

connect.c
krb4.c
[..snip..]


28424EC3-FFAC-40C0-94D9-3D8283B2F57C
Input Validation and Representation
Buffer Overflow
dataflow
4.0


005542ED81D54F3C72BF3669EA8D130A
4.0
3.4

[..snip..]

Some of their XML seems quite reusable to me, and some of it seems
pretty proprietary. It doesn't seem like they share a DTD or a schema
publicly. Perhaps a little coaxing would get them to release it.

Paco
--
Paco Hope, CISSP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868 Software Confidence. Achieved.







[SC-L] Instead of the next frontier, how about another frontier

2007-06-28 Thread McGovern, James F (HTSC, IT)
I was thinking: instead of the next frontier, how about another
frontier? Many software vendors pretend that the entire world is either
Java or .NET, without acknowledging that much of the really good data in
many enterprises sits on a big ugly mainframe running COBOL, IMS,
PL/1, etc. The assumption seems to be that most folks in this space have
zero knowledge of these platforms, and hence their tools don't support
them.

My current thought is that those of us in enterprises could do several
things. First, we have environments that others may not have access
to. Second, we have employees who can help write specifications for
detecting some classes of flaws (they are not secure-coding oriented but
do understand things like buffer overflows) in PL/1, COBOL, Assembler, etc.
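As a toy illustration of the kind of rule such a specification might
describe (the rule and code are my sketch, not any vendor's): COBOL's
STRING statement stops transferring data when the receiving field fills
unless the programmer supplies an ON OVERFLOW phrase, so a naive checker
could flag STRING statements that lack one:

```python
import re

# Toy rule sketch: flag COBOL STRING statements with no ON OVERFLOW phrase.
# Real COBOL analysis (fixed columns, continuation lines, copybooks) is far
# more involved; this is only a line-based illustration of the idea.
def check_string_overflow(cobol_source):
    findings = []
    statements = re.split(r"\.\s*\n?", cobol_source)  # crude sentence split
    for stmt in statements:
        if re.search(r"\bSTRING\b", stmt, re.IGNORECASE) and \
           not re.search(r"\bON\s+OVERFLOW\b", stmt, re.IGNORECASE):
            findings.append(stmt.strip())
    return findings

safe   = "STRING WS-A DELIMITED BY SPACE INTO WS-OUT ON OVERFLOW DISPLAY 'FULL'."
unsafe = "STRING WS-A DELIMITED BY SPACE INTO WS-OUT."
print(len(check_string_overflow(safe)))    # -> 0
print(len(check_string_overflow(unsafe)))  # -> 1
```

A real specification would of course describe the semantics precisely
rather than a regex, but even rules at this level of detail would give
vendors something concrete to implement.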

If the latter is of value, we would of course like to figure out how to
make this happen as a single effort that every vendor could potentially
consume, rather than doing it separately for each vendor.





Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread David A. Wheeler
On the comment:
> | I am not disagreeing with the fact that static source analysis is a
> | good thing, I am just saying that this is a case where it failed (or
> | maybe the user/developer of it failed or misunderstood its use). Fair
> | enough that on this particular list you are going to defend source
> | analysis over any other method, it is about secure coding after all,
> | but I definitely still strongly disagree that other methods wouldn't
> | have found this bug.

Actually, I am _not_ of the opinion that analysis tools are always 
"better" than any other method.  I don't really believe in a silver 
bullet, but if I had to pick one, "developer education" would be my 
silver bullet, not analysis tools.  (Basically, a "fool with a tool is 
still a fool".)  I believe that for secure software you need a SET of 
methods, and tool use is just a part of it.

That said, I think tools that search for vulnerabilities usually need to 
be PART of the answer for secure software in today's world.  Customers 
are generally _unwilling_ to reduce the amount of functionality they 
want to something we can easily prove correct, and formally proving 
programs correct has not scaled well yet (though I commend the work to 
overcome this).   No language can prevent all vulnerabilities from being 
written in the first place.  Human review is _great_, but it's costly in 
many circumstances and it often misses things that tools _can_ pick up. 
So we end up needing analysis tools as part of the process, even though 
current tools have a HUGE list of problems, because NOT using them is 
often worse. Other methods may have found the bug, but other methods 
typically don't scale well.

> As with almost everything in software engineering, sad to say, there is
> very little objective evidence.  It's hard and expensive to gather, and
> those who are in a position to pay for the effort rarely see a major
> payoff from making it.

This is a serious problem with software development as a whole; there's 
almost no science behind it at all.   Science requires repeatable 
experiments with measurable results, but most decisions in software 
development are based on guesses and fads, not hard scientific data.

I'm not saying educated guesses are always bad; when you don't have 
data, and you need to do something NOW, educated guesses are often the 
best you can do... and we'll never know EVERYTHING.  The problem is that 
we rarely perform or publish the experiments to get better, so we don't 
make real PROGRESS.  And we don't do the experiments in part because few 
organizations fund actual, publishable scientific work to determine 
which software development approaches work best.
  (There are exceptions, but it sure isn't the norm.)  We have some good 
science on very low-level stuff like big-O/complexity theory, syntactic 
models, and so on - hey, we can prove trivial programs correct! But 
we're hopeless if you want to use science to make decisions about 50 
million line programs.  Which design approach or processes are best, and 
for what purpose - and can you show me the experimental results to 
justify the claim?  There IS some, but not much.  We lack the scientific 
information necessary to make decisions about many real-world (big) 
applications, and what's worse, we lack a societal process to grow that 
pool of information.  I've no idea how to fix that.

--- David A. Wheeler




Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread Leichter, Jerry
On Thu, 28 Jun 2007, J. M. Seitz wrote:
| Hey there,
|  
| > If you couldn't insert "ignore" directives, many people 
| > wouldn't use such tools at all, and would release code with 
| > vulnerabilities that WOULD be found by such tools.
| 
| Of course, much like an IDS, you have to find the baseline and adjust
| your ruleset according to the norm, if it is constantly firing on
| someone accessing /index.html of your website, then that's working
| against you.
| 
| I am not disagreeing with the fact that static source analysis is a
| good thing, I am just saying that this is a case where it failed (or
| maybe the user/developer of it failed or misunderstood its use). Fair
| enough that on this particular list you are going to defend source
| analysis over any other method, it is about secure coding after all,
| but I definitely still strongly disagree that other methods wouldn't
| have found this bug.
| 
| Shall we take a look at the customer lists of the big source analyzer
| companies, and then cross-map that to the number of vulnerabilities
| released? 
It would be great to have that information.  Even better would be to
classify the vulnerabilities into two buckets: those that the analyzer
would be expected to find, and those that it's not ready for.

You would have to allow for the time it takes from when the analyzer
starts being used to when software that actually went through, not just
the analyzer, but the remediation process, actually hits the streets.
For large companies and widely-used commercial products, that can be a
long time.

However ... I think the chances of getting that kind of data for
commercial projects is just about nil.  Companies generally consider the
details of
their software development processes proprietary, and at most will give
you broad generalities about the kinds of tools and processes they use.
(Given the cost of these analyzers, you'd think that potential customers
would want some data about actual payoffs.  However, I think they
recognize that no such data exists at this point.  A prudent customer
might well collect such data for himself to help in deciding whether the
contract for the analyzer is worth renewing - though even this kind of
careful analysis is very rare in the industry.)

An OSS project might be a better starting place for this kind of study.
I know that Coverity has analyzed a couple of significant pieces of OSS
for free (including, I believe, the Linux and NetBSD kernels).  It's
likely that other analyzer vendors have done something similar, though I
haven't heard about it.  A study showing how using an analyzer led to an
x% decrease in reported bugs would make for great sales copy.  (Of
course, there's always the risk that the analyzer doesn't actually help
make the software more secure!)

|   Why are we still finding bugs in software that have the SDL?
| Why are we still finding bugs in software that have been analyzed
| before the compiler has run? Why are these companies like Fortify
| charging an arm and a leg for such a technology when the bughunters
| are still beating the snot out of this stuff? 
I'm not sure that's a fair comparison.  The defenders have to plug *all*
the holes, including those using techniques that weren't known at the
time the software was produced.  The attackers only have to find one
hole.

|   You guys all have much
| more experience on that end, so I am looking forward to your
| responses!
As with almost everything in software engineering, sad to say, there is
very little objective evidence.  It's hard and expensive to gather, and
those who are in a position to pay for the effort rarely see a major
payoff from making it.  Which would you rather do:  Put up $1,000,000 to
do a study which *might* show your software/methodology/whatever helps,
or pay $1,000,000 to hire a bunch of "feet on the street" to sell more
copies?  So we have to go by gut feel and engineering judgement.  Those
certainly say, for most people, that static analyzers will help.  Then
again, I'm sure most people on this list will argue that strong static
typing is essential for secure, reliable software.  "Everyone has known
that" for 20 years or more.  Except that ... if you look around, you'll
find many arguments these days that the run-time typing characteristic
of languages like Python or Ruby is just as good and lets you produce
software much faster.  Which argument is true?  You'd think that after
50 years of software development, we'd at least know how to frame a
test ... but even that seems beyond the state of the art.

-- Jerry

| Cheers! 
| 
| JS
| 

Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread J. M. Seitz
 
Hey there,
 
> If you couldn't insert "ignore" directives, many people 
> wouldn't use such tools at all, and would release code with 
> vulnerabilities that WOULD be found by such tools.

Of course, much like an IDS, you have to find the baseline and adjust your
ruleset according to the norm; if it is constantly firing on someone
accessing /index.html of your website, then it's working against you. 

I am not disagreeing with the fact that static source analysis is a good
thing, I am just saying that this is a case where it failed (or maybe the
user/developer of it failed or misunderstood its use). Fair enough that on
this particular list you are going to defend source analysis over any other
method (it is about secure coding, after all), but I definitely still
strongly disagree that other methods wouldn't have found this bug. 

Shall we take a look at the customer lists of the big source-analyzer
companies, and then cross-map that to the number of vulnerabilities
released? Why are we still finding bugs in software developed under the
SDL? Why are we still finding bugs in software that was analyzed before
the compiler ever ran? Why are companies like Fortify charging an arm and
a leg for such technology when the bughunters are still beating the snot
out of this stuff? You guys all have much more experience on that end, so
I am looking forward to your responses!

Cheers! 

JS



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread David A. Wheeler
In this discussion:
> | This is a perfect example of how a source code analysis tool failed,
> | because you let a developer tell it to NOT scan it. :) I wonder if
> | there are flags like that in Fortify?
> There are flags like that in *every* source code scanner I know of.  The
> state of the art is just not at a point where you don't need a way to
> turn off warnings for false positives.

That's exactly right, unfortunately.  To compensate for the problem of 
people inserting bad ignore directives, many scanning tools _also_ 
include an "ignore the ignores" command.  For example, flawfinder has a 
--neverignore (-n) flag that "ignores the ignore command".  I believe 
that such an option ("ignore ignores") is critically necessary for any 
tool that has "ignore" directives, to address this very problem.
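The interaction between "ignore" directives and an "ignore the ignores"
override can be sketched in a few lines. The directive syntax below is
invented for illustration (flawfinder's real inline directives differ),
and the strcpy check is deliberately simplistic:

```python
# Sketch of the "ignore" / "ignore the ignores" interaction described above.
# The comment-based directive syntax is made up for illustration only.
def scan(lines, never_ignore=False):
    """Flag lines calling strcpy, honoring inline suppression comments
    unless never_ignore is set (the "ignore the ignores" mode)."""
    findings = []
    for num, line in enumerate(lines, start=1):
        if "strcpy" in line:
            suppressed = "/* scanner: ignore */" in line
            if never_ignore or not suppressed:
                findings.append(num)
    return findings

code = [
    'strcpy(dst, src);',
    'strcpy(dst, src); /* scanner: ignore */',
]
print(scan(code))                     # -> [1]
print(scan(code, never_ignore=True))  # -> [1, 2]
```

An auditor reviewing someone else's codebase would run in the
never_ignore mode precisely to catch findings that a developer may have
suppressed in bad faith or by mistake.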

If you couldn't insert "ignore" directives, many people wouldn't use 
such tools at all, and would release code with vulnerabilities that 
WOULD be found by such tools.

--- David A. Wheeler

