Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Steven M. Christey


On Wed, 3 Feb 2010, Gary McGraw wrote:

Popularity contests are not the kind of data we should count on.  But 
maybe we'll make some progress on that one day.


That's my hope, too, but I'm comfortable with making baby steps along the 
way.



Ultimately, I would love to see the kind of linkage between the collected
data (evidence) and some larger goal ("higher security," whatever THAT
means in quantitative terms) but if it's out there, I don't see it


Neither do I, and that is a serious issue with models like the BSIMM 
that measure second order effects like activities.  Do the activities 
actually do any good?  Important question!


And one we can't answer without more data that comes from the developers 
who adopt any particular practice, and without some independent measure of 
what success means.  For example: I am a big fan of the attack surface 
metric originally proposed by Michael Howard and taken up by Jeannette Wing 
et al. at CMU (still need to find the time to read Manadhata's thesis, 
alas...).  It seems like common sense that if you reduce attack surface, 
you reduce the number of security problems, but how do you KNOW!?
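
To make the measurement question concrete, here is a rough sketch of what a 
Howard-style weighted attack-surface score looks like (Python; the resource 
types, weights, and counts are invented for illustration and are not the ones 
from Howard's or Manadhata's work):

# Toy relative-attack-surface score: weight each externally reachable
# resource by how attackable we judge it to be, then sum.
WEIGHTS = {
    "open_tcp_port": 1.0,
    "service_running_as_root": 1.5,
    "world_writable_file": 0.7,
    "enabled_activex_control": 1.2,
}

def attack_surface(counts):
    """counts: dict mapping resource type -> how many the system exposes."""
    return sum(WEIGHTS[resource] * n for resource, n in counts.items())

before = {"open_tcp_port": 12, "service_running_as_root": 4,
          "world_writable_file": 30, "enabled_activex_control": 8}
after = {"open_tcp_port": 5, "service_running_as_root": 1,
         "world_writable_file": 3, "enabled_activex_control": 0}

print("before:", attack_surface(before))   # 48.6
print("after: ", attack_surface(after))    # 8.6
print("reduction: %.0f%%" % (100 * (1 - attack_surface(after) / attack_surface(before))))   # 82%

The snippet can tell you the number went down; the open question above is 
whether driving that number down actually correlates with fewer real-world 
vulnerabilities.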



The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same
with the 2010 Top 25 (whose release has been delayed to Feb 16, btw).
Unlike last year's Top 25 effort, this time I received several sources of
raw prevalence data, but unfortunately it wasn't in sufficiently
consumable form to combine.


I was with you up until that last part.  Combining the prevalence data 
is something you guys should definitely do.  BTW, how is the 2010 CWE-25 
(which doesn't yet exist) more data driven??


I guess you could call it a more refined version of the popularity 
contest that you already referred to (with the associated limitations, 
and thus subject to some of the same criticisms as those pointed at 
BSIMM): we effectively conducted a survey of a diverse set of 
organizations and individuals from various parts of the software security 
industry, asking what was most important to them and what they saw the 
most often.  This year, I intentionally designed the Top 25 under the 
assumption that we would not have hard-core quantitative data, recognizing 
that people WANTED hard-core data, and that the few people who actually 
had this data would not want to share it.  (After all, as a software 
vendor you may know what your own problems are, but you might not want to 
share that with anyone else.)


It was a bit of a surprise when a handful of participants actually had 
real data - but then the problem I'm referring to with respect to 
"consumable form" reared its ugly head.  One third-party consultant had 
statistics for a broad set of about 10 high-level categories representing 
hundreds of evaluations; one software vendor gave us a specific weakness 
history - representing dozens of different CWE entries across a broad 
spectrum of issues, sometimes at very low levels of detail and even 
branching into the GUI part of CWE, which almost nobody pays attention to - 
but only for 3 products.  Another vendor rep evaluated the dozen or two 
publicly disclosed vulnerabilities that were most severe according to their 
associated CVSS scores.  Those three data sets, plus the handful of others 
based on some form of analysis of hard-core data, are not merge-able. 
The irony with CWE (and many of the making-security-measurable efforts) is 
that it brings sufficient clarity to recognize when there is no clarity... 
the "known unknowns," to quote Donald Rumsfeld.  I saw this in 1999 in the 
early days of CVE, too, and it's still going on - observers of the 
oss-security list see it weekly.
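
For what it's worth, the shape of the merge problem is easy to show in code 
(Python; the category mapping below is invented for illustration and is NOT 
the real CWE hierarchy): to combine such data sets at all, everything has to 
be rolled up to whichever source is coarsest, and the roll-up only goes one 
way.

# Roll fine-grained per-CWE counts up to coarse categories so they can be
# merged with a source that only reports coarse categories.
COARSE_CATEGORY = {          # invented mapping, not real CWE parent/child data
    "CWE-89": "injection",
    "CWE-79": "injection",
    "CWE-120": "memory-safety",
    "CWE-416": "memory-safety",
}

def roll_up(fine_counts):
    coarse = {}
    for cwe_id, n in fine_counts.items():
        category = COARSE_CATEGORY.get(cwe_id, "other")
        coarse[category] = coarse.get(category, 0) + n
    return coarse

vendor_fine = {"CWE-89": 14, "CWE-416": 3, "CWE-79": 22}     # per-CWE history
consultant_coarse = {"injection": 210, "memory-safety": 45}  # ~10 categories only

merged = roll_up(vendor_fine)
for category, n in consultant_coarse.items():
    merged[category] = merged.get(category, 0) + n
print(merged)   # {'injection': 246, 'memory-safety': 48}

The CVSS-ranked set mentioned above is a different shape again - a ranking of 
individual vulnerabilities rather than counts per category - which is roughly 
why the three sets refuse to merge as delivered.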


For data collection at such a specialized level, the situation is not 
unlike the breach-data problem faced by the Open Security Foundation in 
their Data Loss DB work - sometimes you have details, sometimes you don't. 
The Data Loss people might be able to say "well, based on this 100-page 
report we examined, we think it MIGHT have been SQL injection" - but that's 
the kind of data we're dealing with right now.
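
A toy example of what such a record ends up looking like (Python; the field 
names and values are made up, not the actual Data Loss DB schema):

# Hypothetical breach record: the root cause is a guess with an explicit
# confidence attached, rather than a hard fact.
breach = {
    "organization": "ExampleCorp",        # made-up name
    "records_exposed": 250000,
    "suspected_cause": "SQL injection",
    "cause_confidence": "low",            # "we think it MIGHT have been"
    "source": "100-page public report with little technical detail",
}

Aggregate a few thousand of those and you get trend data that is only as 
strong as its weakest field.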


Now, a separate exercise in which we compare/contrast the customized top-n 
lists of those who have actually progressed to the point of making them... 
that smells like opportunity to me.
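
The comparison itself would be the easy part once such lists exist; something 
like this (Python; the per-organization list contents are made up):

# Compare per-organization top-n weakness lists: what does everyone share,
# and what shows up only in one shop's code base?
org_lists = {
    "org_a": {"CWE-79", "CWE-89", "CWE-352", "CWE-22"},
    "org_b": {"CWE-79", "CWE-120", "CWE-89", "CWE-476"},
    "org_c": {"CWE-79", "CWE-287", "CWE-89", "CWE-798"},
}

common = set.intersection(*org_lists.values())
union = set.union(*org_lists.values())
print("on every list:", sorted(common))   # ['CWE-79', 'CWE-89']
print("union size:", len(union))          # 8

for org, entries in org_lists.items():
    others = set.union(*(v for k, v in org_lists.items() if k != org))
    print(org, "unique:", sorted(entries - others))

The hard part, as the rest of this thread suggests, is getting top-n lists 
that are actually tied to particular code bases in the first place.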



I for one am pretty satisfied with the rate at which things are
progressing and am delighted to see that we're finally getting some raw
data, as good (or as bad) as it may be.  The data collection process,
source data, metrics, and conclusions associated with the 2010 Top 25 will
probably be controversial, but at least there's some data to argue about.


Cool!


To clarify for others who have commented on this part - I'm talking 
specifically about the rate at which the software security industry seems 
to be maturing, independently of how quickly the threat landscape is 
changing.  That's a whole different, depressing problem.


- Steve
Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Mike Boberski
"I for one am pretty satisfied with the rate at which things are
progressing"

I dunno...

Again, trying to keep it pithy: I for one welcome our eventual new [insert
hostile nation state here] overlords. /joke

What I see from my vantage point is a majority of people who (1) should know
better, given their leadership positions, but don't, or (2) willingly
ignore security-related concerns to advance their personal business goals,
trusting in the availability of lawyers or the ability to punch out before
stuff hits the fan (speculating, perhaps, on motives).

Excuse me now while I get back to my Rosetta Stone lesson. /joke

Mike


On Wed, Feb 3, 2010 at 3:04 PM, Gary McGraw g...@cigital.com wrote:

 Hi Steve (and sc-l),

 I'll invoke my skiing with Eli excuse again on this thread as well...

 On Tue, 2 Feb 2010, Wall, Kevin wrote:
  To study something scientifically goes _beyond_ simply gathering
  observable and measurable evidence. Not only does data need to be
  collected, but it also needs to be tested against a hypothesis that
  offers a tentative *explanation* of the observed phenomena;
  i.e., the hypothesis should offer some predictive value.

 On 2/2/10 4:12 PM, Steven M. Christey co...@linus.mitre.org wrote:
 I believe that the cross-industry efforts like BSIMM, ESAPI, top-n lists,
 SAMATE, etc. are largely at the beginning of the data collection phase.

 I agree 100%.  It's high time we gathered some data to back up our claims.
  I would love to see the top-n lists do more with data.

 Here's an example.  In the BSIMM,  10 of 30 firms have built top-N bug
 lists based on their own data culled from their own code.  I would love to
 see how those top-n lists compare to the OWASP top ten or the CWE-25.  I
 would also love to see whether the union of these lists is even remotely
 interesting.  One of my (many) worries about top-n lists that are NOT bound
 to a particular code base is that the lists are so generic as to be useless
 and maybe even unhelpful if adopted wholesale without understanding what's
 actually going on in a codebase. [see 
 http://www.informit.com/articles/article.aspx?p=1322398].

 Note for the record that asking lots of people what they think should be
 in the top-10 is not quite the same as taking the union of particular top-n
 lists which are tied to particular code bases.  Popularity contests are not
 the kind of data we should count on.  But maybe we'll make some progress on
 that one day.

 Ultimately, I would love to see the kind of linkage between the collected
 data (evidence) and some larger goal (higher security whatever THAT
 means in quantitative terms) but if it's out there, I don't see it

 Neither do I, and that is a serious issue with models like the BSIMM that
 measure second order effects like activities.  Do the activities actually
 do any good?  Important question!

 The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same
 with the 2010 Top 25 (whose release has been delayed to Feb 16, btw).
 Unlike last year's Top 25 effort, this time I received several sources of
 raw prevalence data, but unfortunately it wasn't in sufficiently
 consumable form to combine.

 I was with you up until that last part.  Combining the prevalence data is
 something you guys should definitely do.  BTW, how is the 2010 CWE-25 (which
 doesn't yet exist) more data driven??

 I for one am pretty satisfied with the rate at which things are
 progressing and am delighted to see that we're finally getting some raw
 data, as good (or as bad) as it may be.  The data collection process,
 source data, metrics, and conclusions associated with the 2010 Top 25 will
 probably be controversial, but at least there's some data to argue about.

 Cool!

 So in that sense, I see Gary's article not so much as a clarion call for
 action to a reluctant and primitive industry, but an early announcement of
 a shift that is already underway.

 Well put.

 gem

 company www.cigital.com
 podcast www.cigital.com/~gem
 blog www.cigital.com/justiceleague
 book www.swsec.com




Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread McGovern, James F. (eBusiness)
When comparing BSIMM to SAMM, are we suffering from the Mayberry Paradox? Did 
you know that Apple is more secure than Microsoft simply because there are more 
successful attacks on MS products? Of course, we should ignore the fact that 
the number of attackers doesn't prove that one product is more secure than 
another.

Whenever I bring in either vendors or consultancies to write about my 
organization, do I only publish the positives and only slip in a few negatives 
in order to maintain the façade of integrity? Would BSIMM be a better approach 
if the audience wasn't so self-selecting? At no time did it include 
corporations who use Ounce Labs or Coverity or even other well-known security 
consultancies.

OWASP, on the other hand, received feedback from folks such as myself not on 
the things that worked, but on a ton of stuff that didn't work for us. This 
type of filtering provides more value in that it helps other organizations 
avoid repeating the things we didn't do so well, without necessarily 
encouraging others to do it the McGovern way.

Corporations are dynamic entities, and what will work vs. what won't is highly 
contextual. I prefer a list of things that could possibly work over simply 
pulling something off the shelf that another organization got to work, with a 
lot of missing context. The best security decisions are made when you can 
provide an enterprise with a choice of recommendations, and I think SAMM does 
a better job than other approaches in this regard.

-Original Message-
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
Behalf Of Kenneth Van Wyk
Sent: Wednesday, February 03, 2010 4:08 PM
To: Secure Coding
Subject: Re: [SC-L] BSIMM update (informIT)

On Jan 28, 2010, at 10:34 AM, Gary McGraw wrote:
 Among other things, David and I discussed the difference between descriptive 
 models like BSIMM and prescriptive models which purport to tell you what you 
 should do. 

Thought I'd chime in on this a bit, FWIW...  From my perspective, I welcome 
BSIMM and I welcome SAMM.  I don't see it in the least as a one or the other 
debate.

A decade(ish) since the first texts on various aspects of software security 
started appearing, it's great to have a BSIMM that surveys some of the largest 
software groups on the planet to see what they're doing.  What actually works.  
That's fabulously useful.  On the other hand, it is possible that ten thousand 
lemmings can be wrong.  Following the herd isn't always what's best.

SAMM, by contrast, was written by some bright, motivated folks, and provides us 
all with a set of targets to aspire to.  Some will work, and some won't, 
without a doubt.

To me, both models are useful as guide posts to help a software group--an SSG 
if you will--decide what practices will work best in their enterprise.

But as useful as both SAMM and BSIMM are, I think we're all fooling ourselves 
if we consider these to be standards or even maturity models.  Any other 
engineering discipline on the planet would laugh us all out of the room by the 
mere suggestion.  There's value to them, don't get me wrong.  But we're still 
in the larval mode of building an engineering discipline here folks.  After 
all, as a species, we didn't start (successfully) building bridges in a decade.

For now, my suggestion is to read up, try things that seem reasonable, and 
build a set of practices that work for _you_.  

Cheers,

Ken

-
Kenneth R. van Wyk
KRvW Associates, LLC
http://www.KRvW.com




Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Jim Manico
Why are we holding up the statistics from Google, Adobe and Microsoft ( 
http://www.bsi-mm.com/participate/ ) in BSIMM?


These companies are examples of recent epic security failure, including 
probably the most financially damaging infosec attack ever. Microsoft let a 
plain-vanilla 0-day slip through IE6 for years, Google has a pretty 
basic network segmentation and policy problem, and Adobe continues to be 
the laughing stock of client-side security. Why are we holding up these 
companies as BSIMM champions?


- Jim



Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Brian Chess
 At no time did it include corporations who use Ounce Labs or Coverity

Bzzzt.  False.  While there are plenty of Fortify customers represented in
BSIMM, there are also plenty of participants who aren't Fortify customers.
I don't think there are any hard numbers on market share in this realm, but
my hunch is that BSIMM is not far off from a uniform sample in this regard.

Brian




Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Steven M. Christey


On Thu, 4 Feb 2010, Jim Manico wrote:

These companies are examples of recent epic security failure. Probably 
the most financially damaging infosec attack, ever. Microsoft let a 
plain-vanilla 0-day slip through ie6 for years


Actually, it was a not-so-vanilla use-after-free, which once upon a time 
was thought of only as a reliability problem; lately, though, exploit and 
detection techniques have begun bearing fruit for the small number of 
people who actually know how to get code execution out of these bugs.  In 
general, Microsoft (and others) have gotten their software to the point 
where attackers and researchers have to spend a lot of time and $$$ to 
find obscure vuln types, then spend some more time and $$$ to work around 
the various protection mechanisms that exist in order to get code 
execution instead of a crash.


I can't remember the last time I saw a Microsoft product with a 
mind-numbingly-obvious problem in it.  It would be nice if statistics were 
available that measured how many person-hours and CPU-hours were used to 
find new vulnerabilities - then you could determine the ratio of 
level-of-effort to number-of-vulns-found.  That data isn't available, 
though - we only have anecdotal evidence from people such as Dave Aitel and 
David Litchfield saying it's getting more difficult and time-consuming.
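
The ratio itself would be trivial to compute if anyone actually logged the 
inputs; every number below is invented:

# Hypothetical effort-vs-yield numbers for a vuln-hunting campaign.
person_hours = 1800     # made up: analyst time spent
cpu_hours = 50000       # made up: fuzzing time
vulns_found = 3

print("person-hours per vuln:", person_hours / vulns_found)               # 600.0
print("cpu-hours per vuln:", cpu_hours / vulns_found)                     # ~16667
print("vulns per 1000 person-hours:", 1000 * vulns_found / person_hours)  # ~1.67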


- Steve


Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Gary McGraw
hi jim,

We chose organizations that in our opinion are doing a superior job with 
software security.  You are welcome to disagree with our choices.

Microsoft has a shockingly good approach to software security that they are 
kind enough to share with the world through the SDL books and websites.  Google 
has a much different approach with more attention focused on open source risk 
and testing (and much less on code review with tools).  Adobe has a newly 
reinvigorated approach under new leadership that is making some much needed 
progress.

The three firms that you cited were all members of the original nine whose data 
allowed us to construct the model.  There are now 30 firms in the BSIMM study, 
and their BSIMM data vary as much as you might expect...about which more soon.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


On 2/4/10 12:50 PM, Jim Manico j...@manico.net wrote:

Why are we holding up the statistics from Google, Adobe and Microsoft (
http://www.bsi-mm.com/participate/ ) in BSIMM?

These companies are examples of recent epic security failure. Probably
the most financially damaging infosec attack, ever. Microsoft let a
plain-vanilla 0-day slip through ie6 for years, Google has a pretty
basic network segmentation and policy problem, and Adobe continues to be
the laughing stock of client side security. Why are we holding up these
companies as BSIMM champions?

- Jim



Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread McGovern, James F. (eBusiness)
Merely hoping to understand more about the thinking behind BSIMM. 

Here is a quote from the page: "Of the thirty-five large-scale software 
security initiatives we are aware of, we chose nine that we considered the 
most advanced."  How can the reader tell why the others were filtered?

When you visit the link http://www.bsi-mm.com/participate/ it doesn't show any 
of the vendors you mentioned below. Should they be shown somewhere?

The BSIMM download link requires registration. Does this become a lead for 
some company?


-Original Message-
From: Gary McGraw [mailto:g...@cigital.com] 
Sent: Thursday, February 04, 2010 2:18 PM
To: McGovern, James F. (P+C Technology); Secure Code Mailing List
Subject: Re: [SC-L] BSIMM update (informIT)

hi james,

I'm afraid you are completely wrong about this paragraph which you have 
completely fabricated.  Please check your facts.  This one borders on slander 
and I have no earthly idea why you believe what you said.

 Would BSIMM be a better approach if the audience wasn't so 
 self-selecting? At no time did it include corporations who use Ounce Labs or 
 Coverity or even other well-known security consultancies.

BSIMM covers many organizations who use Ounce, Appscan, SPI dev inspect, 
Coverity, Klocwork, Veracode, and a slew of consultancies including iSec, 
Aspect, Leviathan, Aitel, and so on.

gem



Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Arian J. Evans
Hola Gary, inline:


On Wed, Feb 3, 2010 at 12:05 PM, Gary McGraw g...@cigital.com wrote:

Strategic folks (VP, CxO) ...Initially ...ask for descriptive information, 
but once they get
going they need strategic prescriptions.

 Please see my response to Kevin.  I hope it's clear what the BSIMM is for.
  It's for measuring your initiative and comparing it to others.  Given some
 solid BSIMM data, I believe you can do a superior job with strategy...and
 results measurement.  It is a tool for strategic people to use to build an 
 initiative that works.


My response was regarding what people need today. I think BSIMM is too
much for most organizations' needs and interests.


Tactical folks tend to ask:
+ What should we fix first? (prescriptive)
+ What steps can I take to reduce XSS attack surface by 80%?

 The BSIMM is not for tactical folks.

That's too bad. Security is largely tactical, like it or not.


 But should you base your decision regarding what to fix first on goat 
sacrifice?
 What should drive that decision?  Moon phase?


It doesn't take much thinking to move beyond moon phase to pragmatic
things like:

+ What is being attacked? (the most, or targeting you)
+ What do I have the most of?
+ What issues present the most risk of impact or loss?
+ etc.

Definitely doesn't take Feynman. Or moon phase melodrama.
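
In code, that kind of pragmatic triage is nothing fancier than a weighted 
sort. A rough sketch (Python; the weights and findings are invented):

# Naive triage: score each finding class by how actively it is being
# attacked, how many instances we have, and estimated impact.
findings = [
    {"issue": "XSS",            "being_attacked": 0.9, "instances": 120, "impact": 0.4},
    {"issue": "SQL injection",  "being_attacked": 0.8, "instances": 15,  "impact": 0.9},
    {"issue": "Verbose errors", "being_attacked": 0.2, "instances": 300, "impact": 0.1},
]

def score(f):
    return f["being_attacked"] * f["impact"] * f["instances"]

for f in sorted(findings, key=score, reverse=True):
    print(round(score(f), 1), f["issue"])
# 43.2 XSS
# 10.8 SQL injection
# 6.0 Verbose errors

The point is only that the inputs are operational questions, not moon phase.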


 Implementation level folks ask:
+ What do I do about this specific attack/weakness?
+ How do I make my compensating control (WAF, IPS) block this specific attack?

 BSIMM != code review tool, top-n list, book, coding experience, ...

Sure. Again, I was sharing with folks on SC-L what people out there IRL,
at various layers of an organization, actually care about.


BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

 Where to start.  All I can say about BSIMM so far is that it appears
 to be useful for 30 large commercial organizations carrying out real
 software security initiatives.


BSIMM might be useful. I don't think it's necessary. More power to
BSIMM though. I think everyone on SC-L would appreciate more good
data, and BSIMM certainly can collect some interesting data.


 But what about SMB (small to medium sized business)?

I don't deal a lot with SMB, but certainly they don't need BSIMM. They
might make use of the metrics (?) though I doubt it. They want, and
probably need, Top(n) lists and prescriptive guidance.


 Arian, who are your clients?

Mostly Fortune-listed (100/500/2000, etc.), but including a broad
spectrum from small online startups to East Coast financial
institutions. Mostly people who do business on the Internet, and care
about that business, and security (to try and put them all in a
singular bucket).


 How many developers do they have?

From a handful to thousands, to tens of thousands. Why?


  Who do you report to as a consultant?

I haven't done consulting in years.


  How do you help them make business decisions?

With Math, mostly, and pragmatic prioritization so they can move on
and focus on their business, and get security out of the way as much
as possible.


 Regarding the existence of an SSG, see this article
 http://www.informit.com/articles/article.aspx?p=1434903.
  Are your customers too small to have an SSG?  Are YOU the SSG?
  Are your customers not mature enough for an SSG?  Data would be great.

Not many organizations need an SSG today, unless they have a TON of
developers and are an ISV, or a SaaS version of an old-school ISV
(Salesforce.com).

I do think they benefit highly from a developer-turned-SSP. But I
don't think there are enough of those to go around. So the network and
widget security folks, and even the policy wonks, are probably going to
play a role in software security.


But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly it's just more faith
and magic from Gary's end. We all can do better than that.

 You guys and your personal attacks.  Yeesh.

Gary -- you've been a bit preachy and didactic lately; maybe Obama's
demagoguery has been inspiring you. So be prepared to duck. I'll
define my tomatoes below. Alternately you might consider ending your
articles with Amen. :)


 I am pretty sure you meant the next to last paragraph

You are correct.


 As I have said before, the time has come to put away the bug parade boogeyman
 http://www.informit.com/articles/article.aspx?p=1248057,
 the top 25 tea leaves 
 http://www.informit.com/articles/article.aspx?p=1322398,
 black box web app goat sacrifice, and the occult reading of pen testing 
 entrails.
 It's science time.  And the more descriptive and data driven we are, the 
 better.

 Can you be more specific about your disagreements please?


Yes, I think, quite simply: that paragraph has a sign swinging over it
that says out to 

[SC-L] Thread is dead -- Re: BSIMM update (informIT)

2010-02-04 Thread Kenneth Van Wyk
OK, so this thread has heated up substantially and is on the verge of flare-up. 
 So, I'm declaring the thread to be dead and expunging the extant queue.

If anyone has any civil and value-added points to add, feel free to submit 
them, of course.  As always, I encourage free and open debate here, so long as 
it remains civil and on topic.

Cheers,

Ken

-
Kenneth R. van Wyk
SC-L Moderator



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___