Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Steven M. Christey


On Wed, 3 Feb 2010, Gary McGraw wrote:

Popularity contests are not the kind of data we should count on.  But 
maybe we'll make some progress on that one day.


That's my hope, too, but I'm comfortable with making baby steps along the 
way.



Ultimately, I would love to see the kind of linkage between the collected
data (evidence) and some larger goal (higher security whatever THAT
means in quantitative terms) but if it's out there, I don't see it


Neither do I, and that is a serious issue with models like the BSIMM 
that measure second order effects like activities.  Do the activities 
actually do any good?  Important question!


And one we can't answer without more data that comes from the developers 
who adopt any particular practice, and without some independent measure of 
what success means.  For example: I am a big fan of the attack surface 
metric originally proposed by Michael Howard and taken up by Jeanette Wing 
et al. at CMU (still need to find the time to read Manadhata's thesis, 
alas...)  It seems like common sense that if you reduce attack surface, 
you reduce the number of security problems, but how do you KNOW!?
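The intuition behind the metric can be sketched in a toy calculation. This is an illustration only, not Manadhata's actual formulation; the entry points and numbers below are invented. Each resource an attacker can reach contributes a damage-potential/attacker-effort ratio, and the "attack surface" is the sum of the contributions:

```python
# Toy sketch of an attack-surface-style metric (illustration only; see
# Manadhata & Wing for the real formulation).  Each reachable entry
# point contributes damage_potential / attacker_effort; the attack
# surface is the sum of those contributions.

from dataclasses import dataclass

@dataclass
class EntryPoint:
    name: str
    damage_potential: int   # e.g. privilege level the channel runs at
    attacker_effort: int    # e.g. access rights needed to reach it

def attack_surface(entry_points):
    return sum(e.damage_potential / e.attacker_effort for e in entry_points)

before = [
    EntryPoint("http_admin", damage_potential=5, attacker_effort=1),
    EntryPoint("rpc_debug",  damage_potential=4, attacker_effort=1),
    EntryPoint("api_read",   damage_potential=2, attacker_effort=2),
]
# Disable the debug RPC channel and require auth on the admin port:
after = [
    EntryPoint("http_admin", damage_potential=5, attacker_effort=5),
    EntryPoint("api_read",   damage_potential=2, attacker_effort=2),
]

print(attack_surface(before))  # 10.0
print(attack_surface(after))   # 2.0
```

The sketch makes Steve's question concrete: the number goes down, but whether the count of actual security problems goes down with it is exactly the linkage nobody has measured.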



The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same
with the 2010 Top 25 (whose release has been delayed to Feb 16, btw).
Unlike last year's Top 25 effort, this time I received several sources of
raw prevalence data, but unfortunately it wasn't in sufficiently
consumable form to combine.


I was with you up until that last part.  Combining the prevalence data 
is something you guys should definitely do.  BTW, how is the 2010 CWE-25 
(which doesn't yet exist) more data-driven??


I guess you could call it a more refined version of the popularity 
contest that you already referred to (with the associated limitations, 
and thus subject to some of the same criticisms as those pointed at 
BSIMM): we effectively conducted a survey of a diverse set of 
organizations/individuals from various parts of the software security 
industry, asking what was most important to them, and what they saw the 
most often.  This year, I intentionally designed the Top 25 under the 
assumption that we would not have hard-core quantitative data, recognizing 
that people WANTED hard-core data, and that the few people who actually 
had this data would not want to share it.  (After all, as a software 
vendor you may know what your own problems are, but you might not want to 
share that with anyone else.)


It was a bit of a surprise when a handful of participants actually had 
real data - but then the problem I'm referring to with respect to 
"consumable form" reared its ugly head.  One third-party consultant had 
statistics for a broad set of about 10 high-level categories representing 
hundreds of evaluations; one software vendor gave us a specific weakness 
history - representing dozens of different CWE entries across a broad 
spectrum of issues, sometimes at very low levels of detail and even 
branching into the GUI part of CWE which almost nobody pays attention to - 
but only for 3 products.  Another vendor rep evaluated the dozen or two 
publicly-disclosed vulnerabilities that were most severe according to 
associated CVSS scores.  Those three data sets, plus the handful of others 
based on some form of analysis of hard-core data, are not merge-able. 
The irony with CWE (and many of the making-security-measurable efforts) is 
that it brings sufficient clarity to recognize when there is no clarity... 
the "known unknowns," to quote Donald Rumsfeld.  I saw this in 1999 in the 
early days of CVE, too, and it's still going on - observers of the 
oss-security list see this weekly.
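The merge problem is mechanical as much as conceptual: one data set is counted in roughly ten coarse categories, another per CWE entry. A sketch of why the data sets don't combine (the category names, counts, and CWE-to-category map below are hypothetical, for illustration only):

```python
# Why data sets at different granularities don't merge cleanly.
# All names and numbers below are hypothetical.

# Consultant data: findings per high-level category, hundreds of evals.
consultant = {"injection": 412, "xss": 387, "auth": 203}

# Vendor data: findings per specific CWE entry, but only 3 products.
vendor = {"CWE-89": 17, "CWE-79": 31, "CWE-306": 4, "CWE-416": 9}

# Rolling the fine-grained data up requires a mapping everyone agrees
# on -- and some entries have no obvious bucket.
cwe_to_category = {"CWE-89": "injection", "CWE-79": "xss", "CWE-306": "auth"}

rolled_up, unmapped = {}, []
for cwe, count in vendor.items():
    category = cwe_to_category.get(cwe)
    if category is None:
        unmapped.append(cwe)  # e.g. CWE-416: which coarse bucket?
    else:
        rolled_up[category] = rolled_up.get(category, 0) + count

print(rolled_up)  # {'injection': 17, 'xss': 31, 'auth': 4}
print(unmapped)   # ['CWE-416']
# Even after rollup, "hundreds of evaluations" vs "3 products" means
# the raw counts still aren't comparable without normalization.
```

Rolling up to the coarsest common denominator loses exactly the low-level detail that made the vendor's data interesting, which is the bind Steve describes.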


For data collection at such a specialized level, the situation is not 
unlike the breach-data problem faced by the Open Security Foundation in 
their Data Loss DB work - sometimes you have details, sometimes you don't. 
The Data Loss people might be able to say "well, based on this 100-page 
report we examined, we think it MIGHT have been SQL injection" - but that's 
the kind of data we're dealing with right now.


Now, a separate exercise in which we compare/contrast the customized top-n 
lists of those who have actually progressed to the point of making them... 
that smells like opportunity to me.
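The compare/contrast exercise is easy to set up once the lists exist. A sketch, using hypothetical per-organization top-n lists of CWE IDs, of the two questions on the table: how much do the lists overlap, and is their union interesting?

```python
# Sketch of comparing customized top-n lists (hypothetical CWE IDs).

def jaccard(a, b):
    """Overlap between two lists: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

org_a = ["CWE-79", "CWE-89", "CWE-352", "CWE-22"]     # a web shop
org_b = ["CWE-79", "CWE-120", "CWE-190", "CWE-89"]    # a C/C++ shop
generic = ["CWE-79", "CWE-89", "CWE-352", "CWE-120"]  # a generic top-n

# Pairwise overlap between orgs, and each org against the generic list:
print(round(jaccard(org_a, org_b), 3))   # 0.333
print(round(jaccard(org_a, generic), 3)) # 0.6
print(round(jaccard(org_b, generic), 3)) # 0.6

# The union of the code-base-specific lists:
union = sorted(set(org_a) | set(org_b))
print(union)
```

Low overlap between the org-specific lists would be evidence for Gary's worry that a single generic top-n list is too coarse to act on wholesale.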



I for one am pretty satisfied with the rate at which things are
progressing and am delighted to see that we're finally getting some raw
data, as good (or as bad) as it may be.  The data collection process,
source data, metrics, and conclusions associated with the 2010 Top 25 will
probably be controversial, but at least there's some data to argue about.


Cool!


To clarify to others who have commented on this part - I'm talking 
specifically about the rate in which the software security industry seems 
to be maturing, independently of how quickly the threat landscape is 
changing.  That's a whole different, depressing problem.


- Steve
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.

Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Mike Boberski
I for one am pretty satisfied with the rate at which things are
progressing

I dunno...

Again, trying to keep it pithy: I for one welcome our eventual new [insert
hostile nation state here] overlords. /joke

What I see from my vantage point is a majority of people who (1) should know
better, given their leadership positions, but don't, or (2) willingly
ignore security-related concerns to advance their personal business goals,
trusting in the availability of lawyers or the ability to punch out before
stuff hits the fan (speculating, perhaps, on motives).

Excuse me now while I get back to my Rosetta Stone lesson. /joke

Mike


On Wed, Feb 3, 2010 at 3:04 PM, Gary McGraw g...@cigital.com wrote:

 Hi Steve (and sc-l),

 I'll invoke my skiing with Eli excuse again on this thread as well...

 On Tue, 2 Feb 2010, Wall, Kevin wrote:
  To study something scientifically goes _beyond_ simply gathering
  observable and measurable evidence. Not only does data need to be
  collected, but it also needs to be tested against a hypothesis that
  offers a tentative *explanation* of the observed phenomena;
  i.e., the hypothesis should offer some predictive value.

 On 2/2/10 4:12 PM, Steven M. Christey co...@linus.mitre.org wrote:
 I believe that the cross-industry efforts like BSIMM, ESAPI, top-n lists,
 SAMATE, etc. are largely at the beginning of the data collection phase.

 I agree 100%.  It's high time we gathered some data to back up our claims.
  I would love to see the top-n lists do more with data.

 Here's an example.  In the BSIMM,  10 of 30 firms have built top-N bug
 lists based on their own data culled from their own code.  I would love to
 see how those top-n lists compare to the OWASP top ten or the CWE-25.  I
 would also love to see whether the union of these lists is even remotely
 interesting.  One of my (many) worries about top-n lists that are NOT bound
 to a particular code base is that the lists are so generic as to be useless
 and maybe even unhelpful if adopted wholesale without understanding what's
 actually going on in a codebase. [see 
 http://www.informit.com/articles/article.aspx?p=1322398].

 Note for the record that asking lots of people what they think should be
 in the top-10 is not quite the same as taking the union of particular top-n
 lists which are tied to particular code bases.  Popularity contests are not
 the kind of data we should count on.  But maybe we'll make some progress on
 that one day.

 Ultimately, I would love to see the kind of linkage between the collected
 data (evidence) and some larger goal (higher security whatever THAT
 means in quantitative terms) but if it's out there, I don't see it

 Neither do I, and that is a serious issue with models like the BSIMM that
 measure second order effects like activities.  Do the activities actually
 do any good?  Important question!

 The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same
 with the 2010 Top 25 (whose release has been delayed to Feb 16, btw).
 Unlike last year's Top 25 effort, this time I received several sources of
 raw prevalence data, but unfortunately it wasn't in sufficiently
 consumable form to combine.

 I was with you up until that last part.  Combining the prevalence data is
something you guys should definitely do.  BTW, how is the 2010 CWE-25 (which
doesn't yet exist) more data-driven??

 I for one am pretty satisfied with the rate at which things are
 progressing and am delighted to see that we're finally getting some raw
 data, as good (or as bad) as it may be.  The data collection process,
 source data, metrics, and conclusions associated with the 2010 Top 25 will
 probably be controversial, but at least there's some data to argue about.

 Cool!

 So in that sense, I see Gary's article not so much as a clarion call for
 action to a reluctant and primitive industry, but an early announcement of
 a shift that is already underway.

 Well put.

 gem

 company www.cigital.com
 podcast www.cigital.com/~gem http://www.cigital.com/%7Egem
 blog www.cigital.com/justiceleague
 book www.swsec.com


 ___
 Secure Coding mailing list (SC-L) SC-L@securecoding.org
 List information, subscriptions, etc -
 http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
 SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
 as a free, non-commercial service to the software security community.
 ___

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread McGovern, James F. (eBusiness)
When comparing BSIMM to SAMM are we suffering from the Mayberry Paradox? Did 
you know that Apple is more secure than Microsoft simply because there are more 
successful attacks on MS products? Of course, we should ignore the fact that 
the number of attackers doesn't prove that one product is more secure than 
another.

Whenever I bring in either vendors or consultancies to write about my 
organization, do I only publish the positives and only slip in a few negatives 
in order to maintain the façade of integrity? Would BSIMM be a better approach 
if the audience wasn't so self-selecting? At no time did it include 
corporations who use Ounce Labs or Coverity or even other well-known security 
consultancies.

OWASP, on the other hand, received feedback from folks such as myself not on 
the things that worked, but on a ton of stuff that didn't work for us. This type of 
filtering provides more value in that it helps other organizations avoid 
repeating things that we didn't do so well without necessarily encouraging 
others to do it the McGovern way.

Corporations are dynamic entities and what won't work vs what will is highly 
contextual. I prefer a list of things that could possibly work over the effort 
to simply pull something off the shelf that another organization got to work 
with a lot of missing context. The best security decisions are made when you 
can provide an enterprise with choice in recommendations and I think SAMM in 
this regard does a better job than other approaches.

-Original Message-
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
Behalf Of Kenneth Van Wyk
Sent: Wednesday, February 03, 2010 4:08 PM
To: Secure Coding
Subject: Re: [SC-L] BSIMM update (informIT)

On Jan 28, 2010, at 10:34 AM, Gary McGraw wrote:
 Among other things, David and I discussed the difference between descriptive 
 models like BSIMM and prescriptive models which purport to tell you what you 
 should do. 

Thought I'd chime in on this a bit, FWIW...  From my perspective, I welcome 
BSIMM and I welcome SAMM.  I don't see it in the least as a one-or-the-other 
debate.

A decade(ish) since the first texts on various aspects of software security 
started appearing, it's great to have a BSIMM that surveys some of the largest 
software groups on the planet to see what they're doing.  What actually works.  
That's fabulously useful.  On the other hand, it is possible that ten thousand 
lemmings can be wrong.  Following the herd isn't always what's best.

SAMM, by contrast, was written by some bright, motivated folks, and provides us 
all with a set of targets to aspire to.  Some will work, and some won't, 
without a doubt.

To me, both models are useful as guide posts to help a software group--an SSG 
if you will--decide what practices will work best in their enterprise.

But as useful as both SAMM and BSIMM are, I think we're all fooling ourselves 
if we consider these to be standards or even maturity models.  Any other 
engineering discipline on the planet would laugh us all out of the room by the 
mere suggestion.  There's value to them, don't get me wrong.  But we're still 
in the larval mode of building an engineering discipline here folks.  After 
all, as a species, we didn't start (successfully) building bridges in a decade.

For now, my suggestion is to read up, try things that seem reasonable, and 
build a set of practices that work for _you_.  

Cheers,

Ken

-
Kenneth R. van Wyk
KRvW Associates, LLC
http://www.KRvW.com


This communication, including attachments, is for the exclusive use of 
addressee and may contain proprietary, confidential and/or privileged 
information.  If you are not the intended recipient, any use, copying, 
disclosure, dissemination or distribution is strictly prohibited.  If you are 
not the intended recipient, please notify the sender immediately by return 
e-mail, delete this communication and destroy all copies.



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Jim Manico
Why are we holding up the statistics from Google, Adobe and Microsoft ( 
http://www.bsi-mm.com/participate/ ) in BSIMM?


These companies are examples of recent epic security failure, probably 
the most financially damaging infosec attacks ever. Microsoft let a 
plain-vanilla 0-day slip through IE6 for years, Google has a pretty 
basic network segmentation and policy problem, and Adobe continues to be 
the laughing stock of client-side security. Why are we holding up these 
companies as BSIMM champions?


- Jim




Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Brian Chess
 At no time did it include corporations who use Ounce Labs or Coverity

Bzzzt.  False.  While there are plenty of Fortify customers represented in
BSIMM, there are also plenty of participants who aren't Fortify customers.
I don't think there are any hard numbers on market share in this realm, but
my hunch is that BSIMM is not far off from a uniform sample in this regard.

Brian



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Steven M. Christey


On Thu, 4 Feb 2010, Jim Manico wrote:

These companies are examples of recent epic security failure. Probably 
the most financially damaging infosec attack, ever. Microsoft let a 
plain-vanilla 0-day slip through IE6 for years


Actually, it was a not-so-vanilla use-after-free, which once upon a time 
was only thought of as a reliability problem, but lately, exploit and 
detection techniques have begun bearing fruit for the small 
number of people who actually know how to get code execution out of these 
bugs.  In general, Microsoft (and others) have gotten their software to 
the point where attackers and researchers have to spend a lot of time and 
$$$ to find obscure vuln types, then spend some more time and $$$ to work 
around the various protection mechanisms that exist in order to get code 
execution instead of a crash.


I can't remember the last time I saw a Microsoft product have a 
mind-numbingly-obvious problem in it.  It would be nice if statistics were 
available that measured how many person-hours and CPU-hours were used to 
find new vulnerabilities - then you could determine the ratio of 
level-of-effort to number-of-vulns-found.  That data's not available, 
though - we only have anecdotal evidence from people such as Dave Aitel and 
David Litchfield saying it's getting more difficult and time-consuming.
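The ratio Steve wishes were measurable is simple to state. A sketch, with invented numbers purely to show the shape of the metric (no such data set exists, as he notes, and the CPU-to-person conversion rate is an assumption):

```python
# Level-of-effort per vulnerability found, with invented numbers.

def effort_per_vuln(person_hours, cpu_hours, vulns_found,
                    cpu_hours_per_person_hour=100):
    """Person-hour-equivalents spent per vulnerability found.
    The rate converting CPU-hours (cheap) into person-hour
    equivalents is an assumption, not an established figure."""
    if vulns_found == 0:
        return float("inf")
    effort = person_hours + cpu_hours / cpu_hours_per_person_hour
    return effort / vulns_found

# Hypothetical campaigns against the same product, years apart:
print(effort_per_vuln(200, 1000, 40))   # 5.25   -> vulns were cheap
print(effort_per_vuln(2000, 50000, 5))  # 500.0  -> now expensive
```

A rising ratio over successive releases would quantify the "it's getting more difficult and time-consuming" claim that today rests on anecdote.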


- Steve
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Gary McGraw
hi jim,

We chose organizations that in our opinion are doing a superior job with 
software security.  You are welcome to disagree with our choices.

Microsoft has a shockingly good approach to software security that they are 
kind enough to share with the world through the SDL books and websites.  Google 
has a much different approach with more attention focused on open source risk 
and testing (and much less on code review with tools).  Adobe has a newly 
reinvigorated approach under new leadership that is making some much needed 
progress.

The three firms that you cited were all members of the original nine whose data 
allowed us to construct the model.  There are now 30 firms in the BSIMM study, 
and their BSIMM data vary as much as you might expect...about which more soon.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


On 2/4/10 12:50 PM, Jim Manico j...@manico.net wrote:

Why are we holding up the statistics from Google, Adobe and Microsoft (
http://www.bsi-mm.com/participate/ ) in BSIMM?

These companies are examples of recent epic security failure, probably
the most financially damaging infosec attacks ever. Microsoft let a
plain-vanilla 0-day slip through IE6 for years, Google has a pretty
basic network segmentation and policy problem, and Adobe continues to be
the laughing stock of client-side security. Why are we holding up these
companies as BSIMM champions?

- Jim



Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread McGovern, James F. (eBusiness)
Merely hoping to understand more about the thinking behind BSIMM. 

Here is a quote from the page: "Of the thirty-five large-scale software 
security initiatives we are aware of, we chose nine that we considered the most 
advanced."  How can the reader tell why the others were filtered out?

When you visit the link http://www.bsi-mm.com/participate/, it doesn't show any 
of the vendors you mentioned below. Should they be shown somewhere?

The BSIMM download link requires registration. Does this become a "lead" for 
some company?


-Original Message-
From: Gary McGraw [mailto:g...@cigital.com] 
Sent: Thursday, February 04, 2010 2:18 PM
To: McGovern, James F. (P+C Technology); Secure Code Mailing List
Subject: Re: [SC-L] BSIMM update (informIT)

hi james,

I'm afraid you are completely wrong about this paragraph which you have 
completely fabricated.  Please check your facts.  This one borders on slander 
and I have no earthly idea why you believe what you said.

 Would BSIMM be a better approach if the audience wasn't so 
 self-selecting? At no time did it include corporations who use Ounce Labs or 
 Coverity or even other well-known security consultancies.

BSIMM covers many organizations who use Ounce, Appscan, SPI dev inspect, 
Coverity, Klocwork, Veracode, and a slew of consultancies including iSec, 
Aspect, Leviathan, Aitel, and so on.

gem


On 2/4/10 10:29 AM, McGovern, James F. (eBusiness) 
james.mcgov...@thehartford.com wrote:

When comparing BSIMM to SAMM are we suffering from the Mayberry Paradox? Did 
you know that Apple is more secure than Microsoft simply because there are more 
successful attacks on MS products? Of course, we should ignore the fact that 
the number of attackers doesn't prove that one product is more secure than 
another.

Whenever I bring in either vendors or consultancies to write about my 
organization, do I only publish the positives and only slip in a few negatives 
in order to maintain the façade of integrity? Would BSIMM be a better approach 
if the audience wasn't so self-selecting? At no time did it include 
corporations who use Ounce Labs or Coverity or even other well-known security 
consultancies.

OWASP, on the other hand, received feedback from folks such as myself not on 
the things that worked, but on a ton of stuff that didn't work for us. This 
type of filtering provides more value in that it helps other organizations 
avoid repeating the things we didn't do so well, without necessarily 
encouraging others to do it the McGovern way.

Corporations are dynamic entities, and what won't work vs. what will is highly 
contextual. I prefer a list of things that could possibly work over simply 
pulling something off the shelf that another organization got to work, with a 
lot of the context missing. The best security decisions are made when you can 
provide an enterprise with choice in recommendations, and I think SAMM does a 
better job in this regard than other approaches.

-Original Message-
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
Behalf Of Kenneth Van Wyk
Sent: Wednesday, February 03, 2010 4:08 PM
To: Secure Coding
Subject: Re: [SC-L] BSIMM update (informIT)

On Jan 28, 2010, at 10:34 AM, Gary McGraw wrote:
 Among other things, David and I discussed the difference between descriptive 
 models like BSIMM and prescriptive models which purport to tell you what you 
 should do.

Thought I'd chime in on this a bit, FWIW...  From my perspective, I welcome 
BSIMM and I welcome SAMM.  I don't see it in the least as a one or the other 
debate.

A decade(ish) since the first texts on various aspects of software security 
started appearing, it's great to have a BSIMM that surveys some of the largest 
software groups on the planet to see what they're doing.  What actually works.  
That's fabulously useful.  On the other hand, it is possible that ten thousand 
lemmings can be wrong.  Following the herd isn't always what's best.

SAMM, by contrast, was written by some bright, motivated folks, and provides us 
all with a set of targets to aspire to.  Some will work, and some won't, 
without a doubt.

To me, both models are useful as guide posts to help a software group--an SSG 
if you will--decide what practices will work best in their enterprise.

But as useful as both SAMM and BSIMM are, I think we're all fooling ourselves 
if we consider these to be standards or even maturity models.  Any other 
engineering discipline on the planet would laugh us all out of the room by the 
mere suggestion.  There's value to them, don't get me wrong.  But we're still 
in the larval mode of building an engineering discipline here folks.  After 
all, as a species, we didn't start (successfully) building bridges in a decade.

For now, my suggestion is to read up, try things that seem reasonable, and 
build a set of practices that work for _you_.

Cheers,

Ken

-
Kenneth R. van Wyk
KRvW Associates, LLC
http://www.KRvW.com

Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Arian J. Evans
Hola Gary, inline:


On Wed, Feb 3, 2010 at 12:05 PM, Gary McGraw g...@cigital.com wrote:

Strategic folks (VP, CxO) ...Initially ...ask for descriptive information, 
but once they get
going they need strategic prescriptions.

 Please see my response to Kevin.  I hope it's clear what the BSIMM is for.
  It's for measuring your initiative and comparing it to others.  Given some
 solid BSIMM data, I believe you can do a superior job with strategy...and
 results measurement.  It is a tool for strategic people to use to build an 
 initiative that works.


My response was regarding what people need today. I think BSIMM is too
much for most organizations' needs and interests.


Tactical folks tend to ask:
+ What should we fix first? (prescriptive)
+ What steps can I take to reduce XSS attack surface by 80%?

 The BSIMM is not for tactical folks.

That's too bad. Security is largely tactical, like it or not.


 But should you base your decision regarding what to fix first on goat 
sacrifice?
 What should drive that decision?  Moon phase?


It doesn't take much thinking to move beyond moon phase to pragmatic
things like:

+ What is being attacked? (the most, or targeting you)
+ What do I have the most of?
+ What issues present the most risk of impact or loss?
+ etc.

Definitely doesn't take Feynman. Or moon phase melodrama.
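The pragmatic prioritization described in the questions above really does reduce to simple arithmetic. A minimal sketch (every name, weight, and number here is invented for illustration):

```python
# Illustrative only: rank findings by prevalence, impact, and whether they
# are being actively attacked -- the "doesn't take Feynman" arithmetic.
findings = [
    {"name": "XSS",            "count": 120, "impact": 3, "actively_attacked": True},
    {"name": "SQL injection",  "count": 15,  "impact": 9, "actively_attacked": True},
    {"name": "Verbose errors", "count": 200, "impact": 1, "actively_attacked": False},
]

def priority(f):
    # What is being attacked? What do we have the most of? What hurts most?
    attack_bonus = 2.0 if f["actively_attacked"] else 1.0
    return f["count"] * f["impact"] * attack_bonus

ranked = sorted(findings, key=priority, reverse=True)
for f in ranked:
    print(f["name"], priority(f))
```

The weights are the whole argument: pick them from your own attack and asset data, not from moon phase.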


 Implementation level folks ask:
+ What do I do about this specific attack/weakness?
+ How do I make my compensating control (WAF, IPS) block this specific attack?

 BSIMM != code review tool, top-n list, book, coding experience, ...

Sure. Again, I was sharing with folks on SC-L what people out in the real
world, at various layers of an organization, actually care about.


BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

 Where to start.  All I can say about BSIMM so far is that it appears
 to be useful for 30 large commercial organizations carrying out real
 software security initiatives.


BSIMM might be useful. I don't think it's necessary. More power to
BSIMM though. I think everyone on SC-L would appreciate more good
data, and BSIMM certainly can collect some interesting data.


 But what about SMB (small to medium sized business)?

I don't deal a lot with SMB, but certainly they don't need BSIMM. They
might make use of the metrics (?) though I doubt it. They want, and
probably need, Top(n) lists and prescriptive guidance.


 Arian, who are your clients?

Mostly fortune-listed (100/500/2000, etc.), but including a broad
spectrum from small online startups to east coast financial
institutions. Mostly people who do business on the Internet, and care
about that business, and security (to try and put them all in a
singular bucket).


 How many developers do they have?

From a handful to thousands, to tens of thousands. Why?


  Who do you report to as a consultant?

I haven't done consulting in years.


  How do you help them make business decisions?

With Math, mostly, and pragmatic prioritization so they can move on
and focus on their business, and get security out of the way as much
as possible.


 Regarding the existence of an SSG, see this article
 http://www.informit.com/articles/article.aspx?p=1434903.
  Are your customers too small to have an SSG?  Are YOU the SSG?
  Are your customers not mature enough for an SSG?  Data would be great.

Not many organizations need an SSG today, unless they have a TON of
developers and are an ISV, or a SaaS version of an old-school ISV
(Salesforce.com).

I do think they benefit highly from a developer-turned-SSP. But I
don't think there are enough of those to go around. So the network and
widget security folks, and even the policy wonks, are going to
probably play a role in software security.


But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly it's just more faith
and magic from Gary's end. We all can do better than that.

 You guys and your personal attacks.  Yeesh.

Gary -- you've been a bit preachy and didactic lately; maybe Obama's
demagoguery has been inspiring you. So be prepared to duck. I'll
define my tomatoes below. Alternately you might consider ending your
articles with Amen. :)


 I am pretty sure you meant the next to last paragraph

You are correct.


 As I have said before, the time has come to put away the bug parade boogeyman
 http://www.informit.com/articles/article.aspx?p=1248057,
 the top 25 tea leaves 
 http://www.informit.com/articles/article.aspx?p=1322398,
 black box web app goat sacrifice, and the occult reading of pen testing 
 entrails.
 It's science time.  And the more descriptive and data driven we are, the 
 better.

 Can you be more specific about your disagreements please?


Yes, I think, quite simply: that paragraph has a sign swinging over it
that says out to 

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Benjamin Tomhave
<soapbox>While I can't disagree with this based on modern reality, I'm
increasingly hesitant to allow the conversation to bring in risk, since
it's almost complete garbage these days. Nobody really understands it,
nobody really does it very well (especially if we redact out financial
services and insurance - and even then, look what happened to Wall
Street risk models!), and more importantly, it's implemented so shoddily
that there's no real, reasonable way to actually demonstrate risk
remediation/reduction because talking about it means bringing in a whole
other range of discussions (what is most important to the business?
and how are risk levels defined in business terms? and what role do
data and systems play in the business strategy? and how does data flow
into and out of the environment? and so on). Anyway... the long-n-short
is this: let's stop fooling ourselves by pretending that risk has
anything to do with these conversations.</soapbox>

I think:
 - yes to prescriptive!
 - yes to legal/regulatory mandates!
 - caution: we need some sort of evolving maturity framework to which
the previous two points can be pegged!

cheers,

-ben

On 2/2/10 4:32 PM, Arian J. Evans wrote:
 100% agree with the first half of your response, Kevin. Here's what
 people ask and need:
 
 
 Strategic folks (VP, CxO) most frequently ask:
 
 + What do I do next? / What should we focus on next? (prescriptive)
 
 + How do we tell if we are reducing risk? (prescriptive guidance again)
 
 Initially they ask for descriptive information, but once they get
 going they need strategic prescriptions.
 
 
 Tactical folks tend to ask:
 
 + What should we fix first? (prescriptive)
 
 + What steps can I take to reduce XSS attack surface by 80%? (yes, a
 prescriptive blacklist can work here)
 
 
  Implementation level folks ask:
 
 + What do I do about this specific attack/weakness?
 
 + How do I make my compensating control (WAF, IPS) block this specific attack?
 
 etc.
 
 BSIMM is probably useful for government agencies, or some large
 organizations. But the vast majority of clients I work with don't have
 the time or need or ability to take advantage of BSIMM. Nor should
 they. They don't need a software security group.
 
 They need a clear-cut tree of prescriptive guidelines that work in a
 measurable fashion. I agree and strongly empathize with Gary on many
 premises of his article - including that not many folks have metrics,
 and tend to have more faith and magic.
 
 But, as should be no surprise, I categorically disagree with the
 entire concluding paragraph of the article. Sadly it's just more faith
 and magic from Gary's end. We all can do better than that.
 
 There are other ways to gather and measure useful metrics easily
 without BSIMM. Black Box and Pen Test metrics, and Top(n) List metrics
 are metrics, and highly useful metrics. And definitely better than no
 metrics.
 
 Pragmatically, I think Ralph Nader fits better than Feynman for this 
 discussion.
 
 Nader's Top(n) lists and Bug Parades earned us many safer-society
 (cars, water, etc.) features over the last five decades.
 
 Feynman didn't change much in terms of business SOP.
 
 Good day then,
 
 ---
 Arian Evans
 capitalist marksman. eats animals.
 
 
 
 On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin kevin.w...@qwest.com wrote:
 On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

 Among other things, David [Rice] and I discussed the difference between
 descriptive models like BSIMM and prescriptive models which purport to
 tell you what you should do.  I just wrote an article about that for
 informIT.  The title is

 Cargo Cult Computer Security: Why we need more description and less
 prescription.
 http://www.informit.com/articles/article.aspx?p=1562220

 First, let me say that I have been the team lead of a small Software
 Security Group (specifically, an Application Security team) at a
 large telecom company for the past 11 years, so I am writing this from
 an SSG practitioner's perspective.

 Second, let me say that I appreciate descriptive holistic approaches to
 security such as BSIMM and OWASP's OpenSAMM. I think they are much
 needed, though seldom heeded.

 Which brings me to my third point. In my 11 years of experience working
 on this SSG, it is very rare that application development teams are
 looking for a _descriptive_ approach. Almost always, they are
 looking for a _prescriptive_ one. They want specific solutions
 to specific problems, not some general formula to an approach that will
 make them more secure. To those application development teams, something
 like OWASP's ESAPI is much more valuable than something like BSIMM or
 OpenSAMM. In fact, I believe your BSIMM research would indicate that
 many companies' SSGs have developed their own proprietary security APIs
 for use by their application development teams. Therefore, to that end,
 I would not say we need less _prescriptive_ and more _descriptive_
 approaches. Both are useful and ideally 

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Mike Boberski
Fun article. To try to be equally pithy in my response: the article reads to
me like a high-tech, application security-specific form of McCarthyism.

To explain...

The amount of reinvention and discussion about the problems in this space is
spectacular.

If one has something to start from which one can then tailor for one's own
purposes, why wouldn't one do this? Does one need to discover SQL injection
on one's own before deciding to do some escaping?

It's crazy in my opinion to think that the majority of the planet has the
expertise let alone the bandwidth (Agile, anyone?) to thoughtfully research
and derive anything that results in a net effect of a targeted, measurable,
comparable level of security.

To all the good folks out there, here is some advice for free: don't start
from scratch, whether it's at the program level, the project level, or the
toolkit level. Use the top x lists to make sure whatever you're doing is up
to date with the latest best practices and technologies. On the subject of
tools and products specifically since the article veers there very
specifically: if you're looking to build or buy a product that provides
security functions, go look into CC. If you're looking at a cryptomodule, go
look into FIPS 140. If you're looking at an enterprise app, go look into
ASVS. If you need a toolkit that validates form input data strings in PHP
using a whitelist because you're trying to provide a first layer of defense
against XSS and SQLi, use BSIMM. Just kidding. Yes, use ESAPI in those
cases.
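The whitelist-validation idea mentioned above can be sketched in a few lines. This is illustrative only (in Python rather than PHP, with invented field names and patterns; it is not ESAPI's actual API): accept only input that matches a known-good pattern, and reject everything else as a first layer of defense.

```python
import re

# Illustrative whitelist (positive) input validation: each field accepts
# only a known-good character set and length. Patterns are examples only.
WHITELIST = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "zipcode":  re.compile(r"^\d{5}$"),
}

def validate(field, value):
    """Return value unchanged if it matches the field's whitelist, else raise."""
    pattern = WHITELIST.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"invalid {field!r}")
    return value

validate("username", "alice_01")    # accepted
# validate("username", "<script>")  # would raise ValueError
```

Note this is a first layer only: output encoding and parameterized queries are still needed downstream.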

FWIW,

Best,

Mike


On Tue, Feb 2, 2010 at 4:32 PM, Arian J. Evans
arian.ev...@anachronic.com wrote:

 100% agree with the first half of your response, Kevin. Here's what
 people ask and need:


 Strategic folks (VP, CxO) most frequently ask:

 + What do I do next? / What should we focus on next? (prescriptive)

 + How do we tell if we are reducing risk? (prescriptive guidance again)

 Initially they ask for descriptive information, but once they get
 going they need strategic prescriptions.


 Tactical folks tend to ask:

 + What should we fix first? (prescriptive)

 + What steps can I take to reduce XSS attack surface by 80%? (yes, a
 prescriptive blacklist can work here)


  Implementation level folks ask:

 + What do I do about this specific attack/weakness?

 + How do I make my compensating control (WAF, IPS) block this specific
 attack?

 etc.

 BSIMM is probably useful for government agencies, or some large
 organizations. But the vast majority of clients I work with don't have
 the time or need or ability to take advantage of BSIMM. Nor should
 they. They don't need a software security group.

 They need a clear-cut tree of prescriptive guidelines that work in a
 measurable fashion. I agree and strongly empathize with Gary on many
 premises of his article - including that not many folks have metrics,
 and tend to have more faith and magic.

 But, as should be no surprise, I categorically disagree with the
 entire concluding paragraph of the article. Sadly it's just more faith
 and magic from Gary's end. We all can do better than that.

 There are other ways to gather and measure useful metrics easily
 without BSIMM. Black Box and Pen Test metrics, and Top(n) List metrics
 are metrics, and highly useful metrics. And definitely better than no
 metrics.

 Pragmatically, I think Ralph Nader fits better than Feynman for this
 discussion.

 Nader's Top(n) lists and Bug Parades earned us many safer-society
 (cars, water, etc.) features over the last five decades.

 Feynman didn't change much in terms of business SOP.

 Good day then,

 ---
 Arian Evans
 capitalist marksman. eats animals.



 On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin kevin.w...@qwest.com wrote:
  On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:
 
  Among other things, David [Rice] and I discussed the difference between
  descriptive models like BSIMM and prescriptive models which purport to
  tell you what you should do.  I just wrote an article about that for
  informIT.  The title is
 
  Cargo Cult Computer Security: Why we need more description and less
  prescription.
  http://www.informit.com/articles/article.aspx?p=1562220
 
  First, let me say that I have been the team lead of a small Software
  Security Group (specifically, an Application Security team) at a
  large telecom company for the past 11 years, so I am writing this from
  an SSG practitioner's perspective.
 
  Second, let me say that I appreciate descriptive holistic approaches to
  security such as BSIMM and OWASP's OpenSAMM. I think they are much
  needed, though seldom heeded.
 
  Which brings me to my third point. In my 11 years of experience working
  on this SSG, it is very rare that application development teams are
  looking for a _descriptive_ approach. Almost always, they are
  looking for a _prescriptive_ one. They want specific solutions
  to specific problems, not some general formula to an approach that will
  make them more secure. To those 

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Mike Boberski
 But the vast majority of clients I work with don't have the time or need
or ability to take advantage of BSIMM

Mike's Top 5 Web Application Security Countermeasures:

1. Add a security guy or gal who has a software development background to
your application's software development team.

2. Turn SSL/TLS on for all connections (including both external and backend
connections) that are authenticated or that involve sensitive data or
functions.

3. Build an Enterprise Security API (a.k.a. an ESAPI, e.g. OWASP's several
different ESAPI toolkits) that is specific to your solution stack and
minimally provides input validation controls that use whitelists, output
encoding/escaping controls (optionally use parameterized interfaces for
SQL), and authentication controls. Build your ESAPI to target a specific
level of overall security when all of your security controls are viewed as a
whole (e.g. an OWASP Application Security Verification Standard (ASVS)
level).

4. Write a programming manual (i.e. a secure coding standard specific to your
solution stack, organized by vulnerability type or security requirement, e.g. a
cookbook that provides before-and-after code snippets and links to API
documentation) that contains step-by-step instructions for using your ESAPI
both to proactively guard against vulnerabilities and to act as a quick
reference when the time comes to make fixes.

5. Gate releases of your ESAPI library (e.g. if it is being packaged in a
wrapper for subsequent use by other developers throughout the application)
with security functional tests that include sufficient negative test cases
to demonstrate the security controls are working using data that is specific
to your application. Gate releases of your application (ideally gate source
control checkins) with security-focused code reviews of all new or updated
application code produced during the release (looking out for where new or
updated security controls/security control configuration updates are
needed).
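A minimal sketch of points 3 and 5 above, under stated assumptions: the class and method names here are invented (this is not OWASP ESAPI's real API), and the assertions at the end are the kind of negative test cases point 5 says should gate releases.

```python
import html
import sqlite3

# Illustrative "mini ESAPI" (invented names, not OWASP ESAPI's API):
# output encoding and parameterized queries as reusable controls.
class MiniESAPI:
    @staticmethod
    def encode_for_html(value: str) -> str:
        # Output encoding defends against XSS when rendering untrusted data.
        return html.escape(value, quote=True)

    @staticmethod
    def query_user(conn, username: str):
        # Parameterized interface: the driver keeps data out of the SQL text.
        cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
        return cur.fetchall()

# Negative test cases demonstrating the controls work (point 5).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

assert MiniESAPI.encode_for_html("<script>") == "&lt;script&gt;"
# A classic injection payload matches no row instead of dumping the table:
assert MiniESAPI.query_user(conn, "' OR '1'='1") == []
assert MiniESAPI.query_user(conn, "alice") == [(1,)]
```

The point of wrapping these in one API is exactly what item 3 argues: developers call `encode_for_html` and `query_user` instead of each reinventing escaping and string-built SQL.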

Mike


On Tue, Feb 2, 2010 at 7:23 PM, Steven M. Christey co...@linus.mitre.org wrote:


 On Tue, 2 Feb 2010, Arian J. Evans wrote:

  BSIMM is probably useful for government agencies, or some large
 organizations. But the vast majority of clients I work with don't have
 the time or need or ability to take advantage of BSIMM. Nor should
 they. They don't need a software security group.


 I'm looking forward to what BSIMM Basic discovers when talking to small and
 mid-size developers.  Many of the questions in the survey PDF assume that
 the respondent has at least thought of addressing software security, but not
 all questions assume the presence of an SSG, and there are even questions
 about the use of general top-n lists vs. customized top-n lists that may be
 informative.

 - Steve

 ___
 Secure Coding mailing list (SC-L) SC-L@securecoding.org
 List information, subscriptions, etc -
 http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
 SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
 as a free, non-commercial service to the software security community.
 ___



Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Benjamin Tomhave
I challenge the validity of any risk assessment/rating approach in use
today in infosec circles, whether it be OWASP or FAIR or IAM/ISAM or
whatever. They are all fundamentally flawed in that they are based on
qualitative values that introduce subjectivity, and they lack the
historical data seen in actuarial science to make the probability
estimates even remotely reasonable. FAIR tries to compensate for this by
using Bayesian statistics, but the qualitative-quantitative conversion
is still highly problematic.
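To make the criticism concrete, the qualitative-to-quantitative conversion at issue looks roughly like this (loosely modeled on the OWASP Risk Rating Methodology; the factor values and thresholds below are illustrative, and the subjective 0-9 inputs are exactly the problem):

```python
# Sketch of an OWASP-Risk-Rating-style calculation: subjective 0-9 factor
# ratings are averaged into likelihood and impact, then bucketed.
def level(score):
    return "LOW" if score < 3 else "MEDIUM" if score < 6 else "HIGH"

def rate(likelihood_factors, impact_factors):
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    return level(likelihood), level(impact)

# Every number below is a judgment call -- two assessors can justify very
# different inputs, and therefore very different "quantitative" outputs.
print(rate([5, 6, 7, 8], [7, 9, 6, 8]))
```

The math is trivially reproducible; the inputs are not, which is the subjectivity objection in a nutshell.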

On prescriptive... the problem is this: businesses will not spend money
unless they're required to do so. Security will never succeed without at
least an initial increased spend. It is exceedingly difficult to make a
well-understood business case for proper security measures and spend. I
think this is something you guys in insurance (you, Chris Hayes, etc.)
perhaps take for granted. The other businesses - especially SMBs - don't
even understand what we're talking about, and they certainly don't have
any interest in dropping a penny on security without seeing a direct
benefit.

Do I trust regulators to do things right? Of course not, but that's only
one possible fork. The other possible fork is relying on the courts to
finally catch up such that case law can develop around defining
reasonable standard of care and then evolving it over time. In either
case, you need to set a definitive mark that says you must do THIS MUCH
or you will be negligent and held accountable. I hate standards like
PCI as much as the next guy because I hate being told how I should be
doing security, but in the short-to-mid-term it's the right approach
because it tells people the expectation for performance. If you never
set expectations for performance, then you shouldn't be disappointed
when people don't achieve them. The bottom line here is that we need to
get far more proactive in the regulatory space so that we can influence
sensible regulations that mandate change rather than relying on
businesses to do the right thing without understanding the underlying
business value.

Conceptually, I agree with the idealist approach, but in reality I don't
find that it works well at all. I've worked with a half-dozen or more
companies of varying size in the last couple years and NONE of them
understood risk, risk management, current security theory, or how the
implicit AND explicit value of security changes. It's just not intuitive
to most people, not the least of which because bad behaviors are
generally divorced from tangible consequences. Anyway... :)

I can go on forever on this topic... :)

-ben

On 2/3/10 10:06 AM, McGovern, James F. (eBusiness) wrote:
 While Wall Street's definition of risk collapsed, the insurance model of
 risk stood the test of time :-)
 
 Should we explore your question of how are risk levels defined in
 business terms more deeply or can we simply say that if you don't have
 your own industry-specific regulatory way of quantifying, a good
 starting point may be to leverage the OWASP Risk Rating system?
 
 I also would like to challenge and say NO to prescriptive. Security
 people are not Vice Presidents of the NO department. Instead we need to
 figure out how to align with other value systems (Think Agile
 Manifesto). We can be secure without being prescriptive. One example is
 to do business exercises such as Protection Poker.
 
 Finally, we shouldn't say yes to regulatory mandates as most of them are
 misses on the real risk at hand. The challenge here is that they always
 mandate process but never competency. If a regulation said that I should
 have someone with a fancy title overseeing a program, the business world
 would immediately fill the slot with some non-technical resource who is
 really good at PowerPoint but nothing else. In other words a figurehead.
 Likewise, while regulations cause people to do things that they should
 be doing independently, it has a negative side effect on our economy by
 causing folks to spend money in non-strategic ways.
 
 -Original Message-
 From: sc-l-boun...@securecoding.org
 [mailto:sc-l-boun...@securecoding.org] On Behalf Of Benjamin Tomhave
 Sent: Tuesday, February 02, 2010 10:19 PM
 To: Arian J. Evans
 Cc: Secure Code Mailing List
 Subject: Re: [SC-L] BSIMM update (informIT)
 
<soapbox>While I can't disagree with this based on modern reality, I'm
 increasingly hesitant to allow the conversation to bring in risk, since
 it's almost complete garbage these days. Nobody really understands it,
 nobody really does it very well (especially if we redact out financial
 services and insurance - and even then, look what happened to Wall
 Street risk models!), and more importantly, it's implemented so shoddily
 that there's no real, reasonable way to actually demonstrate risk
 remediation/reduction because talking about it means bringing in a whole
 other range of discussions (what is most important to the business?
 and how are risk levels defined in business terms

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread McGovern, James F. (eBusiness)
OK, being the insurance enterprisey security guy I think you may be onto
something. One of the many reasons why actuarial science can work in
insurance is the fact that there is a lot more public data than in IT
security. If you smash your car into a wall, your chosen carrier doesn't
just pay the claim. This information is shared in what we refer to as
the CLUE database. Other carriers, should you decide to switch, will also
know the characteristics of your loss. 

CLUE works because folks have figured out that sharing of negative
information can benefit the business. Likewise, CLUE did enough homework
to figure out the right taxonomy and metadata in order to make it
happen. Have security professionals ever figured out how to turn
something bad into something good for the same organization? Have
security professionals even figured out how to describe a security
event in a consistent enough way that actuarial-type calculations
could occur...

FYI, CLUE is successful and isn't done for regulatory reasons. It is
done for sound business practice. The same model we should operate
within...

-Original Message-
From: Benjamin Tomhave [mailto:list-s...@secureconsulting.net] 
Sent: Wednesday, February 03, 2010 11:07 AM
To: McGovern, James F. (P+C Technology)
Cc: Secure Code Mailing List
Subject: Re: [SC-L] BSIMM update (informIT)

I challenge the validity of any risk assessment/rating approach in use
today in infosec circles, whether it be OWASP or FAIR or IAM/ISAM or
whatever. They are all fundamentally flawed in that they are based on
qualitative values that introduce subjectivity, and they lack the
historical data seen in actuarial science to make the probability
estimates even remotely reasonable. FAIR tries to compensate for this by
using Bayesian statistics, but the qualitative-quantitative conversion
is still highly problematic.

On prescriptive... the problem is this: businesses will not spend money
unless they're required to do so. Security will never succeed without at
least an initial increased spend. It is exceedingly difficult to make a
well-understood business case for proper security measures and spend. I
think this is something you guys in insurance (you, Chris Hayes, etc.)
perhaps take for granted. The other businesses - especially SMBs - don't
even understand what we're talking about, and they certainly don't have
any interest in dropping a penny on security without seeing a direct
benefit.

Do I trust regulators to do things right? Of course not, but that's only
one possible fork. The other possible fork is relying on the courts to
finally catch up such that case law can develop around defining
reasonable standard of care and then evolving it over time. In either
case, you need to set a definitive mark that says you must do THIS MUCH
or you will be negligent and held accountable. I hate standards like
PCI as much as the next guy because I hate being told how I should be
doing security, but in the short-to-mid-term it's the right approach
because it tells people the expectation for performance. If you never
set expectations for performance, then you shouldn't be disappointed
when people don't achieve them. The bottom line here is that we need to
get far more proactive in the regulatory space so that we can influence
sensible regulations that mandate change rather than relying on
businesses to do the right thing without understanding the underlying
business value.

Conceptually, I agree with the idealist approach, but in reality I don't
find that it works well at all. I've worked with a half-dozen or more
companies of varying size in the last couple years and NONE of them
understood risk, risk management, current security theory, or how the
implicit AND explicit value of security changes. It's just not intuitive
to most people, not the least of which because bad behaviors are
generally divorced from tangible consequences. Anyway... :)

I can go on forever on this topic... :)

-ben

On 2/3/10 10:06 AM, McGovern, James F. (eBusiness) wrote:
 While Wall Street's definition of risk collapsed, the insurance model 
 of risk stood the test of time :-)
 
 Should we explore your question of how are risk levels defined in 
 business terms more deeply or can we simply say that if you don't 
 have your own industry-specific regulatory way of quantifying, a good 
 starting point may be to leverage the OWASP Risk Rating system?
 
 I also would like to challenge and say NO to prescriptive. Security 
 people are not Vice Presidents of the NO department. Instead we need 
 to figure out how to align with other value systems (Think Agile 
 Manifesto). We can be secure without being prescriptive. One example 
 is to do business exercises such as Protection Poker.
 
 Finally, we shouldn't say yes to regulatory mandates as most of them 
 are misses on the real risk at hand. The challenge here is that they 
 always mandate process but never competency. If a regulation said that

 I should have

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Kenneth Van Wyk
On Jan 28, 2010, at 10:34 AM, Gary McGraw wrote:
 Among other things, David and I discussed the difference between descriptive 
 models like BSIMM and prescriptive models which purport to tell you what you 
 should do. 

Thought I'd chime in on this a bit, FWIW...  From my perspective, I welcome 
BSIMM and I welcome SAMM.  I don't see it in the least as a one or the other 
debate.

A decade(ish) since the first texts on various aspects of software security 
started appearing, it's great to have a BSIMM that surveys some of the largest 
software groups on the planet to see what they're doing.  What actually works.  
That's fabulously useful.  On the other hand, it is possible that ten thousand 
lemmings can be wrong.  Following the herd isn't always what's best.

SAMM, by contrast, was written by some bright, motivated folks, and provides us 
all with a set of targets to aspire to.  Some will work, and some won't, 
without a doubt.

To me, both models are useful as guide posts to help a software group--an SSG 
if you will--decide what practices will work best in their enterprise.

But as useful as both SAMM and BSIMM are, I think we're all fooling ourselves 
if we consider these to be standards or even maturity models.  Any other 
engineering discipline on the planet would laugh us all out of the room by the 
mere suggestion.  There's value to them, don't get me wrong.  But we're still 
in the larval mode of building an engineering discipline here folks.  After 
all, as a species, we didn't start (successfully) building bridges in a decade.

For now, my suggestion is to read up, try things that seem reasonable, and 
build a set of practices that work for _you_.  

Cheers,

Ken

-
Kenneth R. van Wyk
KRvW Associates, LLC
http://www.KRvW.com



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Gary McGraw
hi kevin (and sc-l),

Sorry for the delay responding to this.  I was skiing yesterday with my son Eli 
and just flew across the country for the SANS summit this morning (leaving 
behind 6 inches of new snow in VA).  Anyway, better late than never.

I'll interleave responses below.

On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:
 Cargo Cult Computer Security: Why we need more description and less
 prescription.  http://www.informit.com/articles/article.aspx?p=1562220

  On 2/2/10 12:30 PM, Wall, Kevin kevin.w...@qwest.com wrote:
In my 11 years of experience working
on this SSG, it is very rare that application development teams are
looking for a _descriptive_ approach. Almost always, they are
looking for a _prescriptive_ one. They want specific solutions
to specific problems, not some general formula to an approach that will
  make them more secure.

Absolutely.  I think as an SSG lead in a particular company environment you 
must have a prescriptive approach but that the approach you develop will be 
better if informed by data from a descriptive model like BSIMM.  (For the 
record, I see SAMM as a prescriptive model that tells you often in great detail 
what your initiative should be doing without knowing one whit about how your 
organization ticks.)   If you read the article carefully, there are two 
paragraphs that together should make this clear.

Here's the first:
Prescriptive models purport to tell you what you should do.  Promulgators of 
such models say things more like, "the model is chocked full of value 
judgements [sic] about what organizations SHOULD be doing."  That's just 
dandy, as long as any prescriptive model only became prescriptive over time 
based on sufficient observation and testing.

And here's the second:
Also worthy of mention in this section is the "one size fits all" problem that 
many prescriptive models suffer from.  The fact is, nobody knows your 
organizational culture like you do. A descriptive comparison allows you to 
gather descriptive data and adapt good ideas from others while taking your 
culture into account.

BSIMM is meant to be a tool for the people running an SSG (and for that 
matter, strategizing about a company's software security initiative).  The 
article is really more about the differences between BSIMM and SAMM than 
anything else.  It's not really about the difference between BSIMM and ESAPI.  
BSIMM and things like ESAPI fit together.

Both are useful and ideally should go together like hand and glove.

Exactly right.

I suspect that this apparent dichotomy in our perception of the
usefulness of the prescriptive vs. descriptive approaches is explained
in part by the different audiences with whom we associate.

Agreed.  See above.   BSIMM is a tool for executives to help build, measure, 
and maintain a software security initiative.

If our SSG were to hand them something like
BSIMM, they would come away telling their management that we didn't help
them at all.

Please do NOT even think about handing the BSIMM to developers as a solution!  
The BSIMM is a yardstick for an initiative, and it's meant for a guy like you.  
The notion is to measure your own initiative and most importantly of all 
compare your initiative to your peers.

This brings me to my fourth, and likely most controversial point. Despite
the interesting historical story about Feynman, I question whether BSIMM
is really scientific as the BSIMM community claims. I would contend
that we are only fooling ourselves if we claim otherwise.

I think this is a valid criticism.  The only thing that makes BSIMM more 
scientific than other methodologies like the Touchpoints, SDL, CLASP, or SAMM 
is that the BSIMM uses real data and real measurement.  However, the 
measurement technique is certainly not foolproof.  (Incidentally, I state that 
view pretty clearly in the article...computer science, and other fields with 
"science" in their name, are usually not.)

While I am certainly not privy to the exact method used to arrive at the
BSIMM data (I have read through the BSIMM Begin survey, but have not
been involved in a full BSIMM assessment), I would contend that the
process is not repeatable to the necessary degree required by science.

This criticism holds some water, but you are shooting from the hip and it is 
pretty clear that you have not read the BSIMM itself.  That, and the first 
article we wrote about the BSIMM, explain our methods pretty clearly.  Please 
read those two things and let's continue this line of questioning.

I challenge [the BSIMM team] to put forth additional information explaining 
their data collection
process and in particular, describing how it avoids unintentional bias. (E.g., 
are assessment participants chosen at random? By whom? How do you know you 
have a representative sample of a company? Etc.)

This is pretty clearly explained in the BSIMM itself.

In my opinion, comparison of observations from two companies is not
worth the paper that 

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Gary McGraw
hi mike,

On 2/2/10 9:28 PM, Mike Boberski mike.bober...@gmail.com wrote:
Fun article. To try to be equally pithy in my response: the article reads to 
me like a high-tech, application security-specific form of McCarthyism.

As a die-hard liberal, I take offense at the McCarthy comment (hah).  Anyway, 
some interleaved thoughts...sorry for the delay...etc and so on.

The amount of reinvention and discussion about the problems in this space is 
spectacular.  If one has something to start from which one can then tailor 
for one's own purposes, why wouldn't one do this? Does one need to discover 
SQL injection on one's own before deciding to do some escaping?

I am with you on this.

It's crazy in my opinion to think that the majority of the planet has the 
expertise let alone the bandwidth (Agile, anyone?) to thoughtfully research 
and derive anything that results in a net effect of a targeted, measurable, 
comparable level of security.

Who is arguing that?  Is this supposed to be some straw man for the BSIMM?  I'm 
lost.  What the heck are you talking about?

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


On Tue, Feb 2, 2010 at 4:32 PM, Arian J. Evans arian.ev...@anachronic.com 
wrote:
100% agree with the first half of your response, Kevin. Here's what
people ask and need:


Strategic folks (VP, CxO) most frequently ask:

+ What do I do next? / What should we focus on next? (prescriptive)

+ How do we tell if we are reducing risk? (prescriptive guidance again)

Initially they ask for descriptive information, but once they get
going they need strategic prescriptions.


Tactical folks tend to ask:

+ What should we fix first? (prescriptive)

+ What steps can I take to reduce XSS attack surface by 80%? (yes, a
prescriptive blacklist can work here)


 Implementation level folks ask:

+ What do I do about this specific attack/weakness?

+ How do I make my compensating control (WAF, IPS) block this specific attack?

etc.

BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

They need a clear-cut tree of prescriptive guidelines that work in a
measurable fashion. I agree and strongly empathize with Gary on many
premises of his article - including that not many folks have metrics,
and tend to have more faith and magic.

But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly it's just more faith
and magic from Gary's end. We all can do better than that.

There are other ways to gather and measure useful metrics easily
without BSIMM. Black Box and Pen Test metrics, and Top(n) List metrics
are metrics, and highly useful metrics. And definitely better than no
metrics.

Pragmatically, I think Ralph Nader fits better than Feynman for this discussion.

Nader's Top(n) lists and Bug Parades earned us many safer-society
(cars, water, etc.) features over the last five decades.

Feynman didn't change much in terms of business SOP.

Good day then,

---
Arian Evans
capitalist marksman. eats animals.



On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin kevin.w...@qwest.com wrote:
 On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

 Among other things, David [Rice] and I discussed the difference between
 descriptive models like BSIMM and prescriptive models which purport to
 tell you what you should do.  I just wrote an article about that for
 informIT.  The title is

 Cargo Cult Computer Security: Why we need more description and less
 prescription.
 http://www.informit.com/articles/article.aspx?p=1562220

 First, let me say that I have been the team lead of a small Software
 Security Group (specifically, an Application Security team) at a
 large telecom company for the past 11 years, so I am writing this from
 an SSG practitioner's perspective.

 Second, let me say that I appreciate descriptive holistic approaches to
 security such as BSIMM and OWASP's OpenSAMM. I think they are much
 needed, though seldom heeded.

 Which brings me to my third point. In my 11 years of experience working
 on this SSG, it is very rare that application development teams are
 looking for a _descriptive_ approach. Almost always, they are
 looking for a _prescriptive_ one. They want specific solutions
 to specific problems, not some general formula to an approach that will
 make them more secure. To those application development teams, something
 like OWASP's ESAPI is much more valuable than something like BSIMM or
 OpenSAMM. In fact, I think you'll confirm that your BSIMM research would indicate that
 many companies' SSGs have developed their own proprietary security APIs
 for use by their application development teams. Therefore, to that end,
 I would not say we need less _prescriptive_ and more _descriptive_
 approaches. Both are 

Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Gary McGraw
Hi again Mike,

Yadda yadda, delay, and so on...

On 2/2/10 9:30 PM, Mike Boberski mike.bober...@gmail.com wrote:
somebody else said, "But the vast majority of clients I work with don't have 
the time or need or ability to take advantage of BSIMM."

 Mike's Top 5 Web Application Security Countermeasures:
1. Add a security guy or gal who has a software development background to 
your application's software development team.

Dang, this would have saved Microsoft lots of money.  With 30,000 developers 
that security gal would have been pretty busy though.

3. Build an Enterprise Security API (a.k.a. an ESAPI, e.g. OWASP's several 
different ESAPI toolkits) that is specific to your solution stack and 
minimally provides input validation controls that use whitelists, output 
encoding/escaping controls (optionally use parameterized interfaces for 
SQL), and authentication controls. Build your ESAPI to target a specific 
level of overall security when all of your security controls are viewed as 
a whole (e.g. an OWASP Application Security Verification Standard (ASVS) 
level).
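A minimal sketch of the controls described in this countermeasure (Python standing in for whatever the organization's real solution stack is; the helper names here are hypothetical, not actual OWASP ESAPI APIs):

```python
import html
import re

# Hypothetical in-house "ESAPI"-style helpers: whitelist input validation
# and context-aware output encoding, as the countermeasure above describes.

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")  # a whitelist, not a blacklist

def validate_username(value: str) -> str:
    """Reject anything outside the allowed character set and length."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def encode_for_html(value: str) -> str:
    """Escape untrusted data before it reaches an HTML context."""
    return html.escape(value, quote=True)

print(validate_username("alice_01"))
print(encode_for_html("<script>alert(1)</script>"))
```

The point of building such a layer once, per stack, is exactly the one made above: application teams call a known-good control instead of reinventing validation and escaping per project.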

Why do you believe that an ESAPI (which is a good idea) is the best place to 
start?  Why not training?  Why not pen testing by Mike?  Etc.  This was not 
job 1 in any firm I have been involved with.

4. Write a programming manual (i.e. a secure coding standard that is specific 
to your solution stack that is organized by vulnerability type or security 
requirement with before and after code snippets, e.g. a cookbook that 
provides before and after code snippets and links to API documentation) that 
contains step-by-step instructions for using your ESAPI to both proactively 
guard against vulnerabilities, and to act as a quick reference when the 
time comes to make fixes.
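A toy instance of the before/after cookbook entry described above, using Python's sqlite3 as a stand-in stack (the vulnerable version is shown only as a comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# BEFORE (vulnerable): string concatenation lets input rewrite the query.
#   conn.execute("SELECT role FROM users WHERE name = '" + name + "'")

# AFTER (fixed): the parameterized interface treats input strictly as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchall()
print(rows)  # [] -- the injection string matches no real user
```

A cookbook organized this way, by vulnerability type with paired snippets, is what lets developers apply the fix without first becoming security specialists.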

Again.  How does this fit into a bigger picture?  The notion of code guidelines 
is a good one.  See [CR2.1] in the BSIMM, which 11 of 30 companies we observed 
carry out.  This was not job 2 in any case I am aware of.  How about tying such 
guidance to code review technology?  We've helped multiple clients do that.

How many customers have followed Mike's Way?  What are their results?  How do 
the Mike's Way customers score with the BSIMM?

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


On Tue, Feb 2, 2010 at 7:23 PM, Steven M. Christey co...@linus.mitre.org 
wrote:

On Tue, 2 Feb 2010, Arian J. Evans wrote:

BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

I'm looking forward to what BSIMM Basic discovers when talking to small and 
mid-size developers.  Many of the questions in the survey PDF assume that the 
respondent has at least thought of addressing software security, but not all 
questions assume the presence of an SSG, and there are even questions about the 
use of general top-n lists vs. customized top-n lists that may be informative.

- Steve



Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Gary McGraw
Hi Steve (and sc-l),

I'll invoke my skiing with Eli excuse again on this thread as well...

On Tue, 2 Feb 2010, Wall, Kevin wrote:
 To study something scientifically goes _beyond_ simply gathering
 observable and measurable evidence. Not only does data need to be
 collected, but it also needs to be tested against a hypothesis that offers
 a tentative *explanation* of the observed phenomena;
 i.e., the hypothesis should offer some predictive value.

On 2/2/10 4:12 PM, Steven M. Christey co...@linus.mitre.org wrote:
I believe that the cross-industry efforts like BSIMM, ESAPI, top-n lists,
SAMATE, etc. are largely at the beginning of the data collection phase.

I agree 100%.  It's high time we gathered some data to back up our claims.  I 
would love to see the top-n lists do more with data.

Here's an example.  In the BSIMM,  10 of 30 firms have built top-N bug lists 
based on their own data culled from their own code.  I would love to see how 
those top-n lists compare to the OWASP top ten or the CWE-25.  I would also 
love to see whether the union of these lists is even remotely interesting.  One 
of my (many) worries about top-n lists that are NOT bound to a particular code 
base is that the lists are so generic as to be useless and maybe even unhelpful 
if adopted wholesale without understanding what's actually going on in a 
codebase. [see http://www.informit.com/articles/article.aspx?p=1322398].
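Mechanically, the comparison suggested here is simple once the lists exist; a sketch with invented, CWE-style identifiers (none of these sets reflect real BSIMM data):

```python
# Hypothetical top-N bug lists from two firms' own code bases, plus a
# generic industry list. All IDs and memberships are made up.
firm_a = {"CWE-79", "CWE-89", "CWE-22", "CWE-352"}
firm_b = {"CWE-79", "CWE-287", "CWE-89", "CWE-798"}
generic_top = {"CWE-79", "CWE-89", "CWE-120", "CWE-306"}

union = firm_a | firm_b                    # everything either firm actually sees
overlap = (firm_a & firm_b) & generic_top  # shared bugs the generic list covers

# Jaccard similarity: how well does the generic list track real code bases?
jaccard = len(union & generic_top) / len(union | generic_top)
print(sorted(union), sorted(overlap), round(jaccard, 2))
```

Even this crude overlap measure would start to answer whether a generic top-n list is "remotely interesting" relative to lists tied to particular code bases.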

Note for the record that asking lots of people what they think should be in 
the top-10 is not quite the same as taking the union of particular top-n lists 
which are tied to particular code bases.  Popularity contests are not the kind 
of data we should count on.  But maybe we'll make some progress on that one day.

Ultimately, I would love to see the kind of linkage between the collected
data (evidence) and some larger goal ("higher security", whatever THAT
means in quantitative terms) but if it's out there, I don't see it

Neither do I, and that is a serious issue with models like the BSIMM that 
measure second order effects like activities.  Do the activities actually do 
any good?  Important question!

The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same
with the 2010 Top 25 (whose release has been delayed to Feb 16, btw).
Unlike last year's Top 25 effort, this time I received several sources of
raw prevalence data, but unfortunately it wasn't in sufficiently
consumable form to combine.

I was with you up until that last part.  Combining the prevalence data is 
something you guys should definitely do.  BTW, how is the 2010 CWE-25 (which 
doesn't yet exist) more data driven??

I for one am pretty satisfied with the rate at which things are
progressing and am delighted to see that we're finally getting some raw
data, as good (or as bad) as it may be.  The data collection process,
source data, metrics, and conclusions associated with the 2010 Top 25 will
probably be controversial, but at least there's some data to argue about.

Cool!

So in that sense, I see Gary's article not so much as a clarion call for
action to a reluctant and primitive industry, but an early announcement of
a shift that is already underway.

Well put.

gem

company www.cigital.com
podcast www.cigital.com/~gem
blog www.cigital.com/justiceleague
book www.swsec.com




Re: [SC-L] BSIMM update (informIT)

2010-02-03 Thread Gary McGraw
Hi Arian,

Some more particulars regarding your posting.  Sorry for the delay...

On 2/2/10 4:32 PM, Arian J. Evans arian.ev...@anachronic.com wrote:
Strategic folks (VP, CxO) ...Initially ...ask for descriptive information, but 
once they get
going they need strategic prescriptions.

Please see my response to Kevin.  I hope it's clear what the BSIMM is for.  
It's for measuring your initiative and comparing it to others.  Given some 
solid BSIMM data, I believe you can do a superior job with strategy...and 
results measurement.  It is a tool for strategic people to use to build an 
initiative that works.

Tactical folks tend to ask:
+ What should we fix first? (prescriptive)
+ What steps can I take to reduce XSS attack surface by 80%?

The BSIMM is not for tactical folks.  But should you base your decision 
regarding what to fix first on goat sacrifice?  What should drive that 
decision?  Moon phase?

 Implementation level folks ask:
+ What do I do about this specific attack/weakness?
+ How do I make my compensating control (WAF, IPS) block this specific attack?

BSIMM != code review tool, top-n list, book, coding experience, ...

BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

Where to start.  All I can say about BSIMM so far is that it appears to be 
useful for 30 large commercial organizations carrying out real software 
security initiatives.  We have studied 0 (count 'em...none) government 
organizations to date.  In my experience, the government is always lagging when 
it comes to software security.  I'm hoping to gather some government data 
forthwith, starting with the US Air Force.  We shall see.

But what about SMB (small to medium sized business)?  Arian, who are your 
clients?  How many developers do they have?  Who do you report to as a 
consultant?  How do you help them make business decisions?

Regarding the existence of an SSG, see this article 
http://www.informit.com/articles/article.aspx?p=1434903.  Are your customers 
too small to have an SSG?  Are YOU the SSG?  Are your customers not mature 
enough for an SSG?  Data would be great.

I agree and strongly empathize with Gary on many
premises of his article - including that not many folks have metrics,
and tend to have more faith and magic.

Sadly I think we're stuck with second order metrics like the BSIMM.  Heck, we 
even studied the metrics that real initiatives use in the BSIMM (bugs per 
square inch anyone?), but you know what?  Everyone has different metrics.  
Really.

 But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly it's just more faith
and magic from Gary's end. We all can do better than that.

You guys and your personal attacks.  Yeesh.  I am pretty sure you meant the 
next to last paragraph, because Feynman wrote the entire last one.  Here is 
the next to last one:

As I have said before, the time has come to put away the bug parade boogeyman 
http://www.informit.com/articles/article.aspx?p=1248057, the top 25 tea 
leaves http://www.informit.com/articles/article.aspx?p=1322398, black box web 
app goat sacrifice, and the occult reading of pen testing entrails. It's 
science time.  And the more descriptive and data driven we are, the better.

Can you be more specific about your disagreements please?  Did you read 
articles at the end of the pointers?  Where am I wrong?  Better yet, why?

We'll just ignore the Nader & Feynman stuff.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin kevin.w...@qwest.com wrote:
 On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

 Among other things, David [Rice] and I discussed the difference between
 descriptive models like BSIMM and prescriptive models which purport to
 tell you what you should do.  I just wrote an article about that for
 informIT.  The title is

 Cargo Cult Computer Security: Why we need more description and less
 prescription.
 http://www.informit.com/articles/article.aspx?p=1562220

 First, let me say that I have been the team lead of a small Software
 Security Group (specifically, an Application Security team) at a
 large telecom company for the past 11 years, so I am writing this from
 an SSG practitioner's perspective.

 Second, let me say that I appreciate descriptive holistic approaches to
 security such as BSIMM and OWASP's OpenSAMM. I think they are much
 needed, though seldom heeded.

 Which brings me to my third point. In my 11 years of experience working
 on this SSG, it is very rare that application development teams are
 looking for a _descriptive_ approach. Almost always, they are
 looking for a _prescriptive_ one. They want specific solutions
 to specific problems, not some 

Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Wall, Kevin
On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

 Among other things, David [Rice] and I discussed the difference between
 descriptive models like BSIMM and prescriptive models which purport to
 tell you what you should do.  I just wrote an article about that for
 informIT.  The title is

 Cargo Cult Computer Security: Why we need more description and less
 prescription.
 http://www.informit.com/articles/article.aspx?p=1562220

First, let me say that I have been the team lead of a small Software
Security Group (specifically, an Application Security team) at a
large telecom company for the past 11 years, so I am writing this from
an SSG practitioner's perspective.

Second, let me say that I appreciate descriptive holistic approaches to
security such as BSIMM and OWASP's OpenSAMM. I think they are much
needed, though seldom heeded.

Which brings me to my third point. In my 11 years of experience working
on this SSG, it is very rare that application development teams are
looking for a _descriptive_ approach. Almost always, they are
looking for a _prescriptive_ one. They want specific solutions
to specific problems, not some general formula to an approach that will
make them more secure. To those application development teams, something
like OWASP's ESAPI is much more valuable than something like BSIMM or
OpenSAMM. In fact, I think you'll confirm that your BSIMM research would indicate that
many companies' SSGs have developed their own proprietary security APIs
for use by their application development teams. Therefore, to that end,
I would not say we need less _prescriptive_ and more _descriptive_
approaches. Both are useful and ideally should go together like hand and
glove. (To that end, I also ask that you overlook some of my somewhat
overzealous ESAPI developer colleagues who in the past made claims that
ESAPI was the greatest thing since sliced beer. While I am an ardent
ESAPI supporter and contributor, I proclaim it will *NOT* solve our pandemic
security issues alone, nor for the record will it solve world hunger. ;-)

I suspect that this apparent dichotomy in our perception of the
usefulness of the prescriptive vs. descriptive approaches is explained
in part by the different audiences with whom we associate. Hang out with
VPs, CSOs, and executive directors and they likely are looking for advice on
an SSDLC or broad direction to cover their specifically identified
security gaps. However, in the trenches--where my team works--they want
specifics. They ask us How can you help us to eliminate our specific
XSS or CSRF issues?, Can you provide us with a secure SSO solution
that is compliant with both corporate information security policies and
regulatory compliance?, etc. If our SSG were to hand them something like
BSIMM, they would come away telling their management that we didn't help
them at all.

This brings me to my fourth, and likely most controversial point. Despite
the interesting historical story about Feynman, I question whether BSIMM
is really scientific as the BSIMM community claims. I would contend
that we are only fooling ourselves if we claim otherwise. And while
BSIMM is a refreshing approach opposed to the traditional FUD modus
operandi taken by most security vendors hyping their security products,
I would argue that BSIMM is no more scientific than those
who gather common quality metrics of counting defects/KLOC. Certainly
there is some correlation there, but cause and effect relationships
are far from obvious and seem to have little predictive accuracy.
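For reference, the defects/KLOC metric being compared to here is just a density calculation; a sketch with entirely invented module names and counts:

```python
# Defect density (defects per thousand lines of code), the classic quality
# metric referenced above. All numbers are invented for illustration.
modules = {"auth": (12, 4_800), "billing": (31, 22_000), "ui": (7, 9_500)}

densities = {}
for name, (defects, loc) in modules.items():
    densities[name] = defects / (loc / 1000)  # defects per KLOC
    print(f"{name}: {densities[name]:.2f} defects/KLOC")
```

The ease of computing such numbers is precisely the worry raised above: the counting is trivial, but the cause-and-effect story behind the counts is not.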

Sure, BSIMM _looks_ scientific on the outside, but simply collecting
specific quantifiable data alone does not make something a scientific
endeavor.  Yes, it is a start, but we've been collecting quantifiable
data for decades on things like software defects and I would contend
BSIMM is no more scientific than those efforts. Is BSIMM moving in
the right direction? I think so. But BSIMM is no more scientific
than most of the other areas of computer science.

To study something scientifically goes _beyond_ simply gathering
observable and measurable evidence. Not only does data need to be
collected, but it also needs to be tested against a hypothesis that offers
a tentative *explanation* of the observed phenomena;
i.e., the hypothesis should offer some predictive value. Furthermore,
the steps of the experiment must be _repeatable_, not just by
those currently involved in the attempted scientific endeavor, but by
*anyone* who would care to repeat the experiment. If the
steps are not repeatable, then any predictive value of the study is lost.

While I am certainly not privy to the exact method used to arrive at the
BSIMM data (I have read through the BSIMM Begin survey, but have not
been involved in a full BSIMM assessment), I would contend that the
process is not repeatable to the necessary degree required by science.
In fact, I would claim in most organizations, you could take any group
of BSIMM interviewers and have them question different 

Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Steven M. Christey


On Tue, 2 Feb 2010, Wall, Kevin wrote:


To study something scientifically goes _beyond_ simply gathering
observable and measurable evidence. Not only does data need to be
collected, but it also needs to be tested against a hypothesis that offers
a tentative *explanation* of the observed phenomena;
i.e., the hypothesis should offer some predictive value. Furthermore,
the steps of the experiment must be _repeatable_, not just by
those currently involved in the attempted scientific endeavor, but by
*anyone* who would care to repeat the experiment. If the
steps are not repeatable, then any predictive value of the study is lost.


I believe that the cross-industry efforts like BSIMM, ESAPI, top-n lists, 
SAMATE, etc. are largely at the beginning of the data collection phase. 
It shouldn't be much of a surprise that many companies participate in 
two or more of these efforts (which is simultaneously disconcerting, but 
that's probably what happens in brand-new areas).


Ultimately, I would love to see the kind of linkage between the collected 
data (evidence) and some larger goal ("higher security", whatever THAT 
means in quantitative terms) but if it's out there, I don't see it, or 
it's in tiny pieces... and it may be a few years before we get to that 
point.  CVE data and trends have been used in recent years, or should I 
say abused or misused, because of inherent bias problems that I'm too lazy 
to talk about at the moment.


In CWE, one aspect of our research is to tie attacks to weaknesses, 
weaknesses to mitigations, etc. so that there is better understanding of 
all the inter-related pieces.  So when you look at the CERT C coding 
standard and its ties back to CWE, you see which rules directly 
reduce/affect which weaknesses, and which ones don't.  (Or, you *could*, 
if you wanted to look at it closely enough).


The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same 
with the 2010 Top 25 (whose release has been delayed to Feb 16, btw). 
Unlike last year's Top 25 effort, this time I received several sources of 
raw prevalence data, but unfortunately it wasn't in sufficiently 
consumable form to combine.


In tool analysis efforts such as SAMATE, we are still wrestling with the 
notion of what a false positive really means, not to mention the 
challenges of analyzing mountains of raw data, using tools that were 
intended for developers in a third-party consulting context, and 
reconciling the multitude of perspectives on how weaknesses are described 
(e.g., what do you do if there's a chain from weakness X to Y, and tool 1 
reports X, and tool 2 reports Y?).
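As a concrete illustration of that chain problem, here is a minimal Python sketch of one way two tools' findings might be reconciled. The tool outputs, the file/line data, and the one-entry chain map are all hypothetical; real chain relationships would come from CWE's documented chains (e.g., an integer overflow, CWE-190, leading to a buffer overflow, CWE-120):

```python
# Sketch: merging findings from two hypothetical analysis tools when one
# reports the head of a weakness chain and the other reports the tail.
# All findings and tool names below are illustrative, not real output.

# Known tail -> head chain relationships; every finding is normalized
# to the head of its chain before comparison.
CHAINS = {"CWE-120": "CWE-190"}  # buffer overflow caused by int overflow

def normalize(finding):
    """Map a (file, line, cwe) finding to its chain head, if any."""
    path, line, cwe = finding
    return (path, line, CHAINS.get(cwe, cwe))

tool1 = [("parse.c", 42, "CWE-190")]   # reports the head weakness
tool2 = [("parse.c", 42, "CWE-120")]   # reports the resulting tail

merged = {normalize(f) for f in tool1 + tool2}
print(merged)  # both reports collapse into one issue at the chain head
```

This only scratches the surface, of course: real tools disagree on line numbers and granularity too, which is part of why combining raw prevalence data is so hard.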


In fact, I am willing to bet that the different members of my 
Application Security team, who have all worked together for about 8 years, 
would answer a significant number of the BSIMM Begin survey questions 
quite differently.


Even surveys using much lower-level detailed questions - such as which 
weaknesses on a nominee list of 41 are the most important and prevalent 
- have had distinct responses from multiple people within the same 
organization. (I'll touch on this a little more when the 2010 Top 25 is 
released).  Arguably many of these differences in opinion come down to 
variations in context and experience, but unless and until we can model 
context in a way that makes our results somewhat shareable, we can't get 
beyond the data collection phase.


I for one am pretty satisfied with the rate at which things are 
progressing and am delighted to see that we're finally getting some raw 
data, as good (or as bad) as it may be.  The data collection process, 
source data, metrics, and conclusions associated with the 2010 Top 25 will 
probably be controversial, but at least there's some data to argue about. 
So in that sense, I see Gary's article not so much as a clarion call for 
action to a reluctant and primitive industry, but an early announcement of 
a shift that is already underway.


- Steve
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Arian J. Evans
100% agree with the first half of your response, Kevin. Here's what
people ask and need:


Strategic folks (VP, CxO) most frequently ask:

+ What do I do next? / What should we focus on next? (prescriptive)

+ How do we tell if we are reducing risk? (prescriptive guidance again)

Initially they ask for descriptive information, but once they get
going they need strategic prescriptions.


Tactical folks tend to ask:

+ What should we fix first? (prescriptive)

+ What steps can I take to reduce XSS attack surface by 80%? (yes, a
prescriptive blacklist can work here)
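The prescriptive core of most XSS guidance is contextual output encoding wherever untrusted data is written into HTML. A minimal Python sketch of that step (the `render_comment` helper and the payload are made up for illustration; real applications would lean on a framework's auto-escaping or a library like ESAPI):

```python
# Sketch of the prescriptive answer to "reduce XSS attack surface":
# encode untrusted data at the point it meets HTML output.
import html

def render_comment(untrusted: str) -> str:
    # Escape the HTML-significant characters (& < > " ') before
    # interpolating user input into markup.
    return "<p>" + html.escape(untrusted, quote=True) + "</p>"

payload = "<script>alert(1)</script>"
print(render_comment(payload))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Note this is encoding (an allow-everything-but-render-it-inert approach) rather than a blacklist; blacklists can cut the bulk of attack traffic, but encoding is what closes the hole.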


Implementation-level folks ask:

+ What do I do about this specific attack/weakness?

+ How do I make my compensating control (WAF, IPS) block this specific attack?

etc.

BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

They need a clear-cut tree of prescriptive guidelines that work in a
measurable fashion. I agree and strongly empathize with Gary on many
premises of his article, including that not many folks have metrics, and
that most tend to rely on faith and magic instead.

But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly, it's just more faith
and magic from Gary's end. We all can do better than that.

There are other ways to gather and measure useful metrics easily
without BSIMM. Black-box and pen-test metrics, and Top(n) list metrics,
are highly useful metrics, and definitely better than no metrics at all.

Pragmatically, I think Ralph Nader fits better than Feynman for this discussion.

Nader's Top(n) lists and Bug Parades earned us many safer-society
(cars, water, etc.) features over the last five decades.

Feynman didn't change much in terms of business SOP.

Good day then,

---
Arian Evans
capitalist marksman. eats animals.



On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin kevin.w...@qwest.com wrote:
 On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

 Among other things, David [Rice] and I discussed the difference between
 descriptive models like BSIMM and prescriptive models which purport to
 tell you what you should do.  I just wrote an article about that for
 informIT.  The title is

 Cargo Cult Computer Security: Why we need more description and less
 prescription.
 http://www.informit.com/articles/article.aspx?p=1562220

 First, let me say that I have been the team lead of a small Software
 Security Group (specifically, an Application Security team) at a
 large telecom company for the past 11 years, so I am writing this from
 an SSG practitioner's perspective.

 Second, let me say that I appreciate descriptive holistic approaches to
 security such as BSIMM and OWASP's OpenSAMM. I think they are much
 needed, though seldom heeded.

 Which brings me to my third point. In my 11 years of experience working
 on this SSG, it is very rare that application development teams are
 looking for a _descriptive_ approach. Almost always, they are
 looking for a _prescriptive_ one. They want specific solutions
 to specific problems, not some general formula to an approach that will
 make them more secure. To those application development teams, something
 like OWASP's ESAPI is much more valuable than something like BSIMM or
 OpenSAMM. In fact, I believe your BSIMM research would confirm that
 many companies' SSGs have developed their own proprietary security APIs
 for use by their application development teams. Therefore, to that end,
 I would not say we need less _prescriptive_ and more _descriptive_
 approaches. Both are useful and ideally should go together like hand and
 glove. (To that end, I also ask that you overlook some of my somewhat
 overzealous ESAPI developer colleagues who in the past made claims that
 ESAPI was the greatest thing since sliced beer. While I am an ardent
 ESAPI supporter and contributor, I proclaim it will *NOT* solve our pandemic
 security issues alone, nor for the record will it solve world hunger. ;-)

 I suspect that this apparent dichotomy in our perception of the
 usefulness of the prescriptive vs. descriptive approaches is explained
 in part by the different audiences with whom we associate. Hang out with
 VPs, CSOs, and executive directors and they likely are looking for advice on
 an SSDLC or broad direction to cover their specifically identified
 security gaps. However, in the trenches--where my team works--they want
 specifics. They ask us "How can you help us to eliminate our specific
 XSS or CSRF issues?", "Can you provide us with a secure SSO solution
 that is compliant with both corporate information security policies and
 regulatory requirements?", etc. If our SSG were to hand them something like
 BSIMM, they would come away telling their management that we didn't help
 them at all.

 This brings me to my fourth, and likely most 

Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Steven M. Christey


On Tue, 2 Feb 2010, Arian J. Evans wrote:


BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.


I'm looking forward to what BSIMM Basic discovers when talking to small 
and mid-size developers.  Many of the questions in the survey PDF assume 
that the respondent has at least thought of addressing software security, 
but not all questions assume the presence of an SSG, and there are even 
questions about the use of general top-n lists vs. customized top-n lists 
that may be informative.


- Steve


Re: [SC-L] BSIMM update (informIT)

2010-01-29 Thread Steven M. Christey


Speaking of top 25 tea leaves, the bug parade boogeyman just called 
and reminded me that the 2010 Top 25 is due to be released next Thursday, 
February 4.  Thanks for the plug.


A preview of some of the brand-new features:

1) Data-driven ranking with alternate metrics to feed the brain and
   stimulate wider discussion - featuring special guest star Elizabeth
   Nichols

2) Multiple focus profiles to avoid one-size-fits-all

3) Cross-cutting mitigations that expand far beyond the Top 25 - AND show
   which mitigations address which Top 25 entries

4) References to resources such as BSIMM (and even that controversial
   bad-boy ESAPI) to get people thinking even more about systematic
   software security

... and a few more tidbits.

This particular Cargo-Culting pseudoscientist has dutifully listened to 
his fellow islanders.  This year we've made shiny new airstrips and 
control towers, and apparently we've already started some fires.  The 
planes will TOTALLY come back!  Or maybe I'm just feeling a little 
whimsical.


- Steve

P.S.  I can't wait until software security becomes an actual science, 
because as we all know, scientists are much too rational to ever indulge 
in self-destructive infighting and name-calling that hinders opportunities 
for progress in their field.
