Re: [SC-L] BSIMM update (informIT)

2010-02-05 Thread Steven M. Christey
On Thu, 4 Feb 2010, Brad Arkin wrote: As a result, the count per ISV of bugs submitted to the TippingPoint program is more an indicator of which technologies are popular in the research community than of how secure a product is. Using anecdotal evidence from about 40,000 [sic] published

[SC-L] Metrics

2010-02-05 Thread McGovern, James F. (eBusiness)
> Here's an example. In the BSIMM, 10 of 30 firms have built top-N bug lists based on their own data culled from their own code. I would love to see how those top-N lists compare to the OWASP Top Ten or the CWE Top 25. I would also love to see whether the union of these lists is even remotely
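
(Editor's note: a minimal sketch of the comparison James is asking for, treating each top-N list as a set of CWE IDs and computing their union, intersection, and Jaccard similarity. The lists below are hypothetical placeholders, not the actual contents of any firm's internal list, the OWASP Top Ten, or the CWE Top 25.)

    # Sketch: how much do two top-N bug lists overlap?
    # The CWE IDs here are invented stand-ins for illustration only.

    def list_overlap(list_a, list_b):
        """Return the union, intersection, and Jaccard similarity of two top-N lists."""
        a, b = set(list_a), set(list_b)
        union = a | b
        common = a & b
        return union, common, len(common) / len(union)

    # Hypothetical internal top-5 vs. a hypothetical industry top-5.
    internal_top_n = ["CWE-79", "CWE-89", "CWE-22", "CWE-352", "CWE-434"]
    industry_top_n = ["CWE-79", "CWE-89", "CWE-287", "CWE-798", "CWE-120"]

    union, common, jaccard = list_overlap(internal_top_n, industry_top_n)
    print("shared entries:", sorted(common))
    print("union size: %d, Jaccard similarity: %.2f" % (len(union), jaccard))

A Jaccard similarity near 1.0 would mean the internal list largely restates the industry list; a value near 0 would mean the firm's own data surfaces different weaknesses.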

[SC-L] OWASP Podcast Series

2010-02-05 Thread Jim Manico
Hello SC-L, We have released 3 OWASP podcasts over the last few days for your listening pleasure:
#60 Interview with Jeremiah Grossman and Robert Hansen (Google pays for vulns) http://www.owasp.org/download/jmanico/owasp_podcast_60.mp3
#59 AppSec round table with Dan Cornell, Boaz Gelbord,

Re: [SC-L] Metrics

2010-02-05 Thread Arian J. Evans
In the web security world it doesn't seem to matter much: top-N lists are top-N lists. There is much ideological disagreement over what goes in those lists and why, but the ratios of defects are fairly consistent, both with managed code and with "scripting" languages. The WhiteHat Security statistics
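
(Editor's note: a hedged illustration of this point, that two top-N lists can order the same defect classes differently while still agreeing on what belongs in them. Spearman rank correlation makes that agreement concrete; the defect names and orderings below are invented for illustration, not WhiteHat's actual statistics.)

    # Sketch: measure rank agreement between two orderings of the same defect classes.
    # Data is invented for illustration only.

    def spearman_rho(order_a, order_b):
        """Spearman rank correlation for two orderings of the same items."""
        rank_b = {item: i for i, item in enumerate(order_b)}
        n = len(order_a)
        d_squared = sum((i - rank_b[item]) ** 2 for i, item in enumerate(order_a))
        return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

    list_one = ["XSS", "SQLi", "CSRF", "Info Leak", "Auth Bypass"]
    list_two = ["SQLi", "XSS", "Info Leak", "CSRF", "Auth Bypass"]

    print("rank agreement (Spearman rho): %.2f" % spearman_rho(list_one, list_two))

Here the two lists disagree on ordering (rho = 0.80, not 1.0) yet contain identical entries, which is the pattern the consistent-ratios observation suggests.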

Re: [SC-L] Metrics

2010-02-05 Thread Steven M. Christey
On Fri, 5 Feb 2010, McGovern, James F. (eBusiness) wrote: One of the general patterns I noted while providing feedback to the OWASP Top Ten listserv is that top ten lists do sort differently. Within an enterprise setting, it is typical for enterprise applications to be built on Java, .NET or