> Here's an example. In the BSIMM, 10 of 30 firms have built top-N bug lists based on their own data culled from their own code. I would love to see how those top-N lists compare to the OWASP Top Ten or the CWE Top 25. I would also love to see whether the union of these lists is even remotely interesting.
One of the general patterns I noted while providing feedback on the OWASP Top Ten listserv is that top ten lists do sort differently. Within an enterprise setting, applications are typically built on Java, .NET, or other compiled languages, whereas an Internet startup might lean more heavily on scripting languages. So if different demographics have different behaviors, what would a converged list, or even a set of separate lists, tell us?

_______________________________________________
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com) as a free, non-commercial service to the software security community.
_______________________________________________
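The list-comparison question raised above can be sketched as simple set arithmetic over CWE identifiers. A minimal sketch follows; every ID and every per-firm list below is a hypothetical placeholder (none of these sets reflect the actual contents of the OWASP Top Ten, the CWE Top 25, or any firm's data):

```python
# Treating each top-N bug list as a set of CWE IDs.
# All IDs here are illustrative placeholders, not real list contents.
owasp_top_ten = {"CWE-79", "CWE-89", "CWE-287", "CWE-352", "CWE-22"}
cwe_top_25 = {"CWE-79", "CWE-89", "CWE-120", "CWE-352", "CWE-362"}

# Hypothetical per-firm top-N lists built from each firm's own bug data.
firm_lists = [
    {"CWE-79", "CWE-89", "CWE-476"},
    {"CWE-79", "CWE-120", "CWE-416"},
]

# Union: every bug any firm reported; intersection: bugs every firm reported.
union_of_firms = set().union(*firm_lists)
common_to_all = set.intersection(*firm_lists)

def jaccard(a: set, b: set) -> float:
    """Overlap measure: 1.0 means identical lists, 0.0 means disjoint."""
    return len(a & b) / len(a | b)

print(sorted(union_of_firms))    # the converged list
print(sorted(common_to_all))     # what shows up everywhere
print(round(jaccard(owasp_top_ten, cwe_top_25), 3))
```

A low Jaccard score between the firms' converged list and a published top ten would suggest exactly the demographic effect described above: different populations surface different bugs, so a single merged list would wash out the differences that make each list useful.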