On Wed, 2009-05-27 at 10:31 -0400, Roland Dowdeswell wrote:
> I have noticed in my years as a security practitioner, that in my
> experience non-security people seem to assume that a system is
> perfectly secure until it is demonstrated that it is not with an
> example of an exploit. Until an exploit is generated, any discussion
> of insecurity is filed in their minds as ``academic'', ``theoretical''
> or ``not real world''.
This matches my experience as well. "Have any exploits of this particular
scheme been found in the wild?" is always one of the first three questions,
and the answer is one of the best predictors of whether the questioner
actually does anything. For best results, one must be able to say something
like "Yes, six times in the last year" and start naming companies, products,
dates, and independent sources that can be used to verify the incidents. To
really drive the point home, one should also be able to cite the financial
costs and losses incurred.

Because companies don't like talking about cracks and exploits involving
their own products, nor do they support third parties who attempt systematic
documentation of the same, it is frequently very hard to produce sufficient
evidence to convince and deter new reinventors of the same technology.

This failure to track and document exploits and cracks is a cultural failure
that, IMO, is currently one of the biggest nontechnical obstacles to
software security.

Bear

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com