On Wed, 16 Jun 2004 17:35:18 -0400, Thor Lancelot Simon <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 16, 2004 at 02:12:18PM -0700, Eric Rescorla wrote:
> > Let's assume for the sake of argument that two people auditing the
> > same code section will find the same set of bugs.
>
> Actually, I think that in this regard the answer lies in a sort of
> critical mass of interest. Ideas about where to look for what sort of
> bug (...) tend to propagate through the population of experts who might
> find bugs slowly at first, and then faster and faster in what's
> probably an exponential way.

That's interesting. Now consider what happens if I search for and find a
bug and disclose it. All blackhats can start using it right away, but
some people will patch. This is described eloquently in Figure 1 of
Eric's paper.

However, if I don't disclose it, then by your argument someone will find
it very soon afterwards. For the sake of argument, let's assume that
it's a blackhat. Since I didn't disclose it, no one has patched their
systems yet, and the blackhat has a period of private exploitation until
the bug becomes publicly known, at which point it gets disclosed and
patched. This scenario is illustrated by Figure 2 in the paper, and
since the rediscovery is very fast, the starting point in both figures
is the same. Of course, the blackhat could wait for the most opportune
time to use the bug, but by your assumption someone else will find the
bug soon and cause it to be patched, so waiting isn't worth it for him.
The private exploitation time is the only difference between the two
figures.

However, it definitely has a cost in both time and money for me to go
searching for bugs. If a blackhat is going to find a bug and use it
fairly soon anyway, why should I go through the effort of finding it in
the first place? If we as a community stop searching, we will soon have
the blackhats doing our work for us.

I think Eric's claim is that *proactive* finding and disclosure of bugs
is not worthwhile.
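The two timelines above can be put into a back-of-the-envelope damage
model. This is my own toy sketch, not from Eric's paper: the function,
parameter names, and all the numbers are illustrative assumptions, and
the only point is that under fast rediscovery the scenarios differ by
exactly the damage done during the private exploitation window.

```python
# Toy damage model for the two disclosure timelines. All names and
# numbers here are illustrative assumptions, not figures from the paper.

def total_damage(private_days, public_days, private_rate, public_rate):
    # Damage accrues at private_rate while only the blackhat knows the
    # bug, and at public_rate between disclosure and widespread patching.
    return private_days * private_rate + public_days * public_rate

# Figure 1 scenario: I disclose immediately, so there is no private
# exploitation window, only the public window before patches take hold.
disclose = total_damage(private_days=0, public_days=30,
                        private_rate=5.0, public_rate=10.0)

# Figure 2 scenario: I stay quiet, a blackhat rediscovers the bug
# quickly and exploits it privately until it becomes public; then the
# same public window follows.
withhold = total_damage(private_days=20, public_days=30,
                        private_rate=5.0, public_rate=10.0)

print(disclose)             # 300.0
print(withhold)             # 400.0
print(withhold - disclose)  # 100.0 -- the cost of the private window
```

Under these (made-up) rates, withholding is strictly worse by the
private-window term, which is the intuition behind comparing the two
figures with a common starting point.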
However, it seems to me he advocates *reactive* disclosure for bugs that
are already being exploited by blackhats. Proactively finding bugs
doesn't increase security much because it doesn't make software more
secure overall, and he assumes that most of the damage from bugs (even
bugs discovered by blackhats) comes after they become public.

One issue I had with this paper is that it didn't take on the question
of which bugs were proactively found (by researchers looking for bugs)
and which were reactively found (by observing the blackhat community).
I know this is a very difficult thing to determine, but his paper
brought the question to light.

Of course, all of this assumes the bugs are from known classes; finding
new classes of vulnerabilities has many other worthwhile benefits.

> I think that this sort of thing is going to turn out to be _very_ hard
> to tease out evidence for or against using naive studies of bug
> commission, discovery, or rediscovery rates; but it is my intuition
> based on many years of making, finding, and fixing bugs, and of
> watching others eventually redo my work in the cases in which I'd
> managed to fail to let them know about it. I would argue that in fact
> this pattern is not the exception; it is the rule.

I agree it's going to be very hard, but clearly, judging from this
conversation, people disagree about whether this pattern is the
exception or the rule. Some actual evidence (and not just anecdotal
evidence) is warranted.

Rick

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]