RE: [SC-L] Theoretical question about vulnerabilities
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of der Mouse
Sent: 12 April 2005 05:15
To: SC-L@securecoding.org
Subject: Re: [SC-L] Theoretical question about vulnerabilities

>> [B]uffer overflows can always be avoided, because if there is ANY
>> input whatsoever that can produce a buffer overflow, the proofs will
>> fail and the problem will be identified.
> Then either (a) there exist programs which never access out-of-bounds
> but which the checker incorrectly flags as doing so, or (b) there
> exist programs for which the checker never terminates (quite possibly
> both). (This is simply the Halting Theorem rephrased.)

Could you explain why you believe that proof of a specific property in a constrained environment is equivalent to the Halting Problem?

When I explain and even demonstrate the benefits of formal reasoning, I find that, increasingly often, someone pops up with Gödel or Turing to prove that it is all a waste of time. Having done so, they then, presumably, go back to using the less rigorous approaches which are a demonstrable cause of failure in the systems our industry builds.

I really do find this line of argument rather irritating; the theoretical limits of proof are quite a different thing from the practical application of proof-based technology in a suitably constrained environment. For the record, I am quite confident that we can prove freedom from buffer overflow in a large and useful class of programs without hitting any of the limits you suggest. Indeed we, and our customers, are routinely doing so.

Peter
Re: [SC-L] Re: Application Insecurity --- Who is at Fault?
On 4/13/05, der Mouse [EMAIL PROTECTED] wrote:

>>>> I would question you if you suggested to me that you always assume
>>>> to _NOT_ include 'security' and only _DO_ include security if
>>>> someone asks.
>>> Security is not a single thing that is included or omitted.
>> Again, in my experience that is not true. Programs that are labelled
>> 'Secure' vs something that isn't.
> *Labelling as* secure _is_ (or at least can be) something that is
> boolean, included or not. The actual security behind it, if any, is
> what I was talking about.

>> In this case, there is a single thing - Security - that has been
>> included in one and not the other [in theory].
> Rather, I would say, there is a cluster of things that have been boxed
> up and labeled security, and included or not. What that box includes
> may not be the same between the two cases, even, never mind whether
> there are any security aspects that aren't in the box, or non-security
> aspects that are.

>> Also, anyone requesting software from a development company may say:
>> Oh, is it 'Secure'? Again, the implication is that it is a single
>> thing included or omitted.
> Yes, that is the implication. It is wrong.

I couldn't agree more! This is my whole point. Security isn't 'one thing', but it seems the original article [that started this discussion] implied that so that the blame could be spread out. If you actually look at the actual problems you can easily blame the programmers :)

-- Michael
Re: [SC-L] Re: Application Insecurity --- Who is at Fault?
So you blame the grunts in the trenches if you lose the war? I mean, that thinking worked out so well with Vietnam and all... ;-)

regards,
-dsp

> I couldn't agree more! This is my whole point. Security isn't 'one
> thing', but it seems the original article [that started this
> discussion] implied that so that the blame could be spread out. If you
> actually look at the actual problems you can easily blame the
> programmers :)
Re: [SC-L] Theoretical question about vulnerabilities
>>> [B]uffer overflows can always be avoided, because if there is ANY
>>> input whatsoever that can produce a buffer overflow, the proofs will
>>> fail and the problem will be identified.
>> Then either (a) there exist programs which never access out-of-bounds
>> but which the checker incorrectly flags as doing so, or (b) there
>> exist programs for which the checker never terminates (quite possibly
>> both). (This is simply the Halting Theorem rephrased.)
> Could you explain why you believe that proof of a specific property
> in a constrained environment is equivalent to the Halting Problem?

Take the code for the checker. Wrap it in a small amount of code that makes a deliberate out-of-bounds access if the checker outputs `no problem'. Then write another small bit of code which ignores its input and runs the wrapped checker on itself. Run the original checker on the result. This is basically the same diagonalization argument Turing used to demonstrate that the Halting Problem was unsolvable; that's the sense in which I call it the Halting Theorem rephrased.

> I really do find this line of argument rather irritating; the
> theoretical limits of proof are quite a different thing from the
> practical application of proof-based technology in a suitably
> constrained environment.

Entirely true. But if you use theoretical language like proof, you have to expect to be held to a theoretical standard of correctness.

/~\ The ASCII                           der Mouse
\ / Ribbon Campaign
 X  Against HTML                [EMAIL PROTECTED]
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B
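der Mouse's wrapper construction can be made concrete. The sketch below is a toy model only: it represents the checker as a Python function over function objects rather than over source text, and all the names (`diagonalize`, `optimist`, `pessimist`) are invented for the example. The point it illustrates is that whatever a total checker answers about its own diagonal program, that answer is wrong, landing in case (a) or case (b) above.

```python
# A concrete rendering of the diagonalization sketched above.
# Toy model: the "checker" is a Python function that inspects a
# program (here, a function object) and answers "no problem" or
# "problem"; a real checker would read source text instead.

def diagonalize(checker):
    """Build a program that misbehaves exactly when `checker` approves it."""
    def diagonal():
        buf = [0]
        if checker(diagonal) == "no problem":
            return buf[1]        # deliberate out-of-bounds access
        return buf[0]            # otherwise, stay in bounds
    return diagonal

# Suppose the checker always terminates (ruling out case (b)).
# Two extreme checkers show that any answer it gives can be defeated:
optimist  = lambda program: "no problem"   # approves every program
pessimist = lambda program: "problem"      # rejects every program

# The optimist approved its diagonal program, yet the program crashes:
try:
    diagonalize(optimist)()
    approved_but_crashed = False
except IndexError:
    approved_but_crashed = True

# The pessimist's diagonal program is actually safe -- case (a),
# a false positive:
safe_result = diagonalize(pessimist)()
```

Any real checker falls somewhere between these two extremes, but the same wrapper defeats it on its own diagonal program.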
Re: [SC-L] Theoretical question about vulnerabilities
der Mouse wrote:

>> [B]uffer overflows can always be avoided, because if there is ANY
>> input whatsoever that can produce a buffer overflow, the proofs will
>> fail and the problem will be identified.
> Then either (a) there exist programs which never access out-of-bounds
> but which the checker incorrectly flags as doing so, or (b) there
> exist programs for which the checker never terminates (quite possibly
> both). (This is simply the Halting Theorem rephrased.)

Precisely because statically proving array bounds is Turing-hard, that is not how such languages work. Rather, languages that guarantee array bounds insert dynamic checks on every array reference, and then use static checking to remove all of the dynamic checks that can be proven to be unnecessary. For instance, it is often the case that a tight inner loop has hard-coded static bounds, and so a static checker can prove that the dynamic checks can be removed from the inner loop, hoisting them to the outer loop and saving a large proportion of the execution cost of dynamic array checks.

How much of this optimization can be done is arguable:

* The Jones & Kelly GCC enhancement that does full array bounds
  checking makes (nearly?) no attempt at this optimization, and suffers
  slowdowns of 10X to 30X on real applications.

* The Bounded Pointers GCC enhancement, which does full array bounds
  checking but with a funky incompatible implementation that makes
  pointers bigger than a machine word, does some of these optimizations
  and suffers a slowdown of 3X to 5X. Some have argued that it can be
  improved from there, but how much remains to be seen.

* Java compilers get the advantage of a language that was actually
  designed for type safety, in contrast with C, which aggressively
  makes static type checking difficult. The last data I remember on
  Java is that turning array bounds checking on and off makes a 30%
  difference in performance.

Crispin

-- 
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix          http://immunix.com
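The check-hoisting Crispin describes can be sketched as follows. This is Python for illustration only: a real compiler performs the transformation on an intermediate representation, and the function names here are invented. `sum_checked` mimics naively emitted code with a dynamic bounds check on every array reference; `sum_hoisted` mimics the same loop after static analysis proves that a single check before the loop subsumes all the per-iteration checks.

```python
# Toy model of dynamic bounds checks vs. the same checks hoisted
# out of the loop after static analysis.

def sum_checked(a, n):
    """Sum a[0..n-1] with a dynamic check on every access."""
    total = 0
    for i in range(n):
        if not (0 <= i < len(a)):      # dynamic check, every iteration
            raise IndexError(i)
        total += a[i]
    return total

def sum_hoisted(a, n):
    """Same loop with one check hoisted ahead of the loop."""
    if not (0 <= n <= len(a)):         # single check, paid once
        raise IndexError(n)
    total = 0
    for i in range(n):
        total += a[i]                  # in bounds: 0 <= i < n <= len(a)
    return total
```

Both versions agree on in-range inputs and reject out-of-range ones; the hoisted version simply pays the checking cost once per loop rather than once per access, which is where the large constant-factor differences between the implementations above come from.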
Re: [SC-L] Theoretical question about vulnerabilities
David Crocker wrote:

> Exactly. I'm not interested in trying to write a program prover that
> will prove that an arbitrary program is correct, if indeed it is. I am
> only interested in proving that well-structured programs are correct.

The Hermes programming language took this approach:
http://www.research.ibm.com/people/d/dfb/hermes.html

Hermes proved a safety property called Type State Checking in the course of compiling programs. Type State offers very nice safety properties for correctness, including proving that no variable will be used before it is initialized. But the Hermes Type State Checker was not formally complete; there were valid programs that the checker could not *prove* were correct, and so it would reject them. Here's an example of a case it cannot prove:

    if X then
        Y <- initial value
    endif
    ...
    if X then
        Z <- Y + 1
    endif

The above code is correct in that Y's value is taken only when it has been initialized. But to prove the code correct, an analyzer would have to be flow sensitive, which is hard to do.

Here's where it gets interesting. The authors of Type State went and analyzed a big pile of existing code that was in production but that the Type State checker failed to prove correct. In (nearly?) every case, they found a *latent bug* associated with the code that failed to pass the Checker. We can infer from that result that code that depends on flow sensitivity for its correctness is hard for humans to reason about, and therefore likely to be wrong.

Disclaimer: I worked on Hermes as an intern at the IBM Watson lab waay back in 1991 and 1992. Hermes is my favorite type safe programming language, but given the dearth of implementations, applications, and programmers, that is of little practical interest :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix          http://immunix.com
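The conservatism Crispin describes can be mimicked with a toy checker. Everything below, the mini-IR and the `check` function, is invented for illustration and is not Hermes. The point is that a path-insensitive analysis, which never correlates the two `if X` tests, must flag the use of Y even though the program is actually correct.

```python
# Toy path-insensitive definite-initialization checker.
# A program is a list of conditional blocks:
#   ("if", condition, [("assign", target, variables_read)])

# Crispin's example: if X then Y <- init; ...; if X then Z <- Y + 1
program = [
    ("if", "X", [("assign", "Y", [])]),     # Y assigned only under X
    ("if", "X", [("assign", "Z", ["Y"])]),  # Y read only under X
]

def check(program):
    """Return variables read while possibly uninitialized."""
    definitely_init = set()   # initialized on *all* paths so far
    errors = []
    for op, _cond, body in program:
        assert op == "if"
        for _kind, _target, uses in body:
            for v in uses:
                if v not in definitely_init:
                    errors.append(v)   # conservatively flagged
            # An assignment under a condition does not happen on all
            # paths, so `_target` cannot join `definitely_init`.
    return errors
```

Running `check(program)` reports Y as possibly uninitialized, exactly the kind of false positive the Type State checker produced on correct but flow-sensitive code.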