Pascal Meunier <[EMAIL PROTECTED]> writes:
>Do you think it is possible to enumerate all the ways all vulnerabilities
>can be created? Is the set of all possible exploitable programming mistakes
>bounded?
I believe that one can make a Turing machine halting argument to show that this is impossible. If you include denial-of-service attacks (and infinite loops are certainly a denial of service), then the halting problem applies immediately and trivially. But even if you exclude denial of service, I think you could construct a proof based on a Turing machine that halts if and only if the program contains an exploitable programming mistake, or something like that. I'm not really very good at such proofs, so I'll leave that as an exercise for the reader (a convenient excuse for not doing the hard bits!).

It is the practical impossibility of finding all the existing vulnerabilities that led the Anderson study of 1972 to replace the "penetrate and patch" strategy for security with the "security kernel" approach: developing code small and simple enough that you can make a convincing argument that it is secure. That in turn led to the development of high assurance techniques for building secure systems, which remain today the only approach that has been shown to produce code with demonstrably better security than most of what's out there.

(I say "most" deliberately. I'm sure that various people reading this will come up with examples of isolated systems that have good security but didn't use high assurance. No dispute there, so you don't need to fill up SC-L with a list of them. The point is that high assurance is a systematic engineering approach that works and, when followed, has excellent security results. The fact that almost no one uses it says much more about the lack of maturity of our field than about the technique itself. It took a VERY long time for bridge builders to develop enough discipline that most bridges stayed up. The same is true for software security, unfortunately. It also says a lot about whether people are really willing to pay for security.)

Although old, Gasser's book probably has the best description of what I'm talking about.
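As an aside, the halting argument above can be made concrete with the usual Rice's-theorem-style reduction. The sketch below is my own illustration, not something from the Anderson or Gasser texts: given any program P and input x, mechanically build a wrapper program Q that runs P(x) and, only if that call returns, feeds untrusted input to eval() (a code-injection flaw). Q is exploitable exactly when P halts on x, so a perfect detector of "exploitable programming mistakes" would decide the halting problem.

```python
import textwrap

def build_wrapper(p_source: str, x) -> str:
    """Return the source of a program Q that runs P on input x and then,
    only if P(x) returns, passes attacker-controlled input to eval().
    The eval() flaw is reachable -- hence exploitable -- iff P halts on x,
    so a perfect static detector for Q would decide the halting problem."""
    body = textwrap.indent(textwrap.dedent(p_source), "    ")
    return (
        "def P(x):\n" + body + "\n"
        f"P({x!r})\n"
        "untrusted = input()   # attacker-controlled data\n"
        "eval(untrusted)       # reachable, hence exploitable, iff P halted\n"
    )

# Example: this P halts, so the generated Q contains a reachable eval() flaw.
q_source = build_wrapper("return x + 1", 41)
print(q_source)
```

Of course an analyzer can conservatively reject all such programs (flagging some safe ones), which is exactly the trade-off real static analysis tools make.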
These two classic documents should be read by ANYONE trying to do secure coding, and fortunately, they are both online! Thanks to NIST and the University of Nebraska at Omaha for putting them up. (For completeness, the NIST website was created from documents scanned by the University of California at Davis.)

Citations:

1. Anderson, J.P., Computer Security Technology Planning Study, ESD-TR-73-51, Vols. I and II, October 1972, James P. Anderson and Co., Fort Washington, PA, HQ Electronic Systems Division: Hanscom AFB, MA. URL: http://csrc.nist.gov/publications/history/ande72.pdf

2. Gasser, M., Building a Secure Computer System. 1988, New York: Van Nostrand Reinhold. URL: http://nucia.ist.unomaha.edu/library/gasser.php

- Paul