On Fri, May 20, 2011 at 12:10 AM, Steven D'Aprano
<steve+comp.lang.pyt...@pearwood.info> wrote:
> On Thu, 19 May 2011 17:56:12 -0700, geremy condra wrote:
>
>> TL;DR version: large systems have indeed been verified for their
>> security properties.
>
> How confident are we that the verification software is sufficiently
> bug-free that we should trust their results?
Pretty confident. Most formal verification systems are developed in
terms of a provably correct kernel bootstrapping the larger system.
The important thing is that since that kernel doesn't need to be
complete (only correct), it can typically be easily verified, and in
some cases exhaustively tested. There are also techniques which
generate certificates of correctness for verifiers that aren't
themselves provably correct, but that isn't an area I know much about,
and I don't know whether they get used in practice. The bigger risk is
really that the model you're feeding it is wrong.

> How confident are we that the verification software tests every possible
> vulnerability, as opposed to merely every imaginable one?

Formal provers typically don't work by just throwing a bunch of input
at a piece of software and then certifying it. They take a set of
specifications (the model), a set of assumptions, and the program in
question, and produce a proof (in the mathematical sense) that the
program is exactly equivalent to the model given those assumptions.
Testing the assumptions and the model is typically part of the
development process, though, and that's definitely a possible source
of errors.

Geremy Condra
--
http://mail.python.org/mailman/listinfo/python-list
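To make the "exhaustively tested" point concrete, here is a minimal
Python sketch of verifying a program against a model over a bounded
domain: when the input space is small enough, checking every input
against the specification is a complete (if brute-force) certification.
The function names and the toy absolute-value example are my own
illustration, not drawn from any real verifier.

```python
def spec_abs(x: int) -> int:
    # The model: the mathematical definition of absolute value.
    return x if x >= 0 else -x

def impl_abs(x: int) -> int:
    # The program under verification: a branch-free bit trick.
    # Assumption: |x| < 2**31, so the arithmetic shift behaves as
    # it would on 32-bit two's-complement hardware.
    mask = x >> 31          # 0 for non-negative x, -1 for negative x
    return (x ^ mask) - mask

def exhaustively_verify(domain) -> bool:
    # Certify that the implementation matches the model on EVERY
    # input in the stated domain -- a proof by exhaustion, valid
    # only under the assumption that inputs stay inside the domain.
    return all(impl_abs(x) == spec_abs(x) for x in domain)

print(exhaustively_verify(range(-2**15, 2**15)))  # True
```

Note how this mirrors the structure described above: a model
(`spec_abs`), an assumption (the bounded domain), and a program
(`impl_abs`); the check is only as trustworthy as those first two.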