On Sat, Aug 17, 2013 at 12:50 PM, Jon Callas <j...@callas.org> wrote:
> On Aug 17, 2013, at 12:49 AM, Bryan Bishop <kanz...@gmail.com> wrote:
> > Would providing (signed) build vm images solve the problem of
> > distributing your toolchain?

A more interesting approach would be to use a variety of independently
sourced disassemblers to compare builds and check that object code
differences from one build to the next can be accountedted for by
corresponding changes to the source code or build systems. This is not
really tractable when you change compilers or their settings, but at
least you can get a pretty good idea, as you develop, of what object
code is being produced. It's terribly time-consuming, but you can
automate the comparison process and archive the results for
post-mortems, as a deterrent. You'd have to do this on multiple
machines handled by different people, and so on... It's not too
farfetched; see http://illumos.org/man/1onbld/wsdiff (Solaris release
engineering used to use this tool, and I imagine that they still do).
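To make that concrete, here's the flavor of the thing as a rough
sketch. This is illustrative only, not the actual wsdiff: it assumes
GNU objdump on the PATH, ELF object files, and two parallel build
trees, and every file and function name in it is made up.

    #!/usr/bin/env python3
    # Rough sketch of a wsdiff-style comparator (hypothetical; NOT the
    # illumos wsdiff).  Disassembles matching object files from two
    # build trees and reports those whose normalized disassembly
    # differs.
    import difflib
    import subprocess
    import sys
    from pathlib import Path

    def disassemble(path):
        # Assumes GNU objdump; any independently sourced disassembler
        # with stable text output could be swapped in here.
        out = subprocess.run(["objdump", "-d", str(path)],
                             capture_output=True, text=True,
                             check=True).stdout
        lines = []
        for line in out.splitlines():
            # Instruction rows look like "addr:<TAB>hex bytes<TAB>insn";
            # keep only the instruction so relocation-shifted addresses
            # don't register as differences.  (A real tool would also
            # normalize addresses embedded in operands.)
            parts = line.split("\t")
            lines.append(parts[-1] if len(parts) >= 3 else line)
        return lines

    def compare_trees(old_root, new_root):
        old_root, new_root = Path(old_root), Path(new_root)
        for old_obj in sorted(old_root.rglob("*.o")):
            rel = old_obj.relative_to(old_root)
            new_obj = new_root / rel
            if not new_obj.exists():
                print(f"only in {old_root}: {rel}")
                continue
            diff = list(difflib.unified_diff(disassemble(old_obj),
                                             disassemble(new_obj),
                                             lineterm=""))
            if diff:
                # You'd archive the full diff for post-mortems; this
                # just flags the file.
                print(f"DIFFERS ({len(diff)} diff lines): {rel}")

    if __name__ == "__main__":
        compare_trees(sys.argv[1], sys.argv[2])

You'd run something like "python3 objdiff.py build-prev/ build-next/"
after each build, on multiple machines handled by different people,
with a different disassembler swapped in on each, and archive the
reports. That's the comparison-and-deterrent loop I mean.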
> I *cannot* provide an argument of security that can be verified on
> its own. This is Gödel's second incompleteness theorem. A set of
> statements S cannot be proved consistent on its own. (Yes, that's a
> minor handwave.)

No one can. We're in luck w.r.t. the Thompson attack, though: it needs
care and feeding, as it will rot if not kept up to date. Any effort to
make it clever enough to keep up with a changing code base is likely
to lead to the attack being revealed. Any effort to maintain it risks
detection too. Any effort to use it risks detection. And today a
Thompson attack would have to hide from a multiplicity of
disassemblers (possibly run on uncompromised systems), decompilers,
and, of course, tracing and debugging tools that may work at layers
the generated exploit cannot do anything about (e.g., DTrace), unless
the bugged compiler had been used to build pretty much all of those
tools as well. That is, I wouldn't worry too much about the Thompson
attack.

> All is not lost, however. We can say, "Meh, good enough" and the
> problem is solved. Someone else can construct a *verifier* that is
> some set of policies (I'm using the word "policy" but it could be a
> program) that verifies the software. However, the verifier can only
> be verified by a set of policies that are constructed to verify it.
> The only escape is to decide at some point, "meh, good enough."

Yes, it's turtles all the way down. You stop worrying about the
turtles far enough down because you have no choice (and hopefully they
are too "far" to really affect your world).

> I hope I don't sound like a broken record, but a smart attacker
> isn't going to attack there, anyway. A smart attacker doesn't break
> crypto, or suborn releases. They do traffic analysis and make custom
> malware. Really. Go look at what Snowden is telling us. That is
> precisely what all the bad guys are doing. Verification is
> important, but that's not where the attacks come from (ignoring the
> notable exceptions, of course).

Indeed. The vulnerabilities from the plethora of bugs we
unintentionally create overwhelm (or should, in any reasonable
analysis) any concerns about the turtles below the one immediately
holding up the Earth.

Nico