I wrote:
| Indeed, the classic question is "I've just bought this new computer
| which claims to have full-disk encryption.  Is there any practical
| way I can assure myself that there are (likely) no backdoors in/around
| the encryption?"
|
| For open-source software encryption (be it swap-space, file-system,
| and/or full-disk), the answer is "yes": I can assess the developers'
| reputations, I can read the source code, and/or I can take note of
| what other people say who've read the source code.
On Fri, 30 Jan 2009, Brian Gladman asked:
> But, unless you are doing it with a pencil and paper, your encryption
> is still being done in hardware even if you write it yourself.
>
> For example, why would you trust an Intel processor given that Intel
> is one of the founding members of the TCG and is a major player in
> its activities?

It's instructive to consider the distinction between "data in motion"
encryption (for example, a network-encryption box (NEB)) and "data at
rest" encryption (for example, a cryptographic filesystem):

A network-encryption box:

  computer#1 <----> NEB#1 <----> ((network)) <----> NEB#2 <----> computer#2
   plaintext          ciphertext            ciphertext            plaintext

As described by Henry Spencer in
http://www.sandelman.ottawa.on.ca/linux-ipsec/html/1999/09/msg00240.html
it's perfectly practical for (say) the NSA to arrange for a backdoor in
each NEB which occasionally leaks the keystream into the network, in a
way that's very unlikely to be caught in testing, but would make it easy
for an eavesdropper on the network to recover the plaintext.

A cryptographic filesystem:

I could imagine the NSA having arranged to plant some sort of microcode
backdoor in the Pentium III processor in my laptop.  (The hardest part
would probably be persuading all the Intel employees involved that it
wouldn't be a PR disaster for Intel if the news leaked out.)  In the
context of my original message, the backdoor would have to recognize
the binary code sequence of the OpenBSD AES routines when invoked by
the encrypting-filesystem vnode layer, and somehow compromise the
security (maybe arrange to leak keystream bits into free disk
sectors??).  That's a tricky technical job, but I could imagine it
being done, and if it's all in processor microcode, I could even
imagine it having stayed a secret.

But that's not good enough: What about Matt Blaze's Cryptographic File
System?  What about all the people using the various Linux encrypting
file systems?  The backdoor(s) need to cover them, too.
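[To make the keystream-leak attack concrete: in any stream-cipher-style
construction the ciphertext is just plaintext XOR keystream, so an
eavesdropper who captures both the ciphertext and the leaked keystream
recovers the plaintext with one more XOR -- no key search or
cryptanalysis needed.  A toy Python sketch; this is my illustration, not
anything from Spencer's post, and the xor helper and sample data are
made up for the example:]

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"attack at dawn"
# Stand-in for a real stream cipher's keystream (e.g. AES-CTR output).
keystream = os.urandom(len(plaintext))
ciphertext = xor(plaintext, keystream)

# An eavesdropper who sees the ciphertext AND a leaked copy of the
# keystream gets the plaintext back with a single XOR.
recovered = xor(ciphertext, keystream)
assert recovered == plaintext
```

[Even a partial leak is damaging: each leaked keystream byte exposes the
corresponding plaintext byte.]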
And the MacOS ones (if there's not a software backdoor there).  And all
the other open-source-crypto systems.  And the backdoors have to do
this without compromising interoperability -- I have CFS directory
trees which I created on an old Sparc that I now use on my laptop.

But I think the hardest part of all is that the backdoor has to still
recognize the various crypto binary-code-sequences even when the
relevant software is recompiled with a newer compiler using a different
global optimizer, even though that newer compiler might not even have
existed when the backdoor was inserted.

It's this variety of different software encryption schemes -- and of
compilers to turn them into binary code (which is what the NSA/Intel
backdoor ultimately has to key on) -- that, I think, makes it so much
harder for a hardware backdoor to work (i.e. to subvert software
encryption) in this context.

--
-- "Jonathan Thornburg [remove -animal to reply]"
   <jth...@astro.indiana-zebra.edu>
   Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
   "Washing one's hands of the conflict between the powerful and the
    powerless means to side with the powerful, not to be neutral."
                            -- quote by Freire / poster by Oxfam

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com