[Cryptography] Ars Technica on the Taiwanese National ID smart card break
Weeks after the informal announcement, the Taiwanese National ID smartcard break is finally getting press. It is a great example of a piece of certified crypto hardware that works poorly because of bad random number generation. Good explanation for your technical but not security-oriented friends in Ars Technica: http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/ -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] paranoid cryptoplumbing is probably not defending the weakest point
On Sep 17, 2013, at 11:54 AM, Perry E. Metzger pe...@piermont.com wrote: I'd like to note quite strongly that (with certain exceptions like RC4) the odds of wholesale failures in ciphers seem rather small compared to the odds of systems problems like bad random number generators, sabotaged accelerator hardware, stolen keys, etc., and a smart attacker goes for the points of weakness. Actually, I think there is a potentially interesting issue here: RC4 is faster and requires significantly fewer resources than modern block ciphers. As a result, people would really like to use it - and actually they *will* continue to use it even in the face of the known attacks (which, *so far*, are hardly fatal except in specialized settings). So ... is there some simple way of combining RC4 with *something* that shores up its security while retaining the speed? How about two separate RC4's (with independent keys) XOR'ed together? That would still be considerably faster than AES. There appear to be two general classes of known attacks: 1. The initial key setup doesn't produce enough randomness; 2. There are long-term biases in the output bytes. The first of these can be eliminated by using AES to generate values to scramble the internal state. The second can be hidden by doing post-whitening, XOR'ing in a byte from AES in (say) counter mode. If you use a given byte 64 times, then use the next byte of the output, you pay 1/64 of the cost of actually using AES in counter mode, but any bias in the output would have to play out over a 64-byte segment. (Actually, if you use ideas from other stream ciphers, changing the whitening every 64 bytes probably isn't right - you want the attacker to have to guess where the changeovers take place. There are many ways to do that.) Of course, don't take any of the above and go build code. It's just speculation and likely has serious problems. I toss it out to illustrate the idea. Whether it's actually worthwhile ... I doubt it, but it's worth thinking about. -- Jerry ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
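To make the simplest of the ideas above concrete -- two independently keyed RC4 streams XOR'ed together -- here is a minimal Python sketch, strictly for illustration (per the caveat in the post itself, this is speculation, not something to build on; the key values and function names are made up, and nothing here addresses RC4's key-setup or bias problems):

def rc4_keystream(key: bytes):
    # Standard RC4 key scheduling (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    # Output generation (PRGA)
    i = j = 0
    while True:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) & 0xFF]

def dual_rc4_encrypt(key1: bytes, key2: bytes, plaintext: bytes) -> bytes:
    # XOR of two independent keystreams: no weaker than the stronger of the two streams,
    # and (one hopes) harder to exploit than either alone.
    ks1, ks2 = rc4_keystream(key1), rc4_keystream(key2)
    return bytes(p ^ next(ks1) ^ next(ks2) for p in plaintext)

ciphertext = dual_rc4_encrypt(b"first independent key", b"second independent key", b"attack at dawn")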
Re: [Cryptography] paranoid cryptoplumbing is probably not defending the weakest point
On Tue, 17 Sep 2013 12:15:48 -0400 Jerry Leichter leich...@lrw.com wrote: Actually, I think there is a potentially interesting issue here: RC4 is faster and requires significantly fewer resources than modern block ciphers. As a result, people would really like to use it - and actually they *will* continue to use it even in the face of the known attacks (which, *so far*, are hardly fatal except in specialized settings). If you are dealing with huge numbers of connections, you probably have hardware and AES is plenty fast -- modern Intel hardware accelerates it, too. (If you really want a fast stream cipher, why not use ChaCha20 or something else that is probably much better than RC4? I mean, if you're going to propose changing it, as you do, it won't interoperate anyway, so you can substitute something better.) In any case, I would continue to suggest that the weakest point (except for RC4) is (probably) not going to be your symmetric cipher. It will be protocol flaws and implementation flaws. No point in making the barn out of titanium if you're not going to put a door on it. Perry -- Perry E. Metzgerpe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Radioactive random numbers
On Tue, 17 Sep 2013 11:35:34 -0400 Perry E. Metzger pe...@piermont.com wrote: Added c...@panix.com -- if you want to re-submit this (and maybe not top post it) I will approve it... Gah! Accidentally forwarded that to the whole list, apologies. -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Mon, Sep 16, 2013 at 12:44 PM, Bill Frantz fra...@pwpconsult.com wrote: Symmetric encryption: Two algorithms give security equal to the best of them. Three protect against meet-in-the-middle attacks. Performing the multiple encryption at the block level allows block cyphers to be combined with stream cyphers. RC4 may have problems, but adding it to the mix isn't very expensive. A paper of mine on combining a stream cipher with a block cipher: http://eprint.iacr.org/2008/473 AES-256 uses 14 rounds vs. 10 for AES-128, so it is about 40% slower. Given 256 bits of key and a stream cipher that is 5x faster than AES, you can use AES-128 and have 128 bits to key the stream cipher. AES-128 plus whitening that changes for every block (two 128-bit blocks of stream cipher output) has roughly the same cost as AES-256. There are several ways to reduce the cost and/or increase the security from there; see the paper for details. I am still working on this notion and will have a new and much improved version of that paper sometime this year. Anyone I know moderately well who wants to review it can contact me off-list for the current draft. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
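For concreteness, one block of the kind of construction described above might look like the following Python sketch. This is a rough illustration of the general pre/post-whitening idea, not the exact scheme in the cited paper; it assumes the PyCA cryptography package, uses ChaCha20 as a stand-in fast stream cipher (which takes a 256-bit key, so it glosses over the exact key split discussed in the post), and the argument names are made up:

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def whitened_block_encrypt(aes128_key: bytes, stream_key: bytes, nonce: bytes, block: bytes) -> bytes:
    # Two 16-byte words of stream-cipher output per data block: one whitens the
    # plaintext going into AES-128, the other whitens the ciphertext coming out.
    assert len(block) == 16 and len(aes128_key) == 16
    keystream = Cipher(algorithms.ChaCha20(stream_key, nonce), mode=None).encryptor().update(b"\x00" * 32)
    w1, w2 = keystream[:16], keystream[16:]
    pre = bytes(p ^ w for p, w in zip(block, w1))
    core = Cipher(algorithms.AES(aes128_key), modes.ECB()).encryptor().update(pre)
    return bytes(c ^ w for c, w in zip(core, w2))

In a real construction the whitening words (and nonce) must be fresh for every block, which is roughly what puts the total cost in the neighborhood of AES-256 as described above.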
Re: [Cryptography] The paranoid approach to crypto-plumbing
Tony Arcieri basc...@gmail.com writes: On Mon, Sep 16, 2013 at 9:44 AM, Bill Frantz fra...@pwpconsult.com wrote: After Rijndael was selected as AES, someone suggested the really paranoid should super encrypt with all 5 finalists in the competition. Five-level super encryption is probably overkill, but two or three levels can offer some real advantages. I wish there was a term for this sort of design in encryption systems beyond just defense in depth. AFAICT there is not such a term. How about the Failsafe Principle? ;) How about Stannomillinery? Peter. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] paranoid cryptoplumbing is probably not defending the weakest point
On the Paranoid Cryptoplumbing discussion: I'd like to note quite strongly that (with certain exceptions like RC4) the odds of wholesale failures in ciphers seem rather small compared to the odds of systems problems like bad random number generators, sabotaged accelerator hardware, stolen keys, etc., and a smart attacker goes for the points of weakness. I'm not going to put my admin hat on and stop the discussion so long as it remains relatively sane and technical, but for most purposes it is probably just reinforcing a steel door in a paper wall. (Of course, if the endpoints are trusted hardware running a formally verified capability operating system and you still have time on your hands, hey, why not? Of course, when I posted a long message about modern formal verification techniques and how they're now practical, no one bit on the hook.) All that said, even I feel the temptation for low performance applications to do something like Bill Frantz suggests. It is in the nature of people in our community to like playing with such things. Just don't take them *too* seriously please. Perry -- Perry E. Metzgerpe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
Hi Bill, On 17/09/13 01:20 AM, Bill Frantz wrote: The idea is that when serious problems are discovered with one algorithm, you don't have to scramble to replace the entire crypto suite. The other algorithm will cover your tail while you make an orderly upgrade to your system. Obviously you want to choose algorithms which are likely to have different failure modes -- which is why I suggest that RC4 (or an extension thereof) might still be useful. The added safety also allows you to experiment with less examined algorithms. The problem with adding multiple algorithms is that you are also adding complexity. While you are perhaps insuring against the failure of one algorithm, you are also adding a cost of failure in the complexity of melding. As an example, look at the current SSL search for a secure ciphersuite (and try explaining it to the sysadmins). As soon as you add an extra algorithm, others are tempted to add their vanity suites, and the result is not better but worse. And, as we know, the algorithms rarely fail. The NSA specifically targets the cryptosystem, not the algorithms. It also doesn't like well-constructed and well-implemented systems. (So before getting too exotic with the internals, perhaps we should get the basics right.) In contrast to the component duplication approach, I personally prefer the layering duplication approach (so does the NSA, apparently). That is, have a low-level cryptosystem that provides the base encryption and authentication properties, and over that, layer an authorisation layer that adds any additional properties if desired (such as superencryption). One could then choose complementary algorithms at each layer. Having said all that, any duplication is expensive. Do you really have the evidence that such extra effort is required? Remember, while you're building this extra capability, customers aren't being protected at all, and are less likely to be so in the future. iang ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Radioactive random numbers
Added c...@panix.com -- if you want to re-submit this (and maybe not top post it) I will approve it... Perry On Tue, 17 Sep 2013 11:08:43 -0400 Carl Ellison c...@panix.com wrote: If you can examine your setup and determine all possible memory in the device, count that memory in bit-equivalents, and discover that the number of bits is small (e.g., 8), then you can apply Maurer's test: ftp://ftp.inf.ethz.ch/pub/crypto/publications/Maurer92a.pdf Of course, if you're concerned that someone has slipped you a CPU chip with a PRNG replacing the RNG, you can't detect that without ripping the chip apart. On 9/12/13 11:00 AM, Perry E. Metzger pe...@piermont.com wrote: On Wed, 11 Sep 2013 17:06:00 -0700 Tony Arcieri basc...@gmail.com wrote: It seems like Intel's approach of using thermal noise is fairly sound. Is there any reason why it isn't more widely adopted? Actually, I think things like this mostly have been missing because manufacturers didn't understand they were important. Even the Raspberry Pi now has an SoC with a hardware RNG. In addition to getting CPU makers to always include such things, however, a second vital problem is how to gain trust that such RNGs are good -- both that a particular unit isn't subject to a hardware defect and that the design wasn't sabotaged. That's harder to do. Perry -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography -- Perry E. Metzgerpe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On 17/09/13 01:40 AM, Tony Arcieri wrote: On Mon, Sep 16, 2013 at 9:44 AM, Bill Frantz fra...@pwpconsult.com wrote: After Rijndael was selected as AES, someone suggested the really paranoid should super encrypt with all 5 finalists in the competition. Five-level super encryption is probably overkill, but two or three levels can offer some real advantages. I wish there was a term for this sort of design in encryption systems beyond just defense in depth. AFAICT there is not such a term. How about the Failsafe Principle? ;) A good question. In my work, I've generally modelled it such that the entire system still works if one algorithm fails totally. But I don't have a name for that approach. iang ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] AES [was NSA and cryptanalysis]
On 16/09/2013 23:39, Perry E. Metzger wrote: On Mon, 16 Sep 2013 11:54:13 -1000 Tim Newsham tim.news...@gmail.com wrote: - A backdoor that leaks cryptographic secrets consider for example applications using an Intel chip with hardware-assist for AES. You're feeding your AES keys directly into the CPU. Any attacker controlling the CPU has direct access and doesn't have to do any fancy pattern matching to discover the keys. Now if that CPU had a way to export some or all of the bits through some channel that would also be passively observable, the attacker could pull off an offline passive attack. What about RNG output? What if some bits were redundantly encoded in some of the RNG output bits which were then used directly for TCP initial sequence numbers? Such a backdoor would be feasible. It might be feasible in theory (and see the Illinois Malicious Processor as an example) but I think it would be hard to pull off well -- too hard to account for changes in future code, too hard to avoid detection of what you've done. Not sure this is true. If instead of leaking via the RNG, you leak via the cryptographic libraries *and* the Windows socket libraries, then while there are probably two different teams involved, there is only one manufacturer - Microsoft. OK, that would exclude non-Windows systems, which in this world of BYOD means an increasing number of iOS or Android devices - but the odds of one end or the other of any given exchange being an MS platform are good. Provided the cryptographic libraries are queried in a specific manner for TCP sequence numbers (which can be enforced) the Winsock team never need know how those are generated, leaving just the cryptographic library holding both the input and output. On the other hand, we know from the press reports that several hardware crypto accelerators have been either backdoored or exploited. In those, leaking key material to observers in things like IVs or choices of nonces might be quite feasible. Such devices are built to be tamper resistant so no one will even notice if you add features to try to conceal the extra functionality of the device. For the Intel chips, I suspect that if they've been gimmicked, it will be more subtle, like a skew in the RNG that could be explained away as a manufacturing or design error. That said, things like the IMP do give one pause. And *that* said, if you're willing to go as far as what the IMP does, you no longer need to simply try to leak information via the RNG or other crypto hardware, you can do far, far worse. (For those not familiar with the Illinois Malicious Processor: https://www.usenix.org/legacy/event/leet08/tech/full_papers/king/king_html/ ) Perry ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] End to end
On 2013-09-16 Phillip Hallam-Baker hal...@gmail.com wrote: [snip] If people are sending email through the corporate email system then in many cases the corporation has a need/right to see what they are sending/receiving. [snip] Even if an organisation has a need/right to look into people's email, it is still necessary to protect the communication in transport and in storage. Of course, some form of key recovery has to be in place. Just my 2 cents -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Sep 17, 2013, at 5:49 AM, ianG i...@iang.org wrote: I wish there was a term for this sort of design in encryption systems beyond just defense in depth. AFAICT there is not such a term. How about the Failsafe Principle? ;) A good question. In my work, I've generally modelled it such that the entire system still works if one algorithm fails totally. But I don't have a name for that approach. How about the X Property (Trust No One - with a different slant on One)? -- Jerry :-) ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Ivan Ristić blog post on TLS best practices
Recommends phasing out RC4 among other things: http://blog.ivanristic.com/2013/09/updated-best-practices-deprecate-rc4.html -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] End to end
On 17 Sep 2013 15:47, Christoph Gruber gr...@guru.at wrote: On 2013-09-16 Phillip Hallam-Baker hal...@gmail.com wrote: [snip] If people are sending email through the corporate email system then in many cases the corporation has a need/right to see what they are sending/receiving. [snip] Even if an organisation has a need/right to look into people's email, it is still necessary to protect the communication in transport and in storage. Of course, some form of key recovery has to be in place. Just my 2 cents I intend to reply in more detail to the draft; there's lots of very interesting work there. The most common approach to ILM for email in highly regulated sectors I've seen is to divorce the storage and transport mechanism and associated security from the long-term storage. In a corporate environment the message is captured pre-encryption and transmission and stored. Whilst key escrow mechanisms do exist, the risk is that what gets escrowed isn't what was sent if you maliciously want to tunnel data (imagine not being able to decrypt a message at the request of the SEC or FSA because the key the desktop app escrowed for you is wrong, or conversely having to decrypt everything first to check). You have the added issue of having to store all the associated keys and, in 7 years (the typical retention period over here for business now regarded as complete, let alone long-running contracts still in play), still have software to decrypt it. Hence, store in the clear, keep safe at rest using today's archival mechanism, and when that starts to get dated, move onto the next one en masse, for all your media, not just emails. Hence for the purposes of your RFC perhaps view that as a problem that doesn't require detailed specification. M -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] paranoid cryptoplumbing is probably not defending the weakest point
On Tue, Sep 17, 2013 at 9:28 AM, Perry E. Metzger pe...@piermont.com wrote: In any case, I would continue to suggest that the weakest point (except for RC4) is (probably) not going to be your symmetric cipher. It will be protocol flaws and implementation flaws. No point in making the barn out of titanium if you're not going to put a door on it. If your threat is a patient eavesdropper (particularly one that obsessively archives traffic like the NSA) then combining ciphers can give you long term confidentiality even in the event one of your encryption primitives is compromised. The NSA of course participated in active attacks too, but it seems their main MO was passive traffic collection. But yes, endpoint security is weak, and an active attacker would probably choose that approach over trying to break particular algorithms. -- Tony Arcieri ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] AES [was NSA and cryptanalysis]
Such a backdoor would be feasible. It might be feasible in theory (and see the Illinois Malicious Processor as an example) but I think it would be hard to pull off well -- too hard to account for changes in future code, too hard to avoid detection of what you've done. Not sure this is true. If instead of leaking via the RNG, you leak via the cryptographic libraries *and* the Windows socket libraries, then while there are probably two different teams involved, there is only one manufacturer - Microsoft. OK, that would exclude non-Windows systems, which in this world of BYOD means an increasing number of iOS or Android devices - but the odds of one end or the other of any given exchange being an MS platform are good. Provided the cryptographic libraries are queried in a specific manner for TCP sequence numbers (which can be enforced) the Winsock team never need know how those are generated, leaving just the cryptographic library holding both the input and output. I think you are overestimating how entrenched Windows is. First, it's not widely used on the server side. Most of the server side is Linux-based, so if you are on Android or iOS, there is a high chance you are not using Windows on both ends. They also are not as dominant as they were in the 90s and early 2000s. Apparently, if you consider mobile devices, they make up about 30% of the computers out there. So, for this to work, it would have to be done across vendors. William On the other hand, we know from the press reports that several hardware crypto accelerators have been either backdoored or exploited. In those, leaking key material to observers in things like IVs or choices of nonces might be quite feasible. Such devices are built to be tamper resistant so no one will even notice if you add features to try to conceal the extra functionality of the device. For the Intel chips, I suspect that if they've been gimmicked, it will be more subtle, like a skew in the RNG that could be explained away as a manufacturing or design error. That said, things like the ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Mon, 16 Sep 2013 17:47:11 -0700 Bill Frantz fra...@pwpconsult.com wrote: Authentication is achieved by signing the entire exchange with DSA. -- Change the protocol to sign the exchange with both RSA and DSA and send and check both signatures. Remember to generate the nonce for DSA using a deterministic method. The current data exchange encryption uses SHA1 in HMAC mode and 3DES in CBC mode with MAC then encrypt. The only saving grace is that the first block of each message is the HMAC, which will make the known-plaintext attacks on the protocol harder. -- I would replace this protocol with one that encrypts twice and MACs twice. Using one of the modes which encrypt and MAC in one operation as the inner layer is very tempting, with a different cypher in counter mode and an HMAC as the outer layer. I confess I'm not sure what the current state of research is on MAC-then-encrypt vs. encrypt-then-MAC -- you may want to check on that. Also, you may want to generate your IVs deterministically from a block cipher in counter mode, and not actually send them on the wire -- see earlier discussions for why that is good, but in addition to assuring the IVs are unpredictable and do not repeat, it prevents a bad actor from using the IV as a covert channel. (Some would argue against using CBC mode entirely -- see Rogaway's paper on block cipher modes.) Perry -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
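A small Python sketch of the deterministic-IV suggestion (assuming the PyCA cryptography package; the key and record-counter names are illustrative, and a real protocol would also pin down endianness and key separation explicitly):

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def iv_for_record(iv_key: bytes, record_number: int) -> bytes:
    # Encrypt the record counter under a dedicated IV key: both ends can compute
    # the IV locally, it never repeats, it never goes on the wire, and a subverted
    # implementation cannot use the IV field as a covert channel.
    counter_block = record_number.to_bytes(16, "big")
    enc = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
    return enc.update(counter_block) + enc.finalize()

Sender and receiver each call iv_for_record(iv_key, n) for the nth record, so no IV field is needed in the message format at all.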
Re: [Cryptography] paranoid cryptoplumbing is probably not defending the weakest point
On Tue, 17 Sep 2013 10:07:38 -0700 Tony Arcieri basc...@gmail.com wrote: The NSA of course participated in active attacks too, but it seems their main MO was passive traffic collection. That's not what I've gotten out of the most recent revelations. It would seem that they've been evading rather than breaking the crypto: putting back doors in protocols, stealing keys, encouraging weak RNGs, adding flaws to hardware, etc. -- as well as doing active attacks using stolen or broken CA keys. I don't doubt that they archive everything they can forever, of course. Perry -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On 2013-09-17 07:37, Peter Gutmann wrote: Tony Arcieri basc...@gmail.com writes: On Mon, Sep 16, 2013 at 9:44 AM, Bill Frantz fra...@pwpconsult.com wrote: After Rijndael was selected as AES, someone suggested the really paranoid should super encrypt with all 5 finalists [...]. I wish there was a term for this sort of design in encryption systems beyond just defense in depth. AFAICT there is not such a term. How about the Failsafe Principle? ;) How about Stannomillinery? I like Stannopilosery better, but the first half is a keeper. Or, perhaps a bit incongruously, Stannopsaffery. Fun, Stephan ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] PRISM-Proofing and PRISM-Hardening
My phrase PRISM-Proofing seems to have created some interest in the press. PRISM-Hardening might be more important, especially in the short term. The objective of PRISM-hardening is not to prevent an attack absolutely, it is to increase the work factor for the attacker attempting ubiquitous surveillance. Examples include: Forward Secrecy: Increases work factor from one public key per host to one public key per TLS session. Smart Cookies: Using cookies as authentication secrets and passing them as plaintext bearer tokens is stupid. It means that all an attacker needs to do is to compromise TLS once and they have the authentication secret. The HTTP Session-ID draft I proposed a while back reduces the window of compromise to the first attack. I am sure there are other ways to increase the work factor. -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
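As one concrete illustration of the cookie point -- the general idea only, not the mechanism from the HTTP Session-ID draft, and the names and layout below are made up -- the client can hold a per-session secret and MAC each request instead of sending a bearer token, using nothing beyond the Python standard library:

import hmac, hashlib, time

def authenticate_request(session_id: str, session_secret: bytes, method: str, path: str) -> str:
    # The secret itself never crosses the wire; each request carries only the
    # session id, a timestamp, and an HMAC over the request details, so a single
    # TLS compromise no longer yields a long-lived reusable credential.
    timestamp = str(int(time.time()))
    msg = "\n".join([session_id, method, path, timestamp]).encode()
    tag = hmac.new(session_secret, msg, hashlib.sha256).hexdigest()
    return "%s:%s:%s" % (session_id, timestamp, tag)

# e.g. sent as a header value that the server recomputes and checks with hmac.compare_digest()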
Re: [Cryptography] paranoid cryptoplumbing is probably not defending the weakest point
On Tue, Sep 17, 2013 at 8:54 AM, Perry E. Metzger pe...@piermont.com wrote: I'd like to note quite strongly that (with certain exceptions like RC4) the odds of wholesale failures in ciphers seem rather small compared to the odds of systems problems like bad random number generators, sabotaged accelerator hardware, stolen keys, etc., and a smart attacker goes for the points of weakness. As a counterpoint to what I was saying earlier, here's a tool that's likely focusing on the wrong problems: https://keybase.io/triplesec/ -- Tony Arcieri ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Johns Hopkins round table on NSA and Crypto
Matthew Green tweeted earlier today that Johns Hopkins will be hosting a roundtable at 10am EDT tomorrow (Wednesday, September 18th) to discuss the NSA crypto revelations. Livestream will be at: https://connect.johnshopkins.edu/jhuisicrypto/ Perry -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Sep 17, 2013, at 2:43 PM, Phillip Hallam-Baker hal...@gmail.com wrote: My phrase PRISM-Proofing seems to have created some interest in the press. PRISM-Hardening might be more important, especially in the short term. The objective of PRISM-hardening is not to prevent an attack absolutely, it is to increase the work factor for the attacker attempting ubiquitous surveillance. Examples include: Forward Secrecy: Increases work factor from one public key per host to one public key per TLS session. How does that work if one of PRISM's objectives is to compromise data _before_ it is transmitted by subverting its storage in one way or another? Forward secrecy does nothing to impact the work factor in that case. Smart Cookies: Using cookies as authentication secrets and passing them as plaintext bearer tokens is stupid. It means that all an attacker needs to do is to compromise TLS once and they have the authentication secret. The HTTP Session-ID draft I proposed a while back reduces the window of compromise to the first attack. I am sure there are other ways to increase the work factor. I think that increasing the work factor would often result in switching the kind of work performed to that which is easier than breaking secrets directly. That may be good. Or it may not. PRISM-Hardening seems like a blunt instrument, or at least one which may only be considered worthwhile in a particular context (technical protection) and which ignores the wider context (in which such technical protections alone are insufficient against this particular adversary). - johnk -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On 9/17/13 at 2:48 AM, i...@iang.org (ianG) wrote: The problem with adding multiple algorithms is that you are also adding complexity. ... Both Perry and Ian point out: And, as we know, the algorithms rarely fail. [but systems do] ... Absolutely! The techniques I suggested used the simplest combining function I could think of: XOR. But complexity is the mortal enemy of reliability and security. Do you really have the evidence that such extra effort is required? I don't have any evidence, which is why I included Paranoid in the message subject. I do know that NSA is well served when people believe things about cryptography which aren't true. If you believe TLS is broken[1] then you might use something much weaker in its place. If you believe AES/RSA/ECDSA etc. are strong when they aren't, you will continue to rely on them. Remember, while you're building this extra capability, customers aren't being protected at all, and are less likely to be so in the future. I see this as the crux of our problem as responsible crypto people. The systems we thought were working are broken. For both professional and political reasons we need to fix them quickly. My morning paper includes the comic Non Sequitur. Today's strip has one of the regular characters being visited by two NSA agents. This story is front and center in the public's attention. There is no better time to press for whatever disruptive changes may be needed. What we need is working code we can get adopted. It can be prototype code with more complete versions to come later. But our best chance of adoption is now. On 9/17/13 at 8:54 AM, pe...@piermont.com (Perry E. Metzger) wrote: (Of course, if the endpoints are trusted hardware running a formally verified capability operating system and you still have time on your hands, hey, why not? Of course, when I posted a long message about modern formal verification techniques and how they're now practical, no one bit on the hook.) And I happen to have one in my back pocket. :-) Yes, CapROS[2] isn't proven, but it is mature enough to build Perry's household encryption box. (There are ports to both x86 and ARM. Device drivers are outside the kernel. The IP stack works. You probably don't need much more.) Any code that works in CapROS will probably port easily to a proven capability OS. [And yes Perry, I was very impressed by your arguments for program proving technology. It is a bit out of my area of expertise. But I have always thought that different ways of looking at programs can only help them be more reliable, and proving is a different way.] All that said, even I feel the temptation for low-performance applications to do something like Bill Frantz suggests. It is in the nature of people in our community to like playing with such things. Just don't take them *too* seriously please. Hey, I like playing in the crypto sandbox, and redundancy is a classic technique. I have seen questions about DH -- factoring and key sizes, and EC -- cooked curves. If you worry about these issues, and don't have a third alternative, combining them seems like a good idea. [1] And TLS is big enough to share with the internet the characteristic that it can be two things. The internet is always up somewhere. Some parts of TLS are secure for certain uses. The internet is never all up. Some parts of TLS are seriously broken. [2] http://www.capros.org/ --- Bill Frantz | Periwinkle | (408)356-8506 | 16345 Englewood Ave | www.pwpconsult.com | Los Gatos, CA 95032 | Concurrency is hard. 12 out of 10 programmers get it wrong. - Jeff Frantz ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Tue, 17 Sep 2013 16:52:26 -0400 John Kemp j...@jkemp.net wrote: On Sep 17, 2013, at 2:43 PM, Phillip Hallam-Baker hal...@gmail.com wrote: The objective of PRISM-hardening is not to prevent an attack absolutely, it is to increase the work factor for the attacker attempting ubiquitous surveillance. Examples include: Forward Secrecy: Increases work factor from one public key per host to one public key per TLS session. How does that work if one of PRISM's objectives is to compromise data _before_ it is transmitted by subverting its storage in one way or another? Forward secrecy does nothing to impact the work factor in that case. So, PFS stops attackers from breaking all communications by simply stealing endpoint RSA keys. You need some sort of side channel or reduction of the RNG output space in order to break an individual communication then. (Note that this assumes no cryptographic breakthroughs like doing discrete logs over prime fields easily or (completely theoretical since we don't really know how to do it) sabotage of the elliptic curve system in use.) Given that many real organizations have hundreds of front end machines sharing RSA private keys, theft of RSA keys may very well be much easier in many cases than broader forms of sabotage. Perry -- Perry E. Metzger pe...@piermont.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Tue, Sep 17, 2013 at 05:01:12PM -0400, Perry E. Metzger wrote: (Note that this assumes no cryptographic breakthroughs like doing discrete logs over prime fields easily or (completely theoretical since we don't really know how to do it) sabotage of the elliptic curve system in use.) Given that many real organizations have hundreds of front end machines sharing RSA private keys, theft of RSA keys may very well be much easier in many cases than broader forms of sabotage. There is also I suspect a lot of software with compiled-in EDH primes (RFC 5114 or other). Without breaking EDH generally, perhaps they have better precomputation attacks that were effective against the more popular groups. I would certainly recommend that each server generate its own EDH parameters, and change them from time to time. Sadly when choosing between a 1024-bit or a 2048-bit EDH prime you get one of interoperability or best-practice security but not both. And indeed the FUD around the NIST EC curves is rather unfortunate. Is secp256r1 better or worse than 1024-bit EDH? -- Viktor. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
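For what it's worth, generating per-server parameters is a few lines with common libraries. A sketch using the PyCA cryptography package (recent versions; the parameter choices and file name are illustrative, and generation at this size is slow enough that it belongs in provisioning, not connection setup):

from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives import serialization

# Generate a fresh 2048-bit group for this server rather than relying on a compiled-in prime.
params = dh.generate_parameters(generator=2, key_size=2048)
pem = params.parameter_bytes(serialization.Encoding.PEM, serialization.ParameterFormat.PKCS3)
with open("dhparams.pem", "wb") as f:
    f.write(pem)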
[Cryptography] An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities
Forwarded-By: David Farber d...@farber.net Forwarded-By: Annie I. Anton Ph.D. aian...@mindspring.com http://www.zdnet.com/nsa-cryptanalyst-we-too-are-americans-720689/ NSA cryptanalyst: We, too, are Americans Summary: ZDNet Exclusive: An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities. By David Gewirtz for ZDNet Government | September 16, 2013 -- 12:07 GMT (05:07 PDT) An NSA mathematician, seeking to help shape the ongoing debate about the agency's foreign surveillance activities, has contributed this column to ZDNet Government. The author, Roger Barkan, also appeared in last year's National Geographic Channel special about the National Security Agency. The rest of this article contains Roger's words only, edited simply for formatting. Many voices -- from those in the White House to others at my local coffee shop -- have weighed in on NSA's surveillance programs, which have recently been disclosed by the media. As someone deep in the trenches of NSA, where I work on a daily basis with data acquired from these programs, I, too, feel compelled to raise my voice. Do I, as an American, have any concerns about whether the NSA is illegally or surreptitiously targeting or tracking the communications of other Americans? The answer is emphatically, No. NSA produces foreign intelligence for the benefit and defense of our nation. Analysts are not free to wander through all of NSA's collected data willy-nilly, snooping into any communication they please. Rather, analysts' activity is carefully monitored, recorded, and reviewed to ensure that every use of data serves a legitimate foreign intelligence purpose. We're not watching you. We're the ones being watched. Further, NSA's systems are built with several layers of checks and redundancy to ensure that data are not accessed by analysts outside of approved and monitored channels. When even the tiniest analyst error is detected, it is immediately and forthrightly addressed and reported internally and then to NSA's external overseers. Given the mountains of paperwork that the incident reporting process entails, you can be assured that those of us who design and operate these systems are extremely motivated to make sure that mistakes happen as rarely as possible! A myth that truly bewilders me is the notion that the NSA could or would spend time looking into the communications of ordinary Americans. Even if such looking were not illegal or very dangerous to execute within our systems, given the monitoring of our activities, it would not in any way advance our mission. We have more than enough to keep track of -- people who are actively planning to do harm to American citizens and interests -- than to even consider spending time reading recipes that your mother emails you. There's no doubt about it: We all live in a new world of Big Data. Much of the focus of the public debate thus far has been on the amount of data that NSA has access to, which I feel misses the critical point. In today's digital society, the Big Data genie is out of the bottle. Every day, more personal data become available to individuals, corporations, and the government. What matters are the rules that govern how NSA uses this data, and the multiple oversight and compliance efforts that keep us consistent with those rules. I have not only seen but also experienced firsthand, on a daily basis, that these rules and the oversight and compliance practices are stringent. And they work to protect the privacy rights of all Americans. 
Like President Obama, my Commander-in-Chief, I welcome increased public scrutiny of NSA's intelligence-gathering activities. The President has said that we can and will go further to publicize more information about NSA's operating principles and oversight methodologies. I have every confidence that when this is done, the American people will see what I have seen: that the NSA conducts its work with an uncompromising respect for the rules -- the laws, executive orders, and judicial orders under which we operate. As this national dialogue continues, I look to the American people to reach a consensus on the desired scope of U.S. intelligence activities. If it is determined that the rules should be changed or updated, we at NSA would faithfully and effectively adapt. My NSA colleagues and I stand ready to continue to defend this nation using only the tools that we are authorized to use and in the specific ways that we are authorized to use them. We wouldn't want it any other way. We never forget that we, too, are Americans. Roger Barkan, a Harvard-trained mathematician, has worked as an NSA cryptanalyst since 2002. The views and opinions expressed herein are those of the author and do not necessarily reflect those of the National Security Agency/Central Security Service. ___ The cryptography mailing list
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Sep 17, 2013, at 11:41 AM, Perry E. Metzger pe...@piermont.com wrote: I confess I'm not sure what the current state of research is on MAC then Encrypt vs. Encrypt then MAC -- you may want to check on that. Encrypt then MAC has a couple of big advantages centering around the idea that you don't have to worry about reaction attacks, where I send you a possibly malformed ciphertext and your response (error message, acceptance, or even time differences in when you send an error message) tells me something about your secret internal state. Perry --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
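A minimal encrypt-then-MAC sketch along these lines, in Python (assuming the PyCA cryptography package for AES-CTR; key management, nonce handling, and error reporting are all simplified, and the function names are made up):

import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor().update(plaintext)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    # Check the MAC over the ciphertext *before* decrypting: a forged or damaged
    # message is rejected without ever touching the decryption key, which is what
    # closes off the reaction-attack and DOS concerns discussed in this thread.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor().update(ct)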
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Sep 17, 2013, at 6:21 PM, John Kelsey crypto@gmail.com wrote: I confess I'm not sure what the current state of research is on MAC then Encrypt vs. Encrypt then MAC -- you may want to check on that. Encrypt then MAC has a couple of big advantages centering around the idea that you don't have to worry about reaction attacks, where I send you a possibly malformed ciphertext and your response (error message, acceptance, or even time differences in when you send an error message) tells me something about your secret internal state. On a purely practical level, to reject a damaged message, with decrypt-then-MAC (ordering things on the receiver's side...) I have to pay the cost of a decryption plus a MAC computation; with MAC-then-decrypt, I only pay the cost of the MAC. On top of this, decryption is often more expensive than MAC computation. So decrypt-then-MAC makes DOS attacks easier. One could also imagine side-channel attacks triggered by chosen ciphertext. Decrypt-then-MAC allows an attacker to trigger them; MAC-then-decrypt does not. (Attacks on MACs seem somewhat less likely to be data dependent, but who knows for sure. In any case, even if you had such an attack, it would get you the authentication key, not the decryption key - and at that point you would only be able to *start* your attack.) MAC'ing the actual data always seemed more logical to me, but once you look at the actual situation, it no longer seems like the right thing to do. -- Jerry ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Sep 17, 2013, at 7:18 PM, Jerry Leichter wrote: On Sep 17, 2013, at 6:21 PM, John Kelsey crypto@gmail.com wrote: I confess I'm not sure what the current state of research is on MAC then Encrypt vs. Encrypt then MAC -- you may want to check on that. Encrypt then MAC has a couple of big advantages centering around the idea that you don't have to worry about reaction attacks, where I send you a possibly malformed ciphertext and your response (error message, acceptance, or even time differences in when you send an error message) tells me something about your secret internal state. On a purely practical level, to reject a damaged message, with decrypt-then-MAC (ordering things on the receiver's side...) I have to pay the cost of a decryption plus a MAC computation; with MAC-then-decrypt, I only pay the cost of the MAC. On top of this, decryption is often more expensive than MAC computation. So decrypt-then-MAC makes DOS attacks easier. One could also imagine side-channel attacks triggered by chosen ciphertext. Decrypt-then-MAC allows an attacker to trigger them; MAC-then-decrypt does not. (Attacks on MAC's seems somewhat less likely to be data dependent, but who knows for sure. In any case, even if you had such an attack, it would get you the authentication key - and at that point you would be able to *start* your attack not the decryption key. People have made these attacks mildly practical (and note how old this and the cited paper are). http://kebesays.blogspot.com/2010/11/mac-then-encrypt-also-harmful-also-hard.html Dan ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Gilmore response to NSA mathematician's "make rules for NSA" appeal
Re: http://www.zdnet.com/nsa-cryptanalyst-we-too-are-americans-720689/ In his Big Data argument, NSA analyst Roger Barkan carefully skips over the question of what rules there should be for government *collecting* big data, claiming that what matters are the rules for how the data is used, *after* assuming that it will be collected. Governments seldom lose powers; they work to grow their powers, to loosen the rules that govern what they can do. NSA's metadata database has fewer restrictions today than it did when it was collected, all carefully legal and vetted by an unaccountable bureaucracy that has its own best interests at heart. My own Senator Feinstein claims from her oversight post that whatever's good for NSA is good for America; my Congresswoman Pelosi worked hard to defeat the bill that would have stopped the NSA phone metadata program in its tracks; and both of them run political machines that have made them lifetime congresspeople, no matter how out-of-step they are with their constituents. NSA and these overseers conspired to keep the whole thing secret, not to avoid tipping off the terrorists who already knew NSA was lawless, but to avoid the public backlash that would reduce their powers and maybe even reverse a decade of hugely growing secret budgets. Having watched the Drug War over the last 50 years, NSA for 30 years, and TSA/DHS over the last decade, I have zero faith that NSA can collect intimate data about every person in America and on the planet, and then never use that data for any purpose that is counter to the interest of the people surveilled. There will always be emergencies, always crises, always evildoers, always opportunities, that would be relieved if we could just do X that wasn't allowed until now. So what if general warrants are explicitly forbidden? And if searching people without cause is prohibited? We could catch two alleged terrorists -- or a few thousand people with sexual images -- or 750,000 pot smokers -- or 400,000 hard-working Mexican migrants -- every year, if we just use tricky legalisms to ignore those pesky rules. So the government does ignore them. Will you or your loved ones fall into the next witch hunt? Our largest city was just found guilty of forcibly stopping and physically searching hundreds of thousands of black and Latino people without cause for a decade -- a racist program defended both before and after the verdict by the Mayor, the Police Commission, the City Council, and state legislators. NSA has secretly been doing warrantless, suspicionless, non-physical searches on every American with a phone for a decade, all using secret gerrymandered catch-22 loopholes in the published constitution and laws, defended before and after by the President, the Congress and all the courts. Make rules for NSA? We already have published rules for NSA and it doesn't follow them today! So Mr Barkan moves on to why NSA would never work against the citizens. The US imprisons more people than any country on earth, and murders far more than most, but it's all OK because those poor, overworked, rule-bound government employees who are doing it are defending freedom. Bullshit they are! Somehow scores of countries have found freedom without descending to this level of lawlessness and repression. NSA cannot operate outside of this context; rules that might work in a hypothetical honest and free government will not work in the corrupt and lawless government that we have in the United States. NSA employees are accountable for following the rules, Mr. Barkan?
Don't make me laugh. There's a word for it: impunity. EFF has diligently pursued NSA in court for most of a decade, and has still gotten no court to even consider the question is what NSA did legal? Other agencies like DoJ and HHS regularly retain big powers and budgets by officially lying about whether marijuana has any medical uses, rather than following the statutes, despite millions of Americans who use it on the advice of their doctor. None of these officials lose their jobs. Find me a senior federal official anywhere who has ever lost their job over major malfeasance like wiretapping, torture, kidnapping, indefinite imprisonment, assassination, or malicious use of power -- let alone been prosecuted or imprisoned for it. Innocent citizens go to prison all the time, from neighborhood blacks to medical marijuana gardeners to Tommy Chong and Martha Stewart -- high officials never. Re Big Data: I have never seen data that could be abused by someone who didn't have a copy of it. My first line of defense of privacy is to deny copies of that data to those who would collect it and later use it against me. This is exactly the policy that NSA supposedly has to follow, according to the published laws and Executive Orders: to prevent abuses against Americans, don't collect against Americans. It's a good first step. NSA is not following that policy. Where Big Data collection is voluntary, I do
Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)
At a stretch, one can imagine circumstances in which trying multiple seeds to choose a curve would lead to an attack that we would not easily replicate. I don't suggest that this is really what happened; I'm just trying to work out whether it's possible. Suppose you can easily break an elliptic curve with the right attack string. Attack strings are very expensive to generate, at say 2^80 operations. Moreover, you can't tell what curves they break until they are generated, but it's cheap to test whether a given string breaks a given curve. Each string breaks about one curve in 2^80. Thus the NSA generate an attack string, then generate 2^80 curves looking for one that is broken by the string they generated. They can safely publish this curve, knowing that unless a new attack is developed it will take 2^160 effort for anyone else to generate an attack string that breaks the curve they have chosen. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities
Techdirt takes apart his statement here: https://www.techdirt.com/articles/20130917/02391824549/nsa-needs-to-give-its-rank-and-file-new-talking-points-defending-surveillance-old-ones-are-stale.shtml NSA Needs To Give Its Rank-and-File New Talking Points Defending Surveillance; The Old Ones Are Stale from the that's-not-really-going-to-cut-it dept by Mike Masnick, Tue, Sep 17th 2013 It would appear that the NSA's latest PR trick is to get out beyond the top brass -- James Clapper, Keith Alexander, Michael Hayden and Robert Litt haven't exactly been doing the NSA any favors on the PR front lately -- and get some commentary from the rank and file. ZDNet apparently agreed to publish a piece from NSA mathematician/cryptanalyst Roger Barkan in which he defends the NSA using a bunch of already debunked talking points. What's funny is that many of these were the talking points that the NSA first tried out back in June and were quickly shown to be untrue. However, let's take a look. It's not that Barkan is directly lying... it's just that he's setting up strawmen to knock down at a record pace. John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On 9/17/13 at 4:18 PM, leich...@lrw.com (Jerry Leichter) wrote: MAC'ing the actual data always seemed more logical to me, but once you look at the actual situation, it no longer seems like the right thing to do. When I chose MAC then encrypt I was using the MAC to check the crypto code. CRC would have worked too, but the MAC was free. (I really don't trust my own code very much.) Cheers - Bill - Bill Frantz| The first thing you need when | Periwinkle (408)356-8506 | using a perimeter defense is a | 16345 Englewood Ave www.pwpconsult.com | perimeter. | Los Gatos, CA 95032 ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] FISA court releases its Primary Order re telephone metadata
The FISA court has a web site (newly, this year): http://www.uscourts.gov/uscourts/courts/fisc/index.html Today they released a Memorandum Opinion and Primary Order in case BR 13-109 (Business Records, 2013, case 109), which lays out the legal reasoning behind ordering several telephone companies to prospectively give NSA the calling records of every subscriber. That document is here: http://www.uscourts.gov/uscourts/courts/fisc/br13-09-primary-order.pdf I am still reading it... John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] FISA court releases its Primary Order re telephone metadata
On Wed, Sep 18, 2013, at 11:02 AM, John Gilmore wrote: That document is here: http://www.uscourts.gov/uscourts/courts/fisc/br13-09-primary-order.pdf Page 4: In granting the government's request, the Court has prohibited the government from accessing the data for any other intelligence or investigative purpose And the counter: http://www.washingtonpost.com/blogs/the-switch/wp/2013/08/05/the-nsa-is-giving-your-phone-records-to-the-dea-and-the-dea-is-covering-it-up/ Alfie -- Alfie John alf...@fastmail.fm ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
For hash functions, MACs, and signature schemes, simply concatenating hashes/MACs/signatures gives you at least the security of the stronger one. Joux multicollisions simply tell us that concatenating two or more hashes of the same size doesn't improve their resistance to brute force collision search much. The only thing you have to be sure of there is that the MAC and signature functions aren't allowed access to each others' secret keys or internal random numbers. Otherwise, MAC#1 can always take the value of MAC#2's key. This is just message, signature 1, signature 2 where the signatures are over the message only. For encryption algorithms, superencryption works fine. You can first encrypt with AES-CBC, then encrypt with Twofish-CFB, then with CAST5 in CFB mode. Again, assuming you are not letting the algorithms know each others' internal state or keys, if any of these algorithms are resistant to chosen plaintext attacks, then the combination will be. This doesn't guarantee that the combination will be any stronger than the strongest of these, but it will be no weaker. (Biham and later Wagner had some clever attacks against some chaining modes using single-DES that showed that you wouldn't always get anything stronger than one of the ciphers, but if any of these layers is strong, then the whole encryption is strong.) An alternative approach is to construct a single super-block-cipher, say AES*Twofish*SERPENT, and use it in a standard chaining mode. However, then you are still vulnerable to problems with your chaining mode -- the CBC reaction attacks could still defeat a system that used AES*Twofish*SERPENT in CBC mode, but not AES-CBC followed by Twofish-CFB followed by SERPENT-CTR. For key-encryption or transport, I think it's a little more complicated. If I need six symmetric keys and want to use three public key methods (say ECDH, NTRU, RSA) to transport the keys, I've got to figure out a way to get the benefit from all these key exchange mechanisms to all six symmetric keys, in a way that I'm sure will not leak any information about any of them. Normally we would use a KDF for this, but we don't want to trust any one crypto algorithm not to screw us over. I think we can get this if we assume that we can have multiple KDFs that have secrets from one another. That is, I compute KDF1(key1, combined key exchange input) XOR KDF2(key2, combined key exchange input). The reason the two KDFs need keys that are secret from each other is because otherwise, KDF1 could just duplicate KDF2 and we'd get an all-zero set of keys. If KDF2 is strong, then KDF1 can't generate an output string with any relationship to KDF2's string when it doesn't know all the input KDF2 is getting. I agree with Perry that this is probably padlocking a screen door. On the other hand, if we want to do it, we want to make sure we guard against as many bad things as possible. In particular, it would be easy to do this in such a way that we missed chaining mode/reaction type attacks. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
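A standard-library-only Python sketch of that last construction (two HKDF-style HMAC expansion functions, keyed independently of each other, with the outputs XOR'ed; the function and key names here are made up for illustration):

import hmac, hashlib

def prf_expand(kdf_key: bytes, combined_input: bytes, length: int, hashmod) -> bytes:
    # Counter-mode HMAC expansion of the combined key-exchange input.
    out, counter = b"", 1
    while len(out) < length:
        out += hmac.new(kdf_key, combined_input + bytes([counter]), hashmod).digest()
        counter += 1
    return out[:length]

def combined_kdf(kdf1_key: bytes, kdf2_key: bytes, key_exchange_input: bytes, length: int) -> bytes:
    k1 = prf_expand(kdf1_key, key_exchange_input, length, hashlib.sha256)
    k2 = prf_expand(kdf2_key, key_exchange_input, length, hashlib.sha512)
    # Neither PRF sees the other's key, so (per the argument above) one strong
    # PRF is enough to keep the combined output unpredictable.
    return bytes(a ^ b for a, b in zip(k1, k2))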
Re: [Cryptography] The paranoid approach to crypto-plumbing
Arggh! Of course, this superencryption wouldn't help against the CBC padding attacks, because the attacker would learn plaintext without bothering with the other layers of encryption. The only way to solve that is to preprocess the plaintext in some way that takes the attacker's power to induce a timing difference or error message away. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography