Re: [Cryptography] prism-proof email in the degenerate case
On Thu, Oct 10, 2013 at 04:22:50PM -0400, Jerry Leichter wrote:
> On Oct 10, 2013, at 11:58 AM, "R. Hirschfeld" wrote:
> > Very silly but trivial to "implement" so I went ahead and did so:
> >
> > To send a prism-proof email, encrypt it for your recipient and send it
> > to irrefrangi...@mail.unipay.nl
>
> Nice! I like it.

Me too. I've been telling people that all PRISM will accomplish regarding the bad guys is to get them to use dead drops, such as comment posting on any of millions of blogs -- low bandwidth, undetectable. The technique in this thread makes the use of a dead drop obvious, and adds significantly to the recipient's work load, but in exchange brings the bandwidth up to more usable levels.

Either way the communicating peers must pre-agree on a number of things -- a traffic-analysis Achilles heel, but it's a one-time vulnerability, and chances are people who would communicate this way already have such meetings.

> A couple of comments:
>
> 1. Obviously, this has scaling problems.  The interesting question is
> how to extend it while retaining the good properties.  If participants
> are willing to be identified to within 1/k of all the users of the
> system (a set which will itself remain hidden by the system), choosing
> one of k servers based on a hash of the recipient would work.  (A
> concerned recipient could, of course, check servers that he knows
> can't possibly have his mail.)  Can one do better?

Each server/list is a channel. Pre-agree on channels or use hashes. If the latter then the hashes have to be of {sender, recipient}, else one party has a lot of work to do -- but then again, using just the sender or just the recipient helps protect the other party against traffic analysis.

Assuming there are millions of "channels", then maybe something like

  H({sender, truncate(H(recipient), log2(number-of-channels))})

will do just fine. And truncate(H(recipient), log2(number-of-channels)) can be used for introduction purposes.
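As a concrete illustration, here is a minimal Python sketch of the channel-selection hash sketched above, with SHA-256 standing in for H. All names, sizes, and the power-of-two channel-count assumption are illustrative, not a specified construction:

```python
import hashlib
import math

def channel_index(sender: bytes, recipient: bytes, num_channels: int) -> int:
    # Assumes num_channels is a power of two, so log2 is exact.
    bits = int(math.log2(num_channels))
    # truncate(H(recipient), log2(num_channels)): keep only enough bits
    # of the recipient hash to name one of num_channels buckets.
    r = int.from_bytes(hashlib.sha256(recipient).digest(), "big") >> (256 - bits)
    # H({sender, truncated-recipient-hash}): a channel both parties can
    # compute, but which an observer cannot link to either bare name.
    h = hashlib.sha256(sender + r.to_bytes((bits + 7) // 8, "big")).digest()
    return int.from_bytes(h, "big") % num_channels
```

The truncated recipient hash alone (`r` above) is what could serve for the introduction step, since it depends on no sender.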
The number of servers/lists divides the total work a recipient must do to receive a message.

> 2. The system provides complete security for recipients (all you can
> tell about a recipient is that he can potentially receive messages -
> though the design has to be careful so that a recipient doesn't, for
> example, release timing information depending on whether his
> decryption succeeded or not).  However, the protection is more limited
> for senders.  A sender can hide its activity by simply sending random
> "messages", which of course no one will ever be able to decrypt.  Of
> course, that adds yet more load to the entire system.

But then the sender can't quite prove that they didn't send anything. In a rubber-hose attack this could be a problem. This also applies to recipients: they can be observed fetching messages, and they can be observed expending power trying to find ones addressed to them.

Also, there's no DoS protection: flooding the lists with bogus messages is a DoS on recipients.

Nico
--
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] AES-256- More NIST-y? paranoia
On Mon, Oct 07, 2013 at 11:45:56AM -0400, Arnold Reinhold wrote:
> If we are going to always use a construction like AES(KDF(key)), as
> Nico suggests, why not go further and use a KDF with variable length
> output like Keccak to replace the AES key schedule?  And instead of

Note, btw, that Keccak is very much like a KDF, and KDFs generally produce variable-length output. In fact, the HKDF construction [RFC5869] is rather similar to the sponge concept underlying Keccak. It was the use of SHA-256 as a KDF [but not in an HKDF-like construction] that I was objecting to.

> making provisions to drop in a different cipher should a weakness be
> discovered in AES, make the number of AES (and maybe KDF) rounds a
> negotiated parameter.  Given that x86 and ARM now have AES round
> instructions, other cipher algorithms are unlikely to catch up in
> performance in the foreseeable future, even with a higher AES round
> count.  Increasing round count is effortless compared to deploying a
> new cipher algorithm, even if provision is made in the protocol.
> Dropping such provisions (at least in new designs) simplifies
> everything, and simplicity is good for security.

As Jerry Leichter said, that's a really nice idea. My IANAC concern would be that there might be greatly diminished returns past some number of rounds relative to the sorts of future attacks that might drastically weaken AES. There are also issues with cipher modes to worry about, so on the whole I would still like to have algorithm agility (though I don't think you were arguing against it!); but the addition of a cipher-strength knob might well be useful.

You're quite right that with CPU support for AES it will be very difficult to justify switching to any other cipher... There's always 3AES (a form of round count, but a layer up, and with much bigger step sizes). I suspect it's not AES we'll have problems with, but everything else (asymmetric crypto and cipher modes most likely).
Nico
--
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Sat, Oct 05, 2013 at 09:29:05PM -0400, John Kelsey wrote:
> One thing that seems clear to me: When you talk about algorithm
> flexibility in a protocol or product, most people think you are
> talking about the ability to add algorithms.  Really, you are talking
> more about the ability to *remove* algorithms.  We still have stuff
> using MD5 and RC4 (and we'll probably have stuff using dual ec drbg
> years from now) because while our standards have lots of options and
> it's usually easy to add new ones, it's very hard to take any away.

Algorithm agility makes it possible to add and remove algorithms. Both addition and removal are made difficult by the fact that it is difficult to update deployed code. Removal is made much more difficult still by the need to remain interoperable with legacy code that has been deployed and won't be updated fast enough.

I don't know what can be done about this. Auto-update is one part of the answer, but it can't work for everything. I like the idea of having a CRL-like (or OCSP-like?) system for "revoking" algorithms. This might -- in some cases -- do nothing more than warn the user, or -- in other cases -- trigger auto-update checks.

But, really, legacy is a huge problem that we barely know how to ameliorate a little. It still seems likely that legacy code will remain deployed for much longer than the advertised service lifetime of that code (see XP, for example), and for at least a few more product lifecycles (i.e., another 10-15 years before we come up with a good solution).

Nico
--
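To illustrate the "algorithm revocation" idea above, here is a minimal sketch of what a negotiation-time check against a locally cached revocation list might look like. The list contents and function names are hypothetical; a real mechanism would presumably distribute a signed list, CRL-style:

```python
# Hypothetical local cache of an "algorithm revocation list".
REVOKED = {"md5", "rc4", "dual-ec-drbg"}

def negotiate(offered: list, supported: set) -> str:
    # Intersect the peers' algorithm lists, dropping anything revoked.
    # An empty result forces the caller to warn the user or trigger an
    # auto-update check rather than silently fall back to a weak option.
    usable = [a for a in offered if a in supported and a not in REVOKED]
    if not usable:
        raise ValueError("no non-revoked algorithm in common")
    return usable[0]
```

The point of the sketch is the failure mode: revocation turns "still interoperable via RC4" into an explicit error that can drive an update.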
Re: [Cryptography] AES-256- More NIST-y? paranoia
On Fri, Oct 4, 2013 at 11:20 AM, Ray Dillinger wrote:
> So, it seems that instead of AES256(key) the cipher in practice should be
> AES256(SHA256(key)).

More like: use a KDF and separate keys (obtained by applying a KDF to a root key) for separate but related purposes.

For example, if you have a full-duplex pipe with a single pre-shared secret key, then:

a) you should want separate keys for each direction (so you don't need a direction flag in the messages to deal with reflection attacks);

b) you should derive a new set of keys for each "connection" if there are multiple connections between the same two peers.

And if you're using an AEAD-by-generic-composition cipher mode, then you'll want separate keys for data authentication vs. data encryption.

The KDF might well be SHA-256, but it doesn't have to be. Depending on the characteristics of the original key you might need a more complex KDF (e.g., a PBKDF if the original is a human-memorable password). This (and various other details about accepted KDF technology that I'm eliding) is the reason that you should want to think of a KDF rather than a hash function.

Suppose some day you want to switch to a cipher with a different key size. If all you have to do is tell the KDF how large the key is, then it's easy; but if you have to change the KDF along with the cipher then you have more work to do, work that might or might not be easy. Being able to treat the protocol elements as modular has significant advantages -- and some pitfalls -- over more monolithic constructions.

Nico
--
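To make the per-direction, per-purpose key derivation concrete, here is a sketch using the HKDF construction from RFC 5869 (mentioned in the AES thread), implemented with the stdlib's HMAC-SHA-256. The labels and key sizes are illustrative assumptions:

```python
import hashlib
import hmac

def hkdf(master: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Extract then HKDF-Expand per RFC 5869, with SHA-256 as the hash.
    prk = hmac.new(salt, master, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

psk = b"example pre-shared root key"    # hypothetical root key
salt = b"connection-nonce-0001"         # fresh per "connection"
# Distinct info labels yield independent keys per direction and purpose:
k_c2s_enc = hkdf(psk, salt, b"client->server encrypt", 32)
k_s2c_enc = hkdf(psk, salt, b"server->client encrypt", 32)
k_c2s_mac = hkdf(psk, salt, b"client->server authenticate", 32)
```

Changing the cipher's key size here really is just changing the `length` argument, which is the modularity point made above.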
Re: [Cryptography] The hypothetical random number generator backdoor
On Sep 25, 2013 8:06 AM, "John Kelsey" wrote:
> On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker wrote:
> > Either way, the question is how to stop this side channel attack.
> > One simple way would be to encrypt the nonces from the RNG under a
> > secret key generated in some other fashion.
> >
> > nonce = E (R, k)
>
> This would work if you had a secure key I couldn't guess for k.  If
> the entropy is really low, though, I would still see duplicate outputs
> from time to time.  If the RNG has short cycles, this would also show
> up.

Note that Kerberos "confounds": it encrypts its nonces for AES in CTS mode (similar to CBC). Confounding makes it harder to exploit a backdoored RNG if the exploit is made easier by the ability to see RNG outputs as nonces. I'm not sure how much harder though: presumably in the worst case the attacker has the victim's device's seed somehow (e.g., from a MAC address, purchase records, ...), and can search its output via boot- and iteration-counter searches (the details depend on the PRNG construction, obviously). Seeing an RNG output in the clear probably helps, but the attacker could design the PRNG such that they don't need to.

Now, there's a proposal to drop confounding for new cipher suites in Kerberos. Among other things, doing so would improve performance. It would also make analysis of the new cipher suites easier, as they'd match what other standard protocols do.

Of course, I'd rather implementations have a strong enough RNG and SRNG -- I'd rather not have to care if some RNG outputs are trivially available to attackers. But if confounding is a net security improvement for PRNG-only use cases (is it? it might depend on the PRNG construction and boot-time seed handling), maybe we should keep it.

Thoughts?

Nico
--
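For illustration, a sketch of the quoted nonce = E(R, k) idea. Since the Python stdlib has no block cipher, HMAC-SHA-256 stands in for the keyed function E; that substitution is an assumption for sketching, not Kerberos's actual confounder construction:

```python
import hashlib
import hmac
import os

def hidden_nonce(rng_output: bytes, k: bytes) -> bytes:
    # nonce = E(R, k): pass the raw RNG output through a secret-keyed
    # function so the wire never exposes RNG state directly.  HMAC here
    # stands in for the block cipher E of the quoted proposal.
    return hmac.new(k, rng_output, hashlib.sha256).digest()[:16]

k = os.urandom(32)                     # the separately-generated secret key
nonce = hidden_nonce(os.urandom(16), k)
```

As Kelsey notes in the quote, this hides the RNG outputs but not their statistics: duplicate R values still produce duplicate nonces.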
Re: [Cryptography] [cryptography] very little is missing for working BTNS in Openswan
On Thu, Sep 12, 2013 at 08:28:56PM -0400, Paul Wouters wrote:
> Stop making crypto harder!

I think you're arguing that active attacks are not a concern. That's probably right today w.r.t. PRISMs, and definitely wrong as to coffee shop wifi.

The threat model is the key. If you don't care about active attacks, then you can get BTNS with minimal effort. This is quite true. But at least some of the time we need to care about active attacks.

> On Thu, 12 Sep 2013, Nico Williams wrote:
> > Note: you don't just want BTNS, you also want RFC5660 -- "IPsec
> > channels".  You also want to define a channel binding for such channels
> > (this is trivial).
>
> This is exactly why BTNS went nowhere. People are trying to combine
> anonymous IPsec with authenticated IPsec. Years dead-locked in channel
> binding and channel upgrades. That's why I gave up on BTNS. See also
> the last bit of my earlier post regarding Opportunistic Encryption.

It's hard to know exactly why BTNS failed, but I can think of:

- It was decades too late; it (and IPsec channels) should have been there from the word go (RFC1825, 1995), and even then it would have been too late to compete with TLS, given that the latter required zero kernel code additions while the former required lots.

- I only needed it as an optimization for NFS security at a time when few customers really cared about deploying secure NFS, because Linux lacked mature support for it. It's hard to justify a bunch of work on multiple OSes for an optimization to something few customers used, even if they should have been using it.

- Just-do-it-all-in-user-land has pretty much won. Any user-land protocol you can think of, from TLS, to DJB's MinimaLT, to -- heck -- even IKE and ESP over UDP, will be easier to implement and deploy than anything that requires matching kernel implementations in multiple OSes. You see this come up *all* the time in the Apps WG.
People want SCTP, but for various reasons (NAATTTS) they can't have it, so they resort to putting an entire SCTP or SCTP-like stack in user-land and running it over UDP. Heck, there are entire TCP/IP user-land stacks designed to go faster than any general-purpose OS kernel's TCP/IP stack does. Yeah, this is a variant of the first reason.

There are probably other reasons; listing them all might be useful. These three were probably enough to doom the project. The IPsec channel part is not really much more complex than, say, "connected" UDP sockets. But utter simplicity four years ago was insufficient -- it needed to have been there two decades ago.

Nico
--
Re: [Cryptography] Summary of the discussion so far
On Fri, Sep 13, 2013 at 03:17:35PM -0400, Perry E. Metzger wrote:
> On Thu, 12 Sep 2013 14:53:28 -0500 Nico Williams wrote:
> > Traffic analysis can't really be defeated, not in detail.
>
> What's wrong with mix networks?

First: you can probably be observed using them. Unless enough people use mix networks, you might just end up attracting unwanted attention: more passive surveillance, maybe even active attacks (at the limit, very physical attacks).

Second: I suspect that to be most effective a mix network also has to be most inconvenient (high latency, for example). That probably means mix networks won't be popular enough to help with the first problem.

Third: the mix network had better cross multiple jurisdictions that are not accustomed to cooperating with each other. This seems very difficult to arrange.

I'd love to be disabused of the above, though.

Nico
--
Re: [Cryptography] Summary of the discussion so far
On Wed, Sep 11, 2013 at 04:03:44PM -0700, Nemo wrote:
> Phillip Hallam-Baker writes:
> > I have attempted to produce a summary of the discussion so far for use
> > as a requirements document for the PRISM-PROOF email scheme.  This is
> > now available as an Internet draft.
> >
> > http://www.ietf.org/id/draft-hallambaker-prismproof-req-00.txt
>
> First, I suggest removing all remotely political commentary and sticking
> to technical facts.  Phrases like "questionable constitutional validity"
> have no place in an Internet draft and harm the document, in my opinion.

Privacy relative to PRISMs is a political problem first and foremost. The PRISM operators, if you'll recall, have a monopoly on the use of force. They have the rubber hoses. No crypto can get you out of that bind.

I'm extremely skeptical of anti-PRISM plans. I'd start with:

- open source protocols
- two or more implementations of each protocol, preferably one or more being open source
- build with multiple build tools, examine their output [*]
- run on minimal OSes, on minimal hardware [**]

After that... well, you have to trust counter-parties, trusted third parties, ... It gets iffy real quick.

The simplest protocols to make PRISM-proof are ones where there's only one end-point, e.g., filesystems, like Tahoe-LAFS, ZFS, and so on. One end-point -> no counter-parties nor third parties to compromise. The one end-point (or multiple instances of it) is still susceptible to lots of attacks, including local attacks involving plain old dumb security bugs. Next simplest: real-time messaging (so OTR is workable).

Traffic analysis can't really be defeated, not in detail. On the other hand, the PRISMs can't catch low-bandwidth communications over dead drops, and the Internet is full of dead drops. This makes one wonder why bother with PRISMs. Part of the answer is that as long as the PRISMs were secret, the bad guys might have used weak privacy protection methods.
But PRISMs had to exist by the same logic that all major WWII powers had to have atomic weapons programs (and they all did): if it could be built, it must be, and adversaries with the requisite resources must be assumed to have built their own.

Anti-PRISM seems intractable to me.

Nico

[*] Oops, this is really hard; only a handful of end-users will ever do this. The goal is to defeat the Thompson attack -- Thompson trojans bit-rot; using multiple build tools and disassembly tools would be one way to increase the bit-rot speed.

[**] Also insanely difficult. Not gonna happen for most people; the ones who manage it will still be susceptible to traffic analysis and, if of interest, rubber-hose cryptanalysis.
Re: [Cryptography] Why prefer symmetric crypto over public key crypto?
On Mon, Sep 09, 2013 at 02:48:56PM -0400, Jeffrey I. Schiller wrote:
> I don't believe you can do this without using some form of public key
> system.

My $.02:

- Protocols based entirely on symmetric keying are either PSK or a flavor of Needham-Schroeder (e.g., Kerberos).

- Neither PSK nor Needham-Schroeder scales. PSK fails to scale for obvious reasons.

- Kerberos could scale if there were TLD realm operators, but there aren't any, and there can't be, because they would have too much power, thus no one would trust them (see below).

- Kerberos could scale with a web of trust (PGP-like), but managing that web would be difficult, and realms that are widely trusted are... much too powerful (see below).

- Kerberos KDCs have an even more privileged position than PKIX CAs: they can impersonate you to others and vice-versa (therefore they can MITM you), and they can recover all your session keys (unless you use PFS) even when they don't MITM you. This is necessarily so for any symmetric-key-only protocol.

- Getting past this requires PK crypto. It's unavoidable.

- Life will look a bit bleak for a while once we get to the quantum machine cryptopocalypse...

Nico
--
Re: [Cryptography] Killing two IV related birds with one stone
On Wed, Sep 11, 2013 at 06:51:16PM -0400, Perry E. Metzger wrote:
> It occurs to me that specifying IVs for CBC mode in protocols
> like IPsec, TLS, etc. be generated by using a block cipher in counter
> mode and that the IVs be implicit rather than transmitted kills two
> birds with one stone.
>
> The first bird is the obvious one: we now know IVs are unpredictable
> and will not repeat.
>
> The second bird is less obvious: we've just gotten rid of a covert
> channel for malicious hardware to leak information.

I like this, and I've wondered about it in the past as well. But note that it only works for ordered {octet, datagram} streams. It can't work for DTLS, for example, or GSS-API, or Kerberos, or ESP.

This can be implemented today anywhere that explicit IVs are needed; there's only a need for the peer to know the seed if they need to be able to verify that you're not leaking through IVs. Of course, we should want nodes to verify that their peers are not leaking through IVs.

There are still nonces needed at key exchange and authentication time that can leak key material / PRNG state. I don't think you can get rid of all covert channels... And anyway, your peers could just use out-of-band methods of leaking session keys and such.

BTW, Kerberos generally uses confounders instead of IVs. Confounders are just explicit IVs sent encrypted. Confounders leak just as much (but no more) than explicit IVs, so confounding is a bit pointless -- worse, it wastes resources: one extra block encryption/decryption operation per message.

Nico
--
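A sketch of the implicit-IV idea quoted above: both peers derive the i-th record's IV from a shared IV key and a record counter, so the IV is unpredictable, never repeats while the counter doesn't, and never appears on the wire. HMAC-SHA-256 stands in here for the block-cipher-in-counter-mode PRF the proposal names (an assumption for the sketch, since the stdlib has no AES):

```python
import hashlib
import hmac

def implicit_iv(k_iv: bytes, record_number: int, iv_len: int = 16) -> bytes:
    # IV_i = PRF(k_iv, i).  Because the IV is computed, not transmitted,
    # malicious hardware has no IV field through which to leak state.
    return hmac.new(k_iv, record_number.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:iv_len]

k_iv = b"\x00" * 32   # hypothetical per-connection IV key (derive via a KDF)
ivs = [implicit_iv(k_iv, i) for i in range(3)]
```

The ordered-stream caveat from the text is visible in the code: both sides must agree on `record_number`, which datagram protocols with loss and reordering cannot guarantee without shipping the counter explicitly.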
Re: [Cryptography] [cryptography] very little is missing for working BTNS in Openswan
On Mon, Sep 09, 2013 at 10:25:03AM +0200, Eugen Leitl wrote:
> Just got word from an Openswan developer:
>
> "
> To my knowledge, we never finished implementing the BTNS mode.
>
> It wouldn't be hard to do --- it's mostly just conditionally commenting out
> code.
> "
>
> There's obviously a large potential deployment base for
> BTNS for home users, just think of Openswan/OpenWRT.

Note: you don't just want BTNS, you also want RFC5660 -- "IPsec channels". You also want to define a channel binding for such channels (this is trivial).

To summarize: IPsec protects discrete *packets*, not discrete packet *flows*. This means that -- depending on configuration -- you might be using IPsec to talk to some peer at some address at one moment, and the next you might be talking to a different peer at the same address, and you'd never know the difference. IPsec channels consist of ensuring that the peer's ID never changes during the life of a given packet flow (e.g., a TCP connection). BTNS pretty much requires IPsec configurations that make you vulnerable in this way. I think it should be obvious now that "IPsec channels" are a necessary part of any BTNS implementation.

Nico
--
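A minimal sketch of the channel invariant described above -- not RFC 5660's actual mechanism, just the idea: latch the peer ID seen on a flow's first packet and reject any later change. The class and flow representation are hypothetical:

```python
class ChannelBinding:
    """Pin the IPsec peer identity seen on a packet flow (RFC 5660 idea)."""

    def __init__(self):
        self._pinned = {}   # flow 5-tuple -> peer ID from the SA

    def check(self, flow: tuple, peer_id: str) -> None:
        # The first packet on the flow latches the peer ID; any later
        # packet arriving under an SA with a different ID is rejected,
        # so the flow cannot silently migrate to a different peer.
        pinned = self._pinned.setdefault(flow, peer_id)
        if pinned != peer_id:
            raise ValueError("peer ID changed mid-flow: channel violated")
```

In kernel terms the check would sit where inbound packets are matched to SAs; here it is condensed to a dictionary lookup for clarity.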