Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)
Adam, I guess I should preface this by saying I am speaking only for myself. That's always true here--it's why I'm using my personal email address. But in particular, right now, I'm not *allowed* to work. So this is just my own personal take on things.

We got pretty *overwhelming* feedback in this direction in the last three weeks. (For the previous several months, we got almost no feedback about it at all, despite giving presentations and posting stuff on the hash forum about our plans.) But since we're shut down right now, we can't actually make any decisions or changes. This is really frustrating on all kinds of levels.

Personally, I have looked at the technical arguments against the change and I don't really find any of them very convincing, for reasons I described at some length on the hash forum list, and that the Keccak designers also laid out in their post. The core of that argument is that an attacker who can't do 2^{128} work can't do anything at all to SHA3 with a 256-bit capacity that he couldn't also do to SHA3 with a 512-bit capacity, including finding preimages. But there's pretty much zero chance that we're going to put out a standard that most of the crypto community is uncomfortable with.

The normal process for a FIPS is that we would put out a draft and get 60 or 90 days of public comments. As long as this issue is on the table, it's pretty obvious what the public comments would all be about. The place to go for current comments, if you think more are necessary, is the hash forum list. The mailing list is still working, but I think both the archives and the process of being added to the list are frozen thanks to the shutdown. I haven't looked at the hash forum since we shut down, so when we get back there will be a flood of comments there. The last I saw, the Keccak designers had their own proposal for changing what we put into the FIPS, but I don't know what people think about their proposal.
--John, definitely speaking only for myself ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 12, 2013, at 6:51 AM, Ben Laurie wrote:
...
> AIUI, you're trying to make it so that only active attacks work on the
> combined protocol, whereas passive attacks might work on the outer
> protocol. In order to achieve this, you assume that your proposed
> inner protocol is not vulnerable to passive attacks (I assume the
> outer protocol also thinks this is true). Why should we believe the
> inner protocol is any better than the outer one in this respect?

The point is, we don't know how to make protocols that really are reliably secure against future attacks. If we did, we'd just do that. My hope is that if we layer two of our best attempts at secure protocols on top of one another, then we will get security because the attacks will be hard to get through the composed protocols. So maybe my protocol (or whatever inner protocol ends up being selected) isn't secure against everything, but as long as its weaknesses are covered up by the outer protocol, we still get a secure final result.

One requirement for this is that the inner protocol must not introduce new weaknesses. I think that means it must not:

a. Leak information about its plaintexts in its timing, error messages, or ciphertext sizes.

b. Introduce ambiguities about how the plaintext is to be decrypted that could mess up the outer protocol's authentication.

I think we can accomplish (a) by not compressing the plaintext before processing it, by using crypto primitives that don't leak plaintext data in their timing, and by having the only error message that can ever be generated from the inner protocol be essentially a MAC failure or an out-of-sequence error. I think (b) is pretty easy to accomplish with standard crypto, but maybe I'm missing something.

...
> Particularly since you're using tainted algorithms ;-).

If using AES or P256 are the weak points in the protocol, that is a big win. Right now, we aren't getting anywhere close to that.
And there's no reason either AES or P256 have to be used--I'm just looking for a simple, lightweight way to get as much security as possible inside some other protocol. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Key stretching
AES128, rather.

Sent from my iPhone

On Oct 11, 2013, at 11:26 AM, Phillip Hallam-Baker wrote:

> All,
>
> Quick question, anyone got a good scheme for key stretching?
>
> I have this scheme for managing private keys that involves storing them as
> encrypted PKCS#8 blobs in the cloud.
>
> AES128 seems a little on the weak side for this but there are (rare)
> circumstances where a user is going to need to type in the key for recovery
> purposes so I don't want more than 128 bits of key to type in (I am betting
> that 128 bits is going to be sufficient to the end of Moore's law).
>
> So the answer is to use AES 256 and stretch the key, but how? I could just
> repeat the key:
>
> K = k + k
>
> Related key attacks make me a little nervous though. Maybe:
>
> K = (k + 01234567) XOR SHA512 (k)
>
> --
> Website: http://hallambaker.com/

___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Key stretching
This is a job for a key derivation function or a cryptographic PRNG. I would use CTR-DRBG from SP 800-90 with AES-256, or the extract-then-expand KDF based on HMAC-SHA512. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
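As an illustrative sketch, here's what that extract-then-expand approach could look like in Python, built on HMAC-SHA512 (this is an HKDF-style construction in the spirit of RFC 5869; the salt and info labels below are arbitrary placeholders I chose, not part of any proposal):

```python
import hashlib
import hmac

def hkdf_sha512(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """Extract-then-expand KDF built on HMAC-SHA512."""
    # Extract: concentrate the input keying material into a fixed-size PRK.
    prk = hmac.new(salt, ikm, hashlib.sha512).digest()
    # Expand: generate as many output bytes as requested, feeding each
    # block back into the next HMAC call along with a counter.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha512).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stretch a 128-bit typed-in key into a 256-bit AES key.
user_key = bytes(16)  # placeholder for the user's 128-bit recovery key
aes256_key = hkdf_sha512(user_key, salt=b"pkcs8-wrap", info=b"AES-256 wrap key", length=32)
```

Unlike the ad hoc K = (k + 01234567) XOR SHA512(k) construction, the derived key here is a PRF output, so related-key structure in the input doesn't survive into the output.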
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 11, 2013, at 1:48 AM, ianG wrote:
...
> What's your goal? I would say you could do this if the goal was ultimate
> security. But for most purposes this is overkill (and I'd include online
> banking, etc, in that).

We were talking about how hard it is to solve crypto protocol problems by getting the protocol right the first time, so we don't end up with fielded stuff that's weak but can't practically be fixed. One approach I can see to this is to have multiple layers of crypto protocols that are as independent as possible in security terms. The hope is that flaws in one protocol will usually not get through the other layer, and so they won't lead to practical security flaws. Actually getting the outer protocol right the first time would be better, but we haven't had great success with that so far.

> Right now we've got a TCP startup, and a TLS startup. It's pretty messy.
> Adding another startup inside isn't likely to gain popularity.

Maybe not, though I think a very lightweight version of the inner protocol adds only a few bits to the traffic used and a few AES encryptions to the workload. I suspect most applications would never notice the difference. (Even the version with the ECDH key agreement step would probably not add noticeable overhead for most applications.) On the other hand, I have no idea if anyone would use this. I'm still at the level of thinking "what could be done to address this problem," not "how would you sell this?"

> iang

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
On Oct 10, 2013, at 5:20 PM, Ray Dillinger wrote:

> On 10/10/2013 12:54 PM, John Kelsey wrote:
>> Having a public bulletin board of posted emails, plus a protocol
>> for anonymously finding the ones your key can decrypt, seems
>> like a pretty decent architecture for prism-proof email. The
>> tricky bit of crypto is in making access to the bulletin board
>> both efficient and private.
>
> Wrong on both counts, I think. If you make access private, you
> generate metadata because nobody can get at mail other than their
> own. If you make access efficient, you generate metadata because
> you're avoiding the "wasted" bandwidth that would otherwise prevent
> the generation of metadata. Encryption is sufficient privacy, and
> efficiency actively works against the purpose of privacy.

So the original idea was to send a copy of all the emails to everyone. What I'm wanting to figure out is whether there is a way to do this more efficiently, using a public bulletin-board-like scheme. The goals here would be:

a. Anyone in the system can add an email to the bulletin board, which I am assuming is public and cryptographically protected (using a hash chain to make it impossible for even the owner of the bulletin board to alter things once published).

b. Anyone can run a protocol with the bulletin board which results in them getting only the encrypted emails addressed to them, and prevents the bulletin board operator from finding out which emails they got.

This sounds like something that some clever crypto protocol could do. (It's related to the idea of searching on encrypted data.) And it would make an email system that was really resistant to tracing users.

> Bear

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
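Part (a) is the easy half. A minimal sketch of a hash-chained, append-only board (class and field names here are mine, purely illustrative) looks like this--the point is that every published entry's hash commits to everything before it, so the operator can't silently alter or drop a message once it's out:

```python
import hashlib

class BulletinBoard:
    """Append-only board: each entry's chain hash covers the previous
    chain hash plus the new ciphertext, so altering any published entry
    invalidates every hash after it."""
    def __init__(self):
        self.entries = []          # list of (ciphertext, chain_hash) pairs
        self.head = b"\x00" * 32   # chain hash of the latest entry

    def post(self, ciphertext: bytes) -> bytes:
        self.head = hashlib.sha256(self.head + ciphertext).digest()
        self.entries.append((ciphertext, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the whole chain and check it against what was recorded."""
        h = b"\x00" * 32
        for ciphertext, recorded in self.entries:
            h = hashlib.sha256(h + ciphertext).digest()
            if h != recorded:
                return False
        return True

board = BulletinBoard()
board.post(b"encrypted email 1")
board.post(b"encrypted email 2")
ok_before = board.verify()
# Tampering with an already-published entry breaks the chain.
board.entries[0] = (b"altered email", board.entries[0][1])
ok_after = board.verify()
```

Part (b)--letting a reader fetch only their own mail without the operator learning which entries they took--is the genuinely hard private-retrieval problem, and nothing in this sketch addresses it.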
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 10, 2013, at 5:15 PM, Richard Outerbridge wrote:

> How does this prevent MITM? Where does G come from?

I'm assuming G is a systemwide shared parameter. It doesn't prevent MITM--remember, the idea here is to make a fairly lightweight protocol to run *inside* another crypto protocol like TLS. The inner protocol mustn't add administrative requirements to the application, which means it can't need key management from some administrator or something. The goal is to have an inner protocol which can run inside TLS or some similar thing, and which adds a layer of added security without the application getting more complicated by needing to worry about more keys or certificates or whatever.

Suppose we have this inner protocol running inside a TLS version that is subject to one of the CBC padding reaction attacks. The inner protocol completely blocks that.

> I'm also leery of using literally the same key in both directions. Maybe a
> simple transform would suffice; maybe not.

I probably wasn't clear in my writeup, but my idea was to have different keys in different directions--there is a NIST KDF that uses only AES as its crypto engine, so this is relatively easy to do using standard components.

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
More random thoughts:

The minimal inner protocol would be something like this: Using AES-CCM with a tag size of 32 bits, IVs constructed based on an implicit counter, and an AES-CMAC-based KDF, we do the following:

Sender:
a. Generate random 128 bit value R
b. Use the KDF to compute K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
c. Sender's 32-bit unsigned counter C[S] starts at 0.
d. Compute IV[S,0] = <96 bits of binary 0s>||C[S]
e. Send R, CCM(K[S],N[S],IV[S,0],sender_message[0])

Receiver:
a. Receive R and derive K[S],N[S],K[R],N[R] from it as above.
b. Set Receiver's counter C[R] = 0.
c. Compute IV[R,0] = <96 bits of binary 0s>||C[R]
d. Send CCM(K[R],N[R],IV[R,0],receiver_message[0])

and so on. Note that in this protocol, we never send a key or IV or nonce. The total communications overhead of the inner protocol is an extra 160 bits in the first message and an extra 32 bits thereafter. We're assuming the outer protocol is taking care of message ordering and guaranteed delivery--otherwise, we need to do something more complicated involving replay windows and such, and probably have to send along the message counters.

This doesn't provide a huge amount of extra protection--if the attacker can recover more than a very small number of bits from the first message (attacking through the outer protocol), then the security of this protocol falls apart. But it does give us a bare-minimum-cost inner layer of defenses, inside TLS or SSH or whatever other thing we're doing.

Both this and the previous protocol I sketched have the property that they expect to be able to generate random numbers. There's a problem there, though--if the system RNG is weak or trapdoored, it could compromise both the inner and outer protocol at the same time. One way around this is to have each endpoint that uses the inner protocol generate its own internal secret AES key, Q[i].
Then, when it's time to generate a random value, the endpoint asks the system RNG for a random number X, and computes E_Q(X). If the attacker knows Q but the system RNG is secure, we're fine. Similarly, if the attacker can predict X but doesn't know Q, we're fine. Even when the attacker can choose the value of X, he can really only force the random value in the beginning of the protocol to repeat. In this protocol, that doesn't do much harm.

The same idea works for the ECDH protocol I sketched earlier. I request two 128 bit random values from the system RNG, X, X'. I then use E_Q(X)||E_Q(X') as my ephemeral DH private key. If an attacker knows Q but the system RNG is secure, then we get an unpredictable value for the ECDH key agreement. If an attacker knows X,X' but doesn't know Q, he doesn't know what my ECDH ephemeral private key is. If he forces it to a repeated value, he still doesn't weaken anything except this run of the protocol--no long-term secret is leaked if AES isn't broken.

This is subject to endless tweaking and improvement. But the basic idea seems really valuable:

a. Design an inner protocol, whose job is to provide redundancy in security against attacks on the outer protocol.

b. The inner protocol should be:

(i) As cheap as possible in bandwidth and computational terms.
(ii) Flexible enough to be used extremely widely, implemented in most places, etc.
(iii) Administratively free, adding no key management or related burdens.
(iv) Free from revisions or updates, because the whole point of the inner protocol is to provide redundant security. (That's part of "administratively free.")
(v) There should be one or at most two versions (maybe something like the two I've sketched, but better thought out and analyzed).

c. As much as possible, we want the security of the inner protocol to be independent of the security of the outer protocol. (And we want this without wanting to know exactly what the outer protocol will look like.)
This means:

(i) No shared keys or key material or identity strings or anything.
(ii) The inner protocol can't rely on the RNG being good.
(iii) Ideally, the crypto algorithms would be different, though that may impose too high a cost. At least, we want as many of the likely failure modes to be different.

Comments? I'm not all that concerned with the protocol being perfect, but what do you think of the idea of doing this as a way to add redundant security against protocol attacks?

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
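As a sketch of the key schedule in the minimal protocol above: Python's standard library has neither AES-CCM nor AES-CMAC, so HMAC-SHA256 stands in for the CMAC-based KDF here. The point is only the shape of the derivation--all four secrets K[S], N[S], K[R], N[R] from one transmitted value R, and IVs built from an implicit counter rather than sent on the wire--not the exact primitives, and the KDF label is mine:

```python
import hashlib
import hmac
import secrets
import struct

def kdf(r: bytes, nbytes: int) -> bytes:
    """Counter-mode KDF; HMAC-SHA256 standing in for the AES-CMAC KDF."""
    out = b""
    for i in range((nbytes + 31) // 32):
        out += hmac.new(r, b"inner-protocol" + struct.pack(">I", i), hashlib.sha256).digest()
    return out[:nbytes]

# Sender: one 128-bit random value R yields all four secrets.
R = secrets.token_bytes(16)
material = kdf(R, 16 + 12 + 16 + 12)  # K[S] (128) || N[S] (96) || K[R] (128) || N[R] (96)
K_S, N_S = material[:16], material[16:28]
K_R, N_R = material[28:44], material[44:56]

def iv(counter: int) -> bytes:
    """IV = 96 bits of binary 0s || 32-bit message counter; never transmitted."""
    return b"\x00" * 12 + struct.pack(">I", counter)

# Receiver runs the same derivation on the R it received and gets
# an identical schedule, so only R itself ever crosses the wire.
material2 = kdf(R, 56)
```

This is why the overhead is just the 128 bits of R (plus the 32-bit CCM tag) in the first message: everything else is derived on both ends.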
Re: [Cryptography] prism-proof email in the degenerate case
Having a public bulletin board of posted emails, plus a protocol for anonymously finding the ones your key can decrypt, seems like a pretty decent architecture for prism-proof email. The tricky bit of crypto is in making access to the bulletin board both efficient and private. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
Just thinking out loud...

The administrative complexity of a cryptosystem is overwhelmingly in key management and identity management and all the rest of that stuff. So imagine that we have a widely-used inner-level protocol that can use strong crypto, but also requires no external key management. The purpose of the inner protocol is to provide a fallback layer of security, so that even an attack on the outer protocol (which is allowed to use more complicated key management) is unlikely to be able to cause an actual security problem. On the other hand, in case of a problem with the inner protocol, the outer protocol should also provide protection against everything.

Without doing any key management or requiring some kind of reliable identity or memory of previous sessions, the best we can do in the inner protocol is an ephemeral Diffie-Hellman, so suppose we do this:

a. Generate random a and send aG on curve P256
b. Generate random b and send bG on curve P256
c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction.
d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side.

The point is, this is a protocol that happens *inside* the main security protocol. This happens inside TLS or whatever. An attack on TLS then leads to an attack on the whole application only if the TLS attack also lets you do man-in-the-middle attacks on the inner protocol, or if it exploits something about certificate/identity management done in the higher-level protocol. (Ideally, within the inner protocol, you do some checking of the identity using a password or shared secret or something, but that's application-level stuff the inner and outer protocols don't know about.)

Thoughts?
--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
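To make steps (c) and (d) above concrete: there's no P256 in the Python standard library, so this sketch starts from an assumed shared secret abG, and shake_256 (the nearest stdlib analogue to the SHAKE512 named in the post) derives one key per direction via distinct labels. The sequence-number nonce and highest-seen check are step (d); all names and labels are illustrative, and the AEAD itself is out of scope here:

```python
import hashlib
import struct

def direction_keys(shared_secret: bytes, initiator: bool):
    """Step (c): derive one AES key per direction from the ECDH output abG.
    shake_256 stands in for the SHAKE512 named in the post."""
    ka = hashlib.shake_256(b"A->B" + shared_secret).digest(32)
    kb = hashlib.shake_256(b"B->A" + shared_secret).digest(32)
    return (ka, kb) if initiator else (kb, ka)

class Direction:
    """Step (d): per-direction sequence number used as the AEAD nonce,
    plus a highest-seen check on receive to reject replays."""
    def __init__(self):
        self.send_seq = 0
        self.last_recv = -1

    def next_nonce(self) -> bytes:
        nonce = struct.pack(">Q", self.send_seq)
        self.send_seq += 1
        return nonce

    def accept(self, seq: int) -> bool:
        if seq <= self.last_recv:
            return False  # replay or reorder; the outer protocol handles ordering
        self.last_recv = seq
        return True

abG = hashlib.sha256(b"stand-in shared point").digest()  # placeholder for the real ECDH output
alice_send, alice_recv = direction_keys(abG, initiator=True)
bob_send, bob_recv = direction_keys(abG, initiator=False)
```

The directional labels are what keep the two sides from literally sharing one key in both directions, which was Outerbridge's worry in the previous message.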
Re: [Cryptography] Iran and murder
The problem with offensive cyberwarfare is that, given the imbalance between attackers and defenders and the expanding use of computer controls in all sorts of systems, a cyber war between two advanced countries will not decide anything militarily, but will leave both combatants much poorer than they were previously, cause some death and a lot of hardship and bitterness, and leave the actual hot war still to be fought.

Imagine a conflict that starts with both countries wrecking a lot of each others' infrastructure--causing refineries to burn, factories to wreck expensive equipment, nuclear plants to melt down, etc. A week later, that phase of the war is over. Both countries are, at that point, probably 10-20% poorer than they were a week earlier. Both countries have lots of really bitter people out for blood, because someone they care about was killed or their job's gone and their house burned down or whatever. But probably there's been little actual degradation of their standard war-fighting ability. Their civilian aviation system may be shut down, some planes may even have been crashed, but their bombers and fighters and missiles are mostly still working. Fuel and spare parts may be hard to come by, but the military will certainly get first pick.

My guess is that what comes next is that the two countries have a standard hot war, but with the pleasant addition of a great-depression-sized economic collapse for both right in the middle of it.

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 8, 2013, at 4:46 PM, Bill Frantz wrote:

> I think the situation is much more serious than this comment makes it appear.
> As professionals, we have an obligation to share our knowledge of the limits
> of our technology with the people who are depending on it. We know that all
> crypto standards which are 15 years old or older are obsolete, not
> recommended for current use, or outright dangerous. We don't know of any way
> to avoid this problem in the future.

We know how to address one part of this problem--choose only algorithms whose design strength is large enough that there's not some relatively close-by time when the algorithms will need to be swapped out. That's not all that big a problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in the far future, your users need only fear cryptanalysis, not Moore's Law. Really, even with 128-bit security level primitives, it will be a very long time until the brute-force attacks are a concern. This is actually one thing we're kind-of on the road to doing right in standards now--we're moving away from barely-strong-enough crypto and toward crypto that's going to be strong for a long time to come.

Protocol attacks are harder, because while we can choose a key length, modulus size, or sponge capacity to support a known security level, it's not so easy to make sure that a protocol doesn't have some kind of attack in it. I think we've learned a lot about what can go wrong with protocols, and we can design them to be more ironclad than in the past, but we still can't guarantee we won't need to upgrade. But I think this is an area that would be interesting to explore--what would need to happen in order to get more ironclad protocols? A couple random thoughts:

a. Layering secure protocols on top of one another might provide some redundancy, so that a flaw in one didn't undermine the security of the whole system.

b. There are some principles we can apply that will make protocols harder to attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is allowed to change its execution path or timing based on the key or plaintext, and every message includes a sequence number and the hash of the previous message. This won't eliminate protocol attacks, but will make them less common.

c. We could try to treat at least some kinds of protocols more like crypto algorithms, and expect to have them widely vetted before use.

What else?

...
> Perhaps the shortest limit on the lifetime of an embedded system is the
> security protocol, and not the hardware. If so, how do we as society deal
> with this limit.

What we really need is some way to enforce protocol upgrades over time. Ideally, there would be some notion that if you support version X of the protocol, this meant that you would not support any version lower than, say, X-2. But I'm not sure how practical that is.

> Cheers -- Bill

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
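The principles in (b) can be made concrete. This sketch shows the framing only--the ciphertext is treated as opaque bytes, since the encryption itself isn't the point--with the MAC computed over the sequence number, the hash of the previous message, and the ciphertext, verified with a constant-time compare, and with exactly one failure mode. Function names are mine:

```python
import hashlib
import hmac

def frame(k_mac: bytes, seq: int, prev_hash: bytes, ciphertext: bytes) -> bytes:
    """Encrypt-then-MAC framing: the tag covers the sequence number, the
    hash of the previous frame, and the (already encrypted) ciphertext."""
    header = seq.to_bytes(4, "big") + prev_hash
    tag = hmac.new(k_mac, header + ciphertext, hashlib.sha256).digest()
    return header + ciphertext + tag

def open_frame(k_mac: bytes, expected_seq: int, prev_hash: bytes, msg: bytes) -> bytes:
    """Verify before touching anything else; compare_digest avoids a timing
    oracle, and every failure yields the same single error."""
    header, ciphertext, tag = msg[:36], msg[36:-32], msg[-32:]
    good = hmac.new(k_mac, header + ciphertext, hashlib.sha256).digest()
    if (not hmac.compare_digest(tag, good)
            or header != expected_seq.to_bytes(4, "big") + prev_hash):
        raise ValueError("MAC failure")  # the only error this layer ever reports
    return ciphertext

k = b"k" * 32
prev = hashlib.sha256(b"").digest()
wire = frame(k, 0, prev, b"opaque ciphertext bytes")
recovered = open_frame(k, 0, prev, wire)
```

Because the tag is checked before any decryption or parsing, there's nothing like a padding check for a reaction attack to distinguish; chaining in the previous message's hash binds the transcript order.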
[Cryptography] Iran and murder
Alongside Phillip's comments, I'll just point out that assassination of key people is a tactic that the US and Israel probably don't have any particular advantages in. It isn't in our interests to encourage a worldwide tacit acceptance of that stuff.

I suspect a lot of the broad principles we have been pushing (assassinations and drone bombings can be done anywhere, cyber attacks against foreign countries are okay when you're not at war, spying on everyone everywhere is perfectly acceptable policy) are in the short-term interests of various powerful people and factions in the US, but are absolutely horrible ideas when you consider the long-term interests of the US. We are a big, rich, relatively free country with lots of government scientists and engineers (especially when you consider funding) and tons of our economy and our society moving online. We are more vulnerable to widespread acceptance of these bad principles than almost anyone, ultimately. But doing all these things has won larger budgets and temporary successes for specific people and agencies today, whereas the costs of all this will land on us all in the future.

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Sha3
On Oct 6, 2013, at 6:29 PM, Jerry Leichter wrote:

> On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
>> I have to take issue with this:
>>
>> "The security is not reduced by adding these suffixes, as this is only
>> restricting the input space compared to the original Keccak. If there
>> is no security problem on Keccak(M), there is no security problem on
>> Keccak(M|suffix), as the latter is included in the former."
>
> I also found the argument here unconvincing. After all, Keccak restricted to
> the set of strings of the form M|suffix reveals that its input ends with
> "suffix", which the original Keccak did not. The problem is with the vague
> nature of "no security problem".

They are talking about the change to their padding scheme, in which between 2 and 4 bits of extra padding are added to the padding scheme that was originally proposed for SHA3. A hash function that works by processing r bits at a time till the whole message is processed (every hash function I can think of works like this) has to have a padding scheme, so that when someone tries to hash some message that's not a multiple of r bits long, the message gets padded out to r bits.

The only security relevance of the padding scheme is that it has to be invertible--given the padded string, there must always be exactly one input string that could have led to that padded string. If it isn't invertible, then the padding scheme would introduce collisions. For example, if your padding scheme was "append zeros until you get the message out to a multiple of r bits," I could get collisions on your hash function by taking some message that was not a multiple of r bits, and appending one or more zeros to it. Just appending a single one bit, followed by as many zeros as are needed to get to a multiple of r bits, makes a fine padding scheme, so long as the one bit is appended to *every* message, even those which start out a multiple of r bits long.
The Keccak team proposed adding a few extra bits to their padding, to add support for tree hashing and to distinguish different fixed-length hash functions that used the same capacity internally. They really just need to argue that they haven't somehow broken the padding so that it is no longer invertible. They're making this argument by pointing out that you could simply stick the fixed extra padding bits on the end of a message you processed with the original Keccak spec, and you would get the same result as what they are doing. So if there is any problem introduced by sticking those extra bits at the end of the message before doing the old padding scheme, an attacker could have caused that same problem on the original Keccak by just sticking those extra bits on the end of messages before processing them with Keccak.

> -- Jerry

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
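Here's the invertibility argument in runnable form, at byte rather than bit granularity for simplicity: the append-only-zeros scheme collides, while the append-a-one-then-zeros rule keeps distinct messages distinct and can be inverted:

```python
def pad10(message: bytes, r: int) -> bytes:
    """Append a single 0x80 marker byte, then zeros out to a multiple of r.
    Byte-level stand-in for the bit-level one-then-zeros rule; applied to
    *every* message, even ones already a multiple of r, so it stays invertible."""
    padded = message + b"\x80"
    padded += b"\x00" * (-len(padded) % r)
    return padded

def unpad10(padded: bytes) -> bytes:
    """Invert pad10: strip trailing zeros, then the single 0x80 marker."""
    stripped = padded.rstrip(b"\x00")
    assert stripped.endswith(b"\x80"), "not a valid padding"
    return stripped[:-1]

def zero_pad(message: bytes, r: int) -> bytes:
    """The broken scheme from the post: just append zeros. Not invertible."""
    return message + b"\x00" * (-len(message) % r)

r = 8
# Zero padding collides: two distinct messages pad to the same block.
collision = zero_pad(b"ab", r) == zero_pad(b"ab\x00", r)
# pad10 keeps them distinct, and round-trips exactly.
distinct = pad10(b"ab", r) != pad10(b"ab\x00", r)
```

Appending a fixed suffix before this padding (which is all the Keccak change does) clearly can't break invertibility: the map "message -> message||suffix" is itself injective.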
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
If we can't select ciphersuites that we are sure we will always be comfortable with (for at least some foreseeable lifetime) then we urgently need the ability to *stop* using them at some point. The examples of MD5 and RC4 make that pretty clear. Ceasing to use one particular encryption algorithm in something like SSL/TLS should be the easiest case--we don't have to worry about old signatures/certificates using the outdated algorithm or anything. And yet we can't reliably do even that. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
One thing that seems clear to me: When you talk about algorithm flexibility in a protocol or product, most people think you are talking about the ability to add algorithms. Really, you are talking more about the ability to *remove* algorithms. We still have stuff using MD5 and RC4 (and we'll probably have stuff using Dual EC DRBG years from now) because while our standards have lots of options and it's usually easy to add new ones, it's very hard to take any away. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Sha3
http://keccak.noekeon.org/yes_this_is_keccak.html --John___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker wrote:
...
> Dobbertin demonstrated a birthday attack on MD5 back in 1995 but it had no
> impact on the security of certificates issued using MD5 until the attack was
> dramatically improved and the second pre-image attack became feasible.

Just a couple nitpicks:

a. Dobbertin wasn't doing a birthday (brute force collision) attack, but rather a collision attack from a chosen IV.

b. Preimages with MD5 still are not practical. What is practical is using the very efficient modern collision attacks to do a kind of herding attack, where you commit to one hash and later get some choice about which message gives that hash.

...
> Proofs are good for getting tenure. They produce papers that are very
> citable.

There are certainly papers whose only practical importance is getting a smart cryptographer tenure somewhere, and many of those involve proofs. But there's also a lot of value in being able to look at a moderately complicated thing, like a hash function construction or a block cipher chaining mode, and show that the only way anything can go wrong with that construction is if some underlying cryptographic object has a flaw. Smart people have proposed chaining modes that could be broken even when used with a strong block cipher. You can hope that security proofs will keep us from doing that.

Now, sometimes the proofs are wrong, and almost always, they involve a lot of simplification of reality (like most proofs aren't going to take low-entropy RNG outputs into account). But they still seem pretty valuable to me for real-world things. Among other things, they give you a completely different way of looking at the security of a real-world thing, with different people looking over the proof and trying to attack things.

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Performance vs security
There are specific algorithms where you have a pretty clear-cut security/performance tradeoff. RSA and ECC both give you some choice of security level that has a big impact in terms of performance. AES and SHA2 and eventually SHA3 offer you some security level choices, but the difference in performance between them is relatively unimportant in most applications. Probably the coolest thing about Keccak's capacity parameter is that it gives you an understandable performance/security tradeoff, but the difference in performance between c=256 and c=512 will probably not be noticeable in 99% of applications.

Then there are algorithms that give you higher performance at the cost of more fragility. The example I can think of here is GCM, which gives you a pretty fast authenticated encryption mode, but which really loses security in a hurry if you reuse an IV. It seems like these two kinds of security/performance tradeoffs belong in different categories, somehow.

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Sha3 and selecting algorithms for speed
Most applications of crypto shouldn't care much about performance of the symmetric crypto, as that's never the thing that matters for slowing things down. But performance continues to matter in competitions and algorithm selection for at least three reasons:

a. We can measure performance, whereas security is very hard to measure. There are a gazillion ways to measure performance, but each one gives you an actual set of numbers. Deciding whether JH or Grøstl is more likely to fall to cryptanalytic attack in its lifetime is an exercise in reading lots of papers, extrapolating, and reading tea leaves.

b. There are low-end environments where performance really does matter. Those often have rather different properties than other environments--for example, RAM or ROM (for program code and S-boxes) may be at a premium.

c. There are environments where someone is doing a whole lot of symmetric crypto at once--managing the crypto for lots of different connections, say. In that case, your symmetric algorithm's speed may also have a practical impact. (Though it's still likely to be swamped by your public key algorithms.)

--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] RSA equivalent key length/strength
On Oct 2, 2013, at 9:54 AM, Paul Crowley wrote: > On 30 September 2013 23:35, John Kelsey wrote: >> If there is a weak curve class of greater than about 2^{80} that NSA knew >> about 15 years ago and were sure nobody were ever going to find that weak >> curve class and exploit it to break classified communications protected by >> it, then they could have generated 2^{80} or so seeds to hit that weak curve >> class. > > If the NSA's attack involves generating some sort of collision between a > curve and something else over a 160-bit space, they wouldn't have to be > worried that someone else would find and attack that "weak curve class" with > less than 2^160 work. I don't know enough about elliptic curves to have an intelligent opinion on whether this is possible. Has anyone worked out a way to do this? The big question is how much work would have had to be done. If you're talking about a birthday collision on the curve parameters, is that a collision on a 160 bit value, or on a 224 or 256 or 384 or 512 bit value? I can believe NSA doing a 2^{80} search 15 years ago, but I think it would have had to be a top priority. There is no way they were doing 2^{112} searches 15 years ago, as far as I can see. --John
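The work estimates in the email above follow from the standard birthday bound: a collision in an n-bit space costs roughly 2^(n/2) operations. A small sketch of that arithmetic (back-of-the-envelope only):

```python
# Birthday-bound arithmetic for the collision scenario discussed above.
# A collision in an n-bit space costs about 2^(n/2) work; the
# approximation below is 1 - exp(-k^2 / 2N) for k samples in a space of
# N values. Illustrative only.
import math

def birthday_work_log2(n_bits):
    """log2 of the expected work for a birthday collision in an n-bit space."""
    return n_bits / 2

def collision_probability(k_log2, n_bits):
    """Approximate chance of at least one collision after 2^k_log2 samples
    drawn uniformly from a 2^n_bits space."""
    return 1 - math.exp(-(2.0 ** (2 * k_log2 - n_bits - 1)))

if __name__ == "__main__":
    for n in (160, 224, 256, 384, 512):
        print(n, birthday_work_log2(n))
```

So a 2^{80} effort gives a decent chance of a collision over a 160-bit space, but essentially no chance over a 224-bit or larger one, which is the distinction the email is drawing.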
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
Has anyone tried to systematically look at what has led to previous crypto failures? That would inform us about where we need to be adding armor plate. My impression (this may be the availability heuristic at work) is that: a. Most attacks come from protocol or mode failures, not so much crypto primitive failures. That is, there's a reaction attack on the way CBC encryption and message padding play with your application, and it doesn't matter whether you're using AES or FEAL-8 for your block cipher. b. Overemphasis on performance (because it's measurable and security usually isn't) plays really badly with having stuff be impossible to get out of the field when it's in use. Think of RC4 and DES and MD5 as examples. c. The ways I can see to avoid problems with crypto primitives are: (1) Overdesign against cryptanalysis (have lots of rounds) (2) Overdesign in security parameters (support only high security levels, use bigger than required RSA keys, etc.) (3) Don't accept anything without a proof reducing the security of the whole thing down to something overdesigned in the sense of (1) or (2). --John
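Point (a) above is worth making concrete. In a CBC padding-oracle attack, the single bit the attacker needs is whether the padding check passed, and that bit exists regardless of the block cipher. A minimal sketch of the check in question (hypothetical helper name; not a full attack):

```python
# Minimal illustration of why mode/padding details, not the block cipher
# itself, drive many real attacks: the PKCS#7 check below is exactly the
# one bit of information a padding-oracle attacker needs, no matter
# which block cipher produced the plaintext.

def pkcs7_unpad(block: bytes, blocksize: int = 16):
    """Return the unpadded data, or None if the padding is malformed."""
    if not block or len(block) % blocksize:
        return None
    n = block[-1]
    if n < 1 or n > blocksize or block[-n:] != bytes([n]) * n:
        return None
    return block[:-n]

# If a server reveals which branch was taken (distinct error message,
# acceptance, or even a timing difference), an attacker can decrypt CBC
# ciphertexts one byte at a time -- whether the cipher is AES or FEAL-8.
```

The fix is at the protocol level (authenticate before decrypting, uniform errors), not at the primitive level, which is the point of (a).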
Re: [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)
On Oct 1, 2013, at 12:51 PM, Adam Back wrote: [Discussing how NSA might have generated weak curves via trying many choices till they hit a weak-curve class that only they knew how to solve.] ... > But the more interesting question I was referring to is a trapdoor weakness > with a weak proof of fairness (ie a fairness that looks like the one in FIPS > 186-3/ECDSA where we dont know how much grinding if any went into the magic > seed values). For illustration though not applicable to ECDSA and probably > outright defective eg can they start with some large number of candidate G > values where G=xH (ie knowing the EC discrete log of some value H they pass > off as a random fairly chosen point) and then do a birthday collision > between the selection of G values and diffrent seed values to a PRNG to find > a G value that they have both a discrete log of wrt H and a PRNG seed. > Bearing in mind they may be willing to throw custom ASIC or FPGA > supercomputer hardware and $1bil budgt at the problem as a one off cost. This general idea is a nice one. It's basically a way of using Merkle's puzzles to build a private key into a cryptosystem. But I think in general, you are going to have to do work equal to the security level of the thing you're trying to backdoor. You have to break it once at its full security level, and then you get to amortize that break forever. (Isn't there something like this you can do for discrete logs in general, though?) Consider Dual EC DRBG. You need a P, Q such that you know x that solves xP = Q, over (say) P-224. So, you arbitrarily choose G = a generator for the group, and a scalar z, and then compute for j = 1 to 2^{112}: T[j] = jz G Now, you have 2^{112} values in a group of 2^{224} values, right? So with about another 2^{113} work, you can hit one of those with two arbitrary seeds, and you'll know the relationship between them. But this takes a total of about 2^{113} work, so it's above the claimed security level of P-224. 
I suspect this would be more useful for something at the 80 bit security level--a really resourceful attacker could probably do a 2^{80} search. > Adam --John
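The meet-in-the-middle construction described above (precompute a table of points with known discrete logs, then grind seeds until a seed-derived point lands in the table) can be demonstrated at toy scale. Everything here is made up for illustration: a tiny prime-order group with plain modular arithmetic standing in for a real elliptic curve, 2^9 table entries standing in for 2^{112}.

```python
# Toy model of the amortized-backdoor idea: break it once at full cost,
# then the known discrete log works forever. All parameters are
# illustrative stand-ins, not real curve arithmetic.
import hashlib

ORDER = 104729            # toy group order (prime); stands in for ~2^224
G = 5                     # toy "generator": a point is just (scalar * G) mod ORDER
TABLE_BITS = 9            # precompute 2^9 entries; stands in for 2^112

def point(scalar):
    return (scalar * G) % ORDER

# Phase 1: a table of points whose discrete logs we know.
z = 12345
table = {point(j * z): (j * z) % ORDER for j in range(1, 2 ** TABLE_BITS)}

def seed_to_point(seed):
    """Stand-in for deriving a 'random' public point from a PRNG seed."""
    h = int.from_bytes(hashlib.sha256(seed.to_bytes(8, "big")).digest(), "big")
    return h % ORDER

# Phase 2: grind seeds until a seed-derived point hits the table.
def grind(limit=10 ** 6):
    for seed in range(1, limit):
        q = seed_to_point(seed)
        if q in table:
            return seed, q, table[q]   # we secretly know log_G(q) = table[q]
    return None
```

With 2^9 table entries in a group of ~2^17 elements, a hit takes a few hundred seed trials on average, mirroring the 2^{112} table / 2^{113} grinding arithmetic in the email.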
Re: [Cryptography] AES-256- More NIST-y? paranoia
On Oct 1, 2013, at 5:58 PM, Peter Fairbrother wrote: > AES, the latest-and-greatest block cipher, comes in two main forms - AES-128 > and AES-256. > > AES-256 is supposed to have a brute force work factor of 2^256 - but we find > that in fact it actually has a very similar work factor to that of AES-128, > due to bad subkey scheduling. > > Thing is, that bad subkey scheduling was introduced by NIST ... after > Rijndael, which won the open block cipher competition with what seems to be > all-the-way good scheduling, was transformed into AES by NIST. What on Earth are you talking about? AES' key schedule wasn't designed by NIST. The only change NIST made to Rijndael was not including some of the alternative block sizes. You can go look up the old Rijndael specs online if you want to verify this. > -- Peter Fairbrother --John
Re: [Cryptography] NIST about to weaken SHA3?
On Oct 1, 2013, at 4:48 AM, ianG wrote: ... > This could be the uninformed opinion over unexpected changes. It could also > be the truth. How then to differentiate? > > Do we need to adjust the competition process for a "tweak" phase? > > Let's whiteboard. Once The One is chosen, have a single round + conference > where each of the final contestants propose their optimised version. They > then vote on the choice. I like the general idea here, but I suspect a vote at the end of a conference isn't going to yield great results. I'd hate to see something the designers opposed get adopted because they were outvoted by (say) a larger team. > (OK, we can imagine many ways to do this ... point being that if NIST are > going to tweak the SHA3 then we need to create a way for them to do this, and > have that tweaking be under the control of the submitters, not NIST itself. > In order to maintain the faith of the result.) The Keccak designers proposed reducing the capacity. You can find public statements about this online, including in the slides on their website. Also, the capacity is a parameter defined in the standard to allow an easy to understand performance/security tradeoff. Setting c=256 gives an across the board security level of 128 bits, if you believe the underlying Keccak permutation is good. The actual technical question is whether an across the board 128 bit security level is sufficient for a hash function with a 256 bit output. This weakens the proposed SHA3-256 relative to SHA256 in preimage resistance, where SHA256 is expected to provide 256 bits of preimage resistance. If you think that 256 bit hash functions (which are normally used to achieve a 128 bit security level) should guarantee 256 bits of preimage resistance, then you should oppose the plan to reduce the capacity to 256 bits. 
If you think a 256 bit hash function should only promise 128 bits of security, except in specific applications like keyed hashes where it has been analyzed specifically and shown to get more, then you should (at least on technical grounds) like the proposal to reduce the capacity to 256 bits for a 256-bit hash output. --John
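The security levels being argued about above follow mechanically from the generic sponge bounds: collisions cost min(2^(n/2), 2^(c/2)) and preimages roughly min(2^n, 2^(c/2)) for output length n and capacity c. A small sketch of that arithmetic (illustrative only):

```python
# Generic sponge security levels (in bits) as a function of output
# length n and capacity c, per the discussion above. These are the
# generic bounds, assuming the underlying permutation is good.

def sponge_security(n, c):
    return {
        "collision": min(n // 2, c // 2),
        "preimage": min(n, c // 2),
    }

if __name__ == "__main__":
    print(sponge_security(256, 256))  # the proposed SHA3-256: flat 128 bits
    print(sponge_security(256, 512))  # c=512: 128-bit collision, 256-bit preimage
```

This is exactly the tradeoff in the email: c=256 gives a flat 128-bit level across the board, while c=512 preserves the 256-bit preimage resistance people expect from SHA256.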
[Cryptography] Sha3
If you want to understand what's going on wrt SHA3, you might want to look at the NIST website, where we have all the slide presentations we have been giving over the last six months detailing our plans. There is a lively discussion going on at the hash forum on the topic. This doesn't make as good a story as the new SHA3 being some hell spawn cooked up in a basement at Fort Meade, but it does have the advantage that it has some connection to reality. You might also want to look at what the Keccak designers said about what the capacities should be, to us (they put their slides up) and later to various crypto conferences. Or not. --John
Re: [Cryptography] RSA equivalent key length/strength
Having read the mail you linked to, it doesn't say the curves weren't generated according to the claimed procedure. Instead, it repeats Dan Bernstein's comment that the seed looks random, and that this would have allowed NSA to generate lots of curves till they found a bad one. It looks to me like there is no new information here, and no evidence of wrongdoing that I can see. If there is a weak curve class of greater than about 2^{80} that NSA knew about 15 years ago and were sure nobody was ever going to find that weak curve class and exploit it to break classified communications protected by it, then they could have generated 2^{80} or so seeds to hit that weak curve class. What am I missing? Do you have evidence that the NIST curves are cooked? Because the message I saw didn't provide anything like that. --John
Re: [Cryptography] RSA equivalent key length/strength
Maybe you should check your code first? A couple of NIST people verified that the curves were generated by the described process when the questions about the curves first came out. Don't trust us, obviously--that's the whole point of the procedure. But check your code, because the process worked right when we checked it. --John
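For anyone who wants to see the shape of the check being discussed: the verifiably-random procedure derives an integer r from the published seed with a hash and confirms that the curve coefficient b satisfies r·b² ≡ a³ (mod p) with a = -3. The real ANSI X9.62 / FIPS 186 procedure uses SHA-1 and assembles the hash output into r differently; everything below (hash choice, field size, function names) is a simplified toy stand-in, not the actual NIST verification code.

```python
# Toy sketch of the "verifiably random" curve check. The real procedure
# differs in the hash and the bit-string assembly; this only shows the
# r * b^2 == a^3 (mod p) shape of the final test.
import hashlib

def seed_to_r(seed: bytes, p: int) -> int:
    # Stand-in for the X9.62 bit-string assembly step.
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % p

def looks_verifiably_random(seed: bytes, p: int, b: int, a: int = -3) -> bool:
    r = seed_to_r(seed, p)
    return (r * b * b) % p == pow(a, 3, p)
```

The point of the exercise is exactly what the email says: anyone can rerun the check themselves rather than trusting NIST's word for it.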
Re: [Cryptography] check-summed keys in secret ciphers?
GOST was specified with S-boxes that could be different for different applications, and you could choose S-boxes to make GOST quite weak. So that's one example. --John
Re: [Cryptography] Gilmore response to NSA mathematician's "make rules for NSA" appeal
On Sep 25, 2013, at 2:52 AM, james hughes wrote: > Many, if not all, service providers can provide the government valuable > information regarding their customers. This is not limited to internet > service providers. It includes banks, health care providers, insurance > companies, airline companies, hotels, local coffee shops, book sellers, etc. > where providing a service results in personal information being exchanged. > The US has no corner on the ability to get information from almost any type > of service provider. This is the system that the entire world uses, and > should not be our focus. There are many places where there is no way to provide the service without having access to the data, and probably storing it. For those places, we are stuck with legal and professional and business safeguards. Your doctor should take notes when you see him, and can be compelled to give those notes up if he can access them to (for example) respond to a phone call asking to refill your medications. There are rather complicated mechanisms you can imagine to protect your privacy in this situation, but it's hard to imagine them working well in practice. For that situation, what we want is that the access to the information is transparent--the doctor can be compelled to give out information about his patients, but not without his knowledge, and ideally not without your knowledge. But there are a lot of services which do not require that the providers have or collect information about you. Cloud storage and email services don't need to have access to the plaintext data you are storing or sending with them. If they have that information, they are subject to being forced to share it with a government, or deciding to share it with someone for their own business reasons, or having a dishonest employee steal it. 
If they don't have that information because their service is designed so they don't have it, then they can't be forced to share it--whether with the FBI or the Bahraini government or with their biggest advertiser. No change of management or policy or law can make them change it. Right now, there is a lot of interest in finding ways to avoid NSA surveillance. In particular, Germans and Brazilians and Koreans would presumably rather not have their data made freely available to the US government under what appear to be no restrictions at all. If US companies would like to keep the business of Germans and Brazilians and Koreans, they probably need to work out a way to convincingly show that they will safeguard that data even from the US government. --John
Re: [Cryptography] The hypothetical random number generator backdoor
On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker wrote: > So we think there is 'some kind' of backdoor in a random number generator. > One question is how the EC math might make that possible. Another is how > might the door be opened. We don't know that there is a backdoor in dual ec, but we know that there could be one of a particular form, and it works like you describe--if you know the backdoor, then given output[i], you can predict all future outputs. The other kinds of weaknesses I would expect to see in the wild would be lack of entropy, which means that you can try some reasonable number of guesses about the value of the output and expect to have a decent probability of being right once. > Either way, the question is how to stop this side channel attack. One simple > way would be to encrypt the nonces from the RNG under a secret key generated > in some other fashion. > > nonce = E (R, k) This would work if you had a secure key I couldn't guess for k. If the entropy is really low, though, I would still see duplicate outputs from time to time. If the RNG has short cycles, this would also show up. > Or hashing the RNG output and XORing with it > > nonce = r XOR H (r) This works against the dual ec type backdoor, but does nothing for low entropy or short cycles. Also, it's kind-of sensitive to how H() works. If H(x) = P(x) xor x then this will be invertible. Still, the basic idea is nice. It would be interesting to see if you could prove something about its security. How about: nonce = r[1] xor H(r[2])? The other thing that you can do is to XOR multiple RNGs together. As long as at least one is unbroken and all other RNGs are independent of that one unbroken one, the resulting random outputs should be secure. --John
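The two defenses at the end of the email above can be sketched directly, assuming SHA-256 as H and 32-byte RNG outputs. This is a sketch of the constructions being discussed, not a vetted design.

```python
# Sketch of two RNG-backdoor countermeasures from the thread:
#   nonce = r1 XOR H(r2), and XORing several independent RNGs.
import hashlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def masked_nonce(r1: bytes, r2: bytes) -> bytes:
    """nonce = r1 XOR H(r2): a dual-EC-style backdoor in the RNG that
    produced r1 learns nothing useful without also knowing r2."""
    return xor_bytes(r1, hashlib.sha256(r2).digest())

def combine_rngs(*outputs: bytes) -> bytes:
    """XOR of several 32-byte RNG outputs: secure if any one input is
    secure and independent of the others."""
    acc = bytes(32)
    for r in outputs:
        acc = xor_bytes(acc, r)
    return acc
```

Note the caveats from the email still apply: neither construction repairs a low-entropy or short-cycle source, and the XOR combiner's guarantee depends on the independence assumption.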
Re: [Cryptography] Gilmore response to NSA mathematician's "make rules for NSA" appeal
On Sep 18, 2013, at 3:27 PM, Kent Borg wrote: > You foreigners actually have a really big vote here. All those US internet > companies want your business, and as you get no protections, in the current > scheme, not even lip-service, you should look for alternatives. As you do, > this puts pressure on the US internet companies, and they have the economic > clout to put pressure on Feinstein and Pelosi and all the others. This does not go far enough. The US government is not the only one inclined to steal information which it can reach, either because the information goes over wires the government can listen in on, or because the companies handling the data can be compelled or convinced to hand it over. Right now, we're seeing leaks that confirm the serious efforts of one government to do this stuff, but it is absolutely silly to think the US is the only one doing it. The right way to address this is to eliminate the need to trust almost anyone with your data. If Google[1] has all your cleartext documents and emails, they can be compelled to turn them over, or they can decide to look at them for business reasons, or they can be infiltrated by employees or contractors who look at those emails and documents. You are trusting a lot of people, and trusting a company to possibly behave against its economic interests and legal obligations, to safeguard your privacy. If they have encrypted data only, you don't have to trust them. It needs to be in their business interest to convince you that they *can't* betray you in most ways. > -kb --John [1] I'm not picking on Google in particular--any US company may be compelled to turn over data they have. I imagine the same is true of any German or Korean or Brazilian company, but I don't know the laws in those places.
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Sep 19, 2013, at 5:21 PM, Phillip Hallam-Baker wrote: > Criminals circumvent the WebPKI rather than trying to defeat it. If they did > start breaking the WebPKI then we can change it and do something different. If criminals circumvent the PKI to steal credit card numbers, this shows up as fraud and is noticed without any need for a Snowden. Eavesdropping doesn't show up in such an obvious way. > But financial transactions are easier than protecting the privacy of > political speech because it is only money that is at stake. The criminals are > not interested in spending $X to steal $0.5X. We can do other stuff to raise > the cost of attack if it turns out we need to do that. Also, criminals find it harder to spend a few million up front before they get the first payoff. Nor can they appeal to patriotism or compel compliance via the law. > If we want this to be a global infrastructure we have 2.4 billion users to > support. If we spend $0.01 per user on support, that is $24 million. It is > likely to be a lot more than that per user. It has to pay for itself ultimately, at least as well as email does. --John
Re: [Cryptography] The paranoid approach to crypto-plumbing
Arggh! Of course, this superencryption wouldn't help against the CBC padding attacks, because the attacker would learn plaintext without bothering with the other layers of encryption. The only way to solve that is to preprocess the plaintext in some way that takes the attacker's power to induce a timing difference or error message away. --John
Re: [Cryptography] The paranoid approach to crypto-plumbing
For hash functions, MACs, and signature schemes, simply concatenating hashes/MACs/signatures gives you at least the security of the stronger one. Joux multicollisions simply tell us that concatenating two or more hashes of the same size doesn't improve their resistance to brute force collision search much. The only thing you have to be sure of there is that the MAC and signature functions aren't allowed access to each others' secret keys or internal random numbers. Otherwise, MAC#1 can always take the value of MAC#2's key. This is just message, signature 1, signature 2 where the signatures are over the message only. For encryption algorithms, superencryption works fine. You can first encrypt with AES-CBC, then encrypt with Twofish-CFB, then with CAST5 in CFB mode. Again, assuming you are not letting the algorithms know each others' internal state or keys, if any of these algorithms are resistant to chosen plaintext attacks, then the combination will be. This doesn't guarantee that the combination will be any stronger than the strongest of these, but it will be no weaker. (Biham and later Wagner had some clever attacks against some chaining modes using single-DES that showed that you wouldn't always get anything stronger than one of the ciphers, but if any of these layers is strong, then the whole encryption is strong.) An alternative approach is to construct a single super-block-cipher, say AES*Twofish*SERPENT, and use it in a standard chaining mode. However, then you are still vulnerable to problems with your chaining mode--the CBC reaction attacks could still defeat a system that used AES*Twofish*SERPENT in CBC mode, but not AES-CBC followed by Twofish-CFB followed by SERPENT-CTR. For key-encryption or transport, I think it's a little more complicated. 
If I need six symmetric keys and want to use three public key methods (say ECDH, NTRU, RSA) to transport the key, I've got to figure out a way to get the benefit from all these key exchange mechanisms to all six symmetric keys, in a way that I'm sure will not leak any information about any of them. Normally we would use a KDF for this, but we don't want to trust any one crypto algorithm not to screw us over. I think we can get this if we assume that we can have multiple KDFs that have secrets from one another. That is, I compute KDF1( key1, combined key exchange input) XOR KDF2( key2, combined key exchange input) The reason the two KDFs need keys that are secret from each other is because otherwise, KDF1 could just duplicate KDF2 and we'd get an all-zero set of keys. If KDF2 is strong, then KDF1 can't generate an output string with any relationship to KDF2's string when it doesn't know all the input KDF2 is getting. I agree with Perry that this is probably padlocking a screen door. On the other hand, if we want to do it, we want to make sure we guard against as many bad things as possible. In particular, it would be easy to do this in such a way that we missed chaining mode/reaction type attacks. --John
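The two-KDF XOR construction above can be sketched concretely, with HMAC-SHA256 and HMAC-SHA3-256 standing in for two independent KDFs (the email doesn't name specific KDFs; these are illustrative choices). Each KDF gets its own key, secret from the other, which is what prevents one from cancelling the other out.

```python
# Sketch of KDF1(key1, input) XOR KDF2(key2, input), with single-block
# HMAC-based KDFs standing in for real multi-output KDFs.
import hashlib
import hmac

def kdf1(key: bytes, ikm: bytes, outlen: int = 32) -> bytes:
    return hmac.new(key, ikm, hashlib.sha256).digest()[:outlen]

def kdf2(key: bytes, ikm: bytes, outlen: int = 32) -> bytes:
    return hmac.new(key, ikm, hashlib.sha3_256).digest()[:outlen]

def combined_kdf(key1: bytes, key2: bytes, combined_input: bytes,
                 outlen: int = 32) -> bytes:
    """XOR of two independently keyed KDFs over the combined key
    exchange input: as strong as the stronger KDF, provided each key
    stays secret from the other KDF."""
    a = kdf1(key1, combined_input, outlen)
    b = kdf2(key2, combined_input, outlen)
    return bytes(x ^ y for x, y in zip(a, b))
```

A real version would need a proper expand step to produce all six symmetric keys, but the XOR-of-independent-KDFs structure is the point being illustrated.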
Re: [Cryptography] The paranoid approach to crypto-plumbing
On Sep 17, 2013, at 11:41 AM, "Perry E. Metzger" wrote: > > I confess I'm not sure what the current state of research is on MAC > then Encrypt vs. Encrypt then MAC -- you may want to check on that. Encrypt then MAC has a couple of big advantages centering around the idea that you don't have to worry about reaction attacks, where I send you a possibly malformed ciphertext and your response (error message, acceptance, or even time differences in when you send an error message) tells me something about your secret internal state. > Perry --John
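The encrypt-then-MAC ordering above is easy to show structurally. In this sketch a toy SHA-256 counter keystream stands in for a real cipher (it reuses the keystream for a fixed key and omits IVs, so don't use it as-is); the point is that the MAC is checked, in constant time, before anything touches the ciphertext, so there is no reaction oracle.

```python
# Structural sketch of encrypt-then-MAC. The "cipher" is a toy
# hash-counter keystream, purely illustrative.
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, len(plaintext))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes):
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None   # reject before decrypting: one uniform failure mode
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, len(ct))))
```

Any tampering fails the tag check before decryption or padding logic runs, which is exactly why reaction attacks don't get a foothold.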
Re: [Cryptography] prism proof email, namespaces, and anonymity
On Sep 15, 2013, at 7:47 AM, Adam Back wrote: > Another design permutation I was thinking could be rather interesting is > unobservable mail. That is to say the participants know who they are > talking to (signed, non-pseudonymous) but passive observers do not. It > seems to me that in that circumstance you have more design leverage to > increase the security margin using PIR like tricks than you can with > pseudonymous/anonymous - if the "contract" is that the system remains very > secure so long as both parties to a communication channel want it to remain > that way. This seems like the main way most people would want PPE to work--like email they have now, but much more secure and resistant to abuse. In the overwhelming majority of cases, I know and want to know the people I'm talking with. I just don't want the contents of those conversations or the names of people I'm talking with to be revealed to eavesdroppers. And if I get an email from one of my regular correspondents, I'd like to know it came from him, rather than being spoofed from someone else. For most people, I'm pretty sure the security problems with email are centered around the problem of getting unwanted communication from people you don't want to hear from, some of which may manage to install malware on your computer, others of which want to waste your time with scam ads, etc. A PPE scheme that solves that problem can get a lot more users than one that doesn't, and may even eventually take over from the current kind of email. > Adam --John
Re: [Cryptography] real random numbers
On Sep 15, 2013, at 6:49 AM, Kent Borg wrote: > John Kelsey wrote: >> I think the big problem with (b) is in quantifying the entropy you get. > > Maybe don't. > > When Bruce Schneier last put his hand to designing an RNG he concluded that > estimating entropy is doomed. I don't think he would object to some coarse > order-of-magnitude confirmation that there is entropy coming in, but I think > trying to meter entropy-in against entropy-out will either leave you starved > or fooled. If you are using a strong cryptographic PRNG, you only really need to know the amount of entropy you've collected in two situations: a. When you want to instantiate the PRNG and start generating keys from it. b. When you want to reseed the PRNG and know you will get some benefit from doing so. But those are pretty critical things, especially (a). You need to know whether it is yet safe to generate your high-value keypair. For that, you don't need super precise entropy estimates, but you do need at least a good first cut entropy estimate--does this input string have 20 bits of entropy or 120 bits? My view is that all the song and dance in /dev/random with keeping track of the entropy in the pool as it flows in and out is not all that useful, but there's just no way around needing to estimate entropy to know if your PRNG is in a secure state or not. --John > -kb
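A "good first cut" estimate of the kind described above often starts from min-entropy: for samples assumed i.i.d., min-entropy per sample is -log2 of the most common value's frequency. Real sources are rarely i.i.d., so treat this as a sanity check rather than a guarantee; the function name and approach here are illustrative, not any particular standard's estimator.

```python
# First-cut min-entropy estimate: -log2(frequency of the most common
# sample). An upper bound on per-sample entropy for an i.i.d. source.
import math
from collections import Counter

def min_entropy_per_sample(samples) -> float:
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)
```

For example, a fair coin's samples estimate to 1 bit each, while a constant source estimates to 0, which is exactly the 20-bits-vs-120-bits distinction the email says you need to make before generating a high-value keypair.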
Re: [Cryptography] real random numbers
Your first two categories are talking about the distribution of entropy--we assume some unpredictability exists, and we want to quantify it in terms of bits of entropy per bit of output. That's a useful distinction to make, and as you said, if you can get even a little entropy per bit and know how much you're getting, you can get something very close to ideal random bits out. Your second two categories are talking about different kinds of sources--completely deterministic, or things that can have randomness but don't always. That leaves out sources that always have a particular amount of entropy (or at least are always expected to!). I'd say even the "squish" category can be useful in two ways: a. If you have sensible mechanisms for collecting entropy, they can't hurt and sometimes help. For example, if you sample an external clock, most of the time, the answer may be deterministic, but once in a while, you may get some actual entropy, in the sense that the clock drift is sufficient that the sampled value could have one of two values, and an attacker can't know which. b. If you sample enough squishes, you may accumulate a lot of entropy. Some ring oscillator designs are built like this, hoping to occasionally sample the transition in value on one of the oscillators. The idea is that the rest of the behavior of the oscillators might possibly be predicted by an attacker, but what value gets read when you sample a value that's transitioning between a 0 and a 1 is really random, changed by thermal noise. I think the big problem with (b) is in quantifying the entropy you get. I also think that (b) describes a lot of what commonly gets collected by the OS and put into the entropy pool. --John
Re: [Cryptography] Security is a total system problem (was Re: Perfection versus Forward Secrecy)
On Sep 13, 2013, at 3:23 PM, "Perry E. Metzger" wrote: > The problem these days is not that something like AES is not good > enough for our purposes. The problem is that we too often build a > reinforced steel door in a paper wall. Also, if AES being insufficiently strong is our problem, we have a whole bunch of solutions easily at hand. Superencrypt successively with, say, Serpent, Twofish, CAST, Salsa, and Keccak in duplex mode. This has a performance cost, but it is orders of magnitude less overhead than switching to manual key distribution of one-time pads. It's hard for me to think of a real world threat that is addressed better by a one-time pad than by something cheaper and less likely to get broken via human error or attacks on the key management mechanism. > Perry E. Metzger pe...@piermont.com
[Cryptography] one time pads
Switching from AES to one-time pads to solve your practical cryptanalysis problems is silly. It replaces a tractable algorithm selection problem with a godawful key management problem, when key management is almost certainly the practical weakness in any broken system designed by non-idiots. --John
[Cryptography] prism proof email, namespaces, and anonymity
Everyone, The more I think about it, the more important it seems that any anonymous email-like communications system *not* include people who don't want to be part of it, and have lots of defenses to prevent its anonymous communications from becoming a nightmare for its participants. If the goal is to make PRISM stop working and make the email part of the internet go dark for spies (which definitely includes a lot more than just US spies!), then this system has to be something that lots of people will want to use. There should be multiple defenses against spam and phishing and other nasty things being sent in this system, with enough designed-in flexibility to deal with changes in attacker behavior over time. If someone can send participants in the system endless spam or credible death threats, then few people are going to want to participate, and that diminishes the privacy of everyone remaining in the system, along with just making the system a blight in general. If nonparticipants start getting spam from the system, it will either be shunned or shut down, and at any rate won't have the kind of reputation that will move a lot of people onto the system. An ironclad anonymous email system with 10,000 users is a whole lot less privacy-preserving than one with 10,000,000 users. As revelations of more and more eavesdropping come out, we might actually see millions of users want to have something really secure and anonymous, but not if it's widely seen as a firehose o' spam. A lot of the tools we use on the net everyday suffer from having been designed without thinking very far ahead into how they might be exploited or misused--hence spam, malware in PDF files, browser hijacking sorts of attacks, etc. My thought is that we should be thinking of multiple independent defenses against spamming and malware and all the rest, because parasites adapt to their environment. 
We can't count on "and then you go to jail" as a final step in any protocol, and we can't count on having some friendly utility read millions of peoples' mail to filter the spam if we want this to be secure. So what can we count on to stop spam and malware and other nastiness? Some thoughts off the top of my head. Note that while I think all these can be done with crypto somehow, I am not thinking of how to do them yet, except in very general terms. a. You can't freely send messages to me unless you're on my whitelist. b. This means an additional step of sending me a request to be added to your whitelist. This needs to be costly in something the sender cares about--money, processing power, reputation, solving a captcha, rate-limits to these requests, whatever. (What if the system somehow limited you to only, say, five outstanding requests at a time?). c. Make account creation costly somehow (processing, money, solving a captcha, whatever). Or maybe make creating a receive-only account cheap but make it costly to have an account that can request to communicate with strangers. d. Make sending a message in general cost something. Let receiver addresses indicate what proof of payment of the desired cost they require to accept emails. e. Enable some kind of reputation tracking for senders? I'm not sure if this would work or be a good idea, but it's worth thinking about. f. All this needs to be made flexible, so that as attackers evolve, so can defenses. Ideally, my ppe (prism proof email) address would carry an indication of what proofs your request to communicate needed to carry in order for me to consider it. g. The format of messages needs to be restricted to block malware, both the kind that wants to take over your machine and the kind that wants to help the attacker track you down. Plain text email only? Some richer format to allow foreign language support? h. Attachments should become links to files in an anonymizing cloud storage system. 
Among other things, this will make it easier to limit the size of the emails in the system, which is important for ensuring anonymity without breaking stuff. What else? I see this as the defining thing that can kill an anonymous encrypted communications system--it can become a swamp of spam and malware and nutcases stalking people, and then nobody sensible will want to come within a hundred meters of it. Alternatively, if users are *more* in control of who contacts them in the prism-proof scheme than with the current kind of email, we can get a lot more people joining. Comments? --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
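The whitelist-request cost in item (b) of the list above could be done with a hashcash-style proof of work: the sender burns CPU finding a token, and the recipient verifies it with a single hash. A minimal sketch -- the function names, token format, and difficulty parameter here are all hypothetical, not part of any proposed ppe design:

```python
import hashlib

def make_request_token(sender_id: str, recipient_addr: str,
                       difficulty_bits: int = 20) -> str:
    """Sender's side: search for a nonce such that
    SHA-256(sender|recipient|nonce) falls below a difficulty target.
    Cost is paid in the sender's processing power."""
    prefix = f"{sender_id}|{recipient_addr}|".encode()
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(prefix + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return f"{sender_id}|{recipient_addr}|{nonce}"
        nonce += 1

def verify_request_token(token: str, difficulty_bits: int = 20) -> bool:
    """Recipient's side: verification is one hash, essentially free."""
    digest = hashlib.sha256(token.encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Binding the sender and recipient into the hashed string means a token can't be reused to spam a different address; the difficulty parameter is the knob that lets defenses evolve as attackers do.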
Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)
On Sep 10, 2013, at 3:56 PM, Bill Stewart wrote:

>> One point which has been mentioned, but perhaps not emphasised enough - if
>> NSA have a secret backdoor into the main NIST ECC curves, then even if the
>> fact of the backdoor was exposed - the method is pretty well known - without
>> the secret constants no-one _else_ could break ECC.
>> So NSA could advocate the widespread use of ECC while still fulfilling their
>> mission of protecting US gubbmint communications from enemies foreign and
>> domestic. Just not from themselves.

I think this is completely wrong. First, there aren't any secret constants to those curves, are there? The complaint Dan Bernstein has about the NIST curves is that, while they (some of them) were generated using a verifiably random method, the seeds for that method look like arbitrary random values.

The idea here, if I understand it correctly, is that if the guys doing the generation knew of some property that made some of the choices of curves weak, they could have tried a huge number of seeds till they happened upon one that led to a weak curve. If they could afford to try N seeds and do whatever examination of the curve was needed to check it for weakness, then the weak property they were looking for couldn't have had a probability much lower than about 1/N. I think the curves were generated in 1999 (that's the date on the document I could find), so we are probably talking about less than 2^{80} operations total.

Unlike the case with the Dual EC generator, where a backdoor could have been installed with no risk that anyone else could discover it, in this case they would have to generate curves until one fell in some weak curve class that they knew about, and they would have to hope nobody else ever discovered that weak curve class, lest all the federal users of ECC get broken at once.

The situation you are describing works for Dual EC DRBG, but not for the NIST curves, as best I understand things.
--John
Re: [Cryptography] Fw: how could ECC params be subverted & other evidence
> Date: Tue, 10 Sep 2013 13:42:57 +0200
> From: Adam Back
> To: "Perry E. Metzger"
> Cc: Alexander Klimov , Cryptography List
> , Adam Back
> Subject: Re: [Cryptography] how could ECC params be subverted & other
> ...
> The important potential backdoor is NIST 186-3 curves in Peter
> Fairbrother's reply, and I think that would be a good place to focus
> analysis.

I've talked a bit with someone I know with some background in the math here, who didn't see an obvious way for these curves to have been cooked. I don't have the number theory to have an opinion, but if someone can point me to an explanation of how they might have been cooked, I would very much appreciate it. I imagine any such potential backdoor will be very easy to get my management to take seriously right now--three months ago, we would not have been at all worried. I foresee a rather large change in institutional culture at NIST.

> (DRBG is largely irrelevant due suspected compromised state since
> 2007, and very limited use. It is also a different type of issue -
> not backdoored curves, arguably backdoored parameters).

It sure looks now like Dual EC DRBG was backdoored, from the leaks and news stories, though I don't know of any hard proof. But we (NIST) have put SP 800-90 back up for public comment and have issued a bulletin telling people to stop using it until we figure out what to do about it. (The alternatives are to remove it or to fix it.)

This DRBG was in the X9.82 document when I joined NIST and came onto the project in 2003. If you go to our website (csrc.nist.gov) you can see old slides and documents, and you can check the wayback machine to verify we haven't changed them. (We also had a public workshop, and participants may still have old copies of the documents.) This shows the development of the standard over time. I think the P and Q were the same in those old documents. If so, this tells you how far back this effort goes--at least to 2003, perhaps further back.
I believe the X9.82 effort had been going on several years before that, though not making much progress--I'm not sure how long ago Dual EC DRBG was put in. I paid very little attention to the number theoretic constructions (two originally, one in the final version) because I didn't and don't think I know enough number theory to evaluate them.

When we heard about the issue of selection of points (in an X9 meeting, I think in 2006 or early 2007), we discussed the issue, and it didn't seem like a serious threat. I sure didn't think the people I was working with on the document were trying to slip weak stuff in. Instead, it looked like they had generated some parameters randomly and hadn't worried about proving where the parameters came from. This seemed like a really weird place to put a backdoor, because it was insanely slow, and it seemed unlikely to get any significant use. And I, at least, had internalized the idea that we weren't going to get intentional bad advice or sabotage from another part of the federal government. (Accidental screw-ups, sure, but not intentionally engineered vulnerabilities.)

At the time, the solution we came to--allow the use of the default points (which some people had allegedly baked into implementations in anticipation of the standard) and also allow generation of your own points--seemed adequate. But that was assuming a different world than we live in, apparently.

If NSA had a program of intentionally inserting weaknesses in crypto standards, and inserted at least one into a NIST standard, then any potential backdoor parameters we have are scary as hell. No way can we leave them in a standard people are expecting to use. Having such a program burns a hell of a lot of bridges, too. I still don't *know* if this is true--convincing newspaper stories often get stuff wrong, and convincing leaked documents may not be authentic. But it sure looks like the way to bet, now.
--John
Re: [Cryptography] Random number generation influenced, HW RNG
On Sep 10, 2013, at 2:30 AM, ianG wrote: > The question of whether one could simulate a raw physical source is > tantalising. I see diverse opinions as to whether it is plausible, and > thinking about it, I'm on the fence. I don't think simulating a physical source is itself a big challenge. People simulate complicated probabilistic behavior all the time. The challenge is going to be sticking it into the chip in a way that doesn't show up when the chip is taken apart in a lab. How can we design the whole system so that some compromised or flawed pieces don't wreck us? I don't know how to ensure my chip's hardware RNG isn't hacked, but I have some hope of working out a design that will be robust even if it is hacked. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Random number generation influenced, HW RNG
On Sep 9, 2013, at 6:32 PM, "Perry E. Metzger" wrote:

> First, David, thank you for participating in this discussion.
>
> To orient people, we're talking about whether Intel's on-chip
> hardware RNGs should allow programmers access to the raw HRNG output,
> both for validation purposes to make sure the whole system is working
> correctly, and if they would prefer to do their own whitening and
> stretching of the output.

Giving raw access to the noise source outputs lets you test the source from the outside, and there is a lot to be said for it. But I am not sure how much it helps against tampered chips. If I can tamper with the noise source in hardware to make it predictable, it seems like I should also be able to make it simulate the expected behavior. I expect this is more complicated than, say, breaking the noise source and the internal testing mechanisms so that the RNG outputs a predictable output stream, but I am not sure it is all that much more complicated. How expensive is a lightweight stream cipher keyed off the time and the CPU serial number or some such thing to generate pseudorandom bits? How much more to go from that to a simulation of the expected behavior, perhaps based on the same circuitry used in the unhacked version to test the noise source outputs?

--John
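The tampering described above -- replacing the noise source with a lightweight keyed generator -- is cheap to sketch in software. In this toy (all names hypothetical), a SHA-256 counter-mode keystream stands in for the lightweight stream cipher; the output looks statistically unremarkable, but anyone who knows the serial number and timestamp can regenerate it exactly:

```python
import hashlib

def backdoored_noise_source(serial: bytes, hour: int, nbytes: int) -> bytes:
    """Hypothetical tampered 'noise source': SHA-256 in counter mode,
    keyed off the CPU serial number and a coarse timestamp. The bits
    pass statistical tests, but to whoever planted the backdoor they
    are a fully known stream."""
    key = hashlib.sha256(b"implant|" + serial + hour.to_bytes(8, "big")).digest()
    out = b""
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]
```

Raw-output access doesn't detect this: external tests see a well-behaved source, while an attacker who knows (serial, hour) replays the whole stream.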
Re: [Cryptography] Techniques for malevolent crypto hardware
On Sep 8, 2013, at 3:55 PM, Thor Lancelot Simon wrote: ... > I also wonder -- again, not entirely my own idea, my whiteboard partner > can speak up for himself if he wants to -- about whether we're going > to make ourselves better or worse off by rushing to the "safety" of > PFS ciphersuites, which, with their reliance on DH, in the absence of > good RNGs may make it *easier* for the adversary to recover our eventual > symmetric-cipher keys, rather than harder! I don't think you can do anything useful in crypto without some good source of random bits. If there is a private key somewhere (say, used for signing, or the public DH key used alongside the ephemeral one), you can combine the hash of that private key into your PRNG state. The result is that if your entropy source is bad, you get security to someone who doesn't compromise your private key in the future, and if your entropy source is good, you get security even against someone who compromises your private key in the future (that is, you get perfect forward secrecy). > Thor --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
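The trick of combining a hash of the long-term private key into the PRNG state, as described above, is a one-line mixing step. A minimal sketch -- the function name and domain-separation labels are illustrative, not from any standard:

```python
import hashlib

def mix_private_key_into_seed(entropy: bytes, private_key: bytes) -> bytes:
    """Hypothetical mixing step: fold a hash of a long-term secret into
    the PRNG seed.
    - If the entropy is good, an attacker who later steals the private
      key still can't reconstruct past session keys (forward secrecy).
    - If the entropy is bad, outputs are still unpredictable to anyone
      who never learns the private key."""
    return hashlib.sha256(
        b"prng-seed-v0|" + hashlib.sha256(private_key).digest() + b"|" + entropy
    ).digest()
```

Hashing the private key (rather than using it directly) keeps the PRNG state from leaking the key itself if the state is ever exposed.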
Re: [Cryptography] Techniques for malevolent crypto hardware
In principle, the malevolent crypto accelerator could flip into weak mode (however that happens) only upon receiving a message for decryption with some specific value or property. That would defeat any testing other than constant observation. This is more or less the attack that keeps parallel testing of electronic voting machines from being a good answer to the security concerns about them.

--John
Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")
As an aside:

a. Things that just barely work, like standards groups, must in general be easier to sabotage in subtle ways than things that click along with great efficiency. But they are also things that often fail with no help at all from anyone, so it's hard to tell.

b. There really are tradeoffs between security and almost everything else. If you start suspecting conspiracy every time someone is reluctant to make that tradeoff in the direction you prefer, you are going to spend your career suspecting everyone everywhere of being anti-security. This is likely to be about as productive as going around suspecting everyone of being a secret communist or racist or something.

--John
Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"
Let's suppose I design a block cipher such that, with a randomly generated key and 10,000 known plaintexts, I can recover that key. For this to be useful in a world with relatively sophisticated cryptanalysts, I must have confidence that it is extremely hard to find my trapdoor, even when you can look closely at my cipher's design.

At this point, what I have is a trapdoor one-way function. You generate a random key K and then compute E(K,i) for i = 1 to 10,000. The output of the one-way function is the ciphertext. The input is K. If nobody can break the cipher, then this is a one-way function. If only I, who designed it, can break it, then it's a trapdoor one-way function.

At this point, I have a perfectly fine public key encryption system. To send me a message, choose a random K, use it to encrypt 1 through 10,000, and then send me the actual message encrypted after that in K. If nobody but me can break the system, then this cipher works as my public key.

The assumption that matters here is that you know enough cryptanalysis that it would be hard to hide a practical attack from you. If you don't know about differential cryptanalysis, I can do the master key cryptosystem, but only until you learn about it, at which point you will break my cipher. But if you can, say, hide the only good linear characteristics for some cipher in its S-boxes in a way that is genuinely intractable for anyone else to find, then you have a public key cryptosystem. You can publish the algorithm for hiding new linear characteristics in an S-box--this becomes the keypair generation algorithm. The private key is the linear characteristic that lets you break the cipher with (say) 10,000 known plaintexts; the public key is the cipher definition.

--John
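The construction f(K) = E(K,1) || E(K,2) || ... || E(K,10000) is easy to write down. In this sketch a SHA-256-based toy stands in for the hypothetical trapdoored block cipher E, just to fix the interface -- there is, of course, no actual trapdoor here:

```python
import hashlib

def E(key: bytes, i: int) -> bytes:
    """Stand-in for the hypothetically trapdoored 16-byte block cipher.
    In the scheme described above, the designer could recover K from
    enough (i, E(K,i)) pairs; SHA-256 here just fixes the interface."""
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:16]

def trapdoor_owf(key: bytes, n: int = 10_000) -> bytes:
    """f(K) = E(K,1) || E(K,2) || ... || E(K,n).
    The output is public; the input K is secret. Only someone who can
    break E with n known plaintexts can invert f."""
    return b"".join(E(key, i) for i in range(1, n + 1))
```

The known plaintexts 1..n are exactly what the designer's hypothetical n-known-plaintext attack needs, which is why inverting f is equivalent to breaking the cipher.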
Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG
There are basically two ways your RNG can be cooked: a. It generates predictable values. Any good cryptographic PRNG will do this if seeded by an attacker. Any crypto PRNG seeded with too little entropy can also do this. b. It leaks its internal state in its output in some encrypted way. Basically any cryptographic processing of the PRNG output is likely to clobber this. The only fix for (a) is to get enough entropy in your PRNG before generating outputs. I suspect Intel's RNG and most other hardware RNGs are extremely likely to be better than any other source of entropy you can get on your computer, but you don't have to trust them 100%. Instead, do whatever OS level collection you can, combine that with 256 bits from the Intel RNG, and throw in anything else likely to help--ethernet address, IP address, timestamp, anything you can get from the local network, etc. Hash that all and feed it into a strong cryptographic PRNG--something like CTR-DRBG or HMAC-DRBG from SP 800-90. If you do that, you will have guarded against both (a) and (b). --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
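The combine-everything-and-hash recipe above can be sketched as follows. The pooling function and the stripped-down hash-based generator are illustrative only -- a real implementation should use an actual SP 800-90 DRBG (CTR-DRBG or HMAC-DRBG), not this toy:

```python
import hashlib
import os
import time
import uuid

def collect_seed(hw_rng_bytes: bytes) -> bytes:
    """Pool the hardware RNG output with everything else at hand, then
    hash. A single bad source can't hurt; a single good one saves you."""
    pool = hashlib.sha256()
    pool.update(hw_rng_bytes)                        # e.g. 256 bits from the chip's RNG
    pool.update(os.urandom(32))                      # OS-level collection
    pool.update(uuid.getnode().to_bytes(6, "big"))   # ethernet address (salt, not entropy)
    pool.update(time.time_ns().to_bytes(8, "big"))   # timestamp
    return pool.digest()

class HashDRBG:
    """Minimal sketch in the spirit of a hash-based DRBG -- NOT the real
    SP 800-90 Hash_DRBG; the state update here is drastically simplified."""
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"init|" + seed).digest()

    def generate(self, nbytes: int) -> bytes:
        out = b""
        ctr = 0
        while len(out) < nbytes:
            out += hashlib.sha256(self.state + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        # One-way state update: past outputs stay safe if the new state leaks.
        self.state = hashlib.sha256(b"update|" + self.state).digest()
        return out[:nbytes]
```

Because the hardware RNG output passes through the hash and the DRBG, failure mode (b) -- state leaked through the output -- is clobbered, and failure mode (a) is mitigated by every extra source in the pool.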
Re: [Cryptography] Why prefer symmetric crypto over public key crypto?
On Sep 7, 2013, at 3:25 PM, "Christian Huitema" wrote:

> Another argument is “minimal dependency.” If you use public key, you depend
> on both the public key algorithm, to establish the key, and the symmetric key
> algorithm, to protect the session. If you just use symmetric key, you depend
> on only one algorithm.
>
> Of course, that means getting pair-wise shared secrets, and protecting them.
> Whether that’s harder or more fragile than maintaining a key ring is a matter
> of debate. It is probably more robust than relying on CA.

Pairwise shared secrets are just about the only thing that scales worse than public key distribution by way of PGP key fingerprints on business cards. The equivalent of CAs in an all-symmetric world is KDCs. Instead of having the power to enable an active attack on you today, KDCs have the power to enable a passive attack on you forever. If we want secure crypto that can be used by everyone, with minimal trust, public key is the only way to do it.

One pretty sensible thing to do is to remember keys established in previous sessions, and use those combined with the next session. For example, if we do Diffie-Hellman today and establish a shared key K, we should both store that key, and we should try to reuse it next time as an additional input into our KDF. That is, next time we use Diffie-Hellman to establish K1, we get actual-key = KDF(K1, K, other protocol details). That means that if even one session was established securely, the communications are secure (up to the symmetric crypto strength) forevermore.

--John
Re: [Cryptography] XORing plaintext with ciphertext
It depends on the encryption scheme used. For a stream cipher (including AES in counter or OFB mode), this yields the keystream. If someone screws up and uses the same key and IV twice, you can use knowledge of the first plaintext to learn the second. For other AES chaining modes, it's less scary, though if someone reuses their key and IV, knowing plaintext xor ciphertext from the first time the key,iv pair was used can reveal some plaintext from the second time it was used. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
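The stream-cipher case is easy to demonstrate concretely. With a reused (key, IV), knowing one plaintext yields the keystream, and the keystream decrypts the other message outright -- here a toy byte sequence stands in for AES-CTR/OFB output:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy keystream standing in for AES-CTR output under a reused (key, IV).
keystream = bytes((37 * i + 11) % 256 for i in range(32))

p1 = b"meet me at noon by the fountain!"   # plaintext known to the attacker
p2 = b"the password is swordfish, okay?"   # victim's second message
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# Attacker: plaintext XOR ciphertext of message 1 recovers the keystream...
recovered = xor(c1, p1)
# ...which decrypts message 2 completely.
p2_decrypted = xor(c2, recovered)
```

Note also that xor(c1, c2) equals xor(p1, p2) -- the keystream cancels -- which is why even partial plaintext knowledge leaks under key/IV reuse.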
Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help
Your cryptosystem should be designed with the assumption that an attacker will record all old ciphertexts and try to break it later. The whole point of encryption is to make that attack not scary. We can never rule out future attacks, or secret ones now. But we can move away from marginal key lengths and outdated, weak ciphers. Getting people to do that is like pulling teeth, which is why we're still using RC4, and 1024-bit RSA keys and DH primes. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] FIPS, NIST and ITAR questions
...

> Let H(X) = SHA-512(X) || SHA-512(X)
> where '||' is concatenation. Assuming SHA-512 is a cryptographically secure
> hash H trivially is as well. (Nothing in the definition of a cryptographic
> hash function says anything about minimality.) But H(X) is clearly not
> useful for producing a PRNG.

You won't get a PRF or stream cipher or PRNG or block cipher just out of collision resistance--you need some kind of pseudorandomness assumption. We expect general purpose hash functions like Keccak to provide that, but it doesn't follow from the collision resistance assumption, for exactly the reason you gave there--it's possible to design collision-resistant functions that leak input or are predictable in some bits.

> I don't actually know if there exists a construction of a PRNG from a
> cryptographically secure hash function. (You can build a MAC, but even
> that's not trivial; people tried all kinds of things that failed until the
> HMAC construction was proven correct.)

The HMAC construction wouldn't give a PRF for your example of H(X) = SHA-512(X) || SHA-512(X). A single output would be trivial to distinguish from a 1024-bit random number.

>-- Jerry

--John
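Jerry's H(X) = SHA-512(X) || SHA-512(X) example is worth seeing concretely: a single output distinguishes it from a random 1024-bit string, even though its collision resistance is exactly SHA-512's:

```python
import hashlib

def H(x: bytes) -> bytes:
    """Jerry's example: collision resistant iff SHA-512 is,
    but trivially non-random."""
    d = hashlib.sha512(x).digest()
    return d + d

def looks_like_H(output: bytes) -> bool:
    # One 1024-bit output suffices: a truly random string repeats its
    # top half with probability 2^-512.
    return output[:64] == output[64:]
```

This is the whole point of the distinction: the distinguisher never finds a collision, it just exploits structure that collision resistance says nothing about.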
Re: [Cryptography] FIPS, NIST and ITAR questions
Sent from my iPad On Sep 3, 2013, at 6:06 PM, Jerry Leichter wrote: > On Sep 3, 2013, at 3:16 PM, Faré wrote: >> Can't you trivially transform a hash into a PRNG, a PRNG into a >> cypher, and vice versa? > No. > >> hash->PRNG: append blocks that are digest (seed ++ counter ++ seed) > Let H(X) = SHA-512(X) || SHA-512(X) > where '||' is concatenation. Assuming SHA-512 is a cryptographically secure > hash H trivially is as well. (Nothing in the definition of a cryptographic > hash function says anything about minimality.) But H(X) is clearly not > useful for producing a PRNG. > > If you think this is "obviously" wrong, consider instead: > > H1(X) = SHA-512(X) || SHA-512(SHA-512(X)) > > Could you determine, just from black-box access to H1, that it's equally bad > as a PRNG? (You could certainly do it with about 2^256 calls to H1 with > distinct inputs - by then you have a .5 chance of a duplicated top half of > the output, almost certainly with a distinct bottom half. But that's a > pretty serious bit of testing) > > I don't actually know if there exists a construction of a PRNG from a > cryptographically secure hash function. (You can build a MAC, but even > that's not trivial; people tried all kinds of things that failed until the > HMAC construction was proven correct.) >-- Jerry > > ___ > The cryptography mailing list > cryptography@metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"
I don't see what problem would actually be solved by dropping public key crypto in favor of symmetric only designs. I mean, if the problem is that all public key systems are broken, then yeah, we will have to do something else. But if the problem is bad key generation or bad implementations, those will be with us even after we abandon all the public key stuff. And as Jon said, the trust problems get harder, not easier. With only symmetric crypto, whoever acts as the introducer between Alice and Bob can read their traffic passively and undetectably. With public key crypto, the introducer can do a man in the middle attack (an active attack) and risks detection, as Alice and Bob now have things signed by the introducer associating the wrong keys with Bob and Alice, respectively. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"
> On Thu, 5 Sep 2013 19:14:53 -0400 John Kelsey
> wrote:
>> First, I don't think it has anything to do with Dual EC DRBG. Who
>> uses it?
>
> It did *seem* to match the particular part of the story about a
> subverted standard that was complained about by Microsoft
> researchers. I would not claim that it is the most important part of
> the story.

One thing I can say for certain: NSA did not write SP 800-90. (That was the implication of the New York Times article.) That was mostly Elaine Barker, my coworker, with me providing a lot of text around technical issues. The algorithms all came out of work from ANSI X9.82 Part 3, which Elaine also wrote with lots of input and text from me and the rest of the editing committee. We worked with NSA people on this, and X9.82 Part 2 (the entropy source section) was written entirely by NSA people, but it isn't easy for me to see how anyone could stick a trapdoor in that. We're still working with NSA people on SP 800-90B, which includes a lot of stuff on testing entropy sources.

When I started, there was an RSA-based algorithm which we eventually dropped (analogous to BBS), Dual EC DRBG, the hash DRBG (which is still in 800-90 in a much-changed version for backward compatibility reasons, because the standardization process took forever and people started designing to the X9.82 standard before it was done--this was also the reason we ended up keeping the Dual EC DRBG), and an X9.17-based thing we got rid of. A bunch of the symmetric stuff in there is mine--I made changes to the hash DRBG to address time/memory/data tradeoffs, and wrote the HMAC DRBG and CTR DRBG. I also changed around the hash df and wrote the bc df.

It is possible Dual EC DRBG had its P and Q values generated to insert a trapdoor, though I don't think anyone really knows that (except the people who generated it, but they probably can't prove anything to us at this point).
It's also immensely slower than the other DRBGs, and I have a hard time seeing why anyone would use it. (But if you do, you should generate your own P and Q.)

...

>> Where do the world's crypto random numbers come from? My guess is
>> some version of the Windows crypto api and /dev/random
>> or /dev/urandom account for most of them.
>
> I'm starting to think that I'd probably rather type in the results of
> a few dozen die rolls every month in to my critical servers and let
> AES or something similar in counter mode do the rest.
>
> A d20 has a bit more than 4 bits of entropy. I can get 256 bits with
> 64 die rolls, or, if I have eight dice, 16 rolls of the group. If I
> mistype when entering the info, no harm is caused. The generator can
> be easily tested for correct behavior if it is simply a block cipher.

If you're trying to solve the problem of not trusting your entropy source, this is reasonable, but it doesn't exactly scale to normal users. Entropy collection in software is a pain in the ass, and my guess is that the overwhelming majority of developers are happy to punt and just use the OS's random numbers.

That looks to be what happened with the Heninger and Lenstra results regarding lots of RSA keys with shared prime factors. That flaw implies a huge number of easily factored RSA keys out there, thanks to insufficient entropy and calling /dev/urandom on a system where it won't block even if it thinks it has zero entropy. (This was a multi-level screw-up, as I understand it, between a Linux version and OpenSSL and the implementors of various appliance network devices.)

It would be smarter for any crypto software to use the OS source for some seed material, and then combine it with both unique or nearly unique information and anything that's likely to have some entropy, and use that to initialize a good PRNG. (I'd use CTR DRBG.)
Your ethernet address doesn't have any entropy, but it works like a salt in a password hashing scheme--an attacker must now rerun his attack for each individual instance of the crypto software. In general, software that uses crypto should probably be trying to gather entropy wherever possible, not just trusting the OS. And it should use that entropy plus the OS's entropy to seed its own PRNG. And it should throw in any information that might distinguish this instance from other instances, to force any attackers to rerun their attack on each instance individually.

When I saw the keystore stuff, I thought "bad key generation." Heninger found a bunch of RSA keys from the same make of devices that were sharing primes, thanks to really awful entropy collection. If those devices had been collecting 40 bits of entropy for the first prime, she might never have found a pair of keys with a shared prime. But someone using analogous methods might be able to
Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"
First, I don't think it has anything to do with Dual EC DRBG. Who uses it? My impression is that most of the encryption that fits what's in the article is TLS/SSL. That is what secures most encrypted content going online. The easy way to compromise that in a passive attack is to compromise servers' private keys, via cryptanalysis or compromise or bad key generation. For server-side TLS using RSA, guessing just the client's random values ought to be enough to read the traffic. For active attacks, getting alternative certs issued for a given host and playing man in the middle would work.

Where do the world's crypto random numbers come from? My guess is some version of the Windows crypto API and /dev/random or /dev/urandom account for most of them. What does most of the world's TLS? OpenSSL and a few other libraries, is my guess. But someone must have good data about this.

My broader question is, how the hell did a sysadmin in Hawaii get hold of something that had to be super secret? He must have been stealing files from some very high-ranking people.

--John
Re: [Cryptography] Backup is completely separate
The backup access problem isn't just a crypto problem, it's a social/legal problem. There ultimately needs to be some outside mechanism for using social or legal means to ensure that, say, my kids can get access to at least some of my encrypted files after I drop dead or land in the hospital in a coma. Or that I can somehow convince someone that it's really me and I'd like access to the safe deposit box whose password I forgot and lost my backup copy of. Or whatever.

This is complicated by the certainty that if someone has the power to get access to my encrypted data, they will inevitably be forced to do so by courts or national security letters, and will also be subject to extralegal pressures or attacks to make them turn over some keys. I suspect the best that can be workably done now is to make any key escrow service's key accesses transparent and impossible to hide from the owner of the key, and then let users decide what should and shouldn't be escrowed. But this isn't all that great an answer.

--John
Re: [Cryptography] NSA and cryptanalysis
What I think we are worried about here are very widespread automated attacks, and they're passive (data is collected and then attacks are run offline). All that constrains what attacks make sense in this context. You need attacks that you can run in a reasonable time, with minimal requirements on the amount of plaintext or the specific values of plaintext. The perfect example of an attack that works well here is a keysearch on DES; another example is the attack on WEP.

All the attacks we know of on reduced-round AES and AES-like ciphers require a lot of chosen plaintexts, or related key queries, or both. There is no way to completely rule out some amazing new break of AES that makes the cipher fall open and drop your plaintext in the attacker's lap, but I don't see anything at all in the literature that supports that fear, and there are a *lot* of smart people trying to find new ways to attack or use AES-like designs. So I put this at the bottom of my list of likely problems.

Some attacks on public key systems also require huge numbers of encryptions or specially formed ciphertexts that get sent to the target for decryption--we can ignore those for this discussion. So we're looking at trying to factor an RSA modulus, or to examine a lot of RSA encryptions to a particular public key (and maybe some signatures from that key) and try to get somewhere from that. I don't know enough about the state of the art in factoring or attacking RSA to have a strong intuition about how likely this is. I'm pretty skeptical, though--the people I know who are experts in this stuff don't seem especially worried. However, a huge breakthrough in factoring would make for workable passive attacks of this kind, though it would have to be cheap enough to use to break each user's public key separately.

Finally, we have the randomness sources used to generate RSA and AES keys. This, like symmetric cryptanalysis, is an area I know really well.
And my intuition (backed by plenty of examples) is that this is probably the place that is most likely to yield a practical offline attack of this kind. When someone screws up the implementation of RSA or AES, they may at least notice some interoperability problems. They will never notice it when they screw up their implementation so that the RNG only gets 32 bits of entropy before generating the user's RSA keypair. And if I know that your RSA key is likely to have one of these 2^{32} possible factors, I can make a passive attack work really well.

Comments?

--John
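The 32-bits-of-entropy failure mode is easy to simulate at toy scale. Here a hypothetical broken keygen draws all its randomness from a tiny seed (8 bits, with 32-bit primes, purely so the demo runs fast), and the "attacker" just replays keygen over every possible seed -- exactly the passive attack described above, scaled down:

```python
import hashlib
import random

def weak_rsa_modulus(seed: int) -> int:
    """Hypothetical flawed keygen: every bit of 'randomness' is derived
    from a tiny seed, so there are only 2^seed_bits possible keypairs."""
    rng = random.Random(hashlib.sha256(seed.to_bytes(4, "big")).digest())

    def gen_prime(bits: int = 32) -> int:
        while True:
            c = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
            # Fermat test to a few bases -- fine for a toy demo.
            if all(pow(a, c - 1, c) == 1 for a in (2, 3, 5, 7, 11, 13)):
                return c

    return gen_prime() * gen_prime()

def recover_seed(modulus: int, seed_bits: int = 8):
    """Passive attacker: replay keygen for every possible seed and
    compare against the observed public modulus."""
    for s in range(1 << seed_bits):
        if weak_rsa_modulus(s) == modulus:
            return s
    return None
```

With a real 32-bit seed the same enumeration costs 2^32 keygen runs -- entirely practical for an offline attacker, and invisible to the victim, since the keys interoperate perfectly.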
Re: [Cryptography] Functional specification for email client?
I think it makes sense to separate out the user-level view of what happens (the first five or six points) from how it's implemented (the last few points, and any other implementation discussions). In order for security to be usable, the user needs to know what he is being promised by the security mechanisms--not which digital signature scheme is being used or whether decoy messages are sent to frustrate traffic analysis. If something arrives in my inbox with a from address of nob...@nowhere.com, then I need to know that this means that's who it came from. If I mail something to nob...@nowhere.com, then I need to know that only the owner of that address will be able to read it. I need to know that nobody should be able to learn with whom I'm communicating. But I don't need to know about keys, digital signatures, mix nets, etc. That's what I want to know as a cryptographer, but as a user, I just want to know what the security system is promising and how reliable its promises are.

My intuition is that binding the security promises to email addresses instead of identities is the right way to proceed. I think this is something most people can understand, and more importantly it's something we can do with existing technology and no One True Name Authority In The Sky handing out certs.

One side issue here is that this system's email address space needs to somehow coexist with the big wide internet's address space. It will really suck if someone else can get my gmail address in the secure system, but it will also be confusing if my inbox has a random assortment of secure and insecure emails, and I have to do some extra step to know which is which. --John
Re: [Cryptography] NSA and cryptanalysis
If I had to bet, I'd bet on bad RNGs as the most likely source of a breakthrough in decrypting lots of encrypted traffic from different sources. --John
Re: Interesting bit of a quote
>From: "Travis H." <[EMAIL PROTECTED]> >Sent: Jul 14, 2006 11:22 PM >To: David Mercer <[EMAIL PROTECTED]> >Cc: cryptography@metzdowd.com >Subject: Re: Interesting bit of a quote ... >The problem with this is determining if the media has been replaced. >Absent other protections, one could simply write a new WORM media with >falsified information. > >I can see two ways of dealing with this: > >1) Some kind of physical authenticity, such as signing one's name on >the media as they are produced (this assumes the signer is not >corruptible), or applying a frangible difficult-to-duplicate seal of >some kind (this assumes access controls on the seals). I think this is going to resolve to chain-of-custody rules of some kind. One problem is that so long as the company making the records is storing them onsite, it's hard for an outside auditor to be sure they aren't being tampered with. (Can the CEO really not work out a way to get one of his guys access to the tape storage vault?) >2) Some kind of hash chain covering the contents, combined with >publication of the hashes somewhere where they cannot be altered (e.g. >publish hash periodically in a classified ad in a newspaper). You could do the whole digital timestamping thing here. You could also just submit hashes of this week's backup tape to your auditor and the SEC or something. Another solution is to use cryptographic audit logs. Bruce Schneier and I did some work on this several years ago, using a MAC to authenticate the current record as it's written, and a one-way function to derive the next key. (This idea was apparently developed by at least two other people independently.) Jason Holt has extended this idea to use digital signatures, which makes them far more practical. One caveat is that cryptographic audit logs only work if the logging machine is honest when the logs are written. --John - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
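The MAC-and-evolve construction described above is small enough to sketch. This is a toy of my own, not the actual Schneier-Kelsey scheme, and it assumes the logger erases each old key as it evolves:

```python
import hashlib
import hmac

def mac(key: bytes, record: bytes) -> bytes:
    return hmac.new(key, record, hashlib.sha256).digest()

def next_key(key: bytes) -> bytes:
    # One-way key evolution: compromising the machine at step i reveals
    # nothing about the keys (and hence the MACs) of earlier records.
    return hashlib.sha256(b"key-evolve" + key).digest()

def append_log(initial_key: bytes, records):
    log, k = [], initial_key
    for r in records:
        log.append((r, mac(k, r)))
        k = next_key(k)  # the old key is then erased on the logging machine
    return log

def verify_log(initial_key: bytes, log) -> bool:
    # The verifier holds the initial key (shared at log-initialization
    # time) and replays the key evolution.
    k = initial_key
    for r, tag in log:
        if not hmac.compare_digest(mac(k, r), tag):
            return False
        k = next_key(k)
    return True

k0 = b"\x00" * 32  # shared with the verifier when the log is created
log = append_log(k0, [b"login alice", b"delete file x", b"logout alice"])
assert verify_log(k0, log)
log[1] = (b"nothing happened", log[1][1])  # tamper with an old record
assert not verify_log(k0, log)
```

As the post says, none of this helps if the machine is lying at the moment a record is first written; it only protects records written before a compromise.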
Re: Interesting bit of a quote
>From: Anne & Lynn Wheeler <[EMAIL PROTECTED]> >Sent: Jul 11, 2006 6:45 PM >Subject: Re: Interesting bit of a quote ... >my slightly different perspective is that audits in the past have >somewhat been looking for inconsistencies from independent sources. this >worked in the days of paper books from multiple different corporate >sources. my claim with the current reliance on IT technology ... that >the audited information can be all generated from a single IT source ... >invalidating any assumptions about audits being able to look for >inconsistencies from independent sources. A reasonable intelligent >hacker could make sure that all the information was consistent. It's interesting to me that this same kind of issue comes up in voting security, where computerized counting of hand-marked paper ballots (or punched cards) has been and is being replaced with much more user-friendly DREs, where paper poll books are being replaced with electronic ones, etc. It's easy to have all your procedures built around the idea that records X and Y come from independent sources, and then have technology undermine that assumption. The obvious example of this is rules for recounts and paper record retention which are applied to DREs; the procedures make lots of sense for paper ballots, but no sense at all for DREs. I wonder how many other areas of computer and more general security have this same kind of issue. --John Kelsey, NIST
A weird macro virus story
Guys, Some of my co-workers here at NIST got an email macro virus which appeared to be targeted at cryptographers. It appeared to be addressed to Moti Yung, and to come from Lawrie Brown and Henri Gilbert (though that name was misspelled, maybe a transcription error from an alternate character set). Did any of you notice something like this? The email appeared to be addressed to several submission addresses for various crypto conferences. Wow, we've really made it as an influential group of people. We're important enough to be *targets* now! --John
Re: [Cfrg] HMAC-MD5
From: [EMAIL PROTECTED] >Sent: Mar 30, 2006 3:38 PM >To: cryptography@metzdowd.com >Subject: Re: [Cfrg] HMAC-MD5 >I think that we have the "evidence". The security MD5 depends >heavily on a lot of nonlinearities in functions F,G,I and on >carries in arithmetic additions. Nonlinearities in F,G,I are >bitwise and very weak. Carries are much stronger, but the collision >attacks showed that it is possible to controll them also. The question is, can these still be controlled when the attacker doesn't know the internal state of the chaining variables? If not, we may end up with second preimage attacks (which would finish off MD5 for most hashing applications!), but still not know how to attack HMAC. The attack model is really different! For what it's worth, though, I agree that we need to get rid of MD5 anywhere it's still in place, since the only thing we know about its security is that it's a lot less than anyone expected it to be even a year ago. In fact, we should have started this when Dobbertin had his free-start collision result. If we had, we'd be able to regard the devastating MD5 collisions we're seeing now in the same way we regard devastating attacks on FEAL. (If someone extends the best attack on FEAL to 64 rounds, that will be cool, but nobody will be scrambling to replace FEAL in their products and protocols.) >Vlastimil Klima --John Kelsey, NIST
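For reference, the HMAC construction itself is tiny, and the point above is visible in it: the attacker never sees the chaining values of either the inner or the outer hash, because both start from secret-derived blocks. A from-scratch sketch, checked against the standard library:

```python
import hashlib
import hmac

def hmac_md5(key: bytes, msg: bytes) -> bytes:
    # HMAC per RFC 2104, with MD5 (block size 64 bytes).
    if len(key) > 64:
        key = hashlib.md5(key).digest()  # long keys are hashed first
    key = key.ljust(64, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    # Both hash computations begin by absorbing a secret block, so the
    # chaining state the collision attacks need is never known.
    inner = hashlib.md5(ipad + msg).digest()
    return hashlib.md5(opad + inner).digest()

# Sanity check against the standard library's implementation.
assert hmac_md5(b"key", b"message") == \
    hmac.new(b"key", b"message", hashlib.md5).digest()
```

None of which changes the conclusion above: the right move is still to migrate off MD5 everywhere.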
Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)
>From: John Denker <[EMAIL PROTECTED]> >Sent: Mar 24, 2006 11:57 AM >To: Erik Zenner <[EMAIL PROTECTED]>, cryptography@metzdowd.com >Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits >of entropy) >Erik Zenner wrote: > >>>0 occurs with probability 1/2 >>>each other number from 1 to 2^{160}+1 happens with >>>probability 2^{-161}. > >> Is anyone aware of whether (and where) this was >> discussed in the literature, or what other approaches are taken? > >This particular problem is contrived or at least exaggerated. The example is a contrived, pathological case, and there are a bunch of easy solutions once you know this is the distribution. The point is to demonstrate that Shannon entropy gives you the wrong answer when you try to use it here. Now, you might ask if this is a problem in a real-world setting. So let's imagine a very simple distribution--sequences of flips of a biased coin. There are nice ways to remove the bias, but let's imagine we're not doing that--instead, we're going to take a sequence of coin flips, turn it into a 0/1 sequence, and then hash it down to get a key. Suppose the coin is biased so that heads comes up 0.9 of the time, and that we generate 16-bit sequences at a time. The Shannon entropy of a 16-bit sequence is about 7.5 bits, but the most likely symbol (all heads) comes up about 18% of the time. So if we tried to generate a 7-bit "key", we'd lose to the attacker's first guess 18% of the time. So, let's go further with this. We want to generate a DES key, with 56 bits of entropy. A 64-bit sequence produced with this biased coin has Shannon entropy of about 30 bits, but an attacker has about a 1/1000 chance of guessing the DES key we derive from it on his first try, which is unacceptable for just about any crypto application. (The min-entropy is only about ten bits.) So yes, I used a contrived example to demonstrate the problem, but no, the problem isn't just an ivory-tower concern.
The intuition here is that Shannon entropy is concerned with communications channels, where we assume we have to send every symbol. So when you have lots of low-probability symbols, and between them those symbols are chosen 999 times out of 1000, the bandwidth you need easily becomes dominated by them: most of the time, you're sending one of a large set of symbols, and they all have to be distinguished from one another. The situation for the attacker is a lot more like having a channel where it's acceptable to send only a small fraction of those symbols and just drop the rest. That is, it's okay for the attacker's model to ignore the huge set of low-probability symbols that collectively occur 999/1000 of the time, and to "transmit" only the highest-probability symbol the 1/1000 of the time it occurs. Instead of transmitting it, he's just guessing it. When he gets it right, he learns your DES key. ... >This version serves to illustrate, in an exaggerated way, the necessity >of not assuming that the raw data words are IID (independent and identically >distributed). Yes! The "independent" part is much harder to deal with than the per-symbol distribution, in many real-world applications. The worst of these are operating system events (used in Windows and various Unixes to get random bits). Peter Gutmann did some really nice work on this on a bunch of operating systems in a Usenix paper, and he updated that and has a version of it on his website. If you're doing OS-event sampling to generate random bits, you really ought to look at his stuff. (But if you can get some hardware support, either directly or via sampling the microphone like the Turbid design does, you're probably on much firmer ground.) --John Kelsey, NIST
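The biased-coin numbers above are easy to reproduce. A quick sketch, with p = 0.9 as in the example:

```python
import math

def shannon_bits(p_heads: float, n_flips: int) -> float:
    # Shannon entropy of n independent flips of a biased coin.
    h = -(p_heads * math.log2(p_heads)
          + (1 - p_heads) * math.log2(1 - p_heads))
    return n_flips * h

def min_entropy_bits(p_heads: float, n_flips: int) -> float:
    # The most likely n-flip outcome is all heads; min-entropy is
    # minus the base-2 log of its probability.
    return -n_flips * math.log2(max(p_heads, 1 - p_heads))

print(round(shannon_bits(0.9, 16), 1))      # 7.5 bits of Shannon entropy
print(round(min_entropy_bits(0.9, 16), 1))  # but only 2.4 bits of min-entropy
print(round(0.9 ** 16, 2))                  # all-heads outcome: prob 0.19
```

The gap between the two measures is exactly the gap the attacker exploits with his first guess.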
RE: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)
>From: Erik Zenner <[EMAIL PROTECTED]> >Sent: Mar 24, 2006 4:14 AM >To: cryptography@metzdowd.com >Subject: RE: Entropy Definition (was Re: passphrases with more than 160 bits >of entropy) ... >> [I wrote:] >> 0 occurs with probability 1/2 >> each other number from 1 to 2^{160}+1 happens with >> probability 2^{-161}. >> >> The Shannon entropy on this distribution is 81.5 bits. But >> if you tried to sample it once to generate an 80-bit Skipjack >> key, half the time, I'd guess your key on my first try. > >It's entirely correct that entropy is the wrong measure here, but >the question is how a good measure would look like. There are a bunch of different entropy measures. Shannon entropy is the right one for compression ratios and channel capacity, but not for difficulty of guessing the string. >Assume that you have a sample space with N elements and an intelligent >attacker (i.e., one that tries the most probable elements first). Then >what you actually are interested in is that the attacker's probability >of success after q sampling attempts is as close as possible to the >lowest possible, namely q * 2^{-N}. A natural way of measuring this >seems to be some kind of distance between Pr[succ after q samples] and >the ideal function q * 2^{-N}. Such a measure might allow a designer >to decide whether a non-perfect distribution is still "acceptable" or >simply "far out". Is anyone aware of whether (and where) this was >discussed in the literature, or what other approaches are taken? The best discussion I've seen of this is in a couple papers by John Pliam, about something he calls the work function, W(n). This is just the probability of having guessed some secret value after n operations (typically n guesses). You can generalize this to the number of operations you have to carry out to have a given probability of violating any security property (repeating a nonce, getting a block collision in a block cipher chaining mode, etc). 
It's a very general way of thinking about limiting parameters of crypto algorithms. You're basically heading toward this idea in what you said above. Let's think of this for an ideal case: a 128-bit key. When you have done 2^k guesses of the key, you have a 2^{k-128} chance of having guessed the key correctly. So if you graphed the work vs. probability of success on a log/log chart, you'd get a straight line for the ideal case: W(n) = 2^{-128} n, as you said above. Now, consider the case where you are generating a 128-bit key from a bunch of sampled events on your computer that have been concatenated together in a string S. Imagine making a list of all the possible values of that string, and sorting them in descending order of probability. You now have the best possible sequence of guesses, and W(n) is the sum of the first n probabilities. If W(n) > 2^{-128} n for any n, then the attacker has some point where it is to his advantage to guess S instead of guessing your key. So, this is where min-entropy comes in. Min-entropy is just -lg(P[max]), minus the base-2 log of the maximum probability in the distribution. You can convince yourself that if the min-entropy of S is at least 128, then it's never easier to guess S than to guess K. This is because each possible value of the string must have probability at most 2^{-128}, so the sum of the first n probabilities is W(n) <= n 2^{-128}. This doesn't solve the much harder engineering/data-analysis problem of getting some reasonable approximation for the probabilities of S, unfortunately. The easy way to solve that is to design a hardware RNG in such a way that you pretty much know the probability model to use and can work out how to sample it to get a good estimate of the probabilities. However, it is kind of nice to recognize that you only have to estimate the largest probability to compute the min-entropy. (It ought to be the easiest probability to measure and bound!) >Erik --John Kelsey, NIST
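The work-function comparison above is easy to play with numerically. A toy sketch with small sizes, where k = 16 plays the role of the 128-bit key:

```python
# Toy work-function comparison: an ideal k-bit key versus a skewed source S.
k = 16                      # stands in for 128 in the discussion above
N = 2 ** 20                 # number of possible values of the string S

# Skewed source: one outcome has probability 1/2, the rest split the other
# half uniformly. Its min-entropy is only 1 bit, far below k.
probs = [0.5] + [0.5 / (N - 1)] * (N - 1)
probs.sort(reverse=True)    # best strategy: guess in decreasing probability

def W(n: int) -> float:
    # Probability of having guessed S within the first n tries.
    return sum(probs[:n])

def ideal(n: int) -> float:
    # W(n) for a uniform k-bit key: the straight line on the log/log chart.
    return n * 2.0 ** -k

# After a single guess the attacker is already far above the ideal line,
# so guessing S directly beats guessing the derived key.
assert W(1) == 0.5
assert W(1) > ideal(1)
```

With min-entropy of at least k, the sorted probabilities are each at most 2^{-k}, so W(n) can never climb above the ideal line.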
Re: Linux RNG paper
>From: David Malone <[EMAIL PROTECTED]> >Sent: Mar 23, 2006 3:43 PM >To: "Travis H." <[EMAIL PROTECTED]> >Cc: "Heyman, Michael" <[EMAIL PROTECTED]>, cryptography@metzdowd.com, [EMAIL >PROTECTED], [EMAIL PROTECTED] >Subject: Re: Linux RNG paper ... >One metric might be guessability (mean number of guesses required >or moments there of). As you point out, Arikan and Massey have >shown that Shannon entropy are not particularly good estimates of >guessability. Generalisations of entropy, like Reni entropy do seem >to have meaning. The min-entropy mentioned in RFC 4086 seems >reasonable, though I don't think the rational given in the RFC is >actually correct. Starting clarification: Min-entropy of a probability distribution is -lg ( P[max] ), minus the base-two log of the maximum probability. The nice thing about min-entropy in the PRNG world is that it leads to a really clean relationship between how many bits of entropy we need to seed the PRNG, and how many bits of security (in terms of resistance to brute-force guessing attacks) we can get. Suppose I have a string S with 128 bits of min-entropy. That means that the highest-probability guess of S has probability 2^{-128}. I somehow hash S to derive a 128-bit key. The question to ask is, could you guess S more cheaply than you guess the key? When the min-entropy is 128, it can't be any cheaper to guess S than to guess the key. That's true whether we're making one guess, two guesses, ten guesses, or 2^{127} guesses. To see why this is so, consider the best case for an attacker: S is a 128-bit uniform random string. Now, all possible values have the same probability, and guessing S is exactly the same problem as guessing the key. (I'm ignoring any bad things that happen when we hash down to a key, but those can be important to think about in a different context.) Now, why is this the best case for an attacker? Because it gives the highest probability of guessing right on the nth guess.
If the min-entropy is 128, then the highest-probability symbol must have probability 2^{-128}. If the next-highest-probability symbol has probability lower than 2^{-128}, then the attacker's second guess has a lower success probability, and the next-highest-probability symbol after that is constrained in the same way. This makes it really easy, once you're working in min-entropy terms, to answer questions like "do I have enough entropy in this string to initialize a PRNG based on running AES-128 in counter mode?" > David. --John Kelsey, NIST
Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)
>From: John Denker <[EMAIL PROTECTED]> >Sent: Mar 23, 2006 1:44 PM >To: John Kelsey <[EMAIL PROTECTED]>, cryptography@metzdowd.com >Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits >of entropy) ... >With some slight fiddling to get the normalization right, 1/2 >raised to the power of (program length) defines a probability >measure. This may not be "the" probability you want, but it >is "a" probability, and you can plug it into the entropy definition. No, this isn't right for the algorithmic information theory meaning at all. For that measure, we can intelligently discuss the entropy of a specific random string, without reference to a probability model. Indeed, what's the probability distribution of the sequence of bits defined by Chaitin's Omega? You can certainly complain that they should have used a different term than entropy here, but you're not going to get these to be the same. >The problem is almost never understanding the definition of >entropy. The problem is almost always ascertaining what is >the correct probability measure applicable to the problem >at hand. Well, you need to decide which of the probability distribution kinds of entropy measures to use, and that differs in different applications. If you use min-entropy or guessing entropy to estimate the limits on how well some sequence of symbols will compress, you'll get a pretty lousy answer. The same goes for using Shannon entropy to determine whether you have collected enough entropy in your pool to generate a 128-bit AES key. ... >> 0 occurs with probability 1/2 >> each other number from 1 to 2^{160}+1 happens with probability >> 2^{-161}. >> >> The Shannon entropy on this distribution is 81.5 bits. > >I think it is very close to 81 bits, not 81.5, but that is a minor >point that doesn't change the conclusion: >> But if you >> tried to sample it once to generate an 80-bit Skipjack key, half the >> time, I'd guess your key on my first try. 
> >Yeah, but if I sample it N times, with high probability I can generate >a large number of very good keys. This problem is faced by (and solved >by) any decent TRNG. Hmmm. I've seen plenty of people get this wrong. If you use Shannon entropy as your measure, and then say "when I have collected 128 bits of Shannon entropy in my pool, I can generate a 128-bit AES key," you will generate keys that aren't as secure as you think they are. Now, most TRNGs seem to evade this and many other problems by designing the hardware to give a relatively simple probability model. If your probability model is close to uniform, then all these probability distribution based entropy measurements converge (with a constant difference). In the case above, we had better specify N. If you sample it 16 times, then you have a 2^{-16} chance of still being in a weak state. Once you get enough samples that the probability of being in the pathological worst case is negligible (whatever that means in your application), then you can start generating output bits. That's probably somewhere between N=32 and N=64. --John
Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)
>From: Jack Lloyd <[EMAIL PROTECTED]> >Sent: Mar 22, 2006 11:30 PM >To: cryptography@metzdowd.com >Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits >of entropy) ... As an aside, this whole discussion is confused by the fact that there are a bunch of different domains in which entropy is defined. The algorithmic information theory sense of entropy (how long is the shortest program that produces this sequence?) is miles away from the information theory sense of entropy, and even that has a bunch of different flavors. For the information theory meanings of entropy, what you're really talking about is a property of a probability distribution. You can do that in terms of Shannon entropy if you want to know about bounds on average bandwidth requirements for transmitting symbols from that distribution. You can look at guessing entropy if you want to know the log of the expected work to guess the next symbol. You can look at min-entropy if you want a hard bound on how many symbols you need to sample to derive a 128-bit key that won't ever expose you to weaknesses based on how you selected it. And so on. Shannon entropy is the one most people know, but it's all wrong for deciding how many samples you need to derive a key. The classic illustration of this is the probability distribution: 0 occurs with probability 1/2 each other number from 1 to 2^{160}+1 happens with probability 2^{-161}. The Shannon entropy on this distribution is 81.5 bits. But if you tried to sample it once to generate an 80-bit Skipjack key, half the time, I'd guess your key on my first try. ... >Yes. Let's say the contents of tommorrow's NY Times has n bits of entropy (we >probably can't actually calculate n, but we know it is a finite and fixed >value). And the LA Times from the same day will also have some amount of >entropy (call it n').
However, the entropy of the two papers combined would >(probably) not be n+n', but some number x st min(n,n') <= x <= n+n', because >the two papers will report on many of the same issues, carry some of the same >AP stories, and so forth. This redundant information doesn't increase the >entropy (reading the same AP story in a second newspaper wouldn't give you any >new information). Right. If the probabilities are independent, you can add the entropies. That's why they're defined in log terms. --John
Re: pipad, was Re: bounded storage model - why is R organized as 2-d array?
>From: [EMAIL PROTECTED] >Sent: Mar 21, 2006 9:58 AM >To: [EMAIL PROTECTED] >Cc: [EMAIL PROTECTED], cryptography@metzdowd.com >Subject: Re: pipad, was Re: bounded storage model - why is R organized as >2-d array? ... >| Anyone see a reason why the digits of Pi wouldn't form an excellent >| public large (infinite, actually) string of "random" bits? >The issue would be: Are there any dependencies amoung the bits of >pi that would make it easier to predict where an XOR of n streams of >bits taken from different positions actually come from - or, more >weakly, to predict subsequent bits. When you build this scheme, you have to compare it to all other ways of generating random-looking keystream for a stream cipher. That means comparing it with generators for which predicting the output bits is provably as hard as, say, factoring a large integer. Beyond the coolness factor of pi, it's hard to work out what additional guarantee we're getting from using it here. I don't know what the performance of the algorithm for generating the next n bits of pi looks like, but I'm guessing that I can do a fair number of AES/Serpent/Twofish/MARS/RC6/Blowfish/CAST/3DES/IDEA/RC5 calculations for the same cost. And we know how to build a stream cipher out of all those ciphers (running in OFB or counter mode) which is guaranteed to be as strong as the strongest of them. It's all about tradeoffs between performance, security, and what strong statements you can make about your security when you're done. In some applications, I am willing to give up a lot of performance for a solid proof of security; in others, I am willing to give up any hope of a proof of security to get really good performance. > -- Jerry --John Kelsey
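The XOR-of-counter-mode-streams combination mentioned above can be sketched like this (my own toy code; a hash stands in for each block cipher, since the point is the combination, not the ciphers themselves):

```python
import hashlib

def keystream(cipher_label: bytes, key: bytes, nbytes: int) -> bytes:
    # Toy stand-in for "cipher X running in counter mode": output block i
    # is PRF(key, i), with SHA-256 playing the PRF.
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(
            cipher_label + key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]

def combined_keystream(keys, nbytes: int) -> bytes:
    # XOR of the component streams. With independent keys, predicting the
    # result requires predicting every component, so the combination is at
    # least as hard to predict as the strongest single generator.
    out = bytes(nbytes)
    for label, key in keys.items():
        s = keystream(label, key, nbytes)
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

ks = combined_keystream({b"AES": b"k1", b"Serpent": b"k2"}, 32)
assert len(ks) == 32
```

That "at least as strong as the strongest component" guarantee is the baseline any pi-based generator would have to beat.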
Re: NPR : E-Mail Encryption Rare in Everyday Use
>From: Peter Saint-Andre <[EMAIL PROTECTED]> >Sent: Feb 24, 2006 3:18 PM >Subject: Re: NPR : E-Mail Encryption Rare in Everyday Use ... >We could just as well say that "encryption of remote server sessions is >rare in everyday use". It's just that only geeks even do remote server >sessions, so they use SSH instead of telnet. >The thing is that email is in wide use (unlike remote server sessions). >Personally I doubt that anything other than a small percentage of email >will ever be signed, let alone encrypted (heck, most people on this list >don't even sign their mail). I'm certain that only a small percentage of e-mail will ever be signed, so long as the tools to do that are so hard to use and the value added so small. I find it useful to use encryption all the time on my private data, but virtually never use it for communications, because even among cryptographers the setup hassles are too great and the value added too small. What we ultimately need is encryption and authentication that:

a. Are automatic and transparent.

b. Add some value or are bundled with something that does.

c. Don't try to tie into the whole horrible set of PKI standards in terms of uniquely identifying each human and bit in the universe, and getting them to sign legally binding messages whose full interpretation requires reading and understanding a 30-page CPS.

If email encryption became as transparent as SSL, most e-mail would be encrypted. This would still leave various phishing issues, etc., but eavesdropping and a lot of impersonation and spam and phishing would get much harder. >Peter --John Kelsey
Re: thoughts on one time pads
>...destroy the pad as it is used, but we have to worry >about data remanence. You have to worry about securing the key material from cradle to grave, and operationally making sure you use the right key material with the right person and never reuse it. OTPs are terribly sensitive to the randomness of your key material (if you screw up and use bits biased 60/40 for your 128-bit AES key, you'll drop to 90+ bits of security; if you do that for your OTP, you'll reveal quite a bit of the normal ASCII text you send, because there's not very much entropy per bit). The issue of preventing reuse is enormous. >3) How should one combine OTP with another conventional encryption >method, so that if the pad is copied, we still have conventional >cipher protection? In this manner, one could use the same system for >different use cases; one could, for example, mail the pad, or leave it >with a third party for the recipient to pick up, and you >opportunistically theoretical security if the opponent doesn't get it, >and you get empirical (conventional) security if they do. If you dig around, Rivest wrote a paper at Crypto 82 or 83 about randomizing encryption which is still pretty nice. To a first approximation, superencryption (encrypting the OTP-encrypted text with AES-CBC or some such thing) is always okay. >4) For authentication, it is simple to get excellent results from an >OTP. You simply send n bytes of the OTP, which an attacker has a >2^-8n chance in guessing. How do we ensure message integrity? Is it >enough to include a checksum that is encrypted with the pad? There are provably secure authentication schemes that use much less key material per message; Google for universal hashing and IBC Hash. I seem to recall that Stinson has a really nice survey of this either webbed or in his book. (Anyone else remember?) >Does it >depend on our method of encipherment?
>Assuming the encipherment is >XOR, is a CRC sufficient, or can one flip bits in the message and CRC >field so as to cancel each other? Yep, you can flip bits, so long as you know the CRC polynomial. If the CRC polynomial is randomly chosen for each message, and used correctly, you get one of those provably secure authentication schemes. >If so, how should we compute a MIC? > Just SHA-1, and include that right after the plaintext (that is, we >encrypt the MIC so as to not reveal a preimage if SHA-1 is found to be >invertible)? Read The Friendly Literature. ... --John Kelsey
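That flip-matching-bits attack is concrete enough to demonstrate. CRC-32 is affine over GF(2): for equal-length strings, crc(a^b) = crc(a) ^ crc(b) ^ crc(zeros). So an attacker who knows the message format can patch the encrypted CRC to match his change, without ever seeing the pad (toy sketch of mine):

```python
import os
import zlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def crc(m: bytes) -> bytes:
    return zlib.crc32(m).to_bytes(4, "big")

# Sender: one-time-pad encrypt message || CRC-32.
msg = b"TRANSFER 0000100 DOLLARS"
pad = os.urandom(len(msg) + 4)
ct = xor(msg + crc(msg), pad)

# Attacker, without the pad: change the amount by XORing in a difference,
# then patch the CRC field using CRC-32's linearity.
diff = bytes(9) + xor(b"0000100", b"9999100") + bytes(8)
crc_patch = xor(crc(diff), crc(bytes(len(msg))))
forged = xor(ct, diff + crc_patch)

# Receiver decrypts and happily verifies the checksum.
pt = xor(forged, pad)
body, tag = pt[:-4], pt[-4:]
assert body == b"TRANSFER 9999100 DOLLARS"
assert crc(body) == tag
```

This is exactly why an appended CRC, even under a one-time pad, is not a MAC; the randomly-chosen-per-message polynomial variant mentioned above is a different animal.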
Re: RNG quality verification
...against it. But with a single 128-bit output, I am nearly certain of identifying this source. If you don't know the key I'm using for AES, then distinguishing counter-mode outputs is as hard as breaking AES, for some very precisely defined meaning of the word "breaking." But it has no more than 128 bits of entropy, because if you ever guessed the right 128-bit AES key, you could predict all future outputs from my PRNG forever. That means it would be silly to use, say, AES128 to generate 256-bit AES keys, since the attacker would only need to guess the 128-bit AES key to learn all the 256-bit AES keys you were generating. For what it's worth, we're working on a standard for cryptographic random number generation in X9. There's a draft document (SP800-90) up on the NIST website http://csrc.nist.gov/publications/drafts.html#sp800-90 discussing some hopefully good crypto PRNGs, and some guidelines for their use. This doesn't discuss much about analyzing entropy sources (we're still hashing that out). If you want to understand the security of crypto PRNG algorithms, you can look at some papers I did on this with Bruce Schneier, Niels Ferguson, David Wagner, and Chris Hall: http://www.schneier.com/paper-yarrow.html www.cs.berkeley.edu/~daw/papers/prngs-fse98.ps Also, Peter's PRNG paper is at his website: http://www.cs.auckland.ac.nz/~pgut001/ And Lisa Yin did a paper with Desai and Hevia for Eurocrypt 2002 trying to do reduction proofs for various PRNGs--basically showing that if certain properties of the hash function or block cipher used hold, then the PRNG is secure. I don't know if there's an online version available. If you want to understand how to do the data analysis for a hardware entropy source, I recommend looking at the analysis of the Intel and VIA C3 hardware RNGs, both on Cryptography Research's site.
http://www.cryptography.com/research/evaluations.html

The big thing to understand here is how much nicer life is when you design the source of entropy from the beginning to follow some kind of reasonable probability model, rather than looking for some way to adapt a mathematically useful model to some complicated process like OS loading or something.

There's a reasonably nice suite of statistical tests available from NIST, though I've heard some complaints about the portability of the code. More broadly, there are a gazillion statistical packages out there. But if you don't understand your model, you'll just shoot yourself in the foot by throwing sophisticated statistical tools at the problem.

>Best regards,
>Philipp Gühring

--John Kelsey, NIST
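The "no more than 128 bits of entropy" point above can be made concrete with a toy counter-mode PRNG. This is a sketch, not Kelsey's construction: HMAC-SHA256 stands in for AES (which isn't in the Python stdlib), and a deliberately tiny 16-bit key stands in for the 128-bit key so the guess-the-key attack actually runs.

```python
import hashlib
import hmac

def prf(key: bytes, counter: int) -> bytes:
    # Stand-in for one AES-CTR output block: PRF(key, counter), 16 bytes.
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()[:16]

# Counter-mode PRNG with a toy 16-bit key.
secret_key = (0xBEEF).to_bytes(2, "big")
outputs = [prf(secret_key, i) for i in range(4)]

# The attacker sees only outputs[0] and brute-forces the key space once...
found = next(k.to_bytes(2, "big") for k in range(2**16)
             if prf(k.to_bytes(2, "big"), 0) == outputs[0])

# ...and can now predict every output the PRNG will ever emit.
assert all(prf(found, i) == outputs[i] for i in range(4))
```

Scale the key back up to 128 bits and the attack costs 2^{128} guesses, which is exactly why 256-bit keys generated this way still offer only 128-bit security.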
Re: [EMAIL PROTECTED]: Re: [EMAIL PROTECTED]: [IP] more on AP Story Justice Dept. Probing Domestic Spyin]
...

>From: Eugen Leitl <[EMAIL PROTECTED]>
>Sent: Jan 1, 2006 6:18 AM
>To: Cryptography List
>Subject: [EMAIL PROTECTED]: Re: [EMAIL PROTECTED]: [IP] more on AP Story Justice Dept. Probing Domestic Spyin]

...

>as long as your OTP's are truly random and never compromised, the key
>exchange will be secure and the only way to attack your traffic remotely
>will be brute force of AES256.

I'm coming late to this discussion, but if you're already trusting AES256 for security, why not just exchange a single long-term AES256 key between mutually-trusting sites? Then, you can generate today's piece of the one-time-pad using a shared counter or a timestamp or something. Further, this lets you store the secret that derives your keys inside a tamper-resistant crypto module.

>Eugen* Leitl http://leitl.org

--John Kelsey, NIST
Re: Encryption using password-derived keys
>From: Jack Lloyd <[EMAIL PROTECTED]>
>Sent: Nov 29, 2005 11:08 AM
>To: cryptography@metzdowd.com
>Subject: Encryption using password-derived keys

>The basic scenario I'm looking at is encrypting some data using a
>password-derived key (using PBKDF2 with sane salt sizes and iteration
>counts). I am not sure if what I'm doing is sound practice or just
>pointless overengineering and wanted to get a sanity check.

>My inclination is to use the PBKDF2 output as a key encryption key, rather
>than using it to directly key the cipher (with the key used for the cipher
>itself being created by a good PRNG). For some reason the idea of using it
>directly makes me nervous, but not in a way I can articulate, leading me
>to suspect I'm worried over nothing.

I think this is sensible for convenience reasons:

a. You can now change passwords without decrypting and re-encrypting all your data. And similarly, if you ever had a reason to change keys, you could do that without changing passwords.

b. You can now check the correctness of the entered password when you decrypt the data encryption key (using authenticated encryption!), rather than needing to process the whole data. (You could also just generate a few extra check bits from PBKDF2.)

>So, assuming using it as a KEK makes sense: At first I thought to use XOR
>to combine the two keys, but realized that could lead to related key
>attacks (by just flipping bits in the field containing the encrypted key).
>That is probably not a problem with good algorithms, but, then again, why
>take the chance; so I was thinking instead using NIST's AES-wrap (or
>perhaps a less weirdly designed variant of it that uses HMAC for integrity
>checking and AES in CBC mode for confidentiality).

You almost certainly need to do encryption and authentication both on your bulk data and your encrypted key. So why not do some mode that does both (CCM being the obvious choice) for both the key encryption and the bulk data encryption?
Like:

salt = PRNG_output(128)
iteration_count = 100
nonce = current nonce
DEK = PRNG_output(128)
KEK = PBKDF2(password, salt, iteration_count, 128)
KeyBlob = CCM(KEK, 0, DEK)
BulkData = CCM(DEK, nonce, plaintext)

>Am I thinking about this far harder than I should?

I'll toss two other random ideas out there to see if they're useful to you:

a. You may be worried about having a properly seeded PRNG available to do your data encryption key generation. I think a sensible way around this is to use both a PRNG output and some extra bits from PBKDF2 to derive the first data encryption key. Like you could do:

X = PBKDF2(password, salt, iteration_count, 256)
KEK = left 128 bits of X
S = right 128 bits of X
DEK = S xor PRNG_output(128)

b. You can use a clever trick by Abadi, Lomas and Needham to save yourself most of the work you do on iterating the password hash during the creation of the KEK, but not when rederiving it. Basically, instead of setting an iteration count of 2^{21}, you generate a big random salt, and omit 20 bits of it from the salt that's stored with the encrypted file. This forces anyone trying to re-derive the KEK to do about 2^{20} work on average, but it makes generating the original encrypted file almost free. I'm always surprised that this isn't used more often, because it's such a clever trick.

>-Jack

--John Kelsey
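The hidden-salt trick in (b) is easy to sketch with the stdlib's PBKDF2. The parameters below are demo-sized assumptions (12 hidden bits and a base iteration count of 1000, rather than the 20 bits and 2^{21} iterations in the post), and the 4-byte check value used to recognize the right key is my own illustrative addition, not part of the original description.

```python
import hashlib
import os

ITER = 1000          # base iteration count (demo-sized)
HIDDEN_BITS = 12     # bits of salt withheld from storage (20 in the post)

def derive(password: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, ITER, dklen=16)

# Creating the encrypted file: ONE cheap derivation.
password = b"correct horse"
full_salt = int.from_bytes(os.urandom(4), "big")
kek = derive(password, full_salt.to_bytes(4, "big"))
stored_salt = full_salt >> HIDDEN_BITS           # low bits never stored
check = hashlib.sha256(kek).digest()[:4]         # recognizes the right KEK

# Re-deriving the KEK: brute-force the hidden bits, ~2^11 derivations on
# average. This cost is forced on anyone, including the legitimate user.
def rederive(password: bytes, stored_salt: int) -> bytes:
    for guess in range(2**HIDDEN_BITS):
        salt = ((stored_salt << HIDDEN_BITS) | guess).to_bytes(4, "big")
        k = derive(password, salt)
        if hashlib.sha256(k).digest()[:4] == check:
            return k
    raise ValueError("wrong password")

assert rederive(password, stored_salt) == kek
```

The asymmetry is the whole point: creation costs one PBKDF2 call, while every rederivation (by the user or by a password-guessing attacker) pays the full search.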
Re: the effects of a spy
>From: [EMAIL PROTECTED]
>Sent: Nov 16, 2005 12:26 PM
>Subject: Re: the effects of a spy

...

>Remember Clipper? It had an NSA-designed 80-bit encryption algorithm. One
>interesting fact about it was that it appeared to be very aggressively
>designed. Most published algorithms will, for example, use (say) 5 rounds
>beyond the point where differential cryptoanalysis stops giving you an
>advantage. Clipper, on the other hand, falls to differential
>cryptoanalysis if you use even one less round than the specification calls
>for.

Nitpick: The system was Clipper; the algorithm was Skipjack.

>Why the NSA would design something so close to the edge has always been a
>bit of a mystery (well, to me anyway). One interpretation is that NSA
>simply has a deeper understanding than outsiders of where the limits
>really are. What to us looks like aggressive design, to them is reasonable
>and even conservative.

Three comments here:

a. Maybe they really do have a good generic differential-probability limiting algorithm. There are algorithms like this in the public world (they can be really computationally expensive, and they only tell you upper bounds on a subset of possible attacks), and you'd expect NSA to be interested, since they design a lot of algorithms. It's not so intuitive to me that this would have applied to impossible differentials unless they designed it to, though. In that case, you're looking at differentials that can't appear instead of differentials that appear too often.

b. Maybe they don't care that much about attacks that require some huge number of plaintexts. The academic world has defined the game in terms of total work being the critical parameter in the attack, and we're seeing a push over time to move that to total attack cost. (That is, it's not so interesting if you have a 2^{100} attack on a 128-bit block cipher, if the attack is impossible to parallelize.)
If someone publishes an attack on Twofish tomorrow which requires 2^{96} plaintexts to break it faster than brute force, we'll all agree that's an attack. But there's no reason NSA has to think that way--maybe they have some other parameter, like 2^{n/2} texts for n-bit block ciphers, after which they don't care about attacks because they're not practical.

c. Maybe they just got it wrong. SHA0 and SHA1 demonstrate that this is all too possible. (It's quite plausible to me that they have very good tools for analyzing block ciphers, but that they aren't or weren't sure how to best apply them to hash functions.)

...

>Or maybe ... the reasoning Perry mentions above applies here. Any time you
>field a system, there is a possibility that your opponents will get hold
>of it. In the case of Clipper, where the algorithm was intended to be
>published, there's no "possibility" about it. So why make it any stronger
>than you have to?

Reducing Skipjack to 31 rounds wouldn't make a practical trapdoor appear! You're still talking about 2^{34} chosen plaintexts!

> -- Jerry

--John Kelsey
Re: timing attack countermeasures (nonrandom but unpredictable delays)
>From: "Travis H." <[EMAIL PROTECTED]>
>Sent: Nov 16, 2005 11:37 PM
>To: David Wagner <[EMAIL PROTECTED]>
>Cc: cryptography@metzdowd.com
>Subject: Re: timing attack countermeasures (nonrandom but unpredictable delays)

...

>I don't follow; averaging allows one to remove random variables from the
>overall time, but you're still left with the real computation time plus
>the deterministic delay I suggested as a function of the input.
>Specifically, time t(k,x) = f(k,x) + d(k,x) + r
>where r is a random variable modelling all random factors, f is the time
>to compute the function, and d is the deterministic delay I suggested that
>is a function of the inputs. Averaging with repeated evaluations of the
>same k and x allows one to compute the mean value of r, and the sum f+d,
>but I don't see how that helps one separate f from d. What am I missing?

Let's assume d(k,x) is a random function of k||x, uniformly distributed between 0 and T, where T is the average time of the encryption. I choose a set of inputs to the cipher x[0], x[1], ..., x[n-1] so that if my guess of some part of k is right, I expect their total timing to be much lower than the average case. I get back Sum(f(k,x[i]) + d(k,x[i]) + r[i]).

Suppose my guess is wrong. Now what we expect is:

a. Sum(f(k,x[i])) = average value
b. Sum(d(k,x[i])) = average value
c. Sum(r[i]) = average value

Suppose my guess is right. Now what we expect is:

a. Sum(f(k,x[i])) = unusually low value
b. Sum(d(k,x[i])) = average value
c. Sum(r[i]) = average value

So, right guesses still give me unusually low values, and wrong guesses still give me average-looking values. That means the timing channel is still there--d(k,x) only adds random noise. The only way to avoid this is to make d(k,x) somehow related to f(k,x). That's the idea behind things like having software or hardware go through both the 0 and 1 case for each bit processed in an exponent.
In that case, we get d(k,x) being fast when f(k,x) is slow, and vice versa, and we close the timing channel. As long as d(k,x) is independent of f(k,x), I can still test guesses of parts of k or parts of x.

--John Kelsey
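The argument above can be simulated. Everything in this sketch is an invented toy model, not a real cipher: the "computation time" f leaks the Hamming weight of the input's low byte XORed with a secret byte, d is a random-looking but input-determined delay uniform on [0, T] as in the thread, and r is Gaussian noise. Averaging timings over many inputs that share a low byte washes out d and r but not f, so correlating against each key guess still recovers the secret.

```python
import random
from statistics import mean

rng = random.Random(1)
SECRET = 0xA7

# d(k,x): deterministic per input but unpredictable, uniform on [0, T=100].
_d = {}
def d(x: int) -> float:
    if x not in _d:
        _d[x] = random.Random(x).uniform(0, 100)
    return _d[x]

def timing(x: int) -> float:
    f = 100 + 3 * bin((x & 0xFF) ^ SECRET).count("1")  # real leak: low byte
    return f + d(x) + rng.gauss(0, 5)                  # delay + noise

# Attacker: measure many random 16-bit inputs and average the timings of
# all inputs sharing a low byte. d and r average away; f does not.
by_byte = [[] for _ in range(256)]
for _ in range(20000):
    x = rng.randrange(1 << 16)
    by_byte[x & 0xFF].append(timing(x))
avg = [mean(ts) for ts in by_byte]

def corr(us, vs):
    mu, mv = mean(us), mean(vs)
    cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    su = sum((u - mu) ** 2 for u in us) ** 0.5
    sv = sum((v - mv) ** 2 for v in vs) ** 0.5
    return cov / (su * sv)

# Score each key guess by how well its predicted leak explains the data.
def score(guess: int) -> float:
    return corr([bin(b ^ guess).count("1") for b in range(256)], avg)

recovered = max(range(256), key=score)
assert recovered == SECRET    # the random-looking delay didn't close the channel
```

Making d a function of f (e.g. dummy operations in the untaken branch, so f + d is constant) is the only change to this model that makes `recovered` independent of `SECRET`.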
Re: On Digital Cash-like Payment Systems
>From: cyphrpunk <[EMAIL PROTECTED]>
>Sent: Oct 27, 2005 9:15 PM
>To: "James A. Donald" <[EMAIL PROTECTED]>
>Cc: cryptography@metzdowd.com, [EMAIL PROTECTED]
>Subject: Re: On Digital Cash-like Payment Systems

>On 10/26/05, James A. Donald <[EMAIL PROTECTED]> wrote:
>>How does one inflate a key?

>Just make it bigger by adding redundancy and padding, before you encrypt
>it and store it on your disk. That way the attacker who wants to steal
>your keyring sees a 4 GB encrypted file which actually holds about a
>kilobyte of meaningful data. Current trojans can steal files and log
>passwords, but they're not smart enough to decrypt and decompress before
>uploading. They'll take hours to snatch the keyfile through the net, and
>maybe they'll get caught in the act.

Note that there are crypto schemes that use huge keys, and it's possible to produce simple variants of existing schemes that use multiple keys. That would mean that the whole 8GB string was necessary to do whatever crypto thing you wanted to do. A simple example is to redefine CBC-mode encryption as

C[i] = E_K(C[i-1] xor P[i] xor S[C[i-1] mod 2^{29}])

where S is the huge shared string, and we're using AES. Without access to the shared string, you could neither encrypt nor decrypt.

>CP

--John
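The big-key CBC variant above can be sketched end to end. Two loud assumptions in this sketch: a 4-round Feistel network built from SHA-256 stands in for AES (not in the stdlib, and the toy cipher carries no security claim), and a 1 MB random string stands in for the 8 GB shared string, with the index reduced mod the string length rather than mod 2^{29}.

```python
import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def F(k: bytes, half: bytes, rnd: int) -> bytes:
    return hashlib.sha256(k + bytes([rnd]) + half).digest()[:8]

def E(k: bytes, block: bytes) -> bytes:   # toy 16-byte Feistel cipher, NOT AES
    L, R = block[:8], block[8:]
    for rnd in range(4):
        L, R = R, xor(L, F(k, R, rnd))
    return L + R

def D(k: bytes, block: bytes) -> bytes:
    L, R = block[:8], block[8:]
    for rnd in reversed(range(4)):
        L, R = xor(R, F(k, L, rnd)), L
    return L + R

def chunk(S: bytes, prev: bytes) -> bytes:   # S[C[i-1] mod 2^{29}] in the post
    i = int.from_bytes(prev, "big") % (len(S) - 16)
    return S[i:i + 16]

def encrypt(k, S, iv, blocks):
    prev, out = iv, []
    for p in blocks:                          # C[i] = E_K(C[i-1]^P[i]^S[...])
        prev = E(k, xor(xor(prev, p), chunk(S, prev)))
        out.append(prev)
    return out

def decrypt(k, S, iv, blocks):
    prev, out = iv, []
    for c in blocks:
        out.append(xor(xor(D(k, c), prev), chunk(S, prev)))
        prev = c
    return out

S = os.urandom(1 << 20)                       # stand-in for the 8 GB string
k, iv = os.urandom(16), os.urandom(16)
pt = [b"sixteen byte blk", b"another 16 bytes"]
ct = encrypt(k, S, iv, pt)
assert decrypt(k, S, iv, ct) == pt
assert decrypt(k, os.urandom(1 << 20), iv, ct) != pt   # need the string too
```

The last assertion is the point of the construction: even with the right key, a trojan that exfiltrated everything except the huge string can neither encrypt nor decrypt.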
Re: On the orthogonality of anonymity to current market demand
>From: "R.A. Hettinga" <[EMAIL PROTECTED]>
>Sent: Oct 25, 2005 8:34 AM
>To: cryptography@metzdowd.com, [EMAIL PROTECTED]
>Subject: On the orthogonality of anonymity to current market demand

...

>That is to say, your analysis conflicts with the whole trend towards T-0
>trading, execution, clearing and settlement in the capital markets, and,
>frankly, with all payment in general as it gets increasingly granular and
>automated in nature. The faster you can trade or transact business with
>the surety that the asset in question is now irrevocably yours, the more
>trades and transactions you can do, which benefits not only the individual
>trader but markets as a whole.

The prerequisite for all this is that when the asset changes hands, it's very nearly certain that this was the intention of the asset's previous owner. My point isn't to express my love for book-entry payment systems. There's plenty to hate about them. But if the alternative is an anonymous, irreversible payment system whose control lies in software running alongside three pieces of spyware on my Windows box, they probably still win for most people. Even bad payment systems are better than ones that let you have everything in your wallet stolen by a single attack.

...

>However "anonymous" irrevocability might offend one's senses and cause one
>to imagine the imminent heat-death of the financial universe (see Gibbon,
>below... :-)), I think that technology will instead step up to the
>challenge and become more secure as a result.

What's with the heat-death nonsense? Physical bearer instruments imply stout locks and vaults and alarm systems and armed guards and all the rest, all the way down to infrastructure like police forces and armies (private or public) to avoid having the biggest gang end up owning all the gold. Electronic bearer instruments imply the same kinds of things, and the infrastructure for that isn't in place. It's like telling people to store their net worth in their homes, in gold.
That can work, but you probably can't leave the cheapest lock sold at Home Depot on your front door and stick the gold coins in the same drawer where you used to keep your checkbook.

>And, since internet bearer transactions are, by their very design, more
>secure on public networks than book-entry transactions are in encrypted
>tunnels on private networks, they could even be said to be secure *in
>spite* of the fact that they're anonymous; that -- as it ever was in
>cryptography -- business can be transacted between two parties even though
>they don't know, or trust, each other.

Why do you say internet bearer transactions are more secure? I can see more efficient, but why more secure? It looks to me like both kinds of payment system are susceptible to the same broad classes of attacks (bank misbehavior (for a short time), someone finding a software bug, someone breaking a crypto algorithm or protocol). What makes one more secure than the other?

...

>Cheers,
>RAH

--John Kelsey
Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems
>From: cyphrpunk <[EMAIL PROTECTED]>
>Sent: Oct 24, 2005 5:58 PM
>To: John Kelsey <[EMAIL PROTECTED]>
>Subject: Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

...

>Digital wallets will require real security in user PCs. Still I don't see
>why we don't already have this problem with online banking and similar
>financial services. Couldn't a virus today steal people's passwords and
>command their banks to transfer funds, just as easily as the fraud
>described above? To the extent that this is not happening, the threat
>against ecash may not happen either.

Well, one difference is that those transactions can often be undone, if imperfectly at times. The whole set of transactions is logged in many different places, and if there's an attack, there's some reasonable hope of getting the money back. And that said, there have been reports of spyware stealing passwords for online banking systems, and of course, there are tons of phishing and pharming schemes to get the account passwords in a more straightforward way. The point is, if you're ripped off in this way, there's a reasonable chance you can get your money back, because the bank has a complete record of the transactions that were done. There's no chance of this happening when there's no record of the transaction anywhere.

>>The payment system operators will surely be sued for this, because
>>they're the only ones who will be reachable. They will go broke, and the
>>users will be out their money, and nobody will be silly enough to make
>>their mistake again.

>They might be sued but they won't necessarily go broke. It depends on how
>deep the pockets are suing them compared to their own, and most especially
>it depends on whether they win or lose the lawsuit.

I don't think so. Suppose there's a widespread attack that steals money from tens of thousands of users of this payment technology. There seem to be two choices:

a. The payment system somehow makes good on their losses.

b.
Everyone who isn't dead or insane pulls every dime left in that system out, knowing that they could be next.

It's not even clear that these are mutually exclusive, but if (a) doesn't happen, (b) surely will. Nobody wants their money stolen, and I don't think many people are so confident of their computer security that they're willing to bet huge amounts of money on it. If you have to be that confident in your computer security to use the payment system, it's not going to have many clients.

>CP

--John
Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems
From: cyphrpunk <[EMAIL PROTECTED]>
Sent: Oct 24, 2005 2:14 PM
Subject: Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

On 10/22/05, Ian G <[EMAIL PROTECTED]> wrote:
>Note that e-gold, which originally sold non-reversibility as a key benefit
>of the system, found that this feature attracted Ponzi schemes and
>fraudsters of all stripes, and eventually it was forced to reverse
>transactions and freeze accounts. It's not clear that any payment system
>which keeps information around to allow for potential reversibility can
>avoid eventually succumbing to pressure to reverse transactions. Only a
>Chaumian type system, whose technology makes reversibility fundamentally
>impossible, is guaranteed to allow for final clearing. And even then, it
>might just be that the operators themselves will be targeted for liability
>since they have engineered a system that makes it impossible to go after
>the fruits of criminal actions.

More to the point, an irreversible payment system raises big practical problems in a world full of very hard-to-secure PCs running the relevant software. One exploitable software bug, properly used, can steal an enormous amount of money in an irreversible way. And if your goal is to sow chaos, you don't even need to put most of the stolen money in your own account--just randomly move it around in irreversible, untraceable ways, making sure that your accounts are among the ones that benefit from the random generosity of the attack. The payment system operators will surely be sued for this, because they're the only ones who will be reachable. They will go broke, and the users will be out their money, and nobody will be silly enough to make their mistake again.

>CP

--John
Re: multiple keys to 1
>From: rbg9000 <[EMAIL PROTECTED]>
>Sent: Sep 8, 2005 3:01 PM
>To: cryptography@metzdowd.com
>Subject: multiple keys to 1

>Sorry, I really don't know much about encryption, and my google searches
>haven't turned up much. I'm wondering if it's possible to reduce a set of
>symmetric keys (aes, twofish, etc..) used to encrypt data into 1 key that
>can decrypt it?

The straightforward symmetric crypto approach to this (it's not pretty or elegant, but it works) is to have each encrypting key be shared with the owner of the recipient key. Thus, we might have:

Alice has key K_a
Bob has key K_b
Carol has key K_c

Dave, the recipient, knows all three keys. Now, to encrypt a message to Dave, anyone can just do something like

Encrypt(K_a, Message)

Dave can try all possible keys, or can require that the sender prepend some kind of unique identifier, like

Alice, Encrypt(K_a, Message)

If you're looking for more elegant solutions, you end up needing to look at public key cryptosystems, as Perry pointed out. Or look at multicast encryption and signature schemes, which have some neat stuff right at the boundary between symmetric and public key crypto.

--John
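Both variants above (try every key, or tag the sender) can be sketched together. This is an illustration under loud assumptions: a toy SHA-256-based stream cipher plus HMAC stands in for a real authenticated cipher (AES isn't in the stdlib), and the names and message are invented. The MAC doubles as the "which key worked?" signal when no identifier is sent.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:   # toy counter-mode keystream from SHA-256, NOT AES
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, msg: bytes):
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def open_(key: bytes, blob):
    nonce, ct, tag = blob
    good = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        return None        # wrong key (or tampering)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Each sender shares one key with Dave; Dave holds all of them.
keys = {"Alice": os.urandom(16), "Bob": os.urandom(16), "Carol": os.urandom(16)}

blob = seal(keys["Bob"], b"hi Dave")

# No sender identifier: Dave just tries every key, and the MAC tells him
# which one worked (and so who sent it).
plain, sender = next((p, who) for who, k in keys.items()
                     if (p := open_(k, blob)) is not None)
assert (plain, sender) == (b"hi Dave", "Bob")
```

Prepending `"Bob, "` to the blob replaces the trial loop with a dictionary lookup, which is the second variant in the post.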
Another entry in the internet security hall of shame....
Guys,

Recently, Earthlink's webmail server certificate started showing up as expired. (It obviously expired a long time ago; I suspect someone must have screwed up in changing keys over or something, because the problem wasn't happening up until recently.) So, I contacted Earthlink's technical support, and I got this really encouraging reply:

Dear John Kelsey,

Thank you for contacting us. I understand that you are having problems viewing Webmail and that it send out an error on SSL certificate. I suggest that you try lowering the security settings of your Internet Explorer. Please follow the steps below on how to lower the security settings on your Internet Explorer.

1. Open Internet Explorer.
2. On the Task panel click on Internet Options.
3. Click on the Advance Tab.
4. Scroll down and uncheck [Warn about invalid site certificates].
5. Remember to click on Apply.
6. Click on OK.

You have successfully lower your Internet Explorer settings. Should you have any other concerns, please get back to us. You will receive a prompt reply.

Sincerely,
Therese B. 3613
EarthLink Electronic Customer Support
EarthLink, Inc.
Case ID 69080634

Looking for easy access to news, stocks, sports, and your favorite links? With the EarthLink Personal Start Page you can customize everything from the background colors to your local weather. For more information please visit http://start.earthlink.net

Resolve your customer service questions on-line at our Account maintenance web site.
To add email mailboxes, change passwords, or update your credit card information, go to: http://myaccount.earthlink.net

You can also trade real-time messages with one of our friendly Live Chat representatives: http://support.earthlink.net/chat

Or email us and get a response that day: http://support.earthlink.net/email

Original Message Follows:

* Presented Article: 142454
* Name: John Kelsey
* Email: [EMAIL PROTECTED]
* Account Type: EarthLink Experience
* Issue: Spam/Internet Fraud Problem
* Detailed Issue: Report an Issue
* Article Title: Protecting Yourself Against Email/Internet Fraud
* Message Body: The SSL certificate for webmail.earthlink.net is expired. The webmail.atl.earthlink.net certificate is fine, it's just the webmail.earthlink.net certificate.
herding attack paper submitted to ePrint archive
Guys,

Yoshi and I have submitted a draft of the Herding Hash Functions paper to the IACR ePrint server, and assuming there are no problems, it should be up reasonably soon. The core of the result is that when I can find lots of collisions for a hash function by brute force (or maybe analytically, though that gets more complicated), I can also break most systems that use a hash function to prove prior knowledge. I gave a rump session talk on this a few days ago at Crypto.

--John Kelsey, NIST, August 2005
Re: How much for a DoD X.509 certificate?
>From: Peter Gutmann <[EMAIL PROTECTED]>
>Sent: Aug 11, 2005 7:42 AM
>To: cryptography@metzdowd.com
>Subject: How much for a DoD X.509 certificate?

>$25 and a bit of marijuana, apparently. See:
> http://www.wjla.com/news/stories/0305/210558.html
> http://www.wjla.com/news/stories/0105/200474.html

>Although the story doesn't mention this, the "ID" in question was the DoD
>Common Access Card, a smart card containing a DoD-issued certificate. To
>get a CAC, you normally have to provide two forms of verification... in
>this case I guess the two were photo ID of dead presidents and empirical
>proof that you know how to buy weed.

Ah, so this was more of an attribute certificate, then. And the certificate was issued based partly on a nonstandard proof-of-possession protocol. (More specifically, "proof of possession with intent to distribute.")

>Peter.

--John
Re: Possible non-extension property for hash functions
>From: Bill Frantz <[EMAIL PROTECTED]>
>Sent: Aug 6, 2005 6:27 PM
>To: cryptography@metzdowd.com
>Subject: Possible non-extension property for hash functions

... [Talking about the length-extension property.]

>H(x) = H(y) ==> H(x||s) = H(y||s)

>It seems to me that there might be a class of hash functions for which
>this property did not hold. While hashes in this class might require
>random access to the entire input, they could prevent the "message
>extension" class of attack exploited by Lucks and Daum (see
><http://th.informatik.uni-mannheim.de/people/lucks/HashCollisions>) when
>they generated two Postscript files with very different output, but the
>same MD5 hash.

There are actually a couple of ways to do this. Either:

a. Process the message sequentially, but do a final operation on the intermediate result before putting out an output hash, which introduces new collisions. Any truncated hash like SHA384 or SHA224 does this. As an added benefit, once you truncate enough bits (SHA384 truncates 128 bits), the length-extension attack on prefix MAC goes away, and the Joux multicollisions and long-message second preimage attacks Bruce and I came up with become much more difficult. (On the other hand, it doesn't seem easy to do any nice reduction proof from the strength of the SHA512 compression function to the strength of the SHA384 hash function.)

b. Process the message multiple times, or give yourself random access, or whatever. Just processing through the message twice sequentially does eliminate the simple length-extension property, but there are variations on it that can still be used--that's why Joux multicollisions can be found even when you process the message twice sequentially.

Are there other ways I'm not seeing to do this?

...

>Cheers - Bill

--John Kelsey
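The "introduces new collisions" part of (a) is easy to see empirically. The sketch below truncates SHA-256 to an artificially tiny 3 bytes (my own demo parameter, chosen so a birthday search finishes instantly) and finds two messages that collide on the truncated output while their full SHA-256 digests differ, which is exactly a collision created by the truncation step.

```python
import hashlib

def trunc_hash(msg: bytes, nbytes: int = 3) -> bytes:
    # Toy final operation: keep only the first 24 bits of SHA-256.
    return hashlib.sha256(msg).digest()[:nbytes]

# Birthday search: with a 24-bit output we expect a collision after a few
# thousand tries, though full SHA-256 collisions remain far out of reach.
seen = {}
collision = None
for i in range(200000):
    m = b"msg-%d" % i
    h = trunc_hash(m)
    if h in seen:
        collision = (seen[h], m)
        break
    seen[h] = m

a, b = collision
assert a != b and trunc_hash(a) == trunc_hash(b)
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()  # new collision
```

The same logic scales: SHA-384's 128 dropped bits mean collisions exist in the truncation that are not collisions of the underlying SHA-512 state, which is also why the length-extension trick loses its grip there.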
Re: solving the wrong problem
>From: "Perry E. Metzger" <[EMAIL PROTECTED]>
>Sent: Aug 6, 2005 2:28 PM
>To: cryptography@metzdowd.com
>Subject: solving the wrong problem

>Frequently, scientists who know nothing about security come up with
>ingenious ways to solve non-existent problems. Take this, for example:
>http://www.sciam.com/article.cfm?chanID=sa003&articleID=00049DB6-ED96-12E7-AD9683414B7F

>Basically, some clever folks have found a way to "fingerprint" the fiber
>pattern in a particular piece of paper so that they know they have a
>particular piece of paper on hand. It is claimed that this could help stop
>forged passports.

>Unfortunately, the invention is wholly useless for the stated purpose.

A couple of these guys gave a talk at NIST recently. The thing is, I can think of a bunch of uses for the thing they're doing. This looks genuinely useful as a tool. Whether they've worked out how to use the tool to best effect is a different question.

The passport idea doesn't add much, as you pointed out. The reason is that the thing you care about there is that the information on the passport hasn't been tampered with and originated from the right source. An identical copy of my passport is no worse than the original.

On the other hand, think about the uses of this technology for paper bearer instruments. Design travelers' checks that include a 2D barcode with a BLS signature, bound to the piece of paper, and you can print the damned thing on regular paper if the readers are cheap enough. Similar things apply to stamps, tickets, etc. If you can get readers into peoples' homes, you can even allow home printing of tickets, travelers' checks, etc., each bound to a specific piece of paper. Add a reader to your favorite DVD player platform (I think it's the same basic hardware as is used in a DVD player), and you can uniquely sign content on a disc, and use the player's hardware to enforce only playing content when the disc's biometric matches the signed content.
You could use the technique to scan small bits of flat surfaces of all your stuff (the basic technique works on paper, plastic, and metal, at least; I'm not sure if it works on wood or glass), record the biometrics and locations of the scans, and provide this to the police when your house gets burgled. There are some wonderful potential uses for this technology in making paper-based voting systems *much* more secure. And on and on. If I were in the business of producing tamper-resistant paper, I'd be scared to death.

...

>Anyway, I have a larger point. I read about such stuff every day -- wacky
>new ways of building "tamper proof tokens", "quantum cryptography", and
>other mechanisms invented by smart people who don't understand threat
>models at all.

Yes. As I said, sometimes this stuff looks almost useless (like quantum cryptography), other times it looks like it may provide powerful tools, despite the fact that its designers don't know much about how to use those tools yet. The same is often true in cryptography, where we have some very theoretical work which sometimes ends up having enormous practical consequences.

>We already have the term "snake oil" for a very different type of bad
>security idea, and the term has proven valuable for quashing such things.
>We need a term for this sort of thing -- the steel tamper resistant lock
>added to the tissue paper door on the wrong vault entirely, at great
>expense, by a brilliant mind that does not understand the underlying
>threat model at all.

In my consulting days, I used to use the term "padlocking the screen door" for the related phenomenon of piling security on one part of the system while ignoring the bigger vulnerabilities. But this is a bit different.

>Perry

--John Kelsey
Re: draft paper: "Deploying a New Hash Algorithm"
>From: "Steven M. Bellovin" <[EMAIL PROTECTED]>
>Sent: Aug 5, 2005 12:04 PM
>To: Steve Furlong <[EMAIL PROTECTED]>
>Cc: cryptography@metzdowd.com
>Subject: Re: draft paper: "Deploying a New Hash Algorithm"

...

>I'd have phrased it differently than Perry did. I'd say that the attackers
>are often cleverer *about security* than protocol designers, because
>insecurity is their specialty. Ordinary protocol designers are good at
>designing those protocols, but they haven't been trained to think about
>security.

Yes! I've noticed that it's really common for me to work on a project for a very short time (like an hour or two), and start noticing all kinds of security holes, including a lot of stuff with nothing to do with cryptography. I'll still be asking very basic questions of the other people on the project about how things are *supposed* to work, but be pointing out attacks they never thought of at the same time.

I think this is just a different way of thinking. Attackers and security people do this all the time. Most normal people never do--it's like once they've got the rules in their heads, that's what's possible, and they don't even think about it. How many times, working on security for some system, have you pointed out an attack, only to hear some variation on "but who would think of that?" And you can see the same thing happening in discussions of homeland security and counterterrorism stuff. It's like most people look at the national guardsmen in the airport and say "whew, I feel safer," rather than "what the heck are those guys supposed to do to stop hijacked planes crashing into buildings?"

I like your starting points, but I think the real approach to thinking about this is a bit broader. It has to do with understanding the rules, and trying to ask, for each one, "and what makes me obey that rule?" or "what would happen if I didn't do such and so?"

>--Steven M. Bellovin, http://www.cs.columbia.edu/~smb

--John Kelsey
Possibly new result on truncating hashes
Guys,

I have what seems like a new and interesting result, which I haven't seen before, but which may or may not be new.

The high-order bit is that you can't generally guarantee that truncating your hash (chopping off some bits) won't weaken it. That is, if you chop SHA256 down to 160 bits as a replacement for SHA1 (something I'm working on with Niels Ferguson for X9 right now), it's possible that there's no attack on SHA256, but there is an attack on the 160-bit truncation.

How could this work? Suppose we have an algorithm like the Wang attacks on MD5, SHA0, or SHA1 for finding a single collision pair. The algorithm returns a single collision pair on the first 160 bits of SHA256 for (say) 2^{64} work. (Remember that this is just an example--I don't have any such algorithm!) Each time the algorithm is run, it gives a new, unrelated collision pair, and the remaining 96 bits are completely randomized by the collision pair.

Now, this is an attack on SHA256 truncated to 160 bits. Does it lead to an attack on SHA256 as a whole? If it did, then we could make a reduction proof that says the truncated hash is strong if the original hash is strong. Unfortunately, we can't make this argument, because this postulated collision algorithm can't be used to find a collision in the whole SHA256 more efficiently than brute force.

Let's do the counting argument: Each time we call the 160-bit collision algorithm, we get a new pair which has the same first 160 bits of SHA256 output, and random, unrelated last 96 bits of SHA256 output. Each pair has a probability of 2^{-96} of colliding in the remaining bits. So, to get a collision on the whole SHA256 using this 160-bit collision algorithm, we expect to have to try about 2^{96} collision pairs, each found at a cost of 2^{64}. The resulting work is 2^{64} * 2^{96} = 2^{160}, more than a straight brute-force collision search on SHA256.

What does this mean? It means that just because you have a good 256-bit hash, you can't necessarily make a good 160-bit hash from it. You might be able to--it seems like you usually will be able to--but there's no guarantee.

Comments? Is this some well-known result that I'm rediscovering?

--John
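The counting argument above can be checked numerically at toy scale. The sketch below (an illustration I've constructed, not anything from the X9 work) uses a 24-bit "full" hash built from truncated SHA-256 in place of SHA256, keeps its top 12 bits as the "truncation", and uses a birthday search as the postulated cheap truncated-collision algorithm; iterating that algorithm until the low 12 bits also match takes on the order of 2^{12} pairs, mirroring the 2^{96}-pair estimate in the post:

```python
import hashlib
import itertools

def H(x: bytes) -> int:
    # toy 24-bit "full" hash standing in for SHA256 (illustration only)
    return int.from_bytes(hashlib.sha256(x).digest()[:3], "big")

def trunc(h: int) -> int:
    # keep the top 12 bits; the "remaining bits" are the low 12
    return h >> 12

counter = itertools.count()  # fresh, distinct inputs across all calls

def truncated_collision():
    # the postulated cheap attack: birthday-search a pair that collides
    # on the truncated hash (costs only ~2^6 calls to H at this scale)
    seen = {}
    while True:
        x = next(counter).to_bytes(8, "big")
        t = trunc(H(x))
        if t in seen:
            return seen[t], x
        seen[t] = x

# Iterate the truncated-collision attack until the remaining 12 bits
# also collide.  Each pair succeeds with probability 2^{-12}, so we
# expect roughly 4096 pairs -- no cheaper than attacking H directly.
pairs = 0
while True:
    a, b = truncated_collision()
    pairs += 1
    if H(a) == H(b):
        break
```

The names `H`, `trunc`, and `truncated_collision` are all invented for this sketch; only the shape of the argument carries over to the real 256/160-bit case.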
Re: ID "theft" -- so what?
>From: Aram Perez <[EMAIL PROTECTED]>
>Sent: Jul 14, 2005 10:45 AM
>To: Cryptography
>Subject: Re: ID "theft" -- so what?

>Why do cryptography folks equate PKI with
>certificates and CAs?

One nontrivial reason is that many organizations have spent a lot of time and money building up elaborate rules for using PKI, after long negotiations between legal and technical people, many hours of writing and revising, gazillions of dollars in consultants' time, etc. So, anytime you start doing anything involving public key cryptography, all this machinery gets invoked, for bureaucratic reasons. That is, you've now trespassed on PKI turf, and you'll have to comply with this enormous set of rules.

I know of a couple of cases where this led to really irritating results. In one, a friend of mine was using a digital signature to verify some fairly trivial thing, but was told it was against policy to use a digital signature without the whole PKI. (I believe he changed the documentation so that he was using a "modular arithmetic checksum" instead of a signature verification.)

As a consultant, I designed and evaluated a lot of systems that used public key cryptography. None of the successful ones tried to use the whole X.509 + CRL + CPS + everything-else overhead--typically, they just used a one-deep hierarchy, where the keypair was put into the device by the manufacturer along with a copy of the top-level public key used to sign all device public keys. This works because it doesn't try to incorporate the output of 20 years of make-work programs in cryptography (they weren't intended that way, but that's largely how they turned out), and it doesn't try to incorporate every idea that might be useful anywhere in the world into some very limited and simple application.

>Aram Perez

--John Kelsey
halloween hash bash reminder--July 15 deadline
Guys,

This is just a reminder that the NIST hash workshop (Oct 31-Nov 1 of this year) is still taking submitted talks, abstracts, etc., until July 15. There are no proceedings, so there should not be any problem publishing things that you discuss at this workshop.

A major goal of doing this is to get people to discuss interesting ongoing work so we can understand it now, rather than after we've made decisions about how to react to all the excitement in the hash function world. (For what it's worth, I plan on presenting some new hash function results with Yoshi Kohno that we intend to publish somewhere else. I expect we'll post these on the ECRYPT server before that.)

This workshop is going to have a big impact on decisions like whether we should do some AES-like process to get a new hash function standard, whether we should try to standardize on some additional algorithms (like Whirlpool or the Russian standard GOST hash function), etc. Taking part is a great opportunity to influence those decisions. If you have something new to say about hash functions, or something old that should be repeated, send us some slides, or at least an extended abstract, and we'll see whether we can fit you onto the agenda for some discussion time.

--John Kelsey, NIST, July 2005
Re: /dev/random is probably not
>From: "Charles M. Hannum" <[EMAIL PROTECTED]>
>Sent: Jul 3, 2005 7:42 AM
>To: Don Davis <[EMAIL PROTECTED]>
>Cc: cryptography@metzdowd.com
>Subject: Re: /dev/random is probably not

...
>Also, I don't buy for a picosecond that you have to gather
>"all" timings in order to predict the output. As we know
>from countless other attacks, anything that gives you some
>bits will reduce the search space and therefore weaken the
>system, even if it does not directly give you the result.

I think you're landing on the genuinely hard problem here. Designing a PRNG intelligently is an exercise in design and cryptanalysis, so long as you get to assume that you know how much entropy you're getting. But actually getting reliable entropy estimates is:

a. A data analysis problem, where you never really get a final answer--you just get the best model you knew how to test, and don't find strong evidence to discard it, and

b. Enormously sensitive to implementation details that are hard to deal with in software.

In a hardware RNG design, you can at least analyze test-mode raw outputs of the ring oscillator or whatever, build a statistical model, and know that the same basic model will also describe the devices in the field. They may vary because of manufacturing defects, changes in the field (like heating/cooling or component failure), etc., but you at least know what kind of thing you've got.

With software sources, there's pretty much no limit to what changes the designer of the hardware is allowed to make to your devices, so long as he keeps the interface the same. You do lots of analysis on a machine with a spinning disk drive, and end up deployed on one with a networked drive and a flash drive, or some such horrible thing. Additionally, building a probability model for stuff you observe on a general-purpose computer is *hard*, because there's so much complicated deterministic stuff going on.

Even if you're using the same machine and setup to collect entropy in production as you did to build your probability model for entropy estimation, it's hard to have enormous confidence in the correctness of your estimates. How much of that apparently random behavior you were getting when you sampled the cycle counter in a tight loop was because of genuine unpredictability, and how much was because of the very patterned but complicated stuff going on behind the scenes on your machine?

--John
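To make the "data analysis problem" concrete, here is a minimal sketch of the kind of naive plug-in estimate one might start from (everything here is my own illustration, not any particular system's estimator). All the caveats above apply with full force: the number it produces describes only the machine, load, and moment you measured, and says nothing about how much of the variation is genuinely unpredictable:

```python
import math
import time
from collections import Counter

def plugin_min_entropy(samples):
    # naive plug-in min-entropy estimate, in bits per sample:
    # -log2 of the empirical frequency of the most common symbol
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

def timing_samples(n=4096):
    # sample the low byte of successive high-resolution timer deltas
    # in a tight loop -- exactly the kind of software source the
    # caveats above apply to
    out = []
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        out.append((now - prev) & 0xFF)
        prev = now
    return out

# sanity checks on synthetic data: a constant source scores 0 bits,
# an exactly uniform 8-bit source scores exactly 8 bits per sample
constant_score = plugin_min_entropy([7] * 1000)
uniform_score = plugin_min_entropy(list(range(256)) * 4)
```

Even this toy makes the post's point: `plugin_min_entropy(timing_samples())` will report *some* number, but nothing in the estimator distinguishes real unpredictability from complicated deterministic scheduling behavior.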
Re: Collisions for hash functions: how to explain them to your boss
>From: Eric Rescorla <[EMAIL PROTECTED]>
>Sent: Jun 14, 2005 9:36 AM
>Subject: Re: Collisions for hash functions: how to explain them to your boss

[Discussing the MD5 attacks and their practicality, especially the recent postscript demonstration.]

...
>But everything you've just said applies equally to my
>JavaScript example. It's not a security vulnerability in
>the browser or the data format that it displays differently
>depending on the context. It's an intentional feature!

I think our disagreement here has to do with what we're seeing in the attack. You're seeing a specific attack vector--use conditional execution/display plus the ability to find specific collisions of a particular form to yield these nice attacks where we have two messages of the form

X ||M0||M1
X*||M0||M1

where, when the first part of the message is X, some kind of conditional execution displays M0, while X* leads to the display of M1. And I think you're right to say that in many cases, once you're viewing the result of blindly executing programs that I send you, you're vulnerable to other attacks that are about as damaging. Now, it's certainly possible to imagine cases where this kind of conditional execution wouldn't be allowed to access anything outside the file, but once you've decided to put in a full-featured scripting language, it's not that much of a stretch to think you'll let me read the system time.

I'm seeing a more general pattern of attacks, in which X and X* amount to context for the display of whatever follows them. That seems to me to encompass a lot more than macros and script files with conditional execution. And even when I don't have a specific attack in mind, it worries me that if I'm trying to help someone safely use MD5, I've got to think through whether there is any way at all to make this kind of attack pattern work. It's a heck of a lot easier to say "don't use MD5."

...
>-Ekr

--John Kelsey
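The X||M0||M1 pattern above can be made concrete with a toy hash. Everything in this sketch is invented for illustration: a Merkle-Damgård construction with a deliberately tiny 24-bit chaining value, so that colliding prefixes can be found by birthday search in seconds, standing in for the Wang-style collision algorithms the post assumes. Because both prefixes lead to the same chaining value, any common suffix preserves the collision, and a "viewer" that branches on the prefix displays different payloads for identically-hashed documents:

```python
import hashlib
import itertools

STATE = 3   # 24-bit toy chaining value: collisions cheap by design
BLOCK = 16

def comp(state, block):
    # toy compression function (illustration only)
    return hashlib.sha256(state + block).digest()[:STATE]

def toy_hash(msg):
    # unpadded Merkle-Damgård over 16-byte blocks
    # (msg length must be a multiple of BLOCK in this sketch)
    state = b"\x00" * STATE
    for i in range(0, len(msg), BLOCK):
        state = comp(state, msg[i:i + BLOCK])
    return state

# birthday-search two distinct one-block prefixes X, X* with the
# same chaining value -- the stand-in for the collision attack
seen = {}
iv = b"\x00" * STATE
for n in itertools.count():
    block = n.to_bytes(BLOCK, "big")
    s = comp(iv, block)
    if s in seen:
        X, Xstar = seen[s], block
        break
    seen[s] = block

M0 = b"pay Alice $10   "   # benign payload (16 bytes)
M1 = b"pay Mallory $1M "   # malicious payload (16 bytes)
doc_a = X + M0 + M1        # displays M0
doc_b = Xstar + M0 + M1    # displays M1, same hash

def render(doc):
    # conditional display: the colliding prefix selects the payload
    return doc[BLOCK:2 * BLOCK] if doc[:BLOCK] == X else doc[2 * BLOCK:]
```

The "document format" and `render` function are hypothetical, but the structure is exactly the conditional-display pattern discussed above: one hash, two behaviors.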
Re: expanding a password into many keys
>From: Greg Rose <[EMAIL PROTECTED]>
>Sent: Jun 14, 2005 2:54 PM
>To: EKR <[EMAIL PROTECTED]>
>Cc: Ian G <[EMAIL PROTECTED]>, cryptography@metzdowd.com
>Subject: Re: expanding a password into many keys

...
>You know, the proof that HMAC is a good MAC requires that the
>*compression function* of the underlying hash is good. And for almost
>all applications like this one, both the input password and the
>sequence number, tag name, or whatever the second input is, all fit
>into a single compression function block.

Actually, there are quite a few attacks on these schemes that amount to messing up that property--getting the message to span multiple blocks (as with the length extension attacks).

The other thing that any direct use of a crypto primitive can't help with is "sliding" data between fields. If you don't encode your fields in some parseable way before feeding them into the hash/MAC/block cipher/whatever, you get these weird attacks where I either request key "XXX" with optional additional string "Y", or I request key "XX" with optional additional string "XY", and I get the same result. Usually this doesn't lead to a practical attack, but it puts an extra burden on the engineer using the KDF, since he or she had better understand this attack and make sure it doesn't apply. Adams, Kramer, Mister, and Zucherato have a recent paper pointing out a bunch of these problems with widely used KDFs.

I think of these as analogous to the length-extension property of hash functions or the complementation property of DES--it doesn't keep the crypto mechanism from being used securely, but it does make the job of an engineer trying to use it needlessly more complicated.

>Greg Rose
>INTERNET: [EMAIL PROTECTED]

--John Kelsey
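The field-sliding problem described above is easy to demonstrate, and one standard remedy is to length-prefix every field before hashing so that field boundaries become unambiguous. The sketch below is my own illustration (the function names and the 4-byte length prefix are invented, not from any cited KDF):

```python
import hashlib

def naive_kdf(secret, key_name, extra=b""):
    # fields simply concatenated -- the boundary between key_name
    # and extra is invisible to the hash, so data can "slide"
    return hashlib.sha256(secret + key_name + extra).digest()

def encoded_kdf(secret, key_name, extra=b""):
    # length-prefix each field (4-byte big-endian) so the encoded
    # input parses back to exactly one field split
    def enc(field):
        return len(field).to_bytes(4, "big") + field
    return hashlib.sha256(enc(secret) + enc(key_name) + enc(extra)).digest()

secret = b"s3cret"
# the slide: key "XXX" + extra "Y" collides with key "XX" + extra "XY"
collides = naive_kdf(secret, b"XXX", b"Y") == naive_kdf(secret, b"XX", b"XY")
# with unambiguous encoding, the two requests derive different keys
fixed = encoded_kdf(secret, b"XXX", b"Y") != encoded_kdf(secret, b"XX", b"XY")
```

As the post says, the slide rarely yields a practical break on its own; the point is that the unencoded construction forces every user of the KDF to reason about it, while the encoded one doesn't.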
Re: NIST Public Workshop on Cryptographic Hashes
>From: "Perry E. Metzger" <[EMAIL PROTECTED]>
>Sent: Jun 15, 2005 8:00 AM
>To: cryptography@metzdowd.com
>Subject: NIST Public Workshop on Cryptographic Hashes

[Discussing the NIST public hash function workshop.]

>>The workshop will be held on October 31 and November 1, 2005,
>>from 9 a.m. to 5:30 p.m.

Informally, we're calling this the halloween hash bash. Come dressed as your favorite hash function! If you want to have some impact on where we go with hash functions, this is a good thing to attend.

>Perry E. Metzger [EMAIL PROTECTED]

--John Kelsey, NIST
Re: Collisions for hash functions: how to explain them to your boss
>From: Eric Rescorla <[EMAIL PROTECTED]>
>Sent: Jun 13, 2005 5:09 PM
>To: "Weger, B.M.M. de" <[EMAIL PROTECTED]>
>Cc: cryptography@metzdowd.com, Stefan Lucks <[EMAIL PROTECTED]>
>Subject: Re: Collisions for hash functions: how to explain them to your boss

...
>Yes, this is all true, but it's kind of orthogonal to my point,
>which is that if you're willing to execute a program, this
>attack can be mounted *without* the ability to produce hash
>collisions. The fact that so few people regard PS, HTML, Word,
>etc. as software just makes this point that much sharper.
>As far as I can tell, the ability to produce hash collisions just
>makes the attack marginally worse.

Hang on a minute. The issue isn't whether your data format is being executed (in some sense, almost any nontrivial data format can be seen as a scripting language interpreted by the viewer). The issue is that I can make two different documents, one of which displays exactly what you tell me you want it to display, the other of which displays anything I like, with the same MD5 (MD4, RIPEMD, SHA0, SHA1) hash output. You can view the document on a viewer you trust, without any security vulnerabilities in the viewer/data format, but you still get fooled.

Saying "inspect the code/data/whatever" as an answer to this problem isn't too useful, since an inspection intended to turn up security problems may not turn up the fact that the executable code has some region of data in it which could be varied in just the right way for the MD5 or SHA1 attacks to work, without making it an invalid program. (I've been thinking about how these attacks apply to validated software in voting and gaming machines.)

Fundamentally, it's just really hard to safely use a hash function for which collisions can be found cheaply. It requires every crypto engineer to become an expert both on how the collision attack works (and all the possible variants!) and on what ambiguity may exist in each data format he may ever want to hash. There's an obvious solution to this in principle--stop using hashes with cheap collisions--though the huge installed bases of MD5 and SHA1 make it painful.

>-Ekr

--John Kelsey
Re: expanding a password into many keys
>From: Ian G <[EMAIL PROTECTED]>
>Sent: Jun 12, 2005 11:27 AM
>To: cryptography@metzdowd.com
>Subject: expanding a password into many keys

>I'd like to take a password and expand it into several keys. It
>seems like a fairly simple operation of hashing the concatenation
>of the password with each key name in turn to get each key.
>Are there any 'gotchas' with that?

There's a length-extension property in what you're doing, so if I get to choose your key names, I can do something unpleasant to you. Suppose I know the length of pass, and get to choose two key names, K1_name and K2_name. You give me K1 = sha1(pass||K1_name); then I need to guess K2_name. I can choose K2_name to be K1_name, appropriately padded to the full block size exactly as it will be in the SHA1 computation that produces K1. Then I can compute K2 on my own, because the only effect of the secret value "pass" on K2 is through K1.

This doesn't look like an especially realistic attack model, but I'm not sure what you're doing with this.

>iang

--John Kelsey
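The attack described above can be walked through end to end with a toy Merkle-Damgård hash. The real attack would resume SHA1 from its internal chaining state; since the standard library doesn't expose that, this sketch substitutes an invented compression function built from truncated SHA-256, which has the same length-extension structure (all names and parameters here are illustrative):

```python
import hashlib

BLOCK = 16

def comp(state, block):
    # toy compression function (illustration only, not SHA1's)
    return hashlib.sha256(state + block).digest()[:16]

def md_pad(msg_len):
    # Merkle-Damgård padding: 0x80 byte, zeros, 8-byte big-endian length
    rem = (msg_len + 1 + 8) % BLOCK
    return b"\x80" + b"\x00" * ((BLOCK - rem) % BLOCK) + msg_len.to_bytes(8, "big")

def toy_hash(msg):
    state = b"\x00" * 16  # fixed IV
    padded = msg + md_pad(len(msg))
    for i in range(0, len(padded), BLOCK):
        state = comp(state, padded[i:i + BLOCK])
    return state

# --- victim side ---
password = b"hunter2!"             # secret; attacker knows only its length
K1_name = b"key1"                  # chosen by the attacker
K1 = toy_hash(password + K1_name)  # handed to the attacker as "key 1"

# --- attacker side: K2_name = K1_name plus the exact padding bytes ---
L = len(password) + len(K1_name)   # known
K2_name = K1_name + md_pad(L)

# resume hashing from K1 -- the password is never touched again
state = K1
tail = md_pad(L + len(md_pad(L)))  # final padding for the longer message
for i in range(0, len(tail), BLOCK):
    state = comp(state, tail[i:i + BLOCK])
K2_forged = state

# what the victim would hand out for K2_name
K2_real = toy_hash(password + K2_name)
```

Here `K2_forged` equals `K2_real`: exactly the "pass only affects K2 through K1" observation in the post, which is why constructions like HMAC, which don't expose the raw chaining value, are the usual fix.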