Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread John Kelsey

I guess I should preface this by saying I am speaking only for myself.  That's 
always true here--it's why I'm using my personal email address.  But in 
particular, right now, I'm not *allowed* to work.  But just speaking my own 
personal take on things...

We got pretty *overwhelming* feedback in this direction in the last three weeks.
(For the previous several months, we got almost no feedback about it at all,
despite giving presentations and posting stuff on the hash forum about our plans.)
 But since we're shut down right now, we can't actually make any decisions or 
changes.  This is really frustrating on all kinds of levels.

Personally, I have looked at the technical arguments against the change and I 
don't really find any of them very convincing, for reasons I described at some 
length on the hash forum list, and that the Keccak designers also laid out in 
their post.  The core of that is that an attacker who can't do 2^{128} work 
can't do anything at all to SHA3 with a 256 bit capacity that he couldn't also 
do to SHA3 with a 512 bit capacity, including finding preimages.  

But there's pretty much zero chance that we're going to put a standard out that 
most of the crypto community is uncomfortable with.  The normal process for a 
FIPS is that we would put out a draft and get 60 or 90 days of public comments. 
 As long as this issue is on the table, it's pretty obvious what the public 
comments would all be about.  

The place to go for current comments, if you think more are necessary, is the 
hash forum list.  The mailing list is still working, but I think both the 
archives and the process of being added to the list are frozen thanks to the 
shutdown.  I haven't looked at the hash forum since we shut down, so when we 
get back there will be a flood of comments there.  The last I saw, the Keccak 
designers had their own proposal for changing what we put into the FIPS, but I 
don't know what people think about their proposal. 

--John, definitely speaking only for myself
The cryptography mailing list

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread John Kelsey
On Oct 12, 2013, at 6:51 AM, Ben Laurie wrote:
 AIUI, you're trying to make it so that only active attacks work on the
 combined protocol, whereas passive attacks might work on the outer
 protocol. In order to achieve this, you assume that your proposed
 inner protocol is not vulnerable to passive attacks (I assume the
 outer protocol also thinks this is true). Why should we believe the
 inner protocol is any better than the outer one in this respect?

The point is, we don't know how to make protocols that really are reliably 
secure against future attacks.  If we did, we'd just do that. 

My hope is that if we layer two of our best attempts at secure protocols on top 
of one another, then we will get security because the attacks will be hard to 
get through the composed protocols.  So maybe my protocol (or whatever inner 
protocol ends up being selected) isn't secure against everything, but as long 
as its weaknesses are covered up by the outer protocol, we still get a secure 
final result.  

One requirement for this is that the inner protocol must not introduce new 
weaknesses.  I think that means it must not:

a.  Leak information about its plaintexts in its timing, error messages, or 
ciphertext sizes.  

b.  Introduce ambiguities about how the plaintext is to be decrypted that could 
mess up the outer protocol's authentication.  

I think we can accomplish (a) by not compressing the plaintext before 
processing it, by using crypto primitives that don't leak plaintext data in 
their timing, and by having the only error message that can ever be generated 
from the inner protocol be essentially a MAC failure or an out-of-sequence message error.

I think (b) is pretty easy to accomplish with standard crypto, but maybe I'm 
missing something.  

 Particularly since you're using tainted algorithms ;-).

If using AES or P256 are the weak points in the protocol, that is a big win.  
Right now, we aren't getting anywhere close to that.  And there's no reason 
either AES or P256 have to be used--I'm just looking for a simple, lightweight 
way to get as much security as possible inside some other protocol.  



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread John Kelsey
On Oct 11, 2013, at 1:48 AM, ianG wrote:

 What's your goal?  I would say you could do this if the goal was ultimate 
 security.  But for most purposes this is overkill (and I'd include online 
 banking, etc, in that).

We were talking about how hard it is to solve crypto protocol problems by 
getting the protocol right the first time, so we don't end up with fielded 
stuff that's weak but can't practically be fixed.  One approach I can see to 
this is to have multiple layers of crypto protocols that are as independent as 
possible in security terms.  The hope is that flaws in one protocol will 
usually not get through the other layer, and so they won't lead to practical 
security flaws.  

Actually getting the outer protocol right the first time would be better, but 
we haven't had great success with that so far. 

 Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  
 Adding another startup inside isn't likely to gain popularity.

Maybe not, though I think a very lightweight version of the inner protocol adds 
only a few bits to the traffic used and a few AES encryptions to the workload.  
I suspect most applications would never notice the difference.  (Even the 
version with the ECDH key agreement step would probably not add noticeable
overhead for most applications.)  On the other hand, I have no idea if anyone 
would use this.  I'm still at the level of thinking what could be done to 
address this problem, not how would you sell this?  



Re: [Cryptography] Key stretching

2013-10-11 Thread John Kelsey
This is a job for a key derivation function or a cryptographic prng.  I would 
use CTR-DRBG from 800-90 with AES256.  Or the extract-then-expand KDF from 800-56C, based on HMAC.
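For illustration, here is a minimal Python sketch of the extract-then-expand pattern using HMAC-SHA256 from the standard library; the function names are mine, and a real deployment would follow the standardized construction (RFC 5869 / SP 800-56C) exactly:

```python
import hmac
import hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Extract: concentrate the input keying material into a fixed-size PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Expand: stretch the PRK into as many output bytes as requested.
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def stretch_key(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    return hkdf_expand(hkdf_extract(salt, ikm), info, length)
```

Different `info` strings let you derive independent keys from one master secret.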


Re: [Cryptography] Iran and murder

2013-10-10 Thread John Kelsey
The problem with offensive cyberwarfare is that, given the imbalance between 
attackers and defenders and the expanding use of computer controls in all sorts 
of systems, a cyber war between two advanced countries will not decide anything 
militarily, but will leave both combatants much poorer than they were
previously, cause some death and a lot of hardship and bitterness, and leave 
the actual hot war to be fought. 

Imagine a conflict that starts with both countries wrecking a lot of each 
others' infrastructure--causing refineries to burn, factories to wreck 
expensive equipment, nuclear plants to melt down, etc.  A week later, that 
phase of the war is over.  Both countries are, at that point, probably 10-20%
poorer than they were a week earlier.  Both countries have lots of really 
bitter people out for blood, because someone they care about was killed or 
their job's gone and their house burned down or whatever.  But probably there's 
been little actual degradation of their standard war-fighting ability.  Their 
civilian aviation system may be shut down, some planes may even have been 
crashed, but their bombers and fighters and missiles are mostly still working.  
Fuel and spare parts may be hard to come by, but the military will certainly 
get first pick.  My guess is that what comes next is that the two countries 
have a standard hot war, but with the pleasant addition of a great-depression-sized economic collapse for both, right in the middle of it.


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:  

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.  
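As an illustrative sketch of step (c), assuming the ECDH exchange in steps (a) and (b) has already produced the shared point abG as bytes; SHAKE256 from Python's hashlib stands in for the "SHAKE512" above, and the names are mine:

```python
import hashlib

def derive_direction_keys(shared_point: bytes):
    # Step (c): hash the shared ECDH point into two 128-bit AES keys,
    # one per direction.  Both sides compute the same abG, so both
    # derive the same pair of keys with no key management at all.
    out = hashlib.shake_256(shared_point).digest(32)
    return out[:16], out[16:]  # (initiator->responder, responder->initiator)
```

Each side then uses its own sending key with AES-CCM and a sequence-number nonce, per step (d).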

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protocol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.)



Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Kelsey
Having a public bulletin board of posted emails, plus a protocol for 
anonymously finding the ones your key can decrypt, seems like a pretty decent 
architecture for prism-proof email.  The tricky bit of crypto is in making 
access to the bulletin board both efficient and private.  


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
More random thoughts:

The minimal inner protocol would be something like this:

Using AES-CCM with a tag size of 32 bits, IVs constructed based on an implicit 
counter, and an AES-CMAC-based KDF, we do the following:

a.  Generate random 128 bit value R
b.  Use the KDF to compute K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
c.  Sender's 32-bit unsigned counter C[S] starts at 0.
d.  Compute IV[S,0] = 96 bits of binary 0s||C[S]
e.  Send R, CCM(K[S],N[S],IV[S,0],sender_message[0])

a.  Receive R and derive K[S],N[S],K[R],N[R] from it as above.
b.  Set Receiver's counter C[R] = 0.
c.  Compute IV[R,0] = 96 bits of binary 0s||C[R]
d.  Send CCM(K[R],N[R],IV[R,0],receiver_message[0])

and so on.  
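A hedged Python sketch of the derivation and IV construction above. HMAC-SHA256 in counter mode stands in for the AES-CMAC-based KDF (the stdlib has no CMAC), and the field widths follow step (b):

```python
import hmac
import hashlib
import struct

def kdf(r: bytes, nbytes: int) -> bytes:
    # Stand-in for the AES-CMAC-based KDF: HMAC-SHA256 in counter mode.
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hmac.new(r, struct.pack(">I", counter), hashlib.sha256).digest()
        counter += 1
    return out[:nbytes]

def derive_session(r: bytes):
    # Step (b): split KDF(R, 128+96+128+96) into K[S], N[S], K[R], N[R].
    okm = kdf(r, (128 + 96 + 128 + 96) // 8)
    ks, ns = okm[:16], okm[16:28]
    kr, nr = okm[28:44], okm[44:56]
    return ks, ns, kr, nr

def iv(counter: int) -> bytes:
    # Steps (c)/(d): 96 bits of binary zeros followed by the 32-bit counter.
    return b"\x00" * 12 + struct.pack(">I", counter)
```

Both endpoints run `derive_session` on the same R from the first message, so nothing beyond R (128 bits) and the 32-bit CCM tags ever crosses the wire, matching the 160-bit/32-bit overhead figures below.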

Note that in this protocol, we never send a key or IV or nonce.  The total 
communications overhead of the inner protocol is an extra 160 bits in the first 
message and an extra 32 bits thereafter.  We're assuming the outer protocol is 
taking care of message ordering and guaranteed delivery--otherwise, we need to 
do something more complicated involving replay windows and such, and probably 
have to send along the message counters.  

This doesn't provide a huge amount of extra protection--if the attacker can 
recover more than a very small number of bits from the first message (attacking 
through the outer protocol), then the security of this protocol falls apart.  
But it does give us a bare-minimum-cost inner layer of defenses, inside TLS or 
SSH or whatever other thing we're doing.  

Both this and the previous protocol I sketched have the property that they 
expect to be able to generate random numbers.  There's a problem there, 
though--if the system RNG is weak or trapdoored, it could compromise both the 
inner and outer protocol at the same time.  

One way around this is to have each endpoint that uses the inner protocol 
generate its own internal secret AES key, Q[i].  Then, when it's time to 
generate a random value, the endpoint asks the system RNG for a random number 
X, and computes E_Q(X).  If the attacker knows Q but the system RNG is secure, 
we're fine.  Similarly, if the attacker can predict X but doesn't know Q, we're 
fine.  Even when the attacker can choose the value of X, he can really only 
force the random value in the beginning of the protocol to repeat.  In this 
protocol, that doesn't do much harm.  
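A minimal sketch of that hardening trick, with HMAC-SHA256 keyed by Q standing in for the AES encryption E_Q(X) (a PRF rather than a permutation, which is fine for this purpose); class and method names are illustrative:

```python
import hmac
import hashlib
import os

class HardenedRng:
    # Mix a per-endpoint secret Q into system RNG output, so the result is
    # predictable only to an attacker who both knows Q and can predict the
    # system RNG -- neither alone is enough.
    def __init__(self, q: bytes):
        self.q = q

    def random16(self) -> bytes:
        x = os.urandom(16)  # system RNG output X
        return hmac.new(self.q, x, hashlib.sha256).digest()[:16]  # ~E_Q(X)
```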

The same idea works for the ECDH protocol I sketched earlier.  I request two 
128 bit random values from the system RNG, X, X'.  I then use E_Q(X)||E_Q(X') 
as my ephemeral DH private key. If an attacker knows Q but the system RNG is 
secure, then we get an unpredictable value for the ECDH key agreement.  If an 
attacker knows X,X' but doesn't know Q, he doesn't know what my ECDH ephemeral 
private key is.  If he forces it to a repeated value, he still doesn't weaken 
anything except this run of the protocol--no long-term secret is leaked if AES 
isn't broken.  

This is subject to endless tweaking and improvement.  But the basic idea seems 
really valuable:  

a.  Design an inner protocol, whose job is to provide redundancy in security 
against attacks on the outer protocol.

b.  The inner protocol should be:

(i)  As cheap as possible in bandwidth and computational terms.

(ii) Flexible enough to be used extremely widely and implemented in most places.

(iii) Administratively free, adding no key management or related burdens.

(iv) Free from revisions or updates, because the whole point of the inner 
protocol is to provide redundant security.  (That's part of being administratively free.)

(v)  There should be one or at most two versions (maybe something like the two 
I've sketched, but better thought out and analyzed).

c.  As much as possible, we want the security of the inner protocol to be 
independent of the security of the outer protocol.  (And we want this without 
wanting to know exactly what the outer protocol will look like.)  This means:

(i)  No shared keys or key material or identity strings or anything.

(ii) The inner protocol can't rely on the RNG being good.

(iii) Ideally, the crypto algorithms would be different, though that may impose 
too high a cost.  At least, we want as many of the likely failure modes as possible to be different.

Comments?  I'm not all that concerned with the protocol being perfect, but what 
do you think of the idea of doing this as a way to add redundant security 
against protocol attacks?  



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:15 PM, Richard Outerbridge wrote:
 How does this prevent MITM?  Where does G come from?

I'm assuming G is a systemwide shared parameter.  It doesn't prevent 
MITM--remember the idea here is to make a fairly lightweight protocol to run 
*inside* another crypto protocol like TLS.  The inner protocol mustn't add 
administrative requirements to the application, which means it can't need key 
management from some administrator or something.  The goal is to have an inner 
protocol which can run inside TLS or some similar thing, and which adds a layer 
of added security without the application getting more complicated by needing 
to worry about more keys or certificates or whatever.  

Suppose we have this inner protocol running inside a TLS version that is 
subject to one of the CBC padding reaction attacks.  The inner protocol 
completely blocks that.  

 I'm also leery of using literally the same key in both directions.  Maybe a 
 simple transform would suffice; maybe not.

I probably wasn't clear in my writeup, but my idea was to have different keys 
in different directions--there is a NIST KDF that uses only AES as its crypto 
engine, so this is relatively easy to do using standard components.  


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:20 PM, Ray Dillinger wrote:

 On 10/10/2013 12:54 PM, John Kelsey wrote:
 Having a public bulletin board of posted emails, plus a protocol 
 for anonymously finding the ones your key can decrypt, seems 
 like a pretty decent architecture for prism-proof email.  The 
 tricky bit of crypto is in making access to the bulletin board 
 both efficient and private.  
 Wrong on both counts, I think.  If you make access private, you
 generate metadata because nobody can get at mail other than their
 own.  If you make access efficient, you generate metadata because
 you're avoiding the wasted bandwidth that would otherwise prevent
 the generation of metadata. Encryption is sufficient privacy, and
 efficiency actively works against the purpose of privacy.

So the original idea was to send a copy of all the emails to everyone.  What 
I'm wanting to figure out is if there is a way to do this more efficiently, 
using a public bulletin-board-like scheme.  The goal here would be:

a.  Anyone in the system can add an email to the bulletin board, which I am 
assuming is public and cryptographically protected (using a hash chain to make 
it impossible for even the owner of the bulletin board to alter things once posted).

b.  Anyone can run a protocol with the bulletin board which results in them 
getting only the encrypted emails addressed to them, and prevents the bulletin 
board operator from finding out which emails they got.
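A toy Python sketch of the hash-chained board in (a); names are illustrative, and the anonymous-retrieval protocol in (b) is the genuinely hard part and isn't shown:

```python
import hashlib

class BulletinBoard:
    # Append-only log: each entry's chain hash covers the previous chain
    # hash, so the operator can't silently alter or drop earlier postings
    # without invalidating everything that came after.
    def __init__(self):
        self.entries = []          # list of (ciphertext, chain_hash)
        self.head = b"\x00" * 32   # the chain starts from a fixed value

    def post(self, ciphertext: bytes) -> bytes:
        self.head = hashlib.sha256(self.head + ciphertext).digest()
        self.entries.append((ciphertext, self.head))
        return self.head

    def verify(self) -> bool:
        h = b"\x00" * 32
        for ct, stored in self.entries:
            h = hashlib.sha256(h + ct).digest()
            if h != stored:
                return False
        return True
```

Anyone who remembers a recent head value can later check that the history leading up to it hasn't been rewritten.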

This sounds like something that some clever crypto protocol could do.  (It's 
related to the idea of searching on encrypted data.)  And it would make an 
email system that was really resistant to tracing users.  



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread John Kelsey
On Oct 8, 2013, at 4:46 PM, Bill Frantz wrote:

 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

We know how to address one part of this problem--choose only algorithms whose 
design strength is large enough that there's not some relatively close by time 
when the algorithms will need to be swapped out.  That's not all that big a 
problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not Moore's Law.  
Really, even with 128-bit security level primitives, it will be a very long 
time until the brute-force attacks are a concern.  

This is actually one thing we're kind-of on the road to doing right in 
standards now--we're moving away from barely-strong-enough crypto and toward 
crypto that's going to be strong for a long time to come. 

Protocol attacks are harder, because while we can choose a key length, modulus 
size, or sponge capacity to support a known security level, it's not so easy to 
make sure that a protocol doesn't have some kind of attack in it.  

I think we've learned a lot about what can go wrong with protocols, and we can 
design them to be more ironclad than in the past, but we still can't guarantee 
we won't need to upgrade.  But I think this is an area that would be 
interesting to explore--what would need to happen in order to get more ironclad 
protocols?  A couple random thoughts:

a.  Layering secure protocols on top of one another might provide some 
redundancy, so that a flaw in one didn't undermine the security of the whole system.

b.  There are some principles we can apply that will make protocols harder to 
attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is 
allowed to change its execution path or timing based on the key or 
plaintext, every message includes a sequence number and the hash of the 
previous message, etc.  This won't eliminate protocol attacks, but will make 
them less common.

c.  We could try to treat at least some kinds of protocols more like crypto 
algorithms, and expect to have them widely vetted before use.  

What else?  
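As one concrete instance of principle (b), a hedged Python sketch of encrypt-then-MAC with a sequence number and the previous message's hash bound under the MAC; a SHAKE-derived keystream stands in for a real cipher, and all names are illustrative:

```python
import hmac
import hashlib
import struct

def protect(enc_key: bytes, mac_key: bytes, seq: int, prev_hash: bytes,
            plaintext: bytes) -> bytes:
    # Encrypt, then MAC over (sequence number || previous-message hash ||
    # ciphertext), so reordering, replay, and splicing all break the tag.
    stream = hashlib.shake_256(enc_key + struct.pack(">Q", seq)).digest(len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, stream))
    header = struct.pack(">Q", seq) + prev_hash        # 8 + 32 bytes
    tag = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    return header + ct + tag

def check(enc_key, mac_key, expected_seq, expected_prev, msg):
    header, ct, tag = msg[:40], msg[40:-32], msg[-32:]
    good = hmac.compare_digest(
        tag, hmac.new(mac_key, header + ct, hashlib.sha256).digest())
    seq = struct.unpack(">Q", header[:8])[0]
    if not good or seq != expected_seq or header[8:] != expected_prev:
        return None  # single failure mode: reject, no reaction oracle
    stream = hashlib.shake_256(enc_key + struct.pack(">Q", seq)).digest(len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

Because the MAC is checked before any decryption and there is only one rejection path, reaction attacks of the CBC-padding variety have nothing to latch onto.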

 Perhaps the shortest limit on the lifetime of an embedded system is the 
 security protocol, and not the hardware. If so, how do we as society deal 
 with this limit.

What we really need is some way to enforce protocol upgrades over time.  
Ideally, there would be some notion that if you support version X of the 
protocol, this meant that you would not support any version lower than, say, 
X-2.  But I'm not sure how practical that is.  

 Cheers -- Bill


Re: [Cryptography] Sha3

2013-10-07 Thread John Kelsey
On Oct 6, 2013, at 6:29 PM, Jerry Leichter wrote:

 On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
 I have to take issue with this:
 The security is not reduced by adding these suffixes, as this is only
 restricting the input space compared to the original Keccak. If there
 is no security problem on Keccak(M), there is no security problem on
 Keccak(M|suffix), as the latter is included in the former.
 I also found the argument here unconvincing.  After all, Keccak restricted to 
 the set of strings of the form M|suffix reveals that its input ends with 
 suffix, which the original Keccak did not.  The problem is with the vague 
 nature of "no security problem."

They are talking about the change to their padding scheme, in which between 2 
and 4 bits of extra padding are added to the padding scheme that was originally 
proposed for SHA3.  A hash function that works by processing r bits at a time 
till the whole message is processed (every hash function I can think of works 
like this) has to have a padding scheme, so that when someone tries to hash 
some message that's not a multiple of r bits long, the message gets padded out 
to r bits.  

The only security relevance of the padding scheme is that it has to be 
invertible--given the padded string, there must always be exactly one input 
string that could have led to that padded string.  If it isn't invertible, then 
the padding scheme would introduce collisions.  For example, if your padding 
scheme was "append zeros until you get the message out to a multiple of r 
bits," I could get collisions on your hash function by taking some message that 
was not a multiple of r bits and appending one or more zeros to it.  Just 
appending a single one bit, followed by as many zeros as are needed to get to a 
multiple of r bits, makes a fine padding scheme, so long as the one bit is 
appended to *every* message, even those which start out a multiple of r bits 
long.
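To make the invertibility point concrete, a toy sketch at byte (rather than bit) granularity, with a 0x80 byte standing in for the single one bit:

```python
def pad_zeros(msg: bytes, r: int) -> bytes:
    # Broken scheme: append zeros until the length is a multiple of r.
    # Not invertible -- trailing zeros of the message are indistinguishable
    # from padding, so distinct messages pad to the same string.
    return msg + b"\x00" * (-len(msg) % r)

def pad_10star(msg: bytes, r: int) -> bytes:
    # Sound scheme: always append a single one bit (here a 0x80 byte),
    # then zeros out to a multiple of r -- even for aligned messages.
    padded = msg + b"\x80"
    return padded + b"\x00" * (-len(padded) % r)
```

The padding-induced collision in the broken scheme, and its absence in the sound one, can be checked directly on short inputs.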

The Keccak team proposed adding a few extra bits to their padding, to add 
support for tree hashing and to distinguish different fixed-length hash 
functions that used the same capacity internally.  They really just need to 
argue that they haven't somehow broken the padding so that it is no longer invertible.

They're making this argument by pointing out that you could simply stick the 
fixed extra padding bits on the end of a message you processed with the 
original Keccak spec, and you would get the same result as what they are doing. 
 So if there is any problem introduced by sticking those extra bits at the end 
of the message before doing the old padding scheme, an attacker could have 
caused that same problem on the original Keccak by just sticking those extra 
bits on the end of messages before processing them with Keccak.  



[Cryptography] Iran and murder

2013-10-07 Thread John Kelsey
Alongside Phillip's comments, I'll just point out that assassination of key 
people is a tactic that the US and Israel probably don't have any particular 
advantages in.  It isn't in our interests to encourage a worldwide tacit 
acceptance of that stuff.  

I suspect a lot of the broad principles we have been pushing (assassinations 
and drone bombings can be done anywhere, cyber attacks against foreign 
countries are okay when you're not at war, spying on everyone everywhere is 
perfectly acceptable policy) are in the short-term interests of various 
powerful people and factions in the US, but are absolutely horrible ideas when 
you consider the long-term interests of the US.  We are a big, rich, relatively 
free country with lots of government scientists and engineers (especially when 
you consider funding) and tons of our economy and our society moving online.  
We are more vulnerable to widespread acceptance of these bad principles than 
almost anyone, ultimately.  But doing all these things has won larger budgets 
and temporary successes for specific people and agencies today, whereas the 
costs of all this will land on us all in the future.  



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-06 Thread John Kelsey
One thing that seems clear to me:  When you talk about algorithm flexibility in 
a protocol or product, most people think you are talking about the ability to 
add algorithms.  Really, you are talking more about the ability to *remove* 
algorithms.  We still have stuff using MD5 and RC4 (and we'll probably have 
stuff using dual ec drbg years from now) because while our standards have lots 
of options and it's usually easy to add new ones, it's very hard to take any of them out.


Re: [Cryptography] Sha3 and selecting algorithms for speed

2013-10-05 Thread John Kelsey
Most applications of crypto shouldn't care much about performance of the 
symmetric crypto, as that's never the thing that matters for slowing things 
down.  But performance continues to matter in competitions and algorithm 
selection for at least three reasons:

a.  We can measure performance, whereas security is very hard to measure.  
There are a gazillion ways to measure performance, but each one gives you an 
actual set of numbers.  Deciding whether JH or Grøstl is more likely to fall to 
cryptanalytic attack in its lifetime is an exercise in reading lots of papers, 
extrapolating, and reading tea leaves.

b.  There are low-end environments where performance really does matter.  Those 
often have rather different properties than other environments--for example, 
RAM or ROM (for program code and S-boxes) may be at a premium.  

c.  There are environments where someone is doing a whole lot of symmetric 
crypto at once--managing the crypto for lots of different connections, say.  In 
that case, your symmetric algorithm's speed may also have a practical impact.  
(Though it's still likely to be swamped by your public key algorithms.)   


[Cryptography] Performance vs security

2013-10-05 Thread John Kelsey
There are specific algorithms where you have a pretty clear-cut 
security/performance tradeoff.  RSA and ECC both give you some choice of 
security level that has a big impact in terms of performance.  AES and SHA2 and 
eventually SHA3 offer you some security level choices, but the difference in 
performance between them is relatively unimportant in most applications.  
Probably the coolest thing about Keccak's capacity parameter is that it gives 
you an understandable performance/security tradeoff, but the difference in 
performance between c=256 and c=512 will probably not be noticeable in 99% of applications.

Then there are algorithms that give you higher performance at the cost of more 
fragility.  The example I can think of here is GCM, which gives you a pretty 
fast authenticated encryption mode, but which really loses security in a hurry 
if you reuse an IV.

It seems like these two kinds of security/performance tradeoffs belong in 
different categories, somehow.  



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread John Kelsey
On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker wrote:
 Dobertin demonstrated a birthday attack on MD5 back in 1995 but it had no 
 impact on the security of certificates issued using MD5 until the attack was 
 dramatically improved and the second pre-image attack became feasible.

Just a couple nitpicks: 

a.  Dobbertin wasn't doing a birthday (brute force collision) attack, but 
rather a collision attack from a chosen IV.  

b.  Preimages with MD5 still are not practical.  What is practical is using the 
very efficient modern collision attacks to do a kind of herding attack, where 
you commit to one hash and later get some choice about which message gives that hash.

 Proofs are good for getting tenure. They produce papers that are very 

There are certainly papers whose only practical importance is getting a smart 
cryptographer tenure somewhere, and many of those involve proofs.  But there's 
also a lot of value in being able to look at a moderately complicated thing, 
like a hash function construction or a block cipher chaining mode, and show 
that the only way anything can go wrong with that construction is if some 
underlying cryptographic object has a flaw.  Smart people have proposed 
chaining modes that could be broken even when used with a strong block cipher.  
You can hope that security proofs will keep us from doing that.  

Now, sometimes the proofs are wrong, and almost always, they involve a lot of 
simplification of reality (like most proofs aren't going to take low-entropy 
RNG outputs into account).  But they still seem pretty valuable to me for 
real-world things.  Among other things, they give you a completely different 
way of looking at the security of a real-world thing, with different people 
looking over the proof and trying to attack things.  


Re: [Cryptography] Sha3

2013-10-05 Thread John Kelsey


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-02 Thread John Kelsey
On Oct 1, 2013, at 5:58 PM, Peter Fairbrother wrote:

 AES, the latest-and-greatest block cipher, comes in two main forms - AES-128 
 and AES-256.
 AES-256 is supposed to have a brute force work factor of 2^256  - but we find 
 that in fact it actually has a very similar work factor to that of AES-128, 
 due to bad subkey scheduling.
 Thing is, that bad subkey scheduling was introduced by NIST ... after 
 Rijndael, which won the open block cipher competition with what seems to be 
 all-the-way good scheduling, was transformed into AES by NIST.

What on Earth are you talking about?  AES' key schedule wasn't designed by 
NIST.  The only change NIST made to Rijndael was not including some of the 
alternative block sizes.  You can go look up the old Rijndael specs online if 
you want to verify this.

 -- Peter Fairbrother



Re: [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-02 Thread John Kelsey

On Oct 1, 2013, at 12:51 PM, Adam Back wrote:

[Discussing how NSA might have generated weak curves via trying many choices 
till they hit a weak-curve class that only they knew how to solve.]
 But the more interesting question I was referring to is a trapdoor weakness
 with a weak proof of fairness (ie a fairness that looks like the one in FIPS
 186-3/ECDSA where we dont know how much grinding if any went into the magic
 seed values).  For illustration though not applicable to ECDSA and probably
 outright defective eg can they start with some large number of candidate G
 values where G=xH (ie knowing the EC discrete log of some value H they pass
 off as a random fairly chosen point) and then do a birthday collision
 between the selection of G values and different seed values to a PRNG to find
 a G value that they have both a discrete log of wrt H and a PRNG seed. 
 Bearing in mind they may be willing to throw custom ASIC or FPGA
 supercomputer hardware and $1bil budget at the problem as a one-off cost.

This general idea is a nice one.  It's basically a way of using Merkle's 
puzzles to build a private key into a cryptosystem.  But I think in general, 
you are going to have to do work equal to the security level of the thing 
you're trying to backdoor.  You have to break it once at its full security 
level, and then you get to amortize that break forever.  (Isn't there something 
like this you can do for discrete logs in general, though?)  

Consider Dual EC DRBG.  You need a P, Q such that you know x that solves xP = 
Q, over (say) P-224.  So, you arbitrarily choose G = a generator for the group, 
and a scalar z, and then compute for

 j = 1 to 2^{112}:
T[j] = jz G

Now, you have 2^{112} values in a group of 2^{224} values, right?  So with 
about another 2^{113} work, you can hit one of those with two arbitrary seeds, 
and you'll know the relationship between them.  
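The grinding idea above can be sketched at toy scale.  The multiplicative group mod a small prime stands in for the P-224 curve group, a hash stands in for the seed-to-parameter derivation, and all constants are illustrative, not from the post:

```python
import hashlib

# Toy model of the tabulate-then-grind attack described above.
p = 10007                     # toy group modulus (P-224 in the real scenario)
g = 5                         # stand-in for the generator G
z = 1234                      # scalar known only to the attacker

# Phase 1: tabulate points whose discrete log (j*z wrt g) the attacker knows.
table = {pow(g, j * z, p): j for j in range(1, 200)}

# Phase 2: grind seeds through a hash (the "verifiably random" derivation)
# until a derived point lands in the table.
def seed_to_point(seed: int) -> int:
    h = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    return pow(g, int.from_bytes(h, "big") % (p - 1), p)

found = False
for seed in range(100_000):
    point = seed_to_point(seed)
    if point in table:
        found = True          # attacker publishes (seed, point): the seed
        break                 # looks fairly chosen, but the dlog is known
```

At real parameter sizes both phases cost on the order of 2^{112} group operations, which is the point of the security-level argument that follows.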

But this takes a total of about 2^{113} work, so it's above the claimed security 
level of P-224.  I suspect this would be more useful for something at the 80 
bit security level--a really resourceful attacker could probably do a 2^{80} 



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread John Kelsey
Has anyone tried to systematically look at what has led to previous crypto 
failures?  That would inform us about where we need to be adding armor plate.  
My impression (this may be the availability heuristic at work) is that:

a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.  

b.  Overemphasis on performance (because it's measurable and security usually 
isn't) plays really badly with having stuff be impossible to get out of the 
field when it's in use.  Think of RC4 and DES and MD5 as examples.  

c.  The ways I can see to avoid problems with crypto primitives are:

(1)  Overdesign against cryptanalysis (have lots of rounds)

(2)  Overdesign in security parameters (support only high security levels, use 
bigger than required RSA keys, etc.) 

(3)  Don't accept anything without a proof reducing the security of the whole 
thing down to something overdesigned in the sense of (1) or (2).


Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread John Kelsey
On Oct 2, 2013, at 9:54 AM, Paul Crowley wrote:

 On 30 September 2013 23:35, John Kelsey wrote:
 If there is a weak curve class of greater than about 2^{80} that NSA knew 
 about 15 years ago and were sure nobody was ever going to find that weak 
 curve class and exploit it to break classified communications protected by 
 it, then they could have generated 2^{80} or so seeds to hit that weak curve 
 If the NSA's attack involves generating some sort of collision between a 
 curve and something else over a 160-bit space, they wouldn't have to be 
 worried that someone else would find and attack that weak curve class with 
 less than 2^160 work.

I don't know enough about elliptic curves to have an intelligent opinion on 
whether this is possible.  Has anyone worked out a way to do this?  

The big question is how much work would have had to be done.  If you're talking 
about a birthday collision on the curve parameters, is that a collision on a 
160 bit value, or on a 224 or 256 or 384 or 512 bit value?  I can believe NSA 
doing a 2^{80} search 15 years ago, but I think it would have had to be a top 
priority.  There is no way they were doing 2^{112} searches 15 years ago, as 
far as I can see.


Re: [Cryptography] NIST about to weaken SHA3?

2013-10-01 Thread John Kelsey
On Oct 1, 2013, at 4:48 AM, ianG wrote:
 This could be the uninformed opinion over unexpected changes.  It could also 
 be the truth.  How then to differentiate?
 Do we need to adjust the competition process for a tweak phase?
 Let's whiteboard.  Once The One is chosen, have a single round + conference 
 where each of the final contestants propose their optimised version.  They 
 then vote on the choice.

I like the general idea here, but I suspect a vote at the end of a conference 
isn't going to yield great results.  I'd hate to see something the designers 
opposed get adopted because they were outvoted by (say) a larger team.

 (OK, we can imagine many ways to do this ... point being that if NIST are 
 going to tweak the SHA3 then we need to create a way for them to do this, and 
 have that tweaking be under the control of the submitters, not NIST itself.  
 In order to maintain the faith of the result.)

The Keccak designers proposed reducing the capacity.  You can find public 
statements about this online, including in the slides on their website.  Also, 
the capacity is a parameter defined in the standard to allow an easy to 
understand performance/security tradeoff.  Setting c=256 gives an across the 
board security level of 128 bits, if you believe the underlying Keccak 
permutation is good.  

The actual technical question is whether an across the board 128 bit security 
level is sufficient for a hash function with a 256 bit output.  This weakens 
the proposed SHA3-256 relative to SHA256 in preimage resistance, where SHA256 
is expected to provide 256 bits of preimage resistance.  If you think that 256 
bit hash functions (which are normally used to achieve a 128 bit security 
level) should guarantee 256 bits of preimage resistance, then you should oppose 
the plan to reduce the capacity to 256 bits.  If you think a 256 bit hash 
function should only promise 128 bits of security, except in specific 
applications like keyed hashes where it has been analyzed specifically and 
shown to get more, then you should (at least on technical grounds) like the 
proposal to reduce the capacity to 256 bits for a 256-bit hash output.
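The parameter arithmetic behind that tradeoff is simple enough to write down.  This is a sketch assuming Keccak's 1600-bit permutation and the flat security claim (c/2 bits across the board); the helper name is mine, not from any standard:

```python
# Sponge parameter arithmetic for Keccak's 1600-bit permutation: the rate
# r = 1600 - c is absorbed per permutation call, and under the flat claim
# the across-the-board security level is c/2 bits.
STATE_BITS = 1600

def sponge_params(capacity: int) -> dict:
    return {"capacity": capacity,
            "rate": STATE_BITS - capacity,    # throughput per permutation call
            "flat_security": capacity // 2}   # security level in bits

proposed = sponge_params(256)       # proposed SHA3-256: 128-bit flat claim
conservative = sponge_params(512)   # c = 512: 256-bit preimage claim
```

The rate numbers (1344 vs. 1088 bits per permutation call) are where the performance argument for the smaller capacity comes from.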


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread John Kelsey
GOST was specified with S boxes that could be different for different 
applications, and you could choose s boxes to make GOST quite weak.  So that's 
one example.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread John Kelsey
Having read the mail you linked to, it doesn't say the curves weren't generated 
according to the claimed procedure.  Instead, it repeats Dan Bernstein's 
comment that the seed looks random, and that this would have allowed NSA to 
generate lots of curves till they found a bad one.  

It looks to me like there is no new information here, and no evidence of 
wrongdoing that I can see.  If there is a weak curve class of greater than 
about 2^{80} that NSA knew about 15 years ago and were sure nobody was ever 
going to find that weak curve class and exploit it to break classified 
communications protected by it, then they could have generated 2^{80} or so 
seeds to hit that weak curve class.  

What am I missing?  Do you have evidence that the NIST curves are cooked?  
Because the message I saw didn't provide anything like that.  


[Cryptography] Sha3

2013-09-30 Thread John Kelsey
If you want to understand what's going on wrt SHA3, you might want to look at 
the nist website, where we have all the slide presentations we have been giving 
over the last six months detailing our plans.  There is a lively discussion 
going on at the hash forum on the topic.  

This doesn't make as good a story as the new sha3 being some hell spawn cooked 
up in a basement at Fort Meade, but it does have the advantage that it has some 
connection to reality.

You might also want to look at what the Keccak designers said about what the 
capacities should be, to us (they put their slides up) and later to various 
crypto conferences.  

Or not.  


Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-25 Thread John Kelsey
On Sep 25, 2013, at 2:52 AM, james hughes wrote:

 Many, if not all, service providers can provide the government valuable 
 information regarding their customers. This is not limited to internet 
 service providers. It includes banks, health care providers, insurance 
 companies, airline companies, hotels, local coffee shops, book sellers, etc. 
 where providing a service results in personal information being exchanged. 
 The US has no corner on the ability to get information from almost any type 
 of service provider. This is the system that the entire world uses, and 
 should not be our focus.

There are many places where there is no way to provide the service without 
having access to the data, and probably storing it.  For those places, we are 
stuck with legal and professional and business safeguards.  Your doctor should 
take notes when you see him, and can be compelled to give those notes up if he 
can access them to (for example) respond to a phone call asking to refill your 
medications.  There are rather complicated mechanisms you can imagine to 
protect your privacy in this situation, but it's hard to imagine them working 
well in practice.  For that situation, what we want is that the access to the 
information is transparent--the doctor can be compelled to give out information 
about his patients, but not without his knowledge, and ideally not without your 

But there are a lot of services which do not require that the providers have or 
collect information about you.  Cloud storage and email services don't need to 
have access to the plaintext data you are storing or sending with them.  If 
they have that information, they are subject to being forced to share it with a 
government, or deciding to share it with someone for their own business 
reasons, or having a dishonest employee steal it.  If they don't have that 
information because their service is designed so they don't have it, then they 
can't be forced to share it--whether with the FBI or the Bahraini government or 
with their biggest advertiser.  No change of management or policy or  law can 
make them change it.  

Right now, there is a lot of interest in finding ways to avoid NSA 
surveillance.  In particular, Germans and Brazilians and Koreans would 
presumably rather not have their data made freely available to the US 
government under what appear to be no restrictions at all.  If US companies 
would like to keep the business of Germans and Brazilians and Koreans, they 
probably need to work out a way to convincingly show that they will safeguard 
that data even from the US government.   


Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-24 Thread John Kelsey
On Sep 18, 2013, at 3:27 PM, Kent Borg wrote:

 You foreigners actually have a really big vote here.  All those US internet 
 companies want your business, and as you get no protections, in the current 
 scheme, not even lip-service, you should look for alternatives.  As you do, 
 this puts pressure on the US internet companies, and they have the economic 
 clout to put pressure on Feinstein and Pelosi and all the others.

This does not go far enough.  The US government is not the only one inclined to 
steal information which it can reach, either because the information goes over 
wires the government can listen in on, or because the companies handling the 
data can be compelled or convinced to hand it over.  Right now, we're seeing 
leaks that confirm the serious efforts of one government to do this stuff, but 
it is absolutely silly to think the US is the only one doing it.  

The right way to address this is to eliminate the need to trust almost anyone 
with your data.  If Google[1] has all your cleartext documents and emails, they 
can be compelled to turn them over, or they can decide to look at them for 
business reasons, or they can be infiltrated by employees or contractors who 
look at those emails and documents.  You are trusting a lot of people, and 
trusting a company to possibly behave against its economic interests and legal 
obligations, to safeguard your privacy.  If they have encrypted data only, you 
don't have to trust them.  

It needs to be in their business interest to convince you that they *can't* 
betray you in most ways.  



[1] I'm not picking on Google in particular--any US company may be compelled to 
turn over data they have.  I imagine the same is true of any German or Korean 
or Brazilian company, but I don't know the laws in those places.  

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-22 Thread John Kelsey
On Sep 19, 2013, at 5:21 PM, Phillip Hallam-Baker wrote:

  Criminals circumvent the WebPKI rather than trying to defeat it. If they did 
 start breaking the WebPKI then we can change it and do something different.

If criminals circumvent the PKI to steal credit card numbers, this shows up as 
fraud and is noticed without any need for a Snowden.  Eavesdropping doesn't 
show up in such an obvious way.  

 But financial transactions are easier than protecting the privacy of 
 political speech because it is only money that is at stake. The criminals are 
 not interested in spending $X to steal $0.5X. We can do other stuff to raise 
 the cost of attack if it turns out we need to do that.

Also, criminals find it harder to spend a few million up front before they get 
the first payoff.  Nor can they appeal to patriotism or compel compliance via 
the law.  

 If we want this to be a global infrastructure we have 2.4 billion users to 
 support. If we spend $0.01 per user on support, that is $24 million. It is 
 likely to be a lot more than that per user.

It has to pay for itself ultimately, at least as well as email does. 


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread John Kelsey
On Sep 17, 2013, at 11:41 AM, Perry E. Metzger wrote:

 I confess I'm not sure what the current state of research is on MAC
 then Encrypt vs. Encrypt then MAC -- you may want to check on that.

Encrypt then MAC has a couple of big advantages centering around the idea that 
you don't have to worry about reaction attacks, where I send you a possibly 
malformed ciphertext and your response (error message, acceptance, or even time 
differences in when you send an error message) tells me something about your 
secret internal state.  



Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread John Kelsey
For hash functions, MACs, and signature schemes, simply concatenating 
hashes/MACs/signatures gives you at least the security of the stronger one.  
Joux multicollisions simply tell us that concatenating two or more hashes of 
the same size doesn't improve their resistance to brute force collision search 
much.  The only thing you have to be sure of there is that the MAC and 
signature functions aren't allowed access to each others' secret keys or 
internal random numbers.  Otherwise, MAC#1 can always take the value of MAC#2's 
key.  This is just

message, signature 1, signature 2

where the signatures are over the message only.  
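The concatenation construction can be sketched with two HMACs standing in for two independently designed MACs (a hypothetical helper, not anything from the post; the key-independence requirement is the part that matters):

```python
import hmac, hashlib

# Concatenating two MACs computed with independent keys: the pair is at
# least as strong as the stronger of the two, provided neither algorithm
# can see the other's key.  HMAC-SHA256 and HMAC-SHA512 stand in for two
# independent MAC designs.
def dual_mac(key1: bytes, key2: bytes, message: bytes) -> bytes:
    tag1 = hmac.new(key1, message, hashlib.sha256).digest()
    tag2 = hmac.new(key2, message, hashlib.sha512).digest()
    return tag1 + tag2            # message, tag 1, tag 2 -- over message only

def dual_mac_verify(key1: bytes, key2: bytes, message: bytes,
                    tag: bytes) -> bool:
    return hmac.compare_digest(dual_mac(key1, key2, message), tag)
```

The keys must be generated independently; deriving key2 from key1 would let a break of one design threaten both tags.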

For encryption algorithms, superencryption works fine.  You can first encrypt 
with AES-CBC, then encrypt with Twofish-CFB, then with CAST5 in CFB mode.  
Again, assuming you are not letting the algorithms know each others' internal 
state or keys, if any of these algorithms are resistant to chosen plaintext 
attacks, then the combination will be.  This doesn't guarantee that the 
combination will be any stronger than the strongest of these, but it will be no 
weaker.  (Biham and later Wagner had some clever attacks against some chaining 
modes using single-DES that showed that you wouldn't always get anything 
stronger than one of the ciphers, but if any of these layers is strong, then 
the whole encryption is strong.  

An alternative approach is to construct a single super-block-cipher, say 
AES*Twofish*SERPENT, and use it in a standard chaining mode.  However, then you 
are still vulnerable to problems with your chaining mode--the CBC reaction 
attacks could still defeat a system that used AES*Twofish*SERPENT in CBC mode, 
but not AES-CBC followed by Twofish-CFB followed by SERPENT-CTR.  

For key-encryption or transport, I think it's a little more complicated.  If I 
need six symmetric keys and want to use three public key methods (say ECDH, 
NTRU, RSA) to transport the key, I've got to figure out a way to get the 
benefit from all these key exchange mechanisms to all six symmetric keys, in a 
way that I'm sure will not leak any information about any of them.  Normally we 
would use a KDF for this, but we don't want to trust any one crypto algorithm 
not to screw us over.  

I think we can get this if we assume that we can have multiple KDFs that have 
secrets from one another.  That is, I compute 

KDF1( key1, combined key exchange input) XOR KDF2( key2, combined key exchange 
input)

The reason the two KDFs need keys that are secret from each other is because 
otherwise, KDF1 could just duplicate KDF2 and we'd get an all-zero set of keys. 
 If KDF2 is strong, then KDF1 can't generate an output string with any 
relationship to KDF2's string when it doesn't know all the input KDF2 is 
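The XOR combiner can be sketched as follows.  Both KDFs are HMAC-SHA256 here, with distinct labels standing in for two independent designs; the function names are mine, not from the post:

```python
import hmac, hashlib

# Combiner sketch: two KDFs keyed with secrets hidden from each other,
# outputs XORed.  If either KDF is strong, the combined keys are strong.
def kdf(key: bytes, ikm: bytes, length: int, label: bytes) -> bytes:
    out = b""
    counter = 1
    while len(out) < length:
        out += hmac.new(key, label + ikm + bytes([counter]),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def combined_kdf(key1: bytes, key2: bytes, exchange_input: bytes,
                 length: int) -> bytes:
    # key1 and key2 must be secret from each other's KDF, or one KDF could
    # mimic the other and cancel the XOR to an all-zero key block.
    s1 = kdf(key1, exchange_input, length, b"KDF1")
    s2 = kdf(key2, exchange_input, length, b"KDF2")
    return bytes(a ^ b for a, b in zip(s1, s2))
```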

I agree with Perry that this is probably padlocking a screen door.  On the 
other hand, if we want to do it, we want to make sure we guard against as many 
bad things as possible.  In particular, it would be easy to do this in such a 
way that we missed chaining mode/reaction type attacks.  


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread John Kelsey
Arggh!  Of course, this superencryption wouldn't help against the CBC padding 
attacks, because the attacker would learn plaintext without bothering with the 
other layers of encryption.  The only way to solve that is to preprocess the 
plaintext in some way that takes the attacker's power to induce a timing 
difference or error message away.  


Re: [Cryptography] real random numbers

2013-09-14 Thread John Kelsey
Your first two categories are talking about the distribution of entropy--we 
assume some unpredictability exists, and we want to quantify it in terms of 
bits of entropy per bit of output.  That's a useful distinction to make, and as 
you said, if you can get even a little entropy per bit and know how much you're 
getting, you can get something very close to ideal random bits out.
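That collect-then-compress step can be sketched as follows.  The per-bit entropy rate is an assumed, externally estimated input (getting that estimate right is the hard part, and this sketch takes it on faith):

```python
import hashlib

# Conditioning sketch: collect raw samples until the pool's estimated
# entropy covers the output size, then compress with a hash to get
# near-ideal random bits.
def extract(raw_samples: bytes, entropy_per_bit: float,
            out_bits: int = 256) -> bytes:
    estimated = len(raw_samples) * 8 * entropy_per_bit
    if estimated < out_bits:
        raise ValueError("not enough estimated entropy collected yet")
    return hashlib.sha256(raw_samples).digest()   # 256 conditioned bits
```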

Your second two categories are talking about different kinds of 
sources--completely deterministic, or things that can have randomness but don't 
always.  That leaves out sources that always have a particular amount of 
entropy (or at least are always expected to!).  

I'd say even the squish category can be useful in two ways:

a.  If you have sensible mechanisms for collecting entropy, they can't hurt and 
sometimes help.  For example, if you sample an external clock, most of the 
time, the answer may be deterministic, but once in a while, you may get some 
actual entropy, in the sense that the clock drift is sufficient that the 
sampled value could have one of two values, and an attacker can't know which.  

b.  If you sample enough squishes, you may accumulate a lot of entropy.  Some 
ring oscillator designs are built like this, hoping to occasionally sample the 
transition in value on one of the oscillators.  The idea is that the rest of 
the behavior of the oscillators might possibly be predicted by an attacker, but 
what value gets read when you sample a value that's transitioning between a 0 
and a 1 is really random, changed by thermal noise.  

I think the big problem with (b) is in quantifying the entropy you get.  I also 
think that (b) describes a lot of what commonly gets collected by the OS and 
put into the entropy pool.  


Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: N.S.A. Foils Much Internet Encryption)

2013-09-13 Thread John Kelsey
On Sep 10, 2013, at 3:56 PM, Bill Stewart wrote:

 One point which has been mentioned, but perhaps not emphasised enough - if 
 NSA have a secret backdoor into the main NIST ECC curves, then even if the 
 fact of the backdoor was exposed - the method is pretty well known - without 
 the secret constants no-one _else_ could break ECC.
 So NSA could advocate the widespread use of ECC while still fulfilling their 
 mission of protecting US gubbmint communications from enemies foreign and 
 domestic. Just not from themselves.

I think this is completely wrong.

First, there aren't any secret constants to those curves, are there?  The 
complaint Dan Bernstein has about the NIST curves is that they (some of them) 
were generated using a verifiably random method, but that the seeds looked 
pretty random.  The idea here, if I understand it correctly, is that if the 
guys doing the generation knew of some property that made some of the choices 
of curves weak, they could have tried a huge number of seeds till they happened 
upon one that led to a weak curve.  If they could afford to try N seeds and do 
whatever examination of the curve was needed to check it for weakness, then the 
weak property they were looking for couldn't have had a probability much lower 
than about 1/N.  

I think the curves were generated in 1999 (that's the date on the document I 
could find), so we are probably talking about less than 2^{80} operations 
total.  Unlike the case with the Dual EC generator, where a backdoor could have 
been installed with no risk that anyone else could discover it, in this case, 
they would have to generate curves until one fell in some weak curve class that 
they knew about, and they would have to hope nobody else ever discovered that 
weak curve class, lest all the federal users of ECC get broken at once.  

The situation you are describing works for dual ec drbg, but not for the NIST 
curves, as best I understand things.  


[Cryptography] one time pads

2013-09-13 Thread John Kelsey
Switching from AES to one-time pads to solve your practical cryptanalysis 
problems is silly.  It replaces a tractable algorithm selection problem with a 
godawful key management problem, when key management is almost certainly the 
practical weakness in any broken system designed by non-idiots.  



Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread John Kelsey
On Sep 9, 2013, at 6:32 PM, Perry E. Metzger wrote:

 First, David, thank you for participating in this discussion.
 To orient people, we're talking about whether Intel's on-chip
 hardware RNGs should allow programmers access to the raw HRNG output,
 both for validation purposes to make sure the whole system is working
 correctly, and if they would prefer to do their own whitening and
 stretching of the output.

Giving raw access to the noise source outputs lets you test the source from the 
outside, and there is a lot to be said for it.  But I am not sure how much it 
helps against tampered chips.  If I can tamper with the noise source in 
hardware to make it predictable, it seems like I should also be able to make it 
simulate the expected behavior.  I expect this is more complicated than, say, 
breaking the noise source and the internal testing mechanisms so that the RNG 
outputs a predictable output stream, but I am not sure it is all that much more 
complicated.  How expensive is a lightweight stream cipher keyed off the time 
and the CPU serial number or some such thing to generate pseudorandom bits?  
How much more to go from that to a simulation of the expected behavior, perhaps 
based on the same circuitry used in the unhacked version to test the noise 
source outputs?  


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread John Kelsey
Your cryptosystem should be designed with the assumption that an attacker will 
record all old ciphertexts and try to break it later.  The whole point of 
encryption is to make that attack not scary.  We can never rule out future 
attacks, or secret ones now.  But we can move away from marginal key lengths 
and outdated, weak ciphers.  Getting people to do that is like pulling teeth, 
which is why we're still using RC4, and 1024-bit RSA keys and DH primes.  



Re: [Cryptography] XORing plaintext with ciphertext

2013-09-08 Thread John Kelsey
It depends on the encryption scheme used.  For a stream cipher (including AES 
in counter or OFB mode), this yields the keystream.  If someone screws up and 
uses the same key and IV twice, you can use knowledge of the first plaintext to 
learn the second.  For other AES chaining modes, it's less scary, though if 
someone reuses their key and IV, knowing plaintext xor ciphertext from the 
first time the key,iv pair was used can reveal some plaintext from the second 
time it was used.  
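The stream-cipher case above is easy to demonstrate.  This toy "cipher" (one SHA-256 block of keystream, messages up to 32 bytes, purely illustrative) shows that plaintext XOR ciphertext is the keystream, so a reused key/IV plus one known plaintext reveals the other message:

```python
import hashlib

# Two-time-pad demo: key/IV reuse plus known plaintext breaks the scheme.
def toy_stream_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    keystream = hashlib.sha256(key + iv).digest()  # toy keystream
    return bytes(a ^ b for a, b in zip(plaintext, keystream))

key, iv = b"secret key", b"reused iv"         # same key and IV twice: the bug
ct1 = toy_stream_encrypt(key, iv, b"known message")
ct2 = toy_stream_encrypt(key, iv, b"other secret!")

keystream = bytes(a ^ b for a, b in zip(b"known message", ct1))  # pt1 XOR ct1
recovered = bytes(a ^ b for a, b in zip(ct2, keystream))         # = pt2
```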


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread John Kelsey

On Sep 7, 2013, at 3:25 PM, Christian Huitema wrote:

 Another argument is “minimal dependency.” If you use public key, you depend 
 on both the public key algorithm, to establish the key, and the symmetric key 
 algorithm, to protect the session. If you just use symmetric key, you depend 
 on only one algorithm.
 Of course, that means getting pair-wise shared secrets, and protecting them. 
 Whether that’s harder or more fragile than maintaining a key ring is a matter 
 of debate. It is probably more robust than relying on CA.

Pairwise shared secrets are just about the only thing that scales worse than 
public key distribution by way of PGP key fingerprints on business cards.  The 
equivalent of CAs in an all-symmetric world is KDCs.  Instead of having the 
power to enable an active attack on you today, KDCs have the power to enable a 
passive attack on you forever.  If we want secure crypto that can be used by 
everyone, with minimal trust, public key is the only way to do it.  

One pretty sensible thing to do is to remember keys established in previous 
sessions, and use those combined with the next session.  For example, if we do 
Diffie-Hellman today and establish a shared key K, we should both store that 
key, and we should try to reuse it next time as an additional input into our 
KDF.  That is, next time we use Diffie-Hellman to establish K1, then we get 
actual-key = KDF(K1, K, other protocol details).  That means that if even one 
session was established securely, the communications are secure (up to the 
symmetric crypto strength) forevermore.  
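The chaining step can be sketched in a few lines.  The function name and labels are illustrative, not a real protocol; the point is that the stored key from the previous session is an extra KDF input:

```python
import hmac, hashlib

# Key-continuity sketch: fold the previous session key K into the KDF for
# the new session key, so one securely established session protects all
# later ones.
def session_key(new_dh_secret: bytes, previous_key: bytes,
                protocol_details: bytes) -> bytes:
    # actual-key = KDF(K1, K, other protocol details)
    return hmac.new(previous_key, new_dh_secret + protocol_details,
                    hashlib.sha256).digest()

k0 = session_key(b"dh secret 0", b"\x00" * 32, b"transcript 0")  # no history
k1 = session_key(b"dh secret 1", k0, b"transcript 1")            # chains in k0
```

An attacker who missed k0 cannot compute k1 even after compromising the new Diffie-Hellman exchange.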

 - -- Christian Huitema


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread John Kelsey
There are basically two ways your RNG can be cooked:

a.  It generates predictable values.  Any good cryptographic PRNG will do this 
if seeded by an attacker.  Any crypto PRNG seeded with too little entropy can 
also do this.  

b.  It leaks its internal state in its output in some encrypted way.  Basically 
any cryptographic processing of the PRNG output is likely to clobber this. 

The only fix for (a) is to get enough entropy in your PRNG before generating 
outputs.  I suspect Intel's RNG and most other hardware RNGs are extremely 
likely to be better than any other source of entropy you can get on your 
computer, but you don't have to trust them 100%.  Instead, do whatever OS level 
collection you can, combine that with 256 bits from the Intel RNG, and throw in 
anything else likely to help--ethernet address, IP address, timestamp, anything 
you can get from the local network, etc.  Hash that all and feed it into a 
strong cryptographic PRNG--something like CTR-DRBG or HMAC-DRBG from SP 800-90. 
 If you do that, you will have guarded against both (a) and (b).  
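The pooling step can be sketched as follows.  This is not an SP 800-90 implementation; the generator is a toy, the hardware RNG output is simulated with os.urandom, and the class/function names are mine:

```python
import hashlib, hmac, os, time

# Seed-pooling sketch: hash every source available and key a simple
# HMAC-based generator with the digest.  If any one input had enough
# entropy, the seed is strong.
def pool_seed(*sources: bytes) -> bytes:
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(4, "big") + s)   # length-prefix each source
    return h.digest()

class ToyHmacDrbg:
    """Illustrative generator; use a vetted CTR-DRBG/HMAC-DRBG in practice."""
    def __init__(self, seed: bytes):
        self.key, self.counter = seed, 0
    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hmac.new(self.key, self.counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
        return out[:n]

seed = pool_seed(os.urandom(32),                # OS-level entropy collection
                 os.urandom(32),                # stand-in for 256 HW RNG bits
                 b"00:11:22:33:44:55",          # ethernet address, etc.
                 str(time.time()).encode())     # timestamp
drbg = ToyHmacDrbg(seed)
```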



Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread John Kelsey
Let's suppose I design a block cipher such that, with a randomly generated key 
and 10,000 known plaintexts, I can recover that key.  For this to be useful in 
a world with relatively sophisticated cryptanalysts, I must have confidence 
that it is extremely hard to find my trapdoor, even when you can look closely 
at my cipher's design.   

At this point, what I have is a trapdoor one-way function.  You generate a 
random key K and then compute E(K,i) for i = 1 to 10,000.  The output of the 
one-way function is the ciphertext.  The input is K.  If nobody can break the 
cipher, then this is a one-way function.  If only I, who designed it, can break 
it, then it's a trapdoor one-way function.  

At this point, I have a perfectly fine public key encryption system.  To send 
me a message, choose a random K, use it to encrypt 1 through 10,000, and then 
send me the actual message encrypted after that in K.  If nobody but me can 
break the system, then this cipher works as my public key.  

The assumption that matters here is that you know enough cryptanalysis that it 
would be hard to hide a practical attack from you.  If you don't know about 
differential cryptanalysis, I can do the master key cryptosystem, but only 
until you learn about it, at which point you will break my cipher.   But if you 
can, say, hide the only good linear characteristics for some cipher in its 
S-boxes in a way that is genuinely intractable for anyone else to find, then 
you have a public key cryptosystem. You can publish the algorithm for hiding 
new linear characteristics in an S-box--this becomes the keypair generation 
algorithm.  The private key is the linear characteristic that lets you break 
the cipher with (say) 10,000 known plaintexts, the public key is the cipher 


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread John Kelsey
On Sep 8, 2013, at 3:55 PM, Thor Lancelot Simon wrote:
 I also wonder -- again, not entirely my own idea, my whiteboard partner
 can speak up for himself if he wants to -- about whether we're going
 to make ourselves better or worse off by rushing to the safety of
 PFS ciphersuites, which, with their reliance on DH, in the absence of
 good RNGs may make it *easier* for the adversary to recover our eventual
 symmetric-cipher keys, rather than harder!

I don't think you can do anything useful in crypto without some good source of 
random bits.  If there is a private key somewhere (say, used for signing, or 
the public DH key used alongside the ephemeral one), you can combine the hash 
of that private key into your PRNG state.  The result is that if your entropy 
source is bad, you get security to someone who doesn't compromise your private 
key in the future, and if your entropy source is good, you get security even 
against someone who compromises your private key in the future (that is, you 
get perfect forward secrecy).
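The seed-mixing step described above can be sketched in a couple of lines (a minimal illustration, with names of my choosing):

```python
import hashlib, hmac

# Fold a hash of a long-term private key into the PRNG seed: if the entropy
# source fails, outputs are still unpredictable to anyone without the
# private key; if the source is good, later compromise of the key does not
# reveal past outputs.
def prng_seed(entropy_input: bytes, private_key: bytes) -> bytes:
    key_digest = hashlib.sha256(private_key).digest()
    return hmac.new(key_digest, entropy_input, hashlib.sha256).digest()
```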



Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread John Kelsey
I don't see what problem would actually be solved by dropping public key crypto 
in favor of symmetric only designs.  I mean, if the problem is that all public 
key systems are broken, then yeah, we will have to do something else.  But if 
the problem is bad key generation or bad implementations, those will be with us 
even after we abandon all the public key stuff.  And as Jon said, the trust 
problems get harder, not easier.  With only symmetric crypto, whoever acts as 
the introducer between Alice and Bob can read their traffic passively and 
undetectably.  With public key crypto, the introducer can do a man in the 
middle attack (an active attack) and risks detection, as Alice and Bob now have 
things signed by the introducer associating the wrong keys with Bob and Alice, 


Re: [Cryptography] FIPS, NIST and ITAR questions

2013-09-06 Thread John Kelsey

Sent from my iPad

On Sep 3, 2013, at 6:06 PM, Jerry Leichter wrote:

 On Sep 3, 2013, at 3:16 PM, Faré wrote:
 Can't you trivially transform a hash into a PRNG, a PRNG into a
 cypher, and vice versa?
 hash-PRNG: append blocks that are digest (seed ++ counter ++ seed)
 Let H(X) = SHA-512(X) || SHA-512(X)
 where '||' is concatenation.  Assuming SHA-512 is a cryptographically secure 
 hash H trivially is as well.  (Nothing in the definition of a cryptographic 
 hash function says anything about minimality.)  But H(X) is clearly not 
 useful for producing a PRNG.
 If you think this is obviously wrong, consider instead:
 H1(X) = SHA-512(X) || SHA-512(SHA-512(X))
 Could you determine, just from black-box access to H1, that it's equally bad 
 as a PRNG?  (You could certainly do it with about 2^256 calls to H1 with 
 distinct inputs - by then you have a .5 chance of a duplicated top half of 
 the output, almost certainly with a distinct bottom half.  But that's a 
 pretty serious bit of testing)
 I don't actually know if there exists a construction of a PRNG from a 
 cryptographically secure hash function.  (You can build a MAC, but even 
 that's not trivial; people tried all kinds of things that failed until the 
 HMAC construction was proven correct.)
-- Jerry

Re: [Cryptography] FIPS, NIST and ITAR questions

2013-09-06 Thread John Kelsey

 Let H(X) = SHA-512(X) || SHA-512(X)
 where '||' is concatenation.  Assuming SHA-512 is a cryptographically secure 
 hash H trivially is as well.  (Nothing in the definition of a cryptographic 
 hash function says anything about minimality.)  But H(X) is clearly not 
 useful for producing a PRNG.

You won't get a prf or stream cipher or prng or block cipher just out of 
collision resistance--you need some kind of pseudorandomness assumption.  We 
expect general purpose hash functions like Keccak to provide that, but it 
doesn't follow from the collision resistance assumption, for exactly the reason 
you gave there--it's possible to design collision-resistant functions that leak 
input or are predictable in some bits. 
 I don't actually know if there exists a construction of a PRNG from a 
 cryptographically secure hash function.  (You can build a MAC, but even 
 that's not trivial; people tried all kinds of things that failed until the 
 HMAC construction was proven correct.)

The HMAC construction wouldn't give a PRF for your example of 

h(x) = sha512(x) || sha512(x)

A single output would be trivial to distinguish from a 1024 bit random number.  
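The distinguisher is one equality check (a sketch; the helper names are mine):

```python
import hashlib, os

def h(x: bytes) -> bytes:
    # Jerry's contrived hash: collision-resistant iff SHA-512 is,
    # but obviously not pseudorandom.
    d = hashlib.sha512(x).digest()
    return d + d

def looks_like_h(output: bytes) -> bool:
    # A single output gives the game away: the two 512-bit halves
    # of h's output are always equal, while a random 1024-bit string
    # matches only with probability 2^{-512}.
    return output[:64] == output[64:]

assert looks_like_h(h(b"any input"))
assert not looks_like_h(os.urandom(128))
```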



Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread John Kelsey
First, I don't think it has anything to do with Dual EC DRBG.  Who uses it?  

My impression is that most of the encryption that fits what's in the article is 
TLS/SSL.  That is what secures most encrypted content going online.  The easy 
way to compromise that in a passive attack is to compromise servers' private 
keys, via cryptanalysis or compromise or bad key generation.  For server side 
TLS using RSA, guessing just the client's random values ought to be enough to 
read the traffic.  

For active attacks, getting alternative certs issued for a given host and 
playing man in the middle would work.  

Where do the world's crypto random numbers come from?  My guess is some version 
of the Windows crypto API and /dev/random or /dev/urandom account for most of 
them.  What handles most of the world's TLS?  OpenSSL and a few other 
libraries, is my guess.  But someone must have good data about this.  

My broader question is, how the hell did a sysadmin in Hawaii get hold of 
something that had to be super secret?  He must have been stealing files from 
some very high ranking people.  



Re: [Cryptography] Backup is completely separate

2013-09-02 Thread John Kelsey
The backup access problem isn't just a crypto problem, it's a social/legal 
problem.  There ultimately needs to be some outside mechanism for using social 
or legal means to ensure that, say, my kids can get access to at least some of 
my encrypted files after I drop dead or land in the hospital in a coma.  Or 
that I can somehow convince someone that it's really me and I'd like access to 
the safe deposit box whose password I forgot and lost my backup copy of.  Or 

This is complicated by the certainty that if someone has the power to get 
access to my encrypted data, they will inevitably be forced to do so by courts 
or national security letters, and will also be subject to extralegal pressures 
or attacks to make them turn over some keys.  I suspect the best that can be 
workably done now is to make any key escrow service's key accesses transparent 
and impossible to hide from the owner of the key, and then let users decide 
what should and shouldn't be escrowed.  But this isn't all that great an answer. 


Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread John Kelsey
What I think we are worried about here are very widespread automated attacks, 
and they're passive (data is collected and then attacks are run offline).  All 
that constrains what attacks make sense in this context.  You need attacks that 
you can run in a reasonable time, with minimal requirements on the amount of 
plaintext or the specific values of plaintext.  The perfect example of an 
attack that works well here is a keysearch on DES; another example is the 
attack on WEP.

All the attacks we know of on reduced-round AES and AES-like ciphers require a 
lot of chosen plaintexts, or related key queries, or both.  There is no way to 
completely rule out some amazing new break of AES that makes the cipher fall 
open and drop your plaintext in the attacker's lap, but I don't see anything at 
all in the literature that supports that fear, and there are a *lot* of smart 
people trying to find new ways to attack or use AES-like designs.  So I put 
this at the bottom of my list of likely problems.

Some attacks on public key systems also require huge numbers of encryptions or 
specially formed ciphertexts that get sent to the target for decryption--we can 
ignore those for this discussion.  So we're looking at trying to factor an RSA 
modulus or to examine a lot of RSA encryptions to a particular public key (and 
maybe some signatures from that key) and try to get somewhere from that.  I 
don't know enough about the state of the art in factoring or attacking RSA to 
have a strong intuition about how likely this is.  I'm pretty skeptical, 
though--the people I know who are experts in this stuff don't seem especially 
worried.  However, a huge breakthrough in factoring would make for workable 
passive attacks of this kind, though it would have to be cheap enough to use to 
break each user's public key separately.  

Finally, we have the randomness sources used to generate RSA and AES keys.  
This, like symmetric cryptanalysis, is an area I know really well.  And my 
intuition (backed by plenty of examples) is that this is probably the place 
that is most likely to yield a practical offline attack of this kind.  When 
someone screws up the implementation of RSA or AES, they may at least notice 
some interoperability problems.  They will never notice this when they screw up 
their implementation so that the RNG only gets 32 bits of entropy before generating 
the user's RSA keypair.  And if I know that your RSA key is likely to have one 
of these 2^{32} factors, I can make a passive attack work really well.  
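The attack is just enumeration.  A toy sketch (a 16-bit seed standing in for the 32-bit case, and a hash standing in for RSA key generation, to keep it runnable; all names are mine):

```python
import hashlib

def keygen(seed: int) -> bytes:
    # Hypothetical broken implementation: the "RNG" is seeded with only
    # 16 bits of entropy, so every possible key can be generated offline.
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

victim_key = keygen(0xBEEF)

# A purely passive attacker enumerates all 2^16 seeds, regenerates each
# candidate key, and recovers the victim's key.
recovered = next(k for s in range(2**16) if (k := keygen(s)) == victim_key)
assert recovered == victim_key
```

With a real 32-bit seed the search is 2^{32} key generations, which is entirely practical to run offline against every collected key.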



Re: [Cryptography] NSA and cryptanalysis

2013-08-31 Thread John Kelsey
If I had to bet, I'd bet on bad rngs as the most likely source of a 
breakthrough in decrypting lots of encrypted traffic from different sources. 


Re: [Cryptography] Functional specification for email client?

2013-08-31 Thread John Kelsey
I think it makes sense to separate out the user-level view of what happens (the 
first five or six points) from how it's implemented (the last few points, and 
any other implementation discussions).  In order for security to be usable, the 
user needs to know what he is being promised by the security mechanisms--not 
which digital signature scheme is being used or whether decoy messages are sent 
to frustrate traffic analysis.  

If something arrives in my inbox with a from address of, 
then I need to know that this means that's who it came from.  If I mail 
something to, then I need to know that only the owner of 
that address will be able to read it.  I need to know that nobody should be 
able to know with whom I'm communicating.  But I don't need to know about keys, 
digital signatures, mix nets, etc.  That's what I want to know as a 
cryptographer, but as a user, I just want to know what the security system is 
promising and how reliable its promises are. 

My intuition is that binding the security promises to email addresses instead 
of identities is the right way to proceed.  I think this is something most 
people can understand, and more importantly it's something we can do with 
existing technology and no One True Name Authority In The Sky handing out 

One side issue here is that this system's email address space needs to somehow 
coexist with the big wide internet's address space.  It will really suck if 
someone else can get my gmail address in the secure system, but it will also be 
confusing if my inbox has a random assortment of secure and insecure emails, 
and I have to do some extra step to know which is which.  


Re: Interesting bit of a quote

2006-07-16 Thread John Kelsey
Sent: Jul 14, 2006 11:22 PM
To: David Mercer [EMAIL PROTECTED]
Subject: Re: Interesting bit of a quote

The problem with this is determining if the media has been replaced.
Absent other protections, one could simply write a new WORM media with
falsified information.

I can see two ways of dealing with this:

1) Some kind of physical authenticity, such as signing one's name on
the media as they are produced (this assumes the signer is not
corruptible), or applying a frangible difficult-to-duplicate seal of
some kind (this assumes access controls on the seals).

I think this is going to resolve to chain-of-custody rules of some
kind.  One problem is that so long as the company making the records
is storing them onsite, it's hard for an outside auditor to be sure
they aren't being tampered with.  (Can the CEO really not work out a
way to get one of his guys access to the tape storage vault?) 

2) Some kind of hash chain covering the contents, combined with
publication of the hashes somewhere where they cannot be altered (e.g.
publish hash periodically in a classified ad in a newspaper).

You could do the whole digital timestamping thing here.  You could
also just submit hashes of this week's backup tape to your auditor and
the SEC or something.  

Another solution is to use cryptographic audit logs.  Bruce Schneier
and I did some work on this several years ago, using a MAC to
authenticate the current record as it's written, and a one-way
function to derive the next key.  (This idea was apparently developed
by at least two other people independently.)  Jason Holt has extended
this idea to use digital signatures, which makes them far more
practical.  One caveat is that cryptographic audit logs only work if
the logging machine is honest when the logs are written.  
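The MAC-and-ratchet idea can be sketched in a few lines (function names and the SHA-256/HMAC choices are my illustration, not the published construction's exact details):

```python
import hashlib, hmac

def log_append(key: bytes, record: bytes):
    # MAC the record under the current key, then destroy that key by
    # replacing it with a one-way function of itself.  An attacker who
    # compromises the machine later can't recover earlier keys, so he
    # can't forge or alter earlier entries without detection.
    tag = hmac.new(key, record, hashlib.sha256).digest()
    next_key = hashlib.sha256(b"ratchet" + key).digest()
    return tag, next_key

initial_key = hashlib.sha256(b"secret shared with the auditor").digest()

key, log = initial_key, []
for rec in [b"user alice logged in", b"file /etc/passwd read"]:
    tag, key = log_append(key, rec)
    log.append((rec, tag))

# The auditor, holding only the initial key, replays the ratchet and
# checks every tag; altering any old record breaks verification.
def verify(initial_key: bytes, log) -> bool:
    k = initial_key
    for rec, tag in log:
        expect, k = log_append(k, rec)
        if not hmac.compare_digest(expect, tag):
            return False
    return True

assert verify(initial_key, log)
```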


The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Re: Interesting bit of a quote

2006-07-13 Thread John Kelsey
From: Anne  Lynn Wheeler [EMAIL PROTECTED]
Sent: Jul 11, 2006 6:45 PM
Subject: Re: Interesting bit of a quote

my slightly different perspective is that audits in the past have 
somewhat been looking for inconsistencies from independent sources. this 
worked in the days of paper books from multiple different corporate 
sources. my claim with the current reliance on IT technology ... that 
the audited information can be all generated from a single IT source ... 
invalidating any assumptions about audits being able to look for 
inconsistencies from independent sources. A reasonable intelligent 
hacker could make sure that all the information was consistent.

It's interesting to me that this same kind of issue comes up in voting
security, where computerized counting of hand-marked paper ballots (or
punched cards) has been and is being replaced with much more
user-friendly DREs, where paper poll books are being replaced with
electronic ones, etc.  It's easy to have all your procedures built
around the idea that records X and Y come from independent sources,
and then have technology undermine that assumption.  The obvious
example of this is rules for recounts and paper record retention which
are applied to DREs; the procedures make lots of sense for paper
ballots, but no sense at all for DREs.  I wonder how many other areas
of computer and more general security have this same kind of issue.   

--John Kelsey, NIST


A weird macro virus story

2006-06-23 Thread John Kelsey

Some of my co-workers here at NIST got an email macro virus which
appeared to be targeted to cryptographers.  It appeared to be
addressed to Moti Yung, and come from Lawrie Brown and Henri Gilbert
(though that name was misspelled, maybe a transcription error from an
alternate character set).  Did any of you notice something like this?
The email appeared to be addressed to several submission addresses for
various crypto conferences.  

Wow, we've really made it as an influential group of people.  We're
important enough to be *targets* now



Re: [Cfrg] HMAC-MD5

2006-04-01 Thread John Kelsey

Sent: Mar 30, 2006 3:38 PM
Subject: Re: [Cfrg] HMAC-MD5

I think that we have the evidence. The security of MD5 depends
heavily on a lot of nonlinearities in functions F, G, I and on
carries in arithmetic additions. Nonlinearities in F, G, I are
bitwise and very weak. Carries are much stronger, but the collision
attacks showed that it is possible to control them also.

The question is, can these still be controlled when the attacker
doesn't know the internal state of the chaining variables?  If not, we
may end up with second preimage attacks (which would finish off MD5
for most hashing applications!), but still not know how to attack
HMAC.  The attack model is really different!  

For what it's worth, though, I agree that we need to get rid of MD5
anywhere it's still in place, since the only thing we know about its
security is that it's a lot less than anyone expected it to be even a
year ago.  In fact, we should have started this when Dobbertin had his
free-start collision result.  If we had, we'd be able to regard the
devastating MD5 collisions we're seeing now in the same way we regard
devastating attacks on FEAL.  (If someone extends the best attack on
FEAL to 64 rounds, that will be cool, but nobody will be scrambling to
replace FEAL in their products and protocols.)

Vlastimil Klima

--John Kelsey, NIST


Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-26 Thread John Kelsey
From: John Denker [EMAIL PROTECTED]
Sent: Mar 24, 2006 11:57 AM
To: Erik Zenner [EMAIL PROTECTED],
Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits 
of entropy)

Erik Zenner wrote:

0 occurs with probability 1/2
each other number from 1 to 2^{160}+1 happens with 
probability 2^{-161}.

  Is anyone aware of whether (and where) this was 
 discussed in the literature, or what other approaches are taken?

This particular problem is contrived or at least exaggerated.  

The example is a contrived, pathological case.  And there are a bunch
of easy solutions once you know this is the distribution.  The point
is to demonstrate that Shannon entropy gives you the wrong answer when
you try to use it here.

Now, you might ask if this is a problem in a real-world setting.  So
let's imagine a very simple distribution--sequences of flips of a
biased coin.  There are nice ways to remove the bias, but let's
imagine we're not doing that--instead, we're going to take a sequence
of coin flips, turn it into a 0/1 sequence, and then hash it down to
get a key.  

Suppose the coin is biased so that heads comes up 0.9 of the time, and
that we generate 16-bit sequences at a time.  The Shannon entropy of a
16-bit sequence is about 7.5, but the most likely symbol (all heads)
comes up about 18% of the time.  So if we tried to generate a 7-bit
key, we'd lose on the attacker's first guess 18% of the time. 
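The two measures are easy to compute for this coin (a quick check of the numbers above):

```python
import math

p = 0.9            # probability of heads
n = 16             # flips per sample

# Shannon entropy of one flip, times n (the flips are independent).
h_flip = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
shannon = n * h_flip                 # about 7.5 bits

# Min-entropy looks at the single most likely outcome: all heads.
p_max = p ** n                       # about 0.185, i.e. 18% of the time
min_entropy = -math.log2(p_max)      # only about 2.4 bits
```

So Shannon entropy says the sample is worth about 7.5 bits, while the attacker's first guess already wins 18% of the time.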

So, let's go further with this.  We want to generate a DES key, with
56 bits of entropy.  A 64-bit sequence produced with this biased coin
has Shannon entropy of about 30 bits, but an attacker has about a
1/1000 chance of guessing the DES key we derive from it on his first
try, which is unacceptable for just about any crypto application.
(The min-entropy is about ten bits.)

So yes, I used a contrived example to demonstrate the problem, but no,
the problem isn't just an ivory-tower concern.

The intuition here is that Shannon entropy is concerned with
communications channels, where we assume we have to send every
symbol.  So when you have lots of low-probability symbols, and one of
those low-probability symbols is chosen 999 times out of a 1000, the
amount of bandwidth you need to transmit those symbols easily becomes
dominated by them.  Most of the time, you're sending one of a large
set of symbols, and they all have to be distinguished from one

The situation for the attacker is a lot more like having a channel
where it's acceptable to only send a small fraction of those symbols,
and just drop the rest.  That is, it's okay for the attacker's model
to just ignore the huge set of low-probability symbols that occur
999/1000 of the time, and instead just transmit the highest
probability symbol 1/1000 of the time.  Instead of transmitting it,
he's just guessing it.  When he gets it right, he learns your DES

This version serves to illustrate, in an exaggerated way, the necessity
of not assuming that the raw data words are IID (independent and identically

Yes!  The independent part is much harder to deal with than the
per-symbol distribution, in many real-world applications.  The worst
of these are operating system events (used in Windows and various
Unixes to get random bits).  Peter Gutmann did some really nice work
on this on a bunch of operating systems in a Usenix paper, and he
updated that and has a version of it on his website.  If you're doing
OS-event sampling to generate random bits, you really ought to look at
his stuff.  (But if you can get some hardware support, either directly
or via sampling the microphone like the Turbid design does, you're
probably on much firmer ground.)  

--John Kelsey, NIST


RE: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-25 Thread John Kelsey
From: Erik Zenner [EMAIL PROTECTED]
Sent: Mar 24, 2006 4:14 AM
Subject: RE: Entropy Definition (was Re: passphrases with more than 160 bits 
of entropy)

 [I wrote:]
 0 occurs with probability 1/2
 each other number from 1 to 2^{160}+1 happens with 
 probability 2^{-161}.
 The Shannon entropy on this distribution is 81.5 bits.  But 
 if you tried to sample it once to generate an 80-bit Skipjack 
 key, half the time, I'd guess your key on my first try.  

It's entirely correct that entropy is the wrong measure here, but
the question is how a good measure would look like. 

There are a bunch of different entropy measures.  Shannon entropy is
the right one for compression ratios and channel capacity, but not for
difficulty of guessing the string.

Assume that you have a sample space with N elements and an intelligent 
attacker (i.e., one that tries the most probable elements first). Then 
what you actually are interested in is that the attacker's probability 
of success after q sampling attempts is as close as possible to the 
lowest possible, namely q * 2^{-N}. A natural way of measuring this 
seems to be some kind of distance between Pr[succ after q samples] and 
the ideal function q * 2^{-N}. Such a measure might allow a designer
to decide whether a non-perfect distribution is still acceptable or
simply far out. Is anyone aware of whether (and where) this was 
discussed in the literature, or what other approaches are taken?

The best discussion I've seen of this is in a couple papers by John
Pliam, about something he calls the work function, W(n).  This is just
the probability of having guessed some secret value after n operations
(typically n guesses). You can generalize this to the number of
operations you have to carry out to have a given probability of
violating any security property (repeating a nonce, getting a block
collision in a block cipher chaining mode, etc).  It's a very general
way of thinking about limiting parameters of crypto algorithms.
You're basically heading toward this idea in what you said above.

Let's think of this for an ideal case: a 128-bit key.  When you have
done 2^k guesses of the key, you have a 2^{k-128} chance of having
guessed the key correctly.  So if you graphed the work vs probability
of success on a log/log chart, you'd get a straight line for the ideal
case.  W(n) = 2^{-128} n, as you said above.  

Now, consider the case where you are generating a 128-bit key from a
bunch of sampled events on your computer that have been concatenated
together in a string S.  Imagine making a list of all the possible
values of that string, and sorting them in descending order of
probability.  You now have the best possible sequence of guesses.
W(n) is the sum of the first n probabilities.  

If W(n) > 2^{-128} n for any n, then the attacker has some point where
it is to his advantage to guess S instead of guessing your key.

So, this is where min-entropy comes in.  Min-entropy is just
-lg(P[max]), the base 2 log of the maximum probability in the
distribution.  You can convince yourself than if the min-entropy of S
is at least 128, then it's never easier to guess S than to guess K.
This is because each possible input string must have probability lower
than 2^{-128}, so the sum of the first n, W(n) <= n 2^{-128}.  

This doesn't solve the much harder engineering/data analysis problem
of getting some reasonable approximation for the probabilities of S,
unfortunately.  The easy way to solve that is to design a hardware RNG
in such a way that you pretty-much know the probabilty model to use
and can work out how to sample it to get a good estimate of the
probabilities.  However, it is kind of nice to recognize that you only
have to estimate the largest probability to compute the min-entropy.
(It ought to be the easiest probability to measure and bound!)  
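The sorted-guesses picture translates directly into code (a scaled-down sketch, with the function name mine):

```python
import math

def work_function(probs, n):
    # Pliam's W(n): the attacker's total success probability after the
    # n most profitable guesses, i.e. the sum of the n largest
    # probabilities in the distribution.
    return sum(sorted(probs, reverse=True)[:n])

# Scaled-down version of the pathological distribution from this thread:
# one string has probability 1/2, the remaining 2^16 - 1 share the rest.
N = 2**16
probs = [0.5] + [0.5 / (N - 1)] * (N - 1)

shannon = -sum(p * math.log2(p) for p in probs)   # about 9 bits
min_entropy = -math.log2(max(probs))              # exactly 1 bit

# The attacker's first guess already succeeds with probability
# 2^{-min_entropy} = 1/2; a uniform 16-bit secret allows only 2^{-16}.
assert work_function(probs, 1) == 2 ** -min_entropy
```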


Dr. Erik Zenner   Phone:  +45 39 17 96 06Cryptico A/S
Chief Cryptographer   Mobile: +45 60 77 95 41Fruebjergvej 3
[EMAIL PROTECTED]   DK 2100 Copenhagen

--John Kelsey, NIST


Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Kelsey
From: Jack Lloyd [EMAIL PROTECTED]
Sent: Mar 22, 2006 11:30 PM
Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits 
of entropy)


As an aside, this whole discussion is confused by the fact that there
are a bunch of different domains in which entropy is defined.  The
algorithmic information theory sense of entropy (how long is the
shortest program that produces this sequence?) is miles away from the
information theory sense of entropy, and even that has a bunch of
different flavors.  

For the information theory meanings of entropy, what you're really
talking about is a property of a probability distribution.  You can do
that in terms of Shannon entropy if you want to know about bounds on
average bandwidth requirements for transmitting symbols from that
distribution.  You can look at guessing entropy if you want to know
the -lg(expected work to guess the next symbol).  You can look at
min-entropy if you want a hard bound on how many symbols you need to
sample to derive a 128-bit key that won't ever expose you to
weaknesses based on how you selected it.  And so on.  

Shannon entropy is the one most people know, but it's all wrong for
deciding how many samples you need to derive a key.  The kind of
classic illustration of this is the probability distirbution:

0 occurs with probability 1/2
each other number from 1 to 2^{160}+1 happens with probability 2^{-161}.

The Shannon entropy on this distribution is 81.5 bits.  But if you
tried to sample it once to generate an 80-bit Skipjack key, half the
time, I'd guess your key on my first try.  


Yes. Let's say the contents of tommorrow's NY Times has n bits of entropy (we
probably can't actually calculate n, but we know it is a finite and fixed
value). And the LA Times from the same day will also have some amount of
entropy (call it n'). However, the entropy of the two papers combined would
(probably) not be n+n', but some number x s.t. min(n,n') <= x <= n+n', because
the two papers will report on many of the same issues, carry some of the same
AP stories, and so forth. This redundant information doesn't increase the
entropy (reading the same AP story in a second newspaper wouldn't give you any
new information).

Right.  If the probabilities are independent, you can add the
entropies.  That's why they're defined in log terms.  



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Kelsey
From: John Denker [EMAIL PROTECTED]
Sent: Mar 23, 2006 1:44 PM
To: John Kelsey [EMAIL PROTECTED],
Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits 
of entropy)

With some slight fiddling to get the normalization right, 1/2
raised to the power of (program length) defines a probability
measure.  This may not be the probability you want, but it
is a probability, and you can plug it into the entropy definition.

No, this isn't right for the algorithmic information theory meaning at
all.  For that measure, we can intelligently discuss the entropy of a
specific random string, without reference to a probability model.
Indeed, what's the probability distribution of the sequence of bits
defined by Chaitin's Omega?  

You can certainly complain that they should have used a different term
than entropy here, but you're not going to get these to be the same.  

The problem is almost never understanding the definition of
entropy.  The problem is almost always ascertaining what is
the correct probability measure applicable to the problem
at hand.

Well, you need to decide which of the probability distribution kinds
of entropy measures to use, and that differs in different
applications.  If you use min-entropy or guessing entropy to estimate
the limits on how well some sequence of symbols will compress, you'll
get a pretty lousy answer.  The same goes for using Shannon entropy to
determine whether you have collected enough entropy in your pool to
generate a 128-bit AES key.  

 0 occurs with probability 1/2
 each other number from 1 to 2^{160}+1 happens with probability 2^{-161}.
 The Shannon entropy on this distribution is 81.5 bits. 

I think it is very close to 81 bits, not 81.5, but that is a minor
point that doesn't change the conclusion:

  But if you
 tried to sample it once to generate an 80-bit Skipjack key, half the
 time, I'd guess your key on my first try.  

Yeah, but if I sample it N times, with high probability I can generate
a large number of very good keys.  This problem is faced by (and solved
by) any decent TRNG.

Hmmm.  I've seen plenty of people get this wrong.  If you use Shannon
entropy as your measure, and then say when I have collected 128 bits
of Shannon entropy in my pool, I can generate a 128-bit AES key, you
will generate keys that aren't as secure as you think they are.  Now,
most TRNGs seem to evade this and many other problems by designing the
hardware to give a relatively simple probability model.  If your
probability model is close to uniform, then all these probability
distribution based entropy measurements converge (with a constant

In the case above, we had better specify N.  If you sample it 16
times, then you have a 2^{-16} chance of still being in a weak state.
Once you get enough samples that the probability of being in the
pathological worst case is negligible (whatever that means in your
application), then you can start generating output bits.  That's
probably somewhere between N=32 and N=64.  



Re: Linux RNG paper

2006-03-23 Thread John Kelsey
From: David Malone [EMAIL PROTECTED]
Sent: Mar 23, 2006 3:43 PM
Cc: Heyman, Michael [EMAIL PROTECTED]
Subject: Re: Linux RNG paper

One metric might be guessability (mean number of guesses required
or moments thereof).  As you point out, Arikan and Massey have
shown that Shannon entropy is not a particularly good estimate of
guessability. Generalisations of entropy, like Rényi entropy, do seem
to have meaning. The min-entropy mentioned in RFC 4086 seems
reasonable, though I don't think the rationale given in the RFC is
actually correct.

Starting clarification:  

Min-entropy of a probability distribution is 

-lg ( P[max] ), 

minus the base-two log of the maximum probability.  

The nice thing about min-entropy in the PRNG world is that it leads to
a really clean relationship between how many bits of entropy we need
to seed the PRNG, and how many bits of security (in terms of
resistance to brute force guessing attack) we can get.

Suppose I have a string S with 128 bits of min-entropy.  That means
that the highest probability guess of S has probability 2^{-128}.  I
somehow hash S to derive a 128-bit key.  The question to ask is, could
you guess S more cheaply than you guess the key?  When the min-entropy
is 128, it can't be any cheaper to guess S than to guess the key.
That's true whether we're making one guess, two guesses, ten guesses,
or 2^{127} guesses.   

To see why this is so, consider the best case for an attacker: S is a
128-bit uniform random string.  Now, all possible values have the same
probability, and guessing S is exactly the same problem as guessing
the key.  (I'm ignoring any bad things that happen when we hash down
to a key, but those can be important to think about in a different

Now, why is this the best case for an attacker?  Because it gives the
highest probability of guessing right on the nth guess.  If the
min-entropy is 128, then the highest probability symbol must have
prob. 2^{-128}.  If the next highest probability symbol has lower than
2^{-128} probability, then his second guess has lower probability.
And then the next highest probability symbol is constrained in the
same way.   

This makes it really easy, once you're working in min-entropy terms,
to answer questions like do I have enough entropy in this string to
initialize a PRNG based on running AES-128 in counter mode?
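
To make the contrast concrete, here is a small illustrative Python sketch (the four-outcome distribution is made up for the example):

```python
from math import log2

def shannon_entropy(probs):
    # average surprise: -sum of p * lg(p)
    return -sum(p * log2(p) for p in probs if p)

def min_entropy(probs):
    # -lg(P[max]): the log of the best single guess's success probability
    return -log2(max(probs))

# a skewed four-outcome distribution
probs = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(probs))  # 1.75 bits
print(min_entropy(probs))      # 1.0 bit: the best guess succeeds half the time
```

For PRNG seeding, the min-entropy figure is the one that matters, because it bounds an attacker's best-guess success probability; that is what gives the clean relationship between bits of entropy and bits of security described above.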


--John Kelsey, NIST


Re: pipad, was Re: bounded storage model - why is R organized as 2-d array?

2006-03-21 Thread John Kelsey

Sent: Mar 21, 2006 9:58 AM
Subject: Re: pipad, was Re: bounded storage model - why is R organized as  
2-d array?

| Anyone see a reason why the digits of Pi wouldn't form an excellent
| public large (infinite, actually) string of random bits?

The issue would be: are there any dependencies among the bits of
pi that would make it easier to predict where an XOR of n streams of
bits taken from different positions actually comes from, or, more
weakly, to predict subsequent bits.

When you build this scheme, you have to compare it to all other ways
of generating random-looking keystream for a stream cipher.  That
means comparing it with generators which are guaranteed to be as hard
to predict output bits for as a large integer is hard to factor, for
example.  Beyond the coolness factor of pi, it's hard to work out what
additional guarantee we're getting from using it here.  

I don't know what the performance of the algorithm for generating the
next n bits of pi looks like, but I'm guessing that I can do a fair
number of AES/Serpent/Twofish/MARS/RC6/Blowfish/CAST/3DES/IDEA/RC5
calculations for the same cost.  And we know how to build a stream
cipher out of all those ciphers (running in OFB or counter mode) which
is guaranteed to be as strong as the strongest of them.  
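
A rough sketch of that XOR construction, with hash-based counter-mode generators standing in for the block ciphers named above (the keys and message here are placeholders):

```python
import hashlib

def ctr_keystream(key, algo, n):
    # counter-mode keystream; a hash function stands in for a block cipher
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.new(algo, key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def combined_keystream(generators, n):
    # XOR several independently keyed generators: the combination is at
    # least as hard to predict as the strongest one, since the others
    # just act as a pad on top of it
    ks = bytes(n)
    for key, algo in generators:
        ks = bytes(a ^ b for a, b in zip(ks, ctr_keystream(key, algo, n)))
    return ks

gens = [(b"key one", "sha256"), (b"key two", "sha512"), (b"key three", "md5")]
msg = b"attack at dawn"
ct = bytes(a ^ b for a, b in zip(msg, combined_keystream(gens, len(msg))))
```

If any single generator's output is unpredictable, so is the XOR; breaking the combination requires breaking every component.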

It's all about tradeoffs between performance, security, and what
strong statements you can make about your security when you're done.
In some applications, I am willing to give up a lot of performance for
a solid proof of security; in others, I am willing to give up any hope
of a proof of security to get really good performance.

   -- Jerry

--John Kelsey


Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-02-26 Thread John Kelsey
From: Peter Saint-Andre [EMAIL PROTECTED]
Sent: Feb 24, 2006 3:18 PM
Subject: Re: NPR : E-Mail Encryption Rare in Everyday Use

We could just as well say that encryption of remote server sessions is
rare in everyday use. It's just that only geeks even do remote server
sessions, so they use SSH instead of telnet.

The thing is that email is in wide use (unlike remote server sessions).
Personally I doubt that anything other than a small percentage of email
will ever be signed, let alone encrypted (heck, most people on this list
don't even sign their mail).

I'm certain that only a small percentage of e-mail will ever be
signed, so long as the tools to do that are so hard to use, and the
value added so small.  I find it useful to use encryption all the time
on my private data, but virtually never use it for communications,
because even among cryptographers the setup hassles are too great, and
the value added too small.  What we ultimately need is encryption and
authentication that are:

a.  Automatic and transparent.

b.  Add some value or are bundled with something that does.

c.  Don't try to tie into the whole horrible set of PKI standards in
terms of uniquely identifying each human and bit in the universe, and
getting them to sign legally binding messages whose full
interpretation requires reading and understanding a 30-page CPS.  

If email encryption became as transparent as SSL, most e-mail would be
encrypted.  This would still leave various phishing issues, etc., but
eavesdropping and a lot of impersonation and spam and phishing would
get much harder.  


--John Kelsey


Re: thoughts on one time pads

2006-01-27 Thread John Kelsey
biased 60/40 for your 128-bit AES key, you'll drop to 90+ bits of
security; if you do that for your OTP, you'll reveal quite a bit of
the normal ASCII text you send, because there's not very much entropy
per bit.)

The issue of preventing reuse is enormous.  

3) How should one combine OTP with another conventional encryption
method, so that if the pad is copied, we still have conventional
cipher protection?  In this manner, one could use the same system for
different use cases; one could, for example, mail the pad, or leave it
with a third party for the recipient to pick up; you get
information-theoretic security if the opponent doesn't obtain the
pad, and empirical (conventional) security if they do.

If you dig around, Rivest wrote a paper at Crypto 82 or 83 about
randomizing encryption which is still pretty nice.  To a first
approximation, superencryption (encrypting the OTP-encrypted text with
AES-CBC or some such thing) is always okay.

4) For authentication, it is simple to get excellent results from an
OTP.  You simply send n bytes of the OTP, which an attacker has a
2^{-8n} chance of guessing.  How do we ensure message integrity?  Is it
enough to include a checksum that is encrypted with the pad?  

There are provably secure authentication schemes that use much less
key material per message.  Google for universal hashing and IBC Hash,
and for provably secure authentication schemes.  I seem to recall that
Stinson has a really nice survey of this either webbed or in his
book.  (Anyone else remember?)  

Does it
depend on our method of encipherment?  Assuming the encipherment is
XOR, is a CRC sufficient, or can one flip bits in the message and CRC
field so as to cancel each other?  

Yep, you can flip bits, so long as you know the CRC polynomial.  If
the CRC polynomial is randomly chosen for each message, and used
correctly, you get one of those provably-secure authentication
schemes.
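
The bit-flipping attack on a fixed, known CRC polynomial fits in a few lines of Python.  CRC-32 is affine over GF(2), so for equal-length messages crc(m xor d) = crc(m) xor crc(d) xor crc(00...0); the message and pad below are placeholders:

```python
import binascii, os

def crc32(b):
    return binascii.crc32(b) & 0xFFFFFFFF

msg = b"PAY ALICE $100"
pad = os.urandom(len(msg) + 4)                      # one-time pad covering msg || CRC
pt  = msg + crc32(msg).to_bytes(4, "big")
ct  = bytes(a ^ b for a, b in zip(pt, pad))

# The attacker knows (or guesses) the plaintext bytes being changed,
# but never learns the pad: flip "$100" into "$900"
target = b"PAY ALICE $900"
delta  = bytes(a ^ b for a, b in zip(msg, target))
# CRC-32 affinity: crc(m ^ d) = crc(m) ^ crc(d) ^ crc(zeros of same length)
crc_fix = crc32(delta) ^ crc32(bytes(len(delta)))
forge   = delta + crc_fix.to_bytes(4, "big")
ct2     = bytes(a ^ b for a, b in zip(ct, forge))

# The receiver decrypts: the message changed, yet the CRC still verifies
dec = bytes(a ^ b for a, b in zip(ct2, pad))
assert dec[:-4] == target
assert crc32(dec[:-4]) == int.from_bytes(dec[-4:], "big")
```

This is exactly why the polynomial must be secret and per-message for the provable-security result to apply.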

If so, how should we compute a MIC?
 Just SHA-1, and include that right after the plaintext (that is, we
encrypt the MIC so as to not reveal a preimage if SHA-1 is found to be
weak)?

Read The Friendly Literature.

The generation of random numbers is too important to be left to chance.
  -- Robert R. Coveyou --
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

--John Kelsey


Re: Encryption using password-derived keys

2005-12-02 Thread John Kelsey
From: Jack Lloyd [EMAIL PROTECTED]
Sent: Nov 29, 2005 11:08 AM
Subject: Encryption using password-derived keys

The basic scenario I'm looking at is encrypting some data using a
password-derived key (using PBKDF2 with sane salt sizes and iteration
counts). I am not sure if what I'm doing is sound practice or just
pointless overengineering and wanted to get a sanity check.

My inclination is to use the PBKDF2 output as a key encryption key,
rather than using it to directly key the cipher (with the key used
for the cipher itself being created by a good PRNG). For some reason
the idea of using it directly makes me nervous, but not in a way I
can articulate, leading me to suspect I'm worried over nothing.

I think this is sensible for convenience reasons:

a.  You can now change passwords without decrypting and re-encrypting
all your data.  And similarly, if you ever had a reason to change
keys, you could do that without changing passwords.  

b.  You can now check the correctness of the entered password when you
decrypt the data encryption key (using authenticated encryption!),
rather than needing to process the whole data.  (You could also just
generate a few extra check bits from PBKDF2.)  

So, assuming using it as a KEK makes sense: At first I thought to use
XOR to combine the two keys, but realized that could lead to related
key attacks (by just flipping bits in the field containing the
encrypted key). That is probably not a problem with good algorithms,
but, then again, why take the chance; so I was thinking instead using
NIST's AES-wrap (or perhaps a less weirdly designed variant of it
that uses HMAC for integrity checking and AES in CBC mode for
encryption).

You almost certainly need to do encryption and authentication both on
your bulk data and your encrypted key.  So why not do some mode that
does both (CCM being the obvious choice) for both the key encryption
and the bulk data encryption?


salt = PRNG_output(128)
iteration_count = 100
nonce = current nonce
DEK = PRNG_output(128)
KEK = PBKDF2(password,salt,iteration_count,128)
KeyBlob = CCM(KEK,0,DEK)
BulkData = CCM(DEK,nonce,plaintext)
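
A runnable approximation of that sketch, using stdlib PBKDF2 plus an encrypt-then-MAC construction as a stand-in for AES-CCM (the password, nonces, and sizes are illustrative):

```python
import hashlib, hmac, os

def keystream(key, nonce, n):
    # counter-mode keystream from HMAC-SHA256 (stand-in for AES-CTR inside CCM)
    out = b""
    for i in range((n + 31) // 32):
        out += hmac.new(key, nonce + i.to_bytes(4, "big"), hashlib.sha256).digest()
    return out[:n]

def seal(key, nonce, pt):
    # encrypt-then-MAC: roughly the service CCM provides in one primitive
    ct = bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))
    tag = hmac.new(key, b"auth" + nonce + ct, hashlib.sha256).digest()[:16]
    return ct + tag

def unseal(key, nonce, blob):
    ct, tag = blob[:-16], blob[-16:]
    want = hmac.new(key, b"auth" + nonce + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, want):
        raise ValueError("wrong password or corrupted data")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

salt = os.urandom(16)
KEK = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000, dklen=16)
DEK = os.urandom(16)
key_blob = seal(KEK, bytes(12), DEK)    # wrap the DEK; the KEK is fresh per salt
nonce = os.urandom(12)
data_blob = seal(DEK, nonce, b"the bulk data")
```

Note how verifying the key blob's tag doubles as the cheap password check mentioned in (b): a wrong password fails authentication on a 32-byte blob instead of requiring a pass over the whole file.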

Am I thinking about this far harder than I should?

I'll toss two other random ideas out there to see if they're useful to
you:

a.  You may be worried about having a properly seeded PRNG available
to do your data encryption key generation.  I think a sensible way
around this is to use both a PRNG output and some extra bits from
PBKDF2 to derive the first data encryption key.  Like you could do:

X = PBKDF2(password,salt,iteration_count,256)
KEK = left 128 bits of X
S = right 128 bits of X
DEK = S xor PRNG_output(128)  

b.  You can use a clever trick by Abadi, Lomas and Needham to save
yourself most of the work you do on iterating the password hash during
the creation of the KEK, but not when rederiving it.  Basically, what
you do is instead of setting an iteration count of 2^{21}, you
generate a big random salt, and omit 20 bits of it from the salt
that's stored with the encrypted file.  This forces anyone trying to
re-derive the KEK to do about 2^{20} work on average, but it makes
generating the original encrypted file almost free.  I'm always
surprised that this isn't used more often, because it's such a clever
trick.
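
The Abadi-Lomas-Needham trick might look like this in Python (my sketch hides 2 salt bytes rather than 20 bits, and uses an 8-byte check value as a stand-in for the wrapped key's authentication tag):

```python
import hashlib, os

HIDDEN = 2   # bytes of salt withheld; the post's example hides 20 bits

def create_kek(password):
    # cheap for the legitimate creator: one PBKDF2 call over the full salt
    full_salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", password, full_salt, 1, dklen=16)
    check = hashlib.sha256(b"check" + kek).digest()[:8]  # stands in for the key blob's tag
    return full_salt[:-HIDDEN], check, kek

def rederive_kek(password, public_salt, check):
    # rederivation (and every attacker password guess) must search the
    # withheld bytes: about 2^15 tries on average with HIDDEN = 2
    for i in range(256 ** HIDDEN):
        salt = public_salt + i.to_bytes(HIDDEN, "big")
        kek = hashlib.pbkdf2_hmac("sha256", password, salt, 1, dklen=16)
        if hashlib.sha256(b"check" + kek).digest()[:8] == check:
            return kek
    raise ValueError("wrong password")

public_salt, check, kek = create_kek(b"battery staple")
```

Creation costs one hash; rederivation costs up to 2^{16} here, and hiding 20 bits as in the post pushes that to roughly 2^{20}, all without any stored iteration count.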


--John Kelsey


Re: timing attack countermeasures (nonrandom but unpredictable delays)

2005-11-17 Thread John Kelsey
Sent: Nov 16, 2005 11:37 PM
To: David Wagner [EMAIL PROTECTED]
Subject: Re: timing attack countermeasures (nonrandom but unpredictable delays)

I don't follow; averaging allows one to remove random variables from
the overall time, but you're still left with the real computation
time plus the deterministic delay I suggested as a function of
the input.

Specifically, time t(k,x) = f(k,x) + d(k,x) + r

Where r is a random variable modelling all random factors, f is the
time to compute the function, and d is the deterministic delay I
suggested that is a function of the inputs.  Averaging with repeated
evaluations of the same k and x allows one to compute the mean value
of r, and the sum f+d, but I don't see how that helps one separate f
from d.  What am I missing?

Let's assume d(k,x) is a random function of k||x, uniformly
distributed between 0 and T where T is the average time of the
encryption.  I choose a set of inputs to the cipher x[0,1,2,...,n-1]
so that if my guess of some part of k is right, I expect their total
timing to be much lower than the average case.  

I get back Sum(f(k,x[i])+d(k,x[i])+r[i]).  

Suppose my guess is wrong.  Now what we expect is:

a.  Sum(f(k,x[i]) = average value 
b.  Sum(d(k,x[i]) = average value
c.  Sum(r[i]) = average value

Suppose my guess is right.  Now what we expect is:

a.  Sum(f(k,x[i]) = unusually low value 
b.  Sum(d(k,x[i]) = average value
c.  Sum(r[i]) = average value

So, right guesses still give me unusually low values, and wrong
guesses still give me average-looking values.  That means the timing
channel is still there--d(k,x) only adds random noise.

The only way to avoid this is to make d(k,x) somehow related to
f(k,x).  That's the idea behind things like having software or
hardware go through both the 0 and 1 case for each bit processed in an
exponent.  In that case, we get d(k,x) being fast when f(k,x) is slow,
and vice versa, and we close the timing channel.  

As long as d(k,x) is independent of f(k,x), I can still test guesses
of parts of k or parts of x.  
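
A quick Monte Carlo sketch of the argument (all the timing numbers are invented): the delay d is deterministic and input-dependent but independent of f, and the correct key guess still shows up as an unusually low average:

```python
import hashlib, random, statistics

def f(key_bit, x):
    # the real computation: a few units faster when the guessed key bit matches
    return 100 + (0 if (x & 1) == key_bit else 4)

def d(key, x):
    # deterministic, input-dependent masking delay, independent of f's leak
    h = hashlib.sha256(key.to_bytes(8, "big") + x.to_bytes(8, "big")).digest()
    return h[0] % 100   # roughly uniform on 0..99, same scale as f

random.seed(1)
key, key_bit = 0xDEADBEEF, 1   # low key bit, known only to the victim

# the attacker times inputs whose low bit matches each guess of key_bit
means = {}
for guess in (0, 1):
    times = [f(key_bit, (x << 1) | guess) + d(key, (x << 1) | guess) + random.random()
             for x in range(5000)]
    means[guess] = statistics.mean(times)

# the correct guess (1) still averages visibly lower: d and r only add noise
assert means[1] < means[0]
```

Averaging over 5000 samples shrinks the noise from d and r well below the 4-unit signal from f, which is the point: independent delays raise the sample count, but never close the channel.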

--John Kelsey


Re: the effects of a spy

2005-11-17 Thread John Kelsey

Sent: Nov 16, 2005 12:26 PM
Subject: Re: the effects of a spy

Remember Clipper?  It had an NSA-designed 80-bit encryption
algorithm.  One interesting fact about it was that it appeared to be
very aggressively designed.  Most published algorithms will, for
example, use (say) 5 rounds beyond the point where differential
cryptoanalysis stops giving you an advantage.  Clipper, on the other
hand, falls to differential cryptoanalysis if you use even one less
round than the specification calls for.

Nitpick: the system was Clipper; the algorithm was Skipjack.  

Why the NSA would design something so close to the edge has always
been a bit of a mystery (well, to me anyway).  One interpretation is
that NSA simply has a deeper understanding than outsiders of where
the limits really are.  What to us looks like aggressive design, to
them is reasonable and even conservative.

Three comments here:

a.  Maybe they really do have a good generic differential-probability
limiting algorithm.  There are algorithms like this in the public
world (they can be really computationally expensive, and they only
tell you upper bounds on a subset of possible attacks), and you'd
expect NSA to be interested, since they design a lot of algorithms.
It's not so intuitive to me that this would have applied to impossible
differentials unless they designed it to, though.  In that case,
you're looking at differentials that can't appear instead of
differentials that appear too often.

b.  Maybe they don't care that much about attacks that require some
huge number of plaintexts.  The academic world has defined the game in
terms of total work being the critical parameter in the attack, and
we're seeing a push over time to move that to total attack cost.
(That is, it's not so interesting if you have a 2^{100} attack on a
128-bit block cipher, if the attack is impossible to parallelize.)  If
someone publishes an attack on Twofish tomorrow which requires 2^{96}
plaintexts to break it faster than brute-force, we'll all agree that's
an attack.  But there's no reason NSA has to think that way--maybe
they have some other parameter like 2^{n/2} texts for n-bit block
ciphers, after which they don't care about attacks because they're not
practical.

c.  Maybe they just got it wrong.  SHA0 and SHA1 demonstrate that this
is all too possible.  (It's quite plausible to me that they have very
good tools for analyzing block ciphers, but that they aren't or
weren't sure how to best apply them to hash functions.)  

Or maybe ... the reasoning Perry mentions above applies here.  Any
time you field a system, there is a possibility that your opponents
will get hold of it.  In the case of Clipper, where the algorithm was
intended to be published, there's no possibility about it.  So why
make it any stronger than you have to?

Reducing Skipjack to 31 rounds wouldn't make a practical trapdoor
appear!  You're still talking about 2^{34} chosen plaintexts!

   -- Jerry

--John Kelsey


Re: On Digital Cash-like Payment Systems

2005-10-31 Thread John Kelsey
From: cyphrpunk [EMAIL PROTECTED]
Sent: Oct 27, 2005 9:15 PM
To: James A. Donald [EMAIL PROTECTED]
Subject: Re: On Digital Cash-like Payment Systems

On 10/26/05, James A. Donald [EMAIL PROTECTED] wrote:
 How does one inflate a key?

Just make it bigger by adding redundancy and padding, before you
encrypt it and store it on your disk. That way the attacker who wants
to steal your keyring sees a 4 GB encrypted file which actually holds
about a kilobyte of meaningful data. Current trojans can steal files
and log passwords, but they're not smart enough to decrypt and
decompress before uploading. They'll take hours to snatch the keyfile
through the net, and maybe they'll get caught in the act.

Note that there are crypto schemes that use huge keys, and it's
possible to produce simple variants of existing schemes that use
multiple keys.  That would mean that the whole 8GB string was
necessary to do whatever crypto thing you wanted to do.  A simple
example is to redefine CBC-mode encryption as

C[i] = E_K(C[i-1] xor P[i] xor S[C[i-1] mod 2^{29}])

where S is the huge shared string, and we're using AES.  Without
access to the shared string, you could neither encrypt nor decrypt.
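
A toy version of that mode in Python, with a small Feistel construction standing in for AES and a 1 MiB string standing in for the multi-gigabyte S (plaintext length must be a multiple of the 16-byte block size here):

```python
import hashlib, os

S = os.urandom(1 << 20)   # stand-in for the huge shared string (1 MiB, not 8 GB)

def F(key, i, half):      # Feistel round function
    return hashlib.sha256(key + bytes([i]) + half).digest()[:8]

def E(key, block):        # toy invertible 16-byte block cipher (stands in for AES)
    L, R = block[:8], block[8:]
    for i in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, F(key, i, R)))
    return L + R

def D(key, block):
    L, R = block[:8], block[8:]
    for i in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in zip(R, F(key, i, L))), L
    return L + R

def s_chunk(prev):        # S[C[i-1] mod ...]: 16 bytes of S picked by the previous block
    idx = int.from_bytes(prev, "big") % (len(S) - 16)
    return S[idx:idx + 16]

def encrypt(key, iv, pt):
    out, prev = [], iv
    for j in range(0, len(pt), 16):
        prev = E(key, bytes(a ^ b ^ c for a, b, c in zip(pt[j:j+16], prev, s_chunk(prev))))
        out.append(prev)
    return b"".join(out)

def decrypt(key, iv, ct):
    out, prev = [], iv
    for j in range(0, len(ct), 16):
        x = D(key, ct[j:j+16])
        out.append(bytes(a ^ b ^ c for a, b, c in zip(x, prev, s_chunk(prev))))
        prev = ct[j:j+16]
    return b"".join(out)

key, iv = os.urandom(16), os.urandom(16)
pt = b"sixteen byte msg" * 2
ct = encrypt(key, iv, pt)
```

Each block's whitening depends on an unpredictable index into S, so without the whole string neither encryption nor decryption gets past the first block.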




Re: On the orthogonality of anonymity to current market demand

2005-10-26 Thread John Kelsey
From: R.A. Hettinga [EMAIL PROTECTED]
Sent: Oct 25, 2005 8:34 AM
Subject: On the orthogonality of anonymity to current market demand

That is to say, your analysis conflicts with the whole trend towards
T-0 trading, execution, clearing and settlement in the capital
markets, and, frankly, with all payment in general as it gets
increasingly granular and automated in nature. The faster you can
trade or transact business with the surety that the asset in question
is now irrevocably yours, the more trades and transactions you can
do, which benefits not only the individual trader but markets as a
whole.

The prerequisite for all this is that when the asset changes hands,
it's very nearly certain that this was the intention of the asset's
previous owner.  My point isn't to express my love for book-entry
payment systems.  There's plenty to hate about them.  But if the
alternative is an anonymous, irreversible payment system whose control
lies in software running alongside three pieces of spyware on my
Windows box, they probably still win for most people.  Even bad
payment systems are better than ones that let you have everything in
your wallet stolen by a single attack.  

However anonymous irrevocability might offend one's senses and
cause one to imagine the imminent heat-death of the financial
universe (see Gibbon, below... :-)), I think that technology will
instead step up to the challenge and become more secure as a
result.

What's with the heat-death nonsense?  Physical bearer instruments
imply stout locks and vaults and alarm systems and armed guards and
all the rest, all the way down to infrastructure like police forces
and armies (private or public) to avoid having the biggest gang end up
owning all the gold.  Electronic bearer instruments imply the same
kinds of things, and the infrastructure for that isn't in place.  It's
like telling people to store their net worth in their homes, in gold.
That can work, but you probably can't leave the cheapest lock sold at
Home Depot on your front door and stick the gold coins in the same
drawer where you used to keep your checkbook.

And, since internet bearer transactions are, by their very
design, more secure on public networks than book-entry transactions
are in encrypted tunnels on private networks, they could even be said
to be secure *in spite* of the fact that they're anonymous; that --
as it ever was in cryptography -- business can be transacted between
two parties even though they don't know, or trust, each other.

Why do you say internet bearer transactions are more secure?  I can
see more efficient, but why more secure?  It looks to me like both
kinds of payment system are susceptible to the same broad classes of
attacks (bank misbehavior (for a short time), someone finding a
software bug, someone breaking a crypto algorithm or protocol).  What
makes one more secure than the other?  


--John Kelsey


Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-25 Thread John Kelsey
From: cyphrpunk [EMAIL PROTECTED]
Sent: Oct 24, 2005 5:58 PM
Subject: Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like 
Payment Systems

Digital wallets will require real security in user PCs. Still I don't
see why we don't already have this problem with online banking and
similar financial services. Couldn't a virus today steal people's
passwords and command their banks to transfer funds, just as easily
as the fraud described above? To the extent that this is not
happening, the threat against ecash may not happen either.

Well, one difference is that those transactions can often be undone,
if imperfectly at times.  The whole set of transactions is logged in
many different places, and if there's an attack, there's some
reasonable hope of getting the money back.  And that said, there have
been reports of spyware stealing passwords for online banking systems,
and of course, there are tons of phishing and pharming schemes to get
the account passwords in a more straightforward way.   The point is,
if you're ripped off in this way, there's a reasonable chance you can
get your money back, because the bank has a complete record of the
transactions that were done.  There's no chance of this happening when
there's no record of the transaction anywhere.  

 The payment system operators will surely be sued for this, because
 they're the only ones who will be reachable.  They will go broke, and
 the users will be out their money, and nobody will be silly enough to
 make their mistake again.

They might be sued but they won't necessarily go broke. It depends on
how deep the pockets are suing them compared to their own, and most
especially it depends on whether they win or lose the lawsuit. 

I don't think so.  Suppose there's a widespread attack that steals
money from tens of thousands of users of this payment technology.
There seem to be two choices:

a.  The payment system somehow makes good on their losses.

b.  Everyone who isn't dead or insane pulls every dime left in that
system out, knowing that they could be next.  

It's not even clear that these are mutually exclusive, but if (a)
doesn't happen, (b) surely will.  Nobody wants their money stolen, and
I don't think many people are so confident of their computer security
that they're willing to bet huge amounts of money on it.  If you have
to be that confident in your computer security to use the payment
system, it's not going to have many clients.  




Re: multiple keys to 1

2005-09-15 Thread John Kelsey
From: rbg9000 [EMAIL PROTECTED]
Sent: Sep 8, 2005 3:01 PM
Subject: multiple keys to 1

Sorry, I really don't know much about encryption, and my
google searches haven't turned up much.  I'm wondering if it's
possible to reduce a set of symmetric keys (aes, twofish,
etc..) used to encrypt data into 1 key that can decrypt it?

The straightforward symmetric crypto approach to this (it's
not pretty or elegant, but it works) is to have each
encrypting key be shared with the owner of the recipient
key.  Thus, we might have:

Alice has key K_a
Bob has key K_b
Carol has key K_c

Dave, the recipient, knows all three keys.

Now, to encrypt a message to Dave, anyone can just do
something like

Encrypt(K_a, Message)


Dave can try all possible keys, or can require that the
sender prepend some kind of unique identifier, like

Alice, Encrypt(K_a,Message)

If you're looking for more elegant solutions, you end up
needing to look at public key cryptosystems as Perry pointed
out.  Or look at multicast encryption and signature schemes,
which have some neat stuff right at the boundary between
symmetric and public key crypto.
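
The prepended-identifier version might look like this (a toy hash-based keystream limits messages to 32 bytes; real code would use a proper cipher and a MAC):

```python
import hashlib, os

# each sender shares one symmetric key with Dave, who holds them all
keys = {"Alice": os.urandom(16), "Bob": os.urandom(16), "Carol": os.urandom(16)}

def send(sender, msg):
    # prepend the sender's identifier so Dave knows which key to use
    nonce = os.urandom(12)
    ks = hashlib.sha256(keys[sender] + nonce).digest()  # toy keystream, msg <= 32 bytes
    return sender, nonce, bytes(a ^ b for a, b in zip(msg, ks))

def dave_receive(sender, nonce, ct):
    ks = hashlib.sha256(keys[sender] + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, ks))
```

Without the identifier, Dave falls back to trial decryption under every key he holds, which is the other option described above.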



Another entry in the internet security hall of shame....

2005-08-23 Thread John Kelsey

Recently, Earthlink's webmail server certificate started showing up as expired. 
 (It obviously expired a long time ago; I suspect someone must have screwed up 
in changing keys over or something, because the problem wasn't happening up 
until recently.)  So, I contacted Earthlink's technical support, and I got this 
really encouraging reply:

Dear John Kelsey,

Thank you for contacting us.

I understand that you are having problems viewing Webmail and that it send out 
error on SSL certificate.

I suggest that you try lowering the security settings of your Internet Explorer.
Please follow the steps below on how to lower the security settings on your 
browser:

1. Open Internet Explorer.
2. On the Task panel click on Internet Options.
3. Click on the Advance Tab.
4. Scroll down and uncheck [Warn about invalid site certificates].
5. Remember to click on Apply.
6. Click on OK.

You have successfully lower your Internet Explorer settings.

Should you have any other concerns, please get back to us. You will receive a 


Therese B. 3613
EarthLink Electronic Customer Support
EarthLink, Inc.
Case ID 69080634


Original Message Follows:

*   Presented Article: 142454
*   Name: John Kelsey
*   Account Type: EarthLink Experience
*   Issue: Spam/Internet Fraud Problem
*   Detailed Issue: Report an Issue
*   Article Title: Protecting Yourself Against Email/Internet Fraud
*   Message Body: The SSL certificate for is
expired. The site is fine; it's just the certificate.


herding attack paper submitted to ePrint archive

2005-08-22 Thread John Kelsey

Yoshi and I have submitted a draft of the Herding Hash Functions
paper to the IACR ePrint server, and assuming there are
no problems, it should be up reasonably soon.  The core of
the result is that when I can find lots of collisions for a
hash function by brute force (or maybe analytically, though
that gets more complicated), I can also break most systems
that use a hash function to prove prior knowledge.  I gave a
rump session talk on this a few days ago at Crypto.

--John Kelsey, NIST, August 2005


Re: How much for a DoD X.509 certificate?

2005-08-11 Thread John Kelsey
From: Peter Gutmann [EMAIL PROTECTED]
Sent: Aug 11, 2005 7:42 AM
Subject: How much for a DoD X.509 certificate?

$25 and a bit of marijuana, apparently.  See:

Although the story doesn't mention this, the ID in
question was the DoD Common Access Card, a smart card
containing a DoD-issued certificate.  To get a CAC, you
normally have to provide two forms of verification... in
this case I guess the two were photo ID of dead presidents
and empirical proof that you know how to buy weed.

Ah, so this was more of an attribute certificate, then.  And it
seems the certificate was issued based partly on a nonstandard
proof-of-possession protocol.  (More specifically, proof of
possession with intent to distribute.)




Re: Possible non-extension property for hash functions

2005-08-08 Thread John Kelsey
From: Bill Frantz [EMAIL PROTECTED]
Sent: Aug 6, 2005 6:27 PM
Subject: Possible non-extension property for hash functions

[Talking about the length-extension property.]
H(x) = H(y)  implies  H(x||s) = H(y||s)

It seems to me that there might be a class of hash
functions for which this property did not hold.  While
hashes in this class might require random access to the
entire input, they could prevent the message extension
class of attack exploited by Lucks and Daum, who generated
two Postscript files with very different output but the same
MD5 hash.

There are actually a couple ways to do this.  Either:

a.  Process the message sequentially, but do a final
operation on the intermediate result before putting out an
output hash, which introduces new collisions.  Any truncated
hash like SHA384 or SHA224 does this.  As an added benefit,
once you truncate enough bits (SHA384 truncates 128 bits),
the length extension attack on prefix MAC goes away, and the
Joux multicollisions and long message second preimage
attacks Bruce and I came up with become much more
difficult.  (On the other hand, it doesn't seem easy to do
any nice reduction proof from the strength of the SHA256
compression function to the strength of the SHA384 hash
function.)

b.  Process the message multiple times, or give yourself
random access, or whatever.  Just processing through the
message twice sequentially does eliminate the simple
length-extension property, but there are variations on it
that can still be used--that's why Joux multicollisions can
be found even when you process the message twice.
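
The extension property is easy to see with a toy Merkle-Damgard chain whose state is small enough to collide by brute force: an internal collision always survives a common suffix, which is exactly what a final truncation or output transform can break.

```python
import hashlib

def md(msg, state=b"\x00\x00"):
    # toy Merkle-Damgard chain with a 16-bit state, so collisions are cheap
    for b in msg:
        state = hashlib.sha256(state + bytes([b])).digest()[:2]
    return state

# brute-force a collision among distinct 3-byte messages; with 2^16 + 1
# messages and a 16-bit state, the pigeonhole principle guarantees one
seen, x, y = {}, None, None
for i in range(2**16 + 1):
    m = i.to_bytes(3, "big")
    h = md(m)
    if h in seen:
        x, y = seen[h], m
        break
    seen[h] = m

assert md(x) == md(y) and x != y
# the output *is* the chaining state, so the collision is internal, and
# any common suffix preserves it: H(x) = H(y) implies H(x||s) = H(y||s)
assert md(x + b"suffix") == md(y + b"suffix")
```

With a truncated output (option a), an output collision need not be a collision of the full internal state, so appending a suffix can separate the two hashes again.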

Are there other ways I'm not seeing to do this?   

Cheers - Bill

--John Kelsey


Re: draft paper: Deploying a New Hash Algorithm

2005-08-06 Thread John Kelsey
From: Steven M. Bellovin [EMAIL PROTECTED]
Sent: Aug 5, 2005 12:04 PM
To: Steve Furlong [EMAIL PROTECTED]
Subject: Re: draft paper: Deploying a New Hash Algorithm

I'd have phrased it differently than Perry did.  I'd say
that the attackers are often cleverer *about security* than
protocol designers, because insecurity is their specialty.
Ordinary protocol designers are good at designing those
protocols, but they haven't been trained to think about
insecurity.

Yes!  I've noticed that it's really common for me to work on
a project for a very short time (like an hour or two), and
start noticing all kinds of security holes, including a lot
of stuff with nothing to do with cryptography.  I'll still
be asking very basic questions of the other people on the
project about how things are *supposed* to work, but be
pointing out attacks they never thought of at the same time.
I think this is just a different way of thinking.  Attackers
and security people do this all the time.  Most normal
people never do--it's like once they've got the rules in
their heads, that's what's possible, and they don't even
think about it.  

How many times, working on security for some system, have
you pointed out an attack, only to hear some variation on
but who would think of that?  And you can see the same
thing happening in discussions of homeland security and
counterterrorism stuff.  It's like most people look at the
national guardsmen in the airport, and say whew, I feel
safer, rather than what the heck are those guys supposed
to do to stop hijacked planes crashing into buildings? 

I like your starting points, but I think the real approach
to thinking about this is a bit broader.  It has to do with
understanding the rules, and trying to ask, for each one,
"what makes me obey that rule?" or "what would happen if
I didn't do such and so?"  

   --Steven M. Bellovin,

--John Kelsey


Re: solving the wrong problem

2005-08-06 Thread John Kelsey
From: Perry E. Metzger [EMAIL PROTECTED]
Sent: Aug 6, 2005 2:28 PM
Subject: solving the wrong problem

Frequently, scientists who know nothing about security come
up with ingenious ways to solve non-existent problems. Take
this, for example:

Basically, some clever folks have found a way to fingerprint the
fiber pattern in a particular piece of paper so that they know they
have a particular piece of paper on hand. It is claimed that this
could help stop forged passports.

Unfortunately, the invention is wholly useless for the
stated purpose.

A couple of these guys gave a talk at NIST recently.  The
thing is, I can think of a bunch of uses for the thing
they're doing.  This looks genuinely useful as a tool.
Whether they've worked out how to use the tool to best
effect is a different question.

The passport idea doesn't add much, as you pointed out.  The
reason is that the thing you care about there is that the
information on the passport hasn't been tampered with and
originated from the right source.  An identical copy of my
passport is no worse than the original.  

On the other hand, think about the uses of this technology
for paper bearer instruments.  Design travelers' checks that
include a 2D barcode with a BLS signature, bound to the
piece of paper, and you can print the damned thing on
regular paper if the readers are cheap enough.  Similar
things apply to stamps, tickets, etc.  If you can get
readers into peoples' homes, you can even allow home
printing of tickets, travelers' checks, etc., each bound to
a specific piece of paper.  Add a reader to your favorite
DVD player platform (I think it's the same basic hardware as
is used in a DVD player), and you can uniquely sign content
on a disc, and use the player's hardware to enforce only
playing content when the disc's biometric matches the signed
content.  You could use the technique to scan small bits of
flat surfaces of all your stuff (the basic technique works
on paper, plastic, and metal, at least; I'm not sure if it
works on wood or glass), record the biometrics and locations
of the scans, and provide this to the police when your house
gets burgled.  There are some wonderful potential uses for
this technology in making paper-based voting systems *much*
more secure.  And on and on.  If I were in the business of
producing tamper-resistant paper, I'd be scared to death.

Anyway, I have a larger point.

I read about such stuff every day -- wacky new ways of
building tamper proof tokens, quantum cryptography, and
other mechanisms invented by smart people who don't
understand threat models at all.

Yes.  As I said, sometimes this stuff looks almost useless
(like quantum cryptography), other times it looks like it
may provide powerful tools, despite the fact that its
designers don't know much about how to use those tools yet.
The same is often true in cryptography, where we have some
very theoretical work which sometimes ends up having
enormous practical consequences.  

We already have the term snake oil for a very different
type of bad security idea, and the term has proven valuable
for quashing such things. We need a term for this sort of
thing -- the steel tamper resistant lock added to the
tissue paper door on the wrong vault entirely, at great
expense, by a brilliant mind that does not understand the
underlying threat model at all.

In my consulting days, I used to use the term padlocking
the screen door for the related phenomenon of piling
security on one part of the system while ignoring the bigger
vulnerabilities.  But this is a bit different.


--John Kelsey


Possibly new result on truncating hashes

2005-08-01 Thread John Kelsey

I have what seems like a new and interesting result, which I
haven't seen before, but which may or may not be new.  

The high order bit is that you can't generally guarantee
that truncating your hash (chopping off some bits) won't
weaken it.  That is, if you chop SHA256 off to 160 bits as a
replacement for SHA1 (something I'm working on with Niels
Ferguson for X9 right now), it's possible that there's no
attack on SHA256, but there is an attack on SHA160.  

How could this work?  Suppose we have an algorithm like the
Wang attacks on MD5, SHA0, or SHA1 for finding a single
collision pair.  The algorithm returns a single collision
pair on the first 160 bits of SHA256 for (say) 2^{64} work.
(Remember that this is just an example--I don't have any
such algorithm!)  Each time the algorithm is run, it gives a
new, unrelated collision pair, and the remaining 96 bits are
completely randomized by the collision pair.  

Now, this is an attack on SHA256 truncated to 160 bits.
Does it lead to an attack on SHA256 as a whole?  If it does,
then we can make a reduction proof that says that the
truncated hash is strong if the original hash is strong.
Unfortunately, we can't make this argument, because this
postulated collision algorithm can't be used to find a
collision in the whole SHA256 more efficiently than brute force.

Let's do the counting argument:  Each time we call the
160-bit collision algorithm, we get a new pair which has the
same first 160 bits of SHA256 output, and random unrelated
last 96 bits of SHA256 output.  Each pair has a probability
of 2^{-96} of colliding in the remaining bits.  So, to get a
collision on the whole SHA256 using this 160-bit collision
algorithm, we expect to have to try about 2^{96} collision
pairs, each found at a cost of 2^{64}.  The resulting work
is 2^{64} * 2^{96} = 2^{160}, more than a straight
brute-force collision search on SHA256.  

What does this mean?  It means that just because you have a
good 256-bit hash, you can't necessarily make a good 160 bit
hash from it.  You might be able to--it seems like you
usually will be able to--but there's no guarantee.  
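The counting argument is easy to check at toy scale.  Here's a minimal
Python sketch (truncating SHA-256 to 16 bits rather than 160, so the
birthday search finishes instantly): a pair that collides on the
truncation almost never collides on the full hash, and the postulated
2^{64}-per-pair algorithm would cost 2^{64} * 2^{96} = 2^{160} overall.

```python
import hashlib
from itertools import count

def h(m):
    return hashlib.sha256(m).digest()

# Birthday-search a collision on the first 16 bits of SHA-256.
seen = {}
for i in count():
    m = str(i).encode()
    t = h(m)[:2]                        # the "truncated hash"
    if t in seen:
        m1, m2 = seen[t], m
        break
    seen[t] = m

assert h(m1)[:2] == h(m2)[:2]           # collides on the truncation...
assert h(m1) != h(m2)                   # ...but the remaining bits differ

# The counting argument: 2^{96} pairs expected, at 2^{64} work per pair,
# which is worse than a straight 2^{128} brute-force collision search.
assert (2**64) * (2**96) == 2**160
```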

Comments?  Is this some well-known result that I'm missing?



Re: ID theft -- so what?

2005-07-14 Thread John Kelsey
From: Aram Perez [EMAIL PROTECTED]
Sent: Jul 14, 2005 10:45 AM
To: Cryptography
Subject: Re: ID theft -- so what?

Why do cryptography folks equate PKI with
certificates and CAs?

One nontrivial reason is that many organizations have spent
a lot of time and money building up elaborate rules for
using PKI, after long negotiations between legal and
technical people, many hours of writing and revising,
gazillions of dollars in consultants' time, etc.  So,
anytime you start doing anything involving public key
cryptography, all this machinery gets invoked, for
bureaucratic reasons.  That is, you've now trespassed on PKI
turf, and you'll have to comply with this enormous set of rules.

I know of a couple cases where this led to really irritating
results.  In one, a friend of mine was using a digital
signature to verify some fairly trivial thing, but was told
it was against policy to use a digital signature without the
whole PKI.  (I believe he changed the documentation, so that
he was using a modular arithmetic checksum instead of a
signature verification.) 

As a consultant, I designed and evaluated a lot of systems
that used public key cryptography.  None of the successful
ones tried to use the whole X.509 + CRL + CPS + everything
else overhead--typically, they just used a one-deep
hierarchy, where the keypair was put into the device by the
manufacturer along with a copy of the top-level public key
used to sign all device public keys.  This works, because it
doesn't try to incorporate the output of 20 years of
make-work programs in cryptography (they weren't intended
that way, but that's largely how they turned out), and it
doesn't try to incorporate every idea that might be useful
anywhere in the world into some very limited and simple system.

Aram Perez

--John Kelsey


halloween hash bash reminder--July 15 deadline

2005-07-11 Thread John Kelsey

This is just a reminder that the NIST hash workshop (Oct
31-Nov 1 of this year) is still taking submitted talks,
abstracts, etc., until July 15.  There are no proceedings,
so there should not be any problem publishing things that
you discuss at this workshop.  A major goal of doing this is
to get people to discuss interesting ongoing work so we can
understand it now, rather than after we've made decisions
about how to react to all the excitement in the hash
function world.  (For what it's worth, I plan on presenting
some new hash function results with Yoshi Kohno that we
intend to publish somewhere else.  I expect we'll post these
on the ECRYPT server before that.)

This workshop is going to have a big impact on decisions
like whether we should do some AES-like process to get a new
hash function standard, whether we should try to standardize
on some additional algorithms (like Whirlpool or the Russian
standard GOST hash function), etc.  Taking part is a great
opportunity to influence those decisions.  If you have
something new to say about hash functions, or something old
that should be repeated, send us some slides, or at least an
extended abstract, and we'll see whether we can fit you onto
the agenda for some discussion time.  

--John Kelsey, NIST, July 2005


Re: /dev/random is probably not

2005-07-05 Thread John Kelsey
From: Charles M. Hannum [EMAIL PROTECTED]
Sent: Jul 3, 2005 7:42 AM
Subject: Re: /dev/random is probably not

Also, I don't buy for a picosecond that you have to gather
all timings in order to predict the output.  As we know
from countless other attacks, anything that gives you some
bits will reduce the search space and therefore weaken the
system, even if it does not directly give you the result.

I think you're landing on the genuinely hard problem here.
Designing a PRNG intelligently is an exercise in design and
cryptanalysis, so long as you get to assume that you know
how much entropy you're getting.  But actually getting
reliable entropy estimates is:

a.  A data analysis problem, where you never really get a
final answer, you just get the best model you knew how to
test and don't find strong evidence to discard it, and 

b.  Enormously sensitive to implementation details that are
hard to deal with in software.  In a hardware RNG design,
you can at least analyze test-mode raw outputs of the ring
oscillator or whatever, build a statistical model, and know
that the same basic model will also describe the devices in
the field.  They may vary because of manufacturing defects,
changes in the field (like heating/cooling or component
failure), etc., but you at least know what kind of thing
you've got.  With software sources, there's pretty much no
limit to what changes the designer of the hardware is
allowed to make to your devices, so long as he keeps the
interface the same.  You do lots of analysis on a machine
with a spinning disk drive, and end up on one with one
networked drive and one flash drive, or some such horrible thing.

Additionally, building a probability model for stuff you
observe on a general purpose computer is *hard*, because
there's so much complicated deterministic stuff going on.
Even if you're using the same machine and setup to collect
entropy in production as you did to build your probability
model for entropy estimation, it's hard to have enormous
confidence in the correctness of your estimates.  How much
of that apparently random behavior you were getting when you
sampled the cycle counter in a tight loop was because of
genuine unpredictability, and how much was because of the
very patterned but complicated stuff going on behind the
scenes on your machine? 
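For concreteness, here's a little Python experiment in that spirit:
sample the low byte of a high-resolution timer in a tight loop and fit
naive Shannon and min-entropy estimates to the result.  The numbers it
computes describe only the model fitted to this machine, this run; they
say nothing about how predictable the samples are to someone with a
better model of the deterministic stuff going on behind the scenes.

```python
import time
from collections import Counter
from math import log2

# Sample the low byte of a high-resolution timer in a tight loop.
samples = [time.perf_counter_ns() & 0xFF for _ in range(10000)]
counts = Counter(samples)
n = len(samples)

# Naive per-sample estimates, fitted to this run only.
p_max = max(counts.values()) / n
h_min = -log2(p_max)                                    # min-entropy
h_shannon = -sum((c / n) * log2(c / n) for c in counts.values())

# Sanity bounds only; the estimates themselves prove nothing about an
# attacker who models this machine's behavior better than we did.
assert 0 <= h_min <= h_shannon + 1e-9 <= 8 + 1e-9
```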



Re: NIST Public Workshop on Cryptographic Hashes

2005-06-15 Thread John Kelsey
From: Perry E. Metzger [EMAIL PROTECTED]
Sent: Jun 15, 2005 8:00 AM
Subject: NIST Public Workshop on Cryptographic Hashes

[Discussing the NIST public hash function workshop.]
   The workshop will be held on October 31 and November 1, 2005,  
  from 9 a.m. to 5:30 p.m.

Informally, we're calling this the "halloween hash bash."  Come dressed
as your favorite hash function!  If you want to have some impact on
where we go with hash functions, this is a good thing to attend.

Perry E. Metzger   [EMAIL PROTECTED]

--John Kelsey, NIST 


Re: expanding a password into many keys

2005-06-15 Thread John Kelsey
Sent: Jun 14, 2005 2:54 PM
Subject: Re: expanding a password into many keys

You know, the proof that HMAC is a good MAC requires that the
*compression function* of the underlying hash is good. And for almost
all applications like this one, both the input password and the
sequence number, tag name, or whatever the second input is, all fit
into a single compression function block.

Actually, there are quite a few attacks on these schemes that amount
to messing up that property--getting the message to span multiple
blocks (as with the length extension attacks).  The other thing that
any direct use of a crypto primitive can't help with is sliding data
between fields.  If you don't encode your fields in some parseable way
before feeding them into the hash/MAC/block cipher/whatever, you get
these weird attacks where I either request key XXX with optional
additional string Y, or I request key XX with optional additional
string XY, and I get the same result.  Usually this doesn't lead to
a practical attack, but it puts an extra burden on the engineer using
the KDF, since he or she had better understand this attack and make
sure it doesn't apply.  Adams, Kramer, Mister, and Zuccherato have a
recent paper pointing out a bunch of these problems with widely used
KDFs.  I think of these as analogous to the length-extension property
of hash functions or the complementation property of DES--it doesn't
keep the crypto mechanism from being used securely, but it does make
the job of an engineer trying to use it needlessly more complicated.  
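The field-sliding problem takes only a few lines of Python to
demonstrate (a sketch of the general failure, not of any particular
fielded KDF): a KDF that just concatenates its fields collapses
("XXX", "Y") and ("XX", "XY") to the same key, while length-prefixing
each field before hashing removes the ambiguity.

```python
import hashlib
import struct

def naive_kdf(secret, fields):
    h = hashlib.sha256(secret)
    for f in fields:
        h.update(f)                 # fields simply concatenated
    return h.digest()

def encoded_kdf(secret, fields):
    h = hashlib.sha256(secret)
    for f in fields:
        # 4-byte length prefix makes the field boundaries parseable
        h.update(struct.pack(">I", len(f)) + f)
    return h.digest()

s = b"secret"
# Data slides between fields: two different requests, one key.
assert naive_kdf(s, [b"XXX", b"Y"]) == naive_kdf(s, [b"XX", b"XY"])
# The encoded version keeps the requests distinct.
assert encoded_kdf(s, [b"XXX", b"Y"]) != encoded_kdf(s, [b"XX", b"XY"])
```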


--John Kelsey


Re: Collisions for hash functions: how to explain them to your boss

2005-06-15 Thread John Kelsey
From: Eric Rescorla [EMAIL PROTECTED]
Sent: Jun 14, 2005 9:36 AM
Subject: Re: Collisions for hash functions: how to explain them to
your boss

[Discussing the MD5 attacks and their practicality, especially the
recent postscript demonstration.]


But everything you've just said applies equally to my
JavaScript example. It's not a security vulnerability in
the browser or the data format that it displays differently
depending on the context.  It's an intentional feature!

I think our disagreement here has to do with what we're
seeing from the attack.  You're seeing a specific attack
vector--use conditional execution/display + the ability to
find specific collisions of a particular form to yield these
nice attacks where we have two messages that amount to

X ||M0||M1   and   X*||M0||M1

where when the first part of the message is X, some kind of
conditional execution displays M0, while X* leads to the
display of M1.  And I think you're right to say that in many
cases, once you're viewing the result of blindly executing
programs that I send you, you're vulnerable to other attacks
that are about as damaging.  Now, it's certainly possible to
imagine cases where this kind of conditional execution
wouldn't be allowed to access anything outside the file, but
once you've decided to put in a full featured scripting
language, it's not that much of a stretch to think you'll
let me read the system time.

I'm seeing a more general pattern of attacks, in which X and
X* amount to context for the display of whatever follows
them.  That seems to me to encompass a lot more than macros
and script files with conditional execution.  And even when
I don't have a specific attack in mind, it worries me that
if I'm trying to help someone safely use MD5, I've got to
think through whether there is any way at all to make this
kind of attack pattern work.  It's a heck of a lot easier to
say "don't use MD5."


--John Kelsey


Re: expanding a password into many keys

2005-06-13 Thread John Kelsey
Sent: Jun 12, 2005 11:27 AM
Subject: expanding a password into many keys

I'd like to take a password and expand it into several keys.  It
seems like a fairly simple operation of hashing the concatenation
of the password with each key name in turn to get each key.

Are there any 'gotchas' with that?

There's a length extension property with what you're doing, so if I
get to choose your key names, I can do something unpleasant to you.
Suppose I know the length of pass, and get to choose two key names,
K1_name and K2_name.  You give me K1 = sha1( pass||K1_name), then I
need to guess K2_name.  I can choose K2_name to be K1_name,
appropriately padded to the full block size exactly as it will be in
the SHA1 computation that produces K1.  Then, I can compute K2 on my
own, because the only effect of the secret value pass on K2 is going
through K1.  
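Here's that attack worked out in Python.  hashlib doesn't expose SHA-1's
internal state, so this uses a minimal SHA-1 implementation that can be
resumed from a known digest; the password and key names are made-up
placeholders.  Knowing only K1 and the length of pass, the attacker
computes K2 without ever seeing pass.

```python
import hashlib
import struct

_IV = (0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0)

def _rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(msg, state=_IV, total_len=None):
    # SHA-1 with a resumable state; total_len is the claimed length of
    # the full message, used to build the final padding block.
    if total_len is None:
        total_len = len(msg)
    msg += b"\x80" + b"\x00" * ((55 - total_len) % 64) + struct.pack(">Q", total_len * 8)
    h = list(state)
    for off in range(0, len(msg), 64):
        w = list(struct.unpack(">16I", msg[off:off + 64]))
        for i in range(16, 80):
            w.append(_rotl(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1))
        a, b, c, d, e = h
        for i in range(80):
            if i < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif i < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif i < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = (_rotl(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF, a, _rotl(b, 30), c, d
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]
    return struct.pack(">5I", *h)

assert sha1(b"abc") == hashlib.sha1(b"abc").digest()    # sanity check

password = b"hunter2"            # secret; attacker knows only its length
K1_name = b"storage-key"         # placeholder key name
K1 = sha1(password + K1_name)

# The glue is exactly the padding SHA-1 appends to password||K1_name.
L = len(password) + len(K1_name)
glue = b"\x80" + b"\x00" * ((55 - L) % 64) + struct.pack(">Q", L * 8)
K2_name = K1_name + glue         # the attacker-chosen second key name

# Resume the hash from K1 past the glue block -- no password needed.
forged_K2 = sha1(b"", state=struct.unpack(">5I", K1), total_len=L + len(glue))

assert forged_K2 == sha1(password + K2_name)
```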

This doesn't look like an especially realistic attack model, but I'm
not sure what you're doing with this.


--John Kelsey


Re: Papers about Algorithm hiding ?

2005-06-07 Thread John Kelsey
Sent: Jun 7, 2005 7:43 AM
Cc: Steve Furlong [EMAIL PROTECTED],
Subject: Re: Papers about Algorithm hiding ?

[My comment was that better crypto would never have prevented the
Choicepoint data leakage. --JMK]

Sure it would.  The reason they are not using the tools is because
they are too hard to use.  If the tools were so easy to use that it
was harder to not use them, then they'd be used.  Consider Citigroup
posted today by Bob.  They didn't encrypt the tapes because the tools
don't work easily enough for them.

So, this argument might make sense for some small business, but
Citigroup uses a *lot* of advanced technology in lots of areas, right?
I agree crypto programs could be made simpler, but this is really not
rocket science.  Here's my guess: encrypting the data would have
required that someone make a policy decision that the data be
encrypted, and would have required some coordination with the credit
agency that was receiving the tapes.  After that, there would have
been some implementation costs, but not all *that* many costs.
Someone has to think through key management for the tapes, and
that's potentially a pain, but it's not intractable.  Is this really
more complicated than, say, maintaining security on their publicly
accessible servers, or on their internal network?  


The other way of looking at Choicepoint - change the incentives - is
a disaster.  It will make for a compliance trap.  Compliance *may*
protect the data or it may have completely the opposite effect, the
situation with 'unintended consequences' in such a case is likely to
be completely unpredictable.  The only thing we can guarantee is that
costs will go up.

Well, Choicepoint is a bit different, right?  I mean, as I understand
it the big disclosure happened because they sold peoples' data to
criminals, but they were in the business of selling peoples' data.
They just intended to sell it only to people of good intention, as far
as I can tell.  (Perhaps they should have demanded X.509 certificates
from the businesses buying the data and checked the evil bit.)  I
just can't see how cryptography could have helped prevent that attack,
other than by making the data that Choicepoint depends on harder to
get in the first place.

It's much cheaper and much more secure to simply
improve the tools.

But this does no good whatsoever if there's not some reason for the
people holding the data to use those tools.  Everyone with a network
presence and any kind of high profile does, in fact, use moderately
complicated computer security tools like routers, firewalls, VPNs,
virus scanners, and spyware detectors.  Everyone has to deal with
keeping their boxes up to date on patches.  However imperfectly, it
seems like Citigroup and Choicepoint and the rest can actually do
those things.  So when you excuse their failures to secure customer
data with the tools aren't there, this sounds absolutely implausible
to me.  

I'm not crazy about a HIPAA-style mandate for encryption and shredders
either, but we have this basic problem:

a.  It's basically easy to buy or find some amount of data about many
people, and

b.  It's basically easy to use that amount of data to get credit in
their name.

I suspect a better solution than trying to regulate data brokers is to
make it more expensive to give credit to Alice under Bob's name.  The
thing that imposes the cost on me isn't when someone finds my SSN,
it's when someone takes out a bunch of loans which I'm then expected
to pay back.  Then it becomes my problem to resolve the disputes
created by the lender's desire to extend credit at minimal cost.  (The
lender also loses money, of course.  But much of the cost is shifted
to the identity theft victim.)  


--John Kelsey


Re: Papers about Algorithm hiding ?

2005-06-06 Thread John Kelsey
Sent: Jun 4, 2005 6:43 AM
To: Steve Furlong [EMAIL PROTECTED]
Subject: Re: Papers about Algorithm hiding ?

GPG is an application that could be delivered by default
in all free OSs.  BSD is more or less installed automatically
with SSH installed.  Linux machines that are set up are
also generally set up with SSH.

I think you need one more step here to get the protective coloration
effect you'd like, where encrypted files aren't automatic evidence of
wrongdoing: During installation, generate 50 or so random passwords
with too much entropy to feasibly guess (easy to do when no user need
ever remember them), and encrypt some reasonable-length files full of
binary zeros with them.  The number of randomly-generated files needs
to be randomized, naturally, and probably should follow some kind of
distribution with a big tail to the right, so that it's not that
uncommon for a random install to put several hundred encrypted files
on the drive.  The value of this is that an attacker now sees
encrypted files on every machine, most of which nobody on Earth can
decrypt.  If this is normal, then it's not evidence.  (There are
probably a bunch of issues here with putting plausible tracks in the
logs, datestamps on the files, etc.  But it seems like something like
this could work.)
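As a rough sketch of what an installer could do (the file counts, sizes,
and the toy SHA-256-based stream cipher below are all placeholders for
illustration, not a hardened design):

```python
import os
import random
import hashlib

def toy_encrypt(key, plaintext):
    # Toy CTR-style stream cipher built from SHA-256 -- a stand-in for
    # any real cipher; ciphertext under a never-stored random key is
    # indistinguishable from random bytes.
    out = bytearray()
    for i in range(0, len(plaintext), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(plaintext[i:i + 32], block))
    return bytes(out)

# Right-skewed count of decoys, so several hundred encrypted files on
# one box is unremarkable (capped here just to keep the demo quick).
n_files = max(20, min(300, int(random.lognormvariate(4, 1))))
decoys = []
for _ in range(n_files):
    key = os.urandom(32)                   # thrown away: nobody can decrypt
    zeros = bytes(random.randint(4096, 65536))
    decoys.append(toy_encrypt(key, zeros))

assert len(decoys) >= 20
assert all(len(d) >= 4096 for d in decoys)
```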

Certainly using another app is fine.  What would be more
relevant to the direct issue is that it becomes routine to
encrypt and to have encryption installed.  See the recent
threads on where all the data is being lost - user data is
being lost simply because the companies don't protect
it.  Why aren't they protecting it?  Because there are no
easy tools that are built in to automatically and easily
protect it.

Huh?  There have been effective tools for protecting data from
disclosure for a long time, though it's not clear what good they'd do
for a company whose whole business was just selling access to that
data for a fee.  I'll bet the Choicepoints of the world are pretty
careful protecting, say, their payroll and HR records from disclosure.
It's just *your* data they don't mind giving out to random criminals.
No amount of crypto could have helped this.


--John Kelsey


Re: NSA warned Bush it needed to monitor networks

2005-03-25 Thread John Kelsey
Obviously any bureaucrat with the authority to categorize
something as secret will more or less automatically so stamp
any information that passes through his hands, to inflate his
importance, and thus his job security and prospects for

I think a bigger issue here is a sort of rational (to the bureaucrat) risk 
aversion: if he declassifies something and it turns out he's leaked something 
valuable (in the eyes of his boss), he's in trouble.  As long as there's no 
cost to stamping secret or FOUO on every document his office produces, this 
is safer for him than any other course of action.   Along with this, going 
through a document to make sure there's nothing secret in there is a lot more 
work than just classifying it.  The same logic works in the private world--how 
much of the stuff you've seen under NDA was genuinely going to cause a problem 
to the company that produced it, if someone just posted it to their website?

This results in top secret information being treated as not
very secret at all, as documented by Richard Feynman, which in
turn results in ever higher secrecy classifications, more top
than top, a process of classification inflation and debasement. 

I suspect something very similar happens with the watchlists.  I wonder how 
many different layers of watchlist there are by now.

 James A. Donald

--John Kelsey


Re: SHA-1 cracked

2005-02-22 Thread John Kelsey
From: Joseph Ashwood [EMAIL PROTECTED]
Sent: Feb 17, 2005 12:15 AM
Subject: Re: SHA-1 cracked

This attack means that we need to begin the process for a quick and painless 
retirement of SHA-1 in favor of SHA-256/384/512 in the immediate future and 
begin further preparations to move to Whirlpool and other hashes in the near 
future. I say this because with MD5 completely broken, SHA-0 effectively 
completely broken, and SHA-1 showing big cracks, the entire SHA series is in 
doubt, and needs to be heavily reconsidered, otherwise we're looking at a 
continuing failure of hash functions apparently in a yearly fashion until we 
run out of the SHA series.

Yep.  The thing that's interesting here is that the more-or-less obvious 
fallbacks for SHA1 are RIPEMD-160 and SHA256/512.  But given the pile of bodies 
in front of Wang's door already (MD4, MD5, HAVAL, RIPEMD, SHA0, SHA1), it's 
hard to have any confidence at all that RIPEMD-160 will survive long.  All the 
remaining SHA functions are the same, modulo some constants and the wordsize 
used--SHA512 is just SHA256 using 64-bit words, different constants, and a few 
more rounds.  So there's really only one SHA function left.  It's different 
enough from SHA1 that it's plausible Wang's attacks won't work, but I can't see 
any really strong reason to trust in that.  

Whirlpool looks like the best bet for a fallback right now,  but it really 
hasn't seen anything like the amount of analysis I'd like.   This is what it 
looks like when someone develops a new class of attack that breaks a whole 
bunch of your available cryptographic primitives in a big hurry.  


--John Kelsey


Re: SHA-1 cracked

2005-02-17 Thread John Kelsey
From: Steven M. Bellovin [EMAIL PROTECTED]
Sent: Feb 15, 2005 11:29 PM
Subject: SHA-1 cracked

According to Bruce Schneier's blog, a 
team has found collisions in full SHA-1.  It's probably not a practical 
threat today, since it takes 2^69 operations to do it and we haven't 
heard claims that NSA et al. have built massively parallel hash 
function collision finders, but it's an impressive achievement 
nevertheless -- especially since it comes just a week after NIST stated 
that there were no successful attacks on SHA-1.

Well, there *weren't* any a week ago.

   --Prof. Steven M. Bellovin,

--John Kelsey


Re: Simson Garfinkel analyses Skype - Open Society Institute

2005-01-30 Thread John Kelsey
From: Adam Shostack [EMAIL PROTECTED]
Sent: Jan 29, 2005 12:45 PM
To: Mark Allen Earnest [EMAIL PROTECTED]
Subject: Re: Simson Garfinkel analyses Skype - Open Society Institute

But, given what people talk about on their cell phones and cordless
phones, and what they send via unencrypted email, they are acting like
they think their communications are secure in the absence of any
encryption.  So I don't think adding some 'cryptographic mumbo jumbo'
is going to change their sense of security in the wrong direction.

One thing most people seem to miss about this, though, is that cellphones and 
cordless phones are *great* for privacy from other humans who live in your 
house or work in your office.  When you don't want your children to hear a 
conversation, you can go take the call in the bathroom or in the car while 
you're driving alone.  Everybody seems to miss this--cellphones and cordless 
phones don't diminish privacy, they just move it around.  Sophisticated 
eavesdroppers can violate more of your privacy, but nosy family members, 
roommates, and office mates can violate a lot less.  I think most people 
correctly evaluate which of these groups is more likely to do something 
unpleasant with what they learn by eavesdropping.  

It seems to me that VOIP pushes this in a somewhat different direction, because 
it's probably easy for your high-speed internet access (maybe a wireless hop to 
a router that talks to a cable modem) to be eavesdropped by moderately 
technically savvy nosy neighbors, and because there are a lot of criminals who 
are using more technology, and will surely target VOIP if they think they can 
make any money off it.  


--John Kelsey


Re: Entropy and PRNGs

2005-01-10 Thread John Kelsey
From: John Denker [EMAIL PROTECTED]
Sent: Jan 10, 2005 12:21 AM
To: David Wagner [EMAIL PROTECTED]
Subject: Re: Entropy and PRNGs

 Conditioned on everything known to the attacker, of course.

Well, of course indeed!  That notion of entropy -- the entropy
in the adversary's frame of reference -- is precisely the
notion that is appropriate to any adversarial situation, as I
have consistently and clearly stated in my writings;  see
e.g. the end of the definitions section, i.e.

 Actually, I think Ben got it right.  Entropy depends on context.
 The attacker might have extra context that allows him to narrow down
 the possible values of the randomness samples.

Heed your own of course statement.  It is hard to imagine a
situation where my adversary has more context about my
generator than I have myself.

Well, the broader problem isn't the context, it's the model.  If your attacker 
(who lives sometime in the future, and may have a large budget besides) comes 
up with a better model to describe the process you're using as a source of 
noise, you could be out of luck.  The thing that matters is H(X | all 
information available to the attacker), which is based on P(X | all information 
available to the attacker), which includes a model that may be better than yours.

But I think it's of practical value to consider the different attackers whose 
information might not include some information you use for seeding a PRNG.  
Some sources of entropy, such as packet arrival times, are not worth much for 
attackers on your local network who are attacking you in real time, but are 
quite valuable against attackers who attack you later.  Other sources of 
entropy, such as the hash of the contents of your Windows registry, or a full 
directory tree from your hard drive, are worthwhile against real-time attackers 
without access to your machine, but worthless against  attackers with your 
machine in their hands.  Using cheaply-available sources of each kind in 
seeding a PRNG decreases the set of attackers that will be able to attack you,  
while not preventing you from also using some source of entropy you believe to 
be good against all attackers.  

If you want people to believe that unconditional entropy is a
worthwhile concept, you'll have to come up with a nontrivial
application for it.

Differentiate between measures of entropy.  Collision entropy (Renyi entropy of 
order two) is very useful in determining how many samples you can take before 
expecting a collision, and it's not conditioned on an attacker's information.  
And collision probabilities do matter, in both obvious and subtle ways, for 
PRNG security.  
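
To make the distinction concrete, here's a small Python sketch of collision 
entropy; the two distributions are made up for illustration.  The skewed 
source has min-entropy of only 1 bit, but collision entropy near 2 bits, and 
by the birthday bound you expect a collision after roughly 2^{H2/2} samples:

```python
import math

def collision_entropy(probs):
    # Renyi entropy of order two: -log2(sum of squared probabilities).
    return -math.log2(sum(p * p for p in probs))

uniform = [1 / 1024] * 1024           # 10 bits of collision entropy
skewed = [0.5] + [0.5 / 1023] * 1023  # min-entropy 1 bit, H2 just under 2 bits

print(collision_entropy(uniform))  # 10.0
print(collision_entropy(skewed))   # close to 2.0
```

Nothing in this calculation depends on what any attacker knows; it's a 
property of the source distribution alone.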


The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-07 Thread John Kelsey
From: John Denker [EMAIL PROTECTED]
Sent: Jan 5, 2005 2:06 PM
To: Enzo Michelangeli [EMAIL PROTECTED]
Subject: Re: entropy depletion (was: SSL/TLS passive sniffing)

You're letting your intuition about usable randomness run roughshod over
the formal definition of entropy.  Taking bits out of the PRNG *does*
reduce its entropy.  This may not (and in many applications does not)
reduce its ability to produce useful randomness.

Right.  The critical question is whether the PRNG part gets to a secure state, 
which basically means a state the attacker can't guess in the amount of work 
he's able to do.   If the PRNG gets to a secure state before generating any 
output, then assuming the PRNG algorithm is secure, the outputs are 
indistinguishable from random.  

The discussion of how much fresh entropy is coming in is sometimes a bit 
misleading.  If you shove 64 bits of entropy in, then generate a 128-bit 
output, then shove another 64 bits of entropy in, you don't end up in a secure 
state, because an attacker can guess your first 64 bits of entropy from your 
first output.  What matters is how much entropy is shoved in between the time 
when the PRNG is in a known state, and the time when it's used to generate an 
output the attacker can see.
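
Here's a toy demonstration of that guess-and-extend attack, scaled down to 
16-bit entropy chunks so it runs instantly.  The state construction is 
invented for illustration; real PRNGs differ in detail, but the principle is 
the same: revealing output between small entropy additions lets the attacker 
brute-force each chunk separately.

```python
import hashlib

def out(state):
    # Stand-in for "PRNG output": a hash of the internal state.
    return hashlib.sha256(state).digest()

def guess_chunk(known_state, observed):
    # Brute-force one 16-bit entropy chunk by matching the observed output.
    for g in range(1 << 16):
        cand = known_state + g.to_bytes(2, "big")
        if out(cand) == observed:
            return cand
    return None

# Piecemeal seeding: 16 bits in, output revealed, 16 more bits in.
secret1 = (0x1234).to_bytes(2, "big")
secret2 = (0xBEEF).to_bytes(2, "big")
state1 = b"known-initial-state" + secret1
observed1 = out(state1)
state2 = state1 + secret2
observed2 = out(state2)

# The attacker recovers each chunk in turn: ~2^17 total work, not 2^32.
recovered1 = guess_chunk(b"known-initial-state", observed1)
recovered2 = guess_chunk(recovered1, observed2)
assert recovered2 == state2
```

Batching all 32 bits into one reseed before producing any output would force 
the attacker to search the full 2^32 space at once.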

--John Kelsey


Re: The Pointlessness of the MD5 attacks

2004-12-22 Thread John Kelsey
From: Ben Laurie [EMAIL PROTECTED]
Sent: Dec 22, 2004 12:24 PM
To: David Wagner [EMAIL PROTECTED]
Subject: Re: The Pointlessness of the MD5 attacks

Assuming you could find a collision s.t. the resulting decryption looked 
safe with one version and unsafe with the other (rather than gibberish), 
which I find an even bigger stretch than the idea that you could find a 
block that looked safe in one version and unsafe in another, I would 
have said that the mere fact of using a pointless decryption to control 
the action of the code would be suspect.

Hmm.  So maybe I'm missing something.  Here's my scenario:

a.  Alice decides to use GOST for encryption.  She finds an implementation in C 
from Eve, which includes the S-boxes hard-coded in.  Note that the specific 
S-boxes used for GOST are potentially pretty important for their security.  

b.  She also finds a review from some well-known cryptanalyst, Bob, discussing 
the requirements on the S-boxes, and verifying that the above implementation 
uses good S-boxes, which includes the md5 hash of the C source code.

c.   Alice downloads the C source code, and checks the md5 hash.  Since the 
hash is correct, she compiles the code, and starts using her known-secure 
version of GOST to encrypt sensitive data.

d.  Unknown to her, though, Eve has slipped in a changed version of the C 
source code, with the S-boxes changed in a way that makes the encryption much 
weaker.

What prevents this attack from working?  Alice counts on a review done by 
someone competent and a hash of the source code, but the weakness of the hash 
function means that she's vulnerable to an attack.  The only thing that might 
keep it from working is if it happens to be impossible to choose a pair of sets 
of S-boxes so that one is weak, the other is strong, and the pair allows an md5 
collision.  I don't know whether this is possible or not, but there's no 
inherent reason to think it's impossible--just making some of the 4-bit wide 
S-boxes in GOST non-bijective has pretty big security implications.  (Though 32 
rounds covers a lot of sins in S-box selection, in terms of practical attacks 
rather than academic ones.)

Indeed. Not the point I am making. In a nutshell, yes, it is scary that 
Wang found collisions. No, it is not _more_ scary that you can use those 
collisions to fool people who aren't looking anyway.

I think you can use them in ways that may fool people who *are* looking.  All 
you need is a legitimate reason to have a more-or-less arbitrarily chosen block 
of bits in a part of your program, and then the person reviewing the code says 
"okay, that's random-looking, but reasonable enough."  

As an alternative example, consider embedding a large prime number in your 
code, to be used as the modulus when you're doing Diffie-Hellman.  Someone 
reviews the code, and verifies that the number is prime and has all the other 
required properties.  Then, you swap in a different bitstring of equal length, 
but which is composite and yields a reasonably easy attack on Diffie-Hellman.  
What prevents this?
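
The reviewer's verification step can be sketched mechanically.  This is an 
illustrative Miller-Rabin check on a toy-sized "safe prime" modulus, not 
production parameter validation (real reviews would use vetted primality code 
and full-size standardized groups):

```python
import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def looks_like_safe_dh_modulus(p):
    # A "safe prime" p = 2q + 1 with q also prime avoids small subgroups.
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)

assert looks_like_safe_dh_modulus(23)      # 23 = 2*11 + 1, both prime
assert not looks_like_safe_dh_modulus(25)  # composite
```

The point of the scenario above is that this check only protects Alice if the 
bits she compiles are the bits the reviewer checked, which is exactly what the 
hash was supposed to guarantee.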




Re: The Pointlessness of the MD5 attacks

2004-12-15 Thread John Kelsey
From: Ben Laurie [EMAIL PROTECTED]
Sent: Dec 14, 2004 9:43 AM
To: Cryptography [EMAIL PROTECTED]
Subject: The Pointlessness of the MD5 attacks

Dan Kaminsky's recent posting seems to have caused some excitement, but 
I really can't see why. In particular, the idea of having two different 
executables with the same checksum has attracted attention.

But the only way I can see to exploit this would be to have code that 
did different things based on the contents of some bitmap. My contention 
is that if the code is open, then it will be obvious that it does 
something bad if a bit is tweaked, and so will be suspicious, even if 
the something bad is not triggered in the version seen.

So, to exploit this successfully, you need code that cannot or will not 
be inspected. My contention is that any such code is untrusted anyway, 
so being able to change its behaviour on the basis of embedded bitmap 
changes is a parlour trick. You may as well have it ping a website to 
find out whether to misbehave.

So, are you sure there can never be a program which allows such an exploit?  
I've seen programs that had embedded components (state machines in particular) 
which were not easily human-readable, and had themselves been generated by 
computer.  And even large graphics, sound, or video sequences can really change 
the meaning of a program's actions in some ways; those might be susceptible to 
the requirements of the attack.  I agree it's hard to see how to exploit the 
existing MD5 collision attacks in programs that would look innocent, but I 
don't see what makes it *impossible*.  

Then you have data files, as Adam Back mentioned, which are often not human 
readable, but you'd still like to know if the signature on them is valid, or if 
they've been changed surreptitiously since the last time they were checked.

Finally, I'm very skeptical that the attacks that have been found recently are 
the best or only ones that can be done.
Do we have any special reason to think that there will never be a way to adapt 
the attack to be able to slip something plausible looking into a C program?  
Once your hash function starts allowing collisions, it really just becomes a 
lot less valuable.  




Re: Blinky Rides Again: RCMP suspect al-Qaida messages

2004-12-13 Thread John Kelsey
From: Adam Shostack [EMAIL PROTECTED]
Sent: Dec 11, 2004 4:52 PM
Subject: Re: Blinky Rides Again: RCMP suspect al-Qaida messages

It seems consistent that Al Qaeda prefers being 'fish in the sea' to
standing out by use of crypto. Also, given the depth and breadth of
conspiracies they believe in, it seems that they might see all us
cryptographers as a massive deception technique to get them to use bad
crypto. (And hey, they're almost right! We love that they use bad crypto.)

They're going to have the same problems as the rest of us using strong 
cryptography--configuration and usability problems, key management hassles, 
incompatibilities between versions and programs, etc.  They have to do this 
with no central authority, no single support line or person who can reliably 
start things up and help them, in a basically decentralized way.  The 
cypherpunkish idea of a decentralized conspiracy using strong crypto only works 
if either the tools are a lot easier to use, or if the conspiracy is made up of 
cryptographically sophisticated people.  AQ is presumably made up of people who 
know a lot about the Koran, and probably a lot about day-to-day operational 
security against the Pakistani or Indonesian secret police, but there's not 
much reason to think they are very sophisticated about cryptography.  If you 
can't get most computer-literate people you know to use PGP to send you e-mail, 
how well is it going to work with a bunch of random jihadis?



Re: MD5 To Be Considered Harmful Someday

2004-12-08 Thread John Kelsey
From: James A. Donald [EMAIL PROTECTED]
Sent: Dec 7, 2004 6:57 PM
Subject: MD5 To Be Considered Harmful Someday

But even back when I implemented Crypto Kong, the orthodoxy was 
that one should use SHA1, even though it is slower than MD5, so 
it seems to me that MD5 was considered harmful back in 1997, 
though I did not know why at the time, and perhaps no one knew.

The pseudocollision on MD5 paper was published in 1994, and Dobbertin's full 
collisions for MD5's compression function were published in 1996, so there was 
plenty of reason by 1997 to want to move to a different hash function.  People 
who stuck with MD5 for collision resistance after that were demonstrating 
seriously bad judgement, since the only argument left for MD5's security was 
"well, but nobody's published a way to exploit the attack on full messages."

 James A. Donald

--John Kelsey


Re: Gov't Orders Air Passenger Data for Test

2004-11-21 Thread John Kelsey
News story quoted by RAH:

WASHINGTON -  The government on Friday ordered airlines to turn over
personal information about passengers who flew within the United States in
June in order to test a new system for identifying potential terrorists.

The interesting thing here is that they can't really test how effective the 
system is until they have another terrorist event on an airline.  Until then, 
they can only assess the false positive rate of their list (people who were on the 
no-fly-list, shouldn't have flown according to the rules, but did without 
trying to hijack the plane), and the false positive and false negative rate of 
their search for names in the list (e.g., when it becomes obvious that Benjamin 
Ladon from Peoria, IL would have matched, but wasn't the guy they were hoping 
to nab, or when it becomes obvious that a suspected terrorist was in the data, 
did fly, but wasn't caught by the software).  

 The system, dubbed Secure Flight, will compare passenger data with names
on two government watch lists, a no fly list comprised of people who are
known or suspected to be terrorists, and a list of people who require more
scrutiny before boarding planes.

Presumably a lot of the goal here is to stop hassling everyone with a last name 
that starts with al or bin, stop hassling Teddy Kennedy getting on a plane, 
etc., while still catching most of the people on their watchlists who fly under 
their real name.  

 Currently, the federal government shares parts of the list with airlines,
which are responsible for making sure suspected terrorists don't get on
planes. People within the commercial aviation industry say the lists have
the names of more than 100,000 people on them.

This is a goofy number.  If there were 100,000 likely terrorists walking the 
streets, we'd have buildings and planes and bus stops and restaurants blowing 
up every day of the week.  I'll bet you're risking your career if you ever take 
someone off the watchlist who isn't a congressman or a member of the Saudi 
royal family, but that it costs you nothing to add someone to the list.  In 
fact, I'll bet there are people whose performance evaluations note how many 
people they added to the watchlist.  This is what often seems to make 
watchlists useless--eventually, your list of threats has expanded to include 
Elvis Presley and John Lennon, and at that point, you're spending almost all 
your time keeping an eye on (or harassing) random harmless bozos.  

R. A. Hettinga mailto: [EMAIL PROTECTED]



A new academic hash result on the preprint server

2004-11-18 Thread John Kelsey

Bruce and I have a new result on hash function security, which uses Joux' 
multicollision trick in a neat way to allow long-message second preimage 
attacks.  We've posted it to the e-print server.

The basic result is that for any n-bit hash function built along the lines of 
SHA1 or Whirlpool (e.g., using an n-bit compression function and Damgård-Merkle 
strengthening), we can mount a second preimage attack on long messages for a 
lot less than 2^n work.  For a message of 2^k message blocks, we do about 
2^{n-k+1} work (when k < n/2) to get a second preimage.  We also have a 
slightly cheaper way to find a kind of goofy set of multicollisions than the 
one Joux gives.  
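
As a quick sanity check on the numbers, using only the approximate ~2^{n-k+1} 
cost quoted above (and ignoring the smaller expandable-message term in the 
full analysis):

```python
# Approximate second-preimage cost, in bits of work, from the ~2^{n-k+1}
# figure quoted above; valid only in the k < n/2 regime.
def second_preimage_work_bits(n, k):
    assert k < n / 2
    return n - k + 1

# SHA-1-sized parameters: n = 160, an (impractically long) 2^60-block message.
print(second_preimage_work_bits(160, 60))  # 101, i.e. ~2^101 vs. the generic 2^160
```

So the longer the target message, the further below the ideal 2^n bound the 
attack falls, which is the gap between 2^{n/2} and 2^n that the paper probes.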

None of this result leads to a practical attack on anything, as far as I can 
see.  The messages that are vulnerable are impractically long, and there's 
never an attack cheaper than offline collision finding.  But I think this 
result raises some kind-of interesting questions about the security of hash 
functions between the 2^{n/2} bound for collision finding and the 2^n bound for 
first and second preimage finding.  

Comments appreciated,



Re: AES Modes

2004-10-12 Thread John Kelsey
Sent: Oct 10, 2004 11:11 AM
To: Metzdowd Crypto [EMAIL PROTECTED]
Subject: AES Modes

I'm looking for basic mode to encrypt blocks (using AES)
of about 1k in length, +/- an order of magnitude.  Looking
at the above table (2nd link) there are oodles of proposed

It would be nice to have a mode that didn't also require
a separate MAC operation - I get the impression that
this is behind some of the proposals?

I think CCM is just about perfect for this goal.  The MAC isn't free, but it's 
integrated into the chaining mode.  There are also some patented modes that provide a 
MAC for almost no extra computation (OCB, IACBC), and some proposed modes that combine 
an efficient, parallelizable MAC with encryption in a secure way (CWC, GCM), though 
none of these are standards yet.
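
CCM itself interleaves AES-CTR encryption with a CBC-MAC over the same key 
material.  As a stdlib-only illustration of why an integrated 
encrypt-plus-authenticate interface is convenient, here's a toy 
encrypt-then-MAC construction -- SHA-256 keystream standing in for AES-CTR, 
HMAC standing in for CBC-MAC.  This is not CCM and not for real use:

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    # Toy CTR-style keystream from SHA-256 (stand-in for AES-CTR).
    blocks, ctr = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest())
        ctr += 1
    return b"".join(blocks)[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    # One call both encrypts and authenticates nonce + ciphertext.
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_sealed(enc_key, mac_key, nonce, ct, tag):
    # Verify the tag (constant-time compare) before decrypting anything.
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk, nonce = os.urandom(32), os.urandom(32), os.urandom(12)
msg = b"x" * 1024  # ~1k blocks, as in the question
ct, tag = seal(ek, mk, nonce, msg)
assert open_sealed(ek, mk, nonce, ct, tag) == msg
```

The appeal of CCM (and GCM, etc.) is that the caller gets this seal/open shape 
from a single standardized mode instead of composing the pieces by hand.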




Re: IBM's original S-Boxes for DES?

2004-10-06 Thread John Kelsey
Sent: Oct 5, 2004 12:32 PM
Subject: Re: IBM's original S-Boxes for DES?

   More accurately, they didn't protect against linear cryptanalysis - 
there is no way to know if they knew about it and either didn't want to 
make changes to protect against that (they weakened the key, so may have 
wished to keep *some* attacks viable against it to weaken it still 
further), had to choose (against *either* differential or linear, as 
they didn't know how to protect against both) or simply the people doing 
the eval on DES didn't know, as it was rated above their clearance level.

I believe people have since come up with S-boxes that resist both linear and 
differential cryptanalysis.  But we don't know whether there were still other attacks 
or constraints they were trying to address.  However, it makes no sense to assume that 
they left linear attacks in as a backdoor, for two reasons:

a.  They already left a 56-bit key, which was a practical backdoor for people with 
experience and expertise in building keysearch machines.  (Think of all the expertise 
in parallel and distributed keysearch that has come out in the public world in the 
last fifteen years; surely, that was an area NSA had worked on at great depth years 
earlier!  Things like time-memory tradeoffs, parallel collision search and 
meet-in-the-middle search, clever optimization tricks for getting the keysearch to run 
efficiently, etc., along with a large hardware budget, must have made a 56-bit key 
look much worse from inside the agency than from outside.)  (Though there were plenty 
of people who saw the problems from outside, as well, thus leading to our current 
understanding of keysearch techniques.)  

b.  Linear attacks on DES, at least the ones we know about, are spectacularly 
impractical, requiring more plaintexts than you could ever hope to get from an 
innocent party using the speeds of hardware available when DES was designed and 
deployed.



Re: Linux-based wireless mesh suite adds crypto engine support

2004-10-05 Thread John Kelsey
From: John Gilmore [EMAIL PROTECTED]
Sent: Sep 30, 2004 6:25 PM
Subject: Re: Linux-based wireless mesh suite adds crypto engine support 

Crypto hardware that does algorithms can be tested by periodically
comparing its results to a software implementation.  Production
applications should probably be doing this -- maybe 1% of the time.

I think the need for interoperability constrains the ability for a crypto module to 
implement some weak algorithm in place of AES or 3DES.  Unless the designer can know 
which encrypted messages have to be handled by someone else's non-hacked module, he 
can't safely do this.  

Crypto hardware that generates random numbers can't be tested in
production in many useful ways.  My suggestion would be to XOR a
hardware-generated and a software-generated random number stream.  If
one fails, whether by accident, malice, or design, the other will
still randomize the resulting stream.  Belt AND suspenders will keep
your source of randomness from being your weakest link.

I'll note that this is supported two separate ways in the (in progress) X9.82 standard:

a.  A standard way to produce a random bit generator with a guaranteed fallback to 
computational security is to XOR the outputs of some good hardware generator with the 
outputs of a crypto PRNG (aka DRBG in X9.82-ese).  

b.  Any approved random bit generator can always be combined with an unapproved 
generator by XORing.  The only security requirement here is that the unapproved 
generator be independent of the approved one.
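
A sketch of the XOR-combining construction in Python.  The DRBG here is a 
minimal hash-counter loop, not one of the X9.82 DRBG designs, and the hardware 
source is simulated; the point is only the failure-tolerance property:

```python
import hashlib
import os

def software_drbg(seed, length):
    # Minimal hash-counter DRBG sketch (not an X9.82/SP 800-90 construction).
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def combined_random(length, hw_source=os.urandom):
    # XOR an (alleged) hardware stream with an independent software DRBG stream.
    hw = hw_source(length)
    sw = software_drbg(os.urandom(32), length)
    return bytes(a ^ b for a, b in zip(hw, sw))

# Even if the hardware source fails completely (all zeros), the XOR output
# is exactly the software DRBG stream -- no worse than the surviving source.
broken_hw = lambda n: b"\x00" * n
sample = combined_random(32, hw_source=broken_hw)
assert len(sample) == 32
```

The independence requirement matters: if the two streams were correlated, the 
XOR could be much worse than either one alone.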

All that said, though, it's far from clear how you monitor this in the standard crypto 
environment, since you usually take great pains to make it hard for anyone to get key 
material out of the tamper-resistant modules.  You provide the random value to XOR 
into the RNG output, and the module says "Thanks, I XORed it in.  Trust me."  Or, you 
demand the random value from its RNG, XOR in your own, but now you've exposed the key 
outside the tamper-resistant module, which introduces a whole different set of 
problems.  I'm sure there are some clever crypto protocol ways to address this 
(basically, do a zero-knowledge proof of the value of the random number you used in 
deriving the key), but I have a hard time thinking this is at all practical.


--John Kelsey

