> On Thu, 5 Sep 2013 19:14:53 -0400 John Kelsey <crypto....@gmail.com>
> wrote:
>> First, I don't think it has anything to do with Dual EC DRBG.  Who
>> uses it?
> 
> It did *seem* to match the particular part of the story about a
> subverted standard that was complained about by Microsoft
> researchers. I would not claim that it is the most important part of
> the story.

One thing I can say for certain:  NSA did not write SP 800-90.  (That was the 
implication of the New York Times article.)  It was mostly written by Elaine 
Barker, my coworker, with me providing a lot of text around technical issues.  
The algorithms all came out of work from ANSI X9.82 Part 3, which Elaine also 
wrote with lots of input and text from me and the rest of the editing 
committee.  We worked with NSA people on this, and X9.82 Part 2 (the entropy 
source section) was written entirely by NSA people, but it isn't easy for me to 
see how anyone could stick a trapdoor in that.  We're still working with NSA 
people on SP 800-90B, which includes a lot of stuff on testing entropy sources. 
 

When I started, there was an RSA-based algorithm which we eventually dropped 
(analogous to Blum-Blum-Shub), Dual EC DRBG, the hash DRBG (which is still in 
800-90 in a much-changed version for backward compatibility reasons, because the 
standardization process took forever and people started designing to the X9.82 
standard before it was done--this was also the reason we ended up keeping the 
Dual EC DRBG), and an X9.17-based thing we got rid of.  A bunch of the symmetric 
stuff in there is mine--I made changes to the hash DRBG to address 
time/memory/data tradeoffs, and wrote the HMAC DRBG and CTR DRBG.  I also 
changed around the hash df and wrote the bc df (the hash-based and 
block-cipher-based derivation functions).  

It is possible Dual EC DRBG had its P and Q values generated to insert a 
trapdoor, though I don't think anyone really knows that (except the people who 
generated it, but they probably can't prove anything to us at this point).  
It's also immensely slower than the other DRBGs, and I have a hard time seeing 
why anyone would use it.  (But if you do, you should generate your own P and Q.)
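
For the curious, here's a rough sketch of how chosen P and Q could act as a 
trapdoor (this is the well-known observation about the design, simplified and 
ignoring the 16-bit output truncation; d is the secret the point-chooser would 
hold, with P = dQ):

    \begin{align*}
      s_{i+1}  &= x(s_i P)                      && \text{state update}\\
      r_i      &= x(s_i Q) \ \text{(truncated)} && \text{output block}\\
      d(s_i Q) &= s_i (dQ) = s_i P              && \text{what the holder of $d$ can compute}
    \end{align*}

So anyone who knows d and sees one full output point (recoverable from r_i by 
guessing the truncated bits) gets x(s_i P) = s_{i+1}, the next internal state, 
and with it every future output.  Points you generate yourself carry no such 
known d.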

...
>> Where do the world's crypto random numbers come from?  My guess is
>> some version of the Windows crypto api and /dev/random
>> or /dev/urandom account for most of them.
> 
> I'm starting to think that I'd probably rather type in the results of
> a few dozen die rolls every month in to my critical servers and let
> AES or something similar in counter mode do the rest.
> 
> A d20 has a bit more than 4 bits of entropy. I can get 256 bits with
> 64 die rolls, or, if I have eight dice, 16 rolls of the group. If I
> mistype when entering the info, no harm is caused. The generator can
> be easily tested for correct behavior if it is simply a block cipher.
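
Here's roughly the generator I take Perry to be describing--a minimal sketch, 
not a vetted design, using SHA-256 to fold the rolls into a key and the 
third-party Python "cryptography" package for the AES part:

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def key_from_d20_rolls(rolls):
        # Each d20 roll is worth a bit over 4 bits, so 64 rolls comfortably cover
        # 256 bits.  Hashing also means a mistyped digit just gives a different
        # (still unpredictable) key rather than a broken generator.
        assert len(rolls) >= 64 and all(1 <= r <= 20 for r in rolls)
        return hashlib.sha256(",".join(map(str, rolls)).encode()).digest()

    def ctr_output(key, nbytes):
        # AES-256 in counter mode over an all-zero plaintext is the keystream.
        # The fixed nonce is tolerable only because each key drives one stream here.
        enc = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
        return enc.update(b"\x00" * nbytes) + enc.finalize()

    key = key_from_d20_rolls([7, 19, 3, 12] * 16)   # stand-in values; use real dice
    random_bytes = ctr_output(key, 32)

And as Perry notes, it's easy to test: run AES with a known key and counter and 
compare the output against any independent implementation.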

If you're trying to solve the problem of not trusting your entropy source, this 
is reasonable, but it doesn't exactly scale to normal users.  Entropy 
collection in software is a pain in the ass, and my guess is that the 
overwhelming majority of developers are happy to punt and just use the OS' 
random numbers.  That looks to be what happened with the Heninger and Lenstra 
results regarding lots of RSA keys with shared prime factors.  That flaw implies a 
huge number of easily factored RSA keys out there, thanks to insufficient 
entropy and to calling /dev/urandom on a system where it won't block even when it 
thinks it has zero entropy.  (This was a multi-level screw-up, as I understand 
it, between a Linux version, OpenSSL, and the implementors of various 
appliance network devices.)

It would be smarter for any crypto software to use the OS source for some seed 
material, and then combine it with both unique or nearly unique information and 
anything that's likely to have some entropy, and use that to initialize a good 
PRNG.  (I'd use CTR DRBG.)   Your ethernet address doesn't have any entropy, 
but it works like a salt in a password hashing scheme--an attacker must now 
rerun his attack for each individual instance of the crypto software. 
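
As a concrete sketch of that (simplified--this is not the 800-90 instantiate 
function, just the combine-and-hash step, in Python):

    import hashlib, os, socket, time, uuid

    def seed_material():
        parts = [
            os.urandom(48),                     # the OS source: the main entropy input
            uuid.getnode().to_bytes(6, "big"),  # ethernet/MAC address: no entropy, works like a salt
            socket.gethostname().encode(),      # more per-instance, zero-entropy data
            str(os.getpid()).encode(),
            str(time.time_ns()).encode(),       # maybe a little entropy; can't hurt
        ]
        return b"\x00".join(parts)

    # Hash everything down to a 256-bit seed and hand it to whatever DRBG you
    # trust (I'd use CTR DRBG) as its entropy input / personalization string.
    seed = hashlib.sha256(seed_material()).digest()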

In general, software that uses crypto should probably be trying to gather 
entropy wherever possible, not just trusting the OS.  And it should use that 
entropy plus the OS' entropy to seed its own PRNG.  And it should throw in any 
information that might distinguish this instance from other instances, to force 
any attackers to rerun their attack on each instance individually.  

When I saw the keystore stuff, I thought "bad key generation."  Heninger found 
a bunch of RSA keys from the same make of devices that were sharing primes, 
thanks to really awful entropy collection.  If those devices had been collecting 
40 bits of entropy for the first prime, she might never have found a pair of 
keys with a shared prime.  But someone using analogous methods might be able to 
generate 2^{40} primes ahead of time, efficiently check each newly observed RSA 
key against that list, and break a large fraction of the keys.
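
To put rough numbers on that (the survey size is an assumption for illustration, 
not Heninger's actual figure, and it assumes the 2^{40} possible primes are 
about equally likely):

    S = 2**40     # possible "first primes" with only 40 bits of entropy behind key generation
    N = 100_000   # keys observed from that device model (assumed for illustration)
    expected_shared_prime_pairs = N * (N - 1) / (2 * S)
    print(expected_shared_prime_pairs)   # about 0.0045 -- a survey probably sees no collision
    # But an attacker who knows the key-generation code can enumerate all 2^40
    # candidate primes offline and trial-divide each newly observed modulus
    # against the list, breaking every key whose first prime falls in that space.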

I think long-term public keys are the big weakness here, because they tend to 
be generated relatively early.  It's easy to see how you don't have any entropy 
in your pool a minute after starting up; it's much harder to see how that 
happens after a year of operation.  

> Perry
> -- 
> Perry E. Metzger        pe...@piermont.com

--John
_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
