> Even plain DSA would be much more space efficient on the signature
> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
> 1024 bit RSA key, and the signatures would be half the size. And NIST
> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
> is really that important.

DSA was (designed to be) full of covert channels.

> Given that they are attempting to optimize for minimal packet size, the
> choice of RSA for signatures actually seems quite bizarre.

It's more bizarre than you think.  But packet size just isn't that big
a deal.  The root only has to sign a small number of records -- just
two or three for each top level domain -- and the average client is
going to use .com, .org, their own country, and a few others.  Each
of these records is cached on the client side, with a very long
timeout (e.g. at least a day).  So the total extra data transfer for
RSA (versus other) keys won't be either huge or frequent.  DNS traffic
is still a tiny fraction of overall Internet traffic.  We now have
many dozens of root servers, scattered all over the world, and if the
traffic rises, we can easily make more by linear replication.  DNS
*scales*, which is why we're still using it, relatively unchanged,
after more than 30 years.

The bizarre part is that the DNS Security standards had gotten pretty
well defined a decade ago, when one or more high-up people in the IETF
decided that "no standard that requires the use of Jim Bidzos's
monopoly crypto algorithm is ever going to be approved on my watch".
Jim had just pissed off one too many people, in his role as CEO of RSA
Data Security and the second most hated guy in crypto.  (NSA export
controls were the first reason you couldn't put decent crypto into your
product; Bidzos's patent, and the way he licensed it, was the second.)
This IESG prejudice against RSA went so deep that it didn't matter
that we had a free license from RSA to use the algorithm for DNS, that
the whole patent would expire in just three years, that we'd gotten
export permission for it, and had working code that implemented it.
So the standard got sent back to the beginning and redone to deal with
the complications of deployed servers and records with varying algorithm
availability (and to make DSA the "officially mandatory" algorithm).
Which took another 5 or 10 years.

RSA was the obvious choice because it was (and is) believed that if
you can break it, you can factor large numbers (which mathematicians
have been trying to do for hundreds of years).  No other algorithm
available at the time came with such a high pedigree.  As far as I
know, none still does.  And if we were going to go to the trouble of
rewiring the whole world's DNS for security at all, we wanted real
security, not pasted-on crap security.
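The connection runs in one direction that's easy to demonstrate: anyone
who can factor the public modulus can immediately recompute the private
exponent.  A toy sketch in Python (tiny primes purely for illustration;
real moduli are thousands of bits):

```python
# Toy RSA -- illustrates that factoring n yields the private key.
# The numbers here are absurdly small; this is not a usable keypair.
p, q = 61, 53
n = p * q              # public modulus
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent (modular inverse, Python 3.8+)

msg = 42
sig = pow(msg, d, n)           # "sign" with the private key
assert pow(sig, e, n) == msg   # anyone can verify with (n, e)

# An attacker who factors n = 61 * 53 recovers d the same way:
d_attacker = pow(e, -1, (61 - 1) * (53 - 1))
assert d_attacker == d
```

(The converse -- that breaking RSA *requires* factoring -- is the
unproven but long-standing belief the paragraph above refers to.)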


     For interoperability, the RSA key size is limited to 4096 bits.  For
   particularly critical applications, implementors are encouraged to
   consider the range of available algorithms and key sizes.

That's standard-speak for "don't use the shortest possible keys all
the time, idiot".  Yes, using 1024-bit keys is lunacy -- but of course
we're talking about Verisign/NSI here, the gold standard in crap
security.  The root's DNSSEC operational procedures should be designed
so that ICANN does all the signing, though the lord knows that ICANN
is even less trustworthy than NSI.  But at least we know what kind of
larceny ICANN is into, and it's a straightforward squeeze for their
own lavish benefit, forcibly paid by every domain owner; it doesn't
involve promising security and not delivering it.

Even using keys that have a round number of bits is foolish, in my
opinion.  If you were going to use about 2**11 bits, why not 2240
bits, or 2320 bits, instead of 2048?  Your software already handles
2240 bits if it can handle 2048, and it's only a tiny bit slower and
larger -- but a 2048-bit RSA cracker won't crack your 2240-bit key.
If this crypto community was serious about resistance to RSA key
factoring, the most popular key generation software would be picking
key sizes *at random* within a wide range beyond the number of bits
demanded for application security.  That way, there'd be no "sweet
spots" at 1024 or 2048.  As it is today, if NSA (or any major country,
organized crime group, or civil rights nonprofit) built an RSA key
cracker, more than 50% of the RSA keys in use would fall prey to a
cracker that ONLY handled 1024-bit keys.  It's probably more like
80-90%, actually.  Failing to use 1056-, 1120-, or 1168-bit (etc.) keys is
just plain stupid on our (the defenders') part; it's easy to automate
the fix.


The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com