Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Greg Rose


On 2009 Oct 19, at 9:15 , Jack Lloyd wrote:


On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:


DSA was (designed to be) full of covert channels.

And, for that matter, one can make DSA deterministic by choosing the k
values to be HMAC-SHA256(key, H(m)) - this will cause the k values to
be repeated, but only if the message itself repeats (which is fine,
since seeing a repeated message/signature pair is harmless), or if one
can induce collisions on HMAC with an unknown key (which seems a
profoundly more difficult problem than breaking RSA or DSA).


Ah, but this doesn't solve the problem; a compliant implementation  
would be deterministic and free of covert channels, but you can't  
reveal enough information to convince someone *else* that the  
implementation is compliant (short of using zero-knowledge proofs,  
let's not go there). So a hardware nubbin could still leak information.


Greg.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread John Gilmore
> It's a fun story, but... RFC 4034 says RSA/SHA1 is mandatory and DSA is
> optional.

I was looking at RFC 2536 from March 1999, which says "Implementation
of DSA is mandatory for DNS security." (Page 2.)  I guess by March 2005
(RFC 4034), something closer to sanity had prevailed.

  http://rfc-editor.org/rfc/rfc2536.txt
  http://rfc-editor.org/rfc/rfc4034.txt

John

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread bmanning
On Tue, Oct 20, 2009 at 09:20:04AM -0400, William Allen Simpson wrote:
> Nicolas Williams wrote:
> >Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.
> >
> I agree.  Let's get something deployed, as that will lead to testing.
> 
> 
> >If 90 days for the 1024-bit ZSKs is too long, that can always be
> >reduced, or the ZSK keylength be increased -- we too can squeeze factors
> >of 10 from various places.  In the early days of DNSSEC deployment the
> >opportunities for causing damage by breaking a ZSK will be relatively
> >meager.  We have time to get this right; this issue does not strike me
> >as urgent.
> >
> One of the things that bothers me with the latest presentation is that
> only "dummy" keys will be used.  That makes no sense to me!  We'll have
> folks that get used to hitting the "Ignore" key on their browsers
> 
> http://nanog.org/meetings/nanog47/presentations/Lightning/Abley_light_N47.pdf


the use of dummy keys in the first round is to test things like 
key rollover - the initial keys themselves cannot be validated
and state as much.  Anyone who tries validation is -NOT- reading 
the key or the deployment plan.

> 
> Thus, I'm not sure we have time to get this right.  We need good keys, so
> that user processes can be tested.

next phase.
> 
> 
> >OTOH, will we be able to detect breaks?  A clever attacker will use
> >breaks in very subtle ways.  A ZSK break would be bad, but something
> >that could be dealt with, *if* we knew it'd happened.  The potential
> >difficulty of detecting attacks is probably the best reason for seeking
> >stronger keys well ahead of time.
> >
> Agreed.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Ben Laurie
On Sat, Oct 17, 2009 at 10:23 AM, John Gilmore  wrote:
>> Even plain DSA would be much more space efficient on the signature
>> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
>> 1024 bit RSA key, and the signatures would be half the size. And NIST
>> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
>> is really that important.
>
> DSA was (designed to be) full of covert channels.
>
>> Given that they are attempting to optimize for minimal packet size, the
>> choice of RSA for signatures actually seems quite bizarre.
>
> It's more bizarre than you think.  But packet size just isn't that big
> a deal.  The root only has to sign a small number of records -- just
> two or three for each top level domain -- and the average client is
> going to use .com, .org, their own country, and a few others.  Each
> of these records is cached on the client side, with a very long
> timeout (e.g. at least a day).  So the total extra data transfer for
> RSA (versus other) keys won't be either huge or frequent.  DNS traffic
> is still a tiny fraction of overall Internet traffic.  We now have
> many dozens of root servers, scattered all over the world, and if the
> traffic rises, we can easily make more by linear replication.  DNS
> *scales*, which is why we're still using it, relatively unchanged,
> after more than 30 years.
>
> The bizarre part is that the DNS Security standards had gotten pretty
> well defined a decade ago, when one or more high-up people in the IETF
> decided that "no standard that requires the use of Jim Bidzos's
> monopoly crypto algorithm is ever going to be approved on my watch".
> Jim had just pissed off one too many people, in his role as CEO of RSA
> Data Security and the second most hated guy in crypto.  (NSA export
> controls was the first reason you couldn't put decent crypto into your
> product; Bidzos's patent, and the way he licensed it, was the second.)
> This IESG prejudice against RSA went so deep that it didn't matter
> that we had a free license from RSA to use the algorithm for DNS, that
> the whole patent would expire in just three years, that we'd gotten
> export permission for it, and had working code that implemented it.
> So the standard got sent back to the beginning and redone to deal with
> the complications of deployed servers and records with varying algorithm
> availability (and to make DSA the "officially mandatory" algorithm).
> Which took another 5 or 10 years.

It's a fun story, but... RFC 4034 says RSA/SHA1 is mandatory and DSA is
optional. I wasn't involved in DNSSEC back then, and I don't know why
it got redone, but not, it seems, to make DSA mandatory. Also, the new
version is different from the old in many more ways than just the
introduction of DSA.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread William Allen Simpson

Nicolas Williams wrote:

Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.


I agree.  Let's get something deployed, as that will lead to testing.



If 90 days for the 1024-bit ZSKs is too long, that can always be
reduced, or the ZSK keylength be increased -- we too can squeeze factors
of 10 from various places.  In the early days of DNSSEC deployment the
opportunities for causing damage by breaking a ZSK will be relatively
meager.  We have time to get this right; this issue does not strike me
as urgent.


One of the things that bothers me with the latest presentation is that
only "dummy" keys will be used.  That makes no sense to me!  We'll have
folks that get used to hitting the "Ignore" key on their browsers

http://nanog.org/meetings/nanog47/presentations/Lightning/Abley_light_N47.pdf

Thus, I'm not sure we have time to get this right.  We need good keys, so
that user processes can be tested.



OTOH, will we be able to detect breaks?  A clever attacker will use
breaks in very subtle ways.  A ZSK break would be bad, but something
that could be dealt with, *if* we knew it'd happened.  The potential
difficulty of detecting attacks is probably the best reason for seeking
stronger keys well ahead of time.


Agreed.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread John Gilmore
> designed 25 years ago would not scale to today's load.  There was a  
> crucial design mistake: DNS packets were limited to 512 bytes.  As a  
> result, there are 10s or 100s of millions of machines that read *only*  
> 512 bytes.

Yes, that was stupid, but it was done very early in the evolution of
the Internet (when there were only a hundred machines or so).

Another bizarre twist was that the Berkeley "socket" interface to UDP
packets would truncate incoming packets without telling the user
program.  If a user tried to read 512 bytes and a 600-byte packet came
in, you'd get the first 512 bytes and no error!  The other 88 bytes
were just thrown away.  When this incredible 1980-era design decision
was revised for Linux, they didn't fix it!  Instead, they return the
512 bytes, throw away the 88 bytes, and also return an error flag
(MSG_TRUNC).  There's no way to either receive the whole datagram, or
get an error and try again with a bigger read; if you get the error,
the kernel has already thrown away some of the data.
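
For anyone who hasn't tripped over this: a minimal sketch of the Linux
behavior, in C (the port number and buffer size are arbitrary, and error
handling is omitted):

    /* Receive a UDP datagram into a deliberately small buffer and detect
     * that the kernel truncated it.  Sketch only; no error handling. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5353);      /* arbitrary example port */
        bind(fd, (struct sockaddr *)&sin, sizeof sin);

        char buf[512];                   /* the classic too-small DNS read */
        struct iovec iov = { buf, sizeof buf };
        struct msghdr msg;
        memset(&msg, 0, sizeof msg);
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;

        ssize_t n = recvmsg(fd, &msg, 0);
        if (n >= 0 && (msg.msg_flags & MSG_TRUNC))
            /* We got the first 512 bytes; the rest is already gone.
             * (Linux will at least report the real datagram length if you
             * pass MSG_TRUNC as a flag to recv(), per udp(7).) */
            printf("datagram truncated to %zd bytes\n", n);
        return 0;
    }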

When I looked into this in December '96, the BIND code (the only major
implementation of a name server for the first 20 years) was doing
512-byte reads (which the kernel would truncate without error).  Ugh!
Sometimes the string and baling wire holding the Internet together
becomes a little too obvious.

> It is possible to have larger packets, but only if there is prior  
> negotiation via something called EDNS0.

There's no prior negotiation.  The very first packet sent to a root
name server -- a query about either the root zone or a TLD --
now indicates how large a packet can be usefully returned from the
query.  See RFC 2671.  (If there's no "OPT" field in the query, then
the reply packet size is 512.  If there is, then the reply size is
specified by a 16-bit field in the packet.)
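
The OPT pseudo-record that carries that 16-bit size is only eleven bytes.
Here's a sketch, in C, of what a client tacks onto the additional section
of its query (values are illustrative): the CLASS field is reused for the
advertised UDP payload size, and the top flag bit is the DNSSEC "DO" bit
mentioned below.

    /* Sketch: append an EDNS0 OPT pseudo-RR (RFC 2671) to a DNS query.
     * This is just the 11-byte record, not a full DNS message builder;
     * the caller must also bump ARCOUNT in the header. */
    #include <stddef.h>
    #include <stdint.h>

    static size_t append_opt_rr(uint8_t *out, uint16_t payload_size, int dnssec_ok)
    {
        uint8_t *p = out;
        *p++ = 0x00;                              /* NAME: root */
        *p++ = 0x00; *p++ = 41;                   /* TYPE: OPT (41) */
        *p++ = (uint8_t)(payload_size >> 8);      /* CLASS reused as the largest */
        *p++ = (uint8_t)(payload_size & 0xff);    /*   UDP reply we can handle   */
        *p++ = 0x00;                              /* extended RCODE */
        *p++ = 0x00;                              /* EDNS version 0 */
        *p++ = dnssec_ok ? 0x80 : 0x00;           /* flags: DO bit is the top bit */
        *p++ = 0x00;
        *p++ = 0x00; *p++ = 0x00;                 /* RDLENGTH: no options */
        return (size_t)(p - out);                 /* always 11 */
    }

    /* e.g. append_opt_rr(query + query_len, 4096, 1) */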

In 2007, about 45% of DNS clients (who sent a query on a given day to
some of the root servers) specified a reply size.  Almost half of
those specified 4096 bytes; more than 80% of those specified 2048 or
4096 bytes.  The other ~55% of DNS clients didn't specify, so are
limited to 512 bytes.

For a few years, there was a foolish notion from the above RFC that
clients should specify arbitrarily low numbers like 1280, even if they
could actually process much larger packets.  4096 (one page) is, for
example, the size Linux allows client programs to reassemble even in
the presence of significant memory pressure in the IP stack.  See:

  http://www.caida.org/research/dns/roottraffic/comparison06_07.xml

> That in turn means that there can be at most 13 root  
> servers.  More precisely, there can be at most 13 root names and IP  
> addresses.

Any client who sets the bit for "send me the DNSSEC signatures along
with the records" is by definition using RFC 2671 to tell the server
that they can handle a larger packet size (because the DNSSEC bit is
in the OPT record, which was defined by that RFC).

"dig . ns @f.root-servers.net" doesn't use an OPT record.  It returns
a 496 byte packet with 13 server names, 13 "glue" IPv4 addresses, and
2 IPv6 "glue" addresses.

"dig +nsid . ns @f.root-servers.net" uses OPT to tell the name server
that you can handle up to 4096 bytes of reply.  The reply is 643 bytes
and also includes five more IPv6 "glue" addresses.

Older devices can bootstrap fine from a limited set of root servers;
almost half the net no longer has that restriction.

> The DNS is working today because of anycasting;  
> many -- most?  all? -- of the 13 IP addresses exist at many points in  
> the Internet, and depend on routing system magic to avoid problems.   

Anycast is a simple, beautiful idea, and I'm glad it can be made to
work in IPv4 (it's standard in IPv6).

> At that, you still *really* want to stay below 1500 bytes, the Ethernet MTU.

That's an interesting assumption, but is it true?  Most IP-based
devices with a processor greater than 8 bits wide are able to
reassemble two Ethernet-sized packets into a single UDP datagram,
giving them a limit of ~3000 bytes.  Yes, if either of those datagrams
is dropped en route, then the datagram won't reassemble, so you've
doubled the likely failure rate.  But that's still much lower overhead
than immediately falling back to an 8-to-10-packet TCP connection,
particularly in the presence of high packet drop rates that would
also cause TCP to use extra packets.

> > As it is today, if NSA (or any major country, organized crime
> > group, or civil rights nonprofit) built an RSA key cracker, more
> > than 50% of the RSA keys in use would fall prey to a cracker that
> > ONLY handled 1024-bit keys.  It's probably more like 80-90%,
> > actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is just
> > plain stupid on our (the defenders') part; it's easy to automate
> > the fix.
>
> That's an interesting assumption, but is it true?

I've seen papers on the prevalence of 1024-bit keys, but don't have a 
ready URL.  It's a theory.  Any comments, NSA?

> In particular, is it really 

Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Steven Bellovin


On Oct 17, 2009, at 5:23 AM, John Gilmore wrote:


Even plain DSA would be much more space efficient on the signature
side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
1024 bit RSA key, and the signatures would be half the size. And NIST
allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
is really that important.


DSA was (designed to be) full of covert channels.


The evidence that it was an intentional design feature is, to my  
knowledge, slim.  More relevant to this case is why it matters: what  
information is someone trying to smuggle out via the DNS?  Remember  
that DNS records are (in principle) signed offline; servers are  
signing *records*, not responses.  In other words, it's more like a  
certificate model than the TLS model.


Given that they are attempting to optimize for minimal packet size, the
choice of RSA for signatures actually seems quite bizarre.


It's more bizarre than you think.  But packet size just isn't that big
a deal.  The root only has to sign a small number of records -- just
two or three for each top level domain -- and the average client is
going to use .com, .org, their own country, and a few others.  Each
of these records is cached on the client side, with a very long
timeout (e.g. at least a day).  So the total extra data transfer for
RSA (versus other) keys won't be either huge or frequent.  DNS traffic
is still a tiny fraction of overall Internet traffic.  We now have
many dozens of root servers, scattered all over the world, and if the
traffic rises, we can easily make more by linear replication.  DNS
*scales*, which is why we're still using it, relatively unchanged,
after more than 30 years.


It's rather more complicated than that.  The issue isn't bandwidth per  
se, at least not as compared with total Internet bandwidth.  Bandwidth  
out of a root server site may be another matter.  Btw, the DNS as  
designed 25 years ago would not scale to today's load.  There was a  
crucial design mistake: DNS packets were limited to 512 bytes.  As a  
result, there are 10s or 100s of millions of machines that read *only*  
512 bytes.  That in turn means that there can be at most 13 root  
servers.  More precisely, there can be at most 13 root names and IP  
addresses.  (We could possibly have one or two more if there was just  
one name that pointed to many addresses, but that would complicate  
debugging the DNS.)  The DNS is working today because of anycasting;  
many -- most?  all? -- of the 13 IP addresses exist at many points in  
the Internet, and depend on routing system magic to avoid problems.   
At that, anycasting works much better for UDP than for TCP, because it  
will fail utterly if some packets in a conversation go to one  
instantiation and others go elsewhere.


It is possible to have larger packets, but only if there is prior  
negotiation via something called EDNS0.  At that, you still *really*  
want to stay below 1500 bytes, the Ethernet MTU.  If you exceed that,  
you get fragmentation, which hurts reliability.  But whatever the  
negotiated maximum DNS response size, if the data exceeds that value  
the server will say "response truncated; ask me via TCP".  That, in  
turn, will cause massive problems.  Many hosts won't do TCP properly  
and many firewalls are incorrectly configured to reject DNS over TCP.   
Those problems could, in principle, be fixed.  But TCP requires a
3-way handshake to set up the connection, then a 2-packet exchange for
the data and response (more if the response won't fit in a single  
packet), plus another 3 packets to tear down the connection.  It also  
requires a lot of state -- and hence kernel memory -- on the server.   
There are also reclamation issues if the TCP connection stops -- but  
isn't torn down -- in just the proper way (where the server is in
FIN-WAIT-2 state), which in turn might happen if the routing system
happens to direct some anycast packets elsewhere.


To sum up: there really are reasons why it's important to keep DNS  
responses small.  I suspect we'll have to move towards elliptic curve  
at some point, though there are patent issues (or perhaps patent FUD;  
I have no idea) there.


The bizarre part is that the DNS Security standards had gotten pretty
well defined a decade ago,


Actually, no; the design then was wrong.  It looked ok from the crypto  
side, but there were subtle points in the DNS design that weren't  
handled properly.  I'll skip the whole saga, but it wasn't until RFC  
4033-4035 came out, in March 2005, that the specs were correct.  There  
are still privacy concerns about parts of DNSSEC.



when one or more high-up people in the IETF
decided that "no standard that requires the use of Jim Bidzos's
monopoly crypto algorithm is ever going to be approved on my watch".
Jim had just pissed off one too many people, in his role as CEO of RSA
Data Security and the second most hated guy in crypto.  (NSA export
controls was the first r

Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Bill Stewart

At 12:31 AM 10/19/2009, Alexander Klimov wrote:

On Thu, 15 Oct 2009, Jack Lloyd wrote:
> Given that they are attempting to optimize for minimal packet size, the
> choice of RSA for signatures actually seems quite bizarre.

Maybe they try to optimize for verification time.

$ openssl speed


Verification speed for the root or TLD keys doesn't need to be fast,
because you'll be caching them.  Verification speed for every random
2LD.gTLD or 3LD.2LD.ccTLD can be important, but there are lots of 2LDs
that are also important to sign securely.  I don't care whether my
disposable Yahoo mail account login connections are signed securely,
but I care a lot about whether I'm really connecting to my bank or not.
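
(A quick way to see the trade-off Alexander is pointing at:

    openssl speed rsa1024 dsa1024

RSA verification with a small public exponent is far faster than DSA
verification, while DSA signing is the faster of the two.)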


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Jerry Leichter

On Oct 17, 2009, at 5:23 AM, John Gilmore wrote:

Even using keys that have a round number of bits is foolish, in my
opinion.  If you were going to use about 2**11th bits, why not 2240
bits, or 2320 bits, instead of 2048?  Your software already handles
2240 bits if it can handle 2048, and it's only a tiny bit slower and
larger -- but a 2048-bit RSA cracker won't crack your 2240-bit key.
If this crypto community was serious about resistance to RSA key
factoring, the most popular key generation software would be picking
key sizes *at random* within a wide range beyond the number of bits
demanded for application security.  That way, there'd be no "sweet
spots" at 1024 or 2048.  As it is today, if NSA (or any major country,
organized crime group, or civil rights nonprofit) built an RSA key
cracker, more than 50% of the RSA keys in use would fall prey to a
cracker that ONLY handled 1024-bit keys.  It's probably more like
80-90%, actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is
just plain stupid on our (the defenders') part; it's easy to automate
the fix.
What factoring algorithms would be optimized for a fixed number of  
bits?  I suppose one could have hardware that had 1024-bit registers,  
which would limit you to no more than 1024 bits; but I can't think of  
a factoring algorithm that works for 1024-bit numbers whose top bit
is 1, but not at least equally well when that top bit happens to be 0.
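
Whether or not it would help against real factoring hardware, the
randomized-size idea itself is trivial to implement.  A sketch with
OpenSSL's key generation; the 2048-2304 range is just an example, and
real code would check every return value:

    /* Generate an RSA key whose modulus length is picked at random in a
     * band above the minimum you need, so there is no single "round" size
     * for an attacker to optimize for.  Sketch only. */
    #include <openssl/bn.h>
    #include <openssl/rand.h>
    #include <openssl/rsa.h>

    static RSA *generate_odd_sized_key(void)
    {
        unsigned char r[2];
        RAND_bytes(r, sizeof r);                         /* crypto-quality randomness */
        int bits = 2048 + (((r[0] << 8) | r[1]) % 257);  /* 2048 .. 2304 bits */

        BIGNUM *e = BN_new();
        BN_set_word(e, RSA_F4);                          /* e = 65537 */

        RSA *rsa = RSA_new();
        RSA_generate_key_ex(rsa, bits, e, NULL);         /* non-round sizes work fine */
        BN_free(e);
        return rsa;
    }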


-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Collection of code making and breaking machines

2009-10-20 Thread John Levine
>A bit too far for a quick visit (at least for me):
>http://news.bbc.co.uk/2/hi/uk_news/england/8241617.stm

Bletchley Park is always worth a visit, with or without a special
exhibit, as is the adjacent National Museum of Computing which houses
Colossus and a lot more interesting stuff.

An important difference between this museum and computer museums in
the US is that lots of the stuff works.  The rebuilt bombe actually
works.  The rebuilt Colossus actually works.  An impressive number of
the old computers in the NMC work, including a room of old personal
computers that are set up so you can use them.

Not at all coincidentally, Bletchley is an easy day trip from
Cambridge, Oxford, and London.  (That's why they put Bletchley Park at
Bletchley Park.)

R's,
John

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Victor Duchovni
On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:

> > Given that they are attempting to optimize for minimal packet size, the
> > choice of RSA for signatures actually seems quite bizarre.

> Each of these records is cached on the client side, with a very long
> timeout (e.g. at least a day).  So the total extra data transfer for
> RSA (versus other) keys won't be either huge or frequent.  DNS traffic
> is still a tiny fraction of overall Internet traffic.

Yes, normal DNS traffic is not the issue.

The optimization is for DDoS conditions, especially amplification via
forged source IP DNS requests for ". IN NS?". The request is tiny,
and the response is multiple KB with DNSSEC.
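
You can see the ratio for yourself against any zone that is already
signed; compare the answer sizes from

    dig some-signed-zone DNSKEY
    dig +dnssec +bufsize=4096 some-signed-zone DNSKEY

(the zone name is a placeholder).  The second query sets the DO bit and
pulls in the RRSIGs, and the response is typically several times the
size of the question.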

> We now have
> many dozens of root servers, scattered all over the world, and if the
> traffic rises, we can easily make more by linear replication.  DNS
> *scales*, which is why we're still using it, relatively unchanged,
> after more than 30 years.

Some (e.g. DJB, and I am inclined to take him seriously) are quite
concerned about amplification issues with DNSSEC. Packet size does matter.

> RSA was the obvious choice because it was (and is) believed that if
> you can break it, you can factor large numbers (which mathematicians
> have been trying to do for hundreds of years).  No other algorithm
> available at the time came with such a high pedigree.  As far as I
> know, none still does.

Well, most of those hundreds of years don't really matter; modern number
theory starts with Gauss in ~1800, and the study of elliptic curves begins
in the same century (also Group theory, complex analysis, ...).  It is
not clear that the pedigree of RSA is much stronger than that for ECC.

> The DNSSEC RSA RFC says:
> 
>  For interoperability, the RSA key size is limited to 4096 bits.  For
>  particularly critical applications, implementors are encouraged to
>  consider the range of available algorithms and key sizes.

Perhaps believed sufficiently secure, but insanely large for DNS over UDP.
Packet size does matter.

> If this crypto community was serious about resistance to RSA key
> factoring, the most popular key generation software would be picking
> key sizes *at random* within a wide range beyond the number of bits
> demanded for application security. 

There is no incentive to use keys smaller than the top of the range. An
algorithm that cracks k-bit RSA keys will crack all keys with n <= k bits.

> That way, there'd be no "sweet spots" at 1024 or 2048.

There is no sweet spot. These sizes are believed to approximately match
80-bit, 112-bit, 128-bit ... sizes for symmetric keys (for RSA 1024,
2048, and 3072).

Why should one bother with a random size between 1024 and 2048, if
everyone supports 2048, and 2048-bit signatures are practical in the
context of the given protocol?

-- 
Viktor.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Jack Lloyd
On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:

> DSA was (designed to be) full of covert channels.

True, but TCP and UDP are also full of covert channels. And if you are
worried that your signing software or hardware is compromised and
leaking key bits, you have larger problems, no matter what algorithm
you use; for instance, with RSA, the signer could intentionally
miscalculate 1 in 2^32 signatures, which would immediately leak the
entire private key to someone who knew to watch for it. (I would have
said that using PSS also introduces a covert channel, but it appears
DNSSEC is using the scheme from PKCS1 v1.5.)
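
To spell out one well-known way a single bad signature gives the whole
key away (assuming the signer uses the usual CRT speedup and the
miscalculation hits only one of the two half-exponentiations): anyone
holding the bogus signature can factor the modulus with a gcd.  A sketch
with OpenSSL bignums, where m_rep is the padded message representative:

    /* Recover a prime factor of n from one faulty RSA-CRT signature
     * (the Boneh-DeMillo-Lipton / Lenstra observation).  Knowing a factor
     * of n is knowing the private key. */
    #include <openssl/bn.h>

    static BIGNUM *factor_from_faulty_sig(const BIGNUM *bad_sig,
                                          const BIGNUM *m_rep,
                                          const BIGNUM *e, const BIGNUM *n)
    {
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *t = BN_new();
        BIGNUM *p = BN_new();

        BN_mod_exp(t, bad_sig, e, n, ctx);   /* t = bad_sig^e mod n (!= m_rep) */
        BN_mod_sub(t, t, m_rep, n, ctx);     /* t = bad_sig^e - m_rep mod n    */
        BN_gcd(p, t, n, ctx);                /* t shares exactly one prime with
                                                n, so this is p or q           */
        BN_free(t);
        BN_CTX_free(ctx);
        return p;
    }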

And, for that matter, one can make DSA deterministic by choosing the k
values to be HMAC-SHA256(key, H(m)) - this will cause the k values to
be repeated, but only if the message itself repeats (which is fine,
since seeing a repeated message/signature pair is harmless), or if one
can induce collisions on HMAC with an unknown key (which seems a
profoundly more difficult problem than breaking RSA or DSA).
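
A sketch of that derivation with OpenSSL's HMAC, to make it concrete
(the function and variable names here are mine, not from any spec; a
real signer would also reject k = 0 and wipe the buffers):

    /* Derive a deterministic DSA nonce: k = HMAC-SHA256(private key, H(m)),
     * reduced mod q.  Same message => same k, so no per-signature
     * randomness (and no covert channel) is needed. */
    #include <openssl/bn.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/sha.h>

    static BIGNUM *derive_k(const unsigned char *priv, size_t priv_len,
                            const unsigned char *msg, size_t msg_len,
                            const BIGNUM *q)
    {
        unsigned char h[SHA256_DIGEST_LENGTH];
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;

        SHA256(msg, msg_len, h);                      /* H(m) */
        HMAC(EVP_sha256(), priv, (int)priv_len,
             h, sizeof h, mac, &mac_len);             /* HMAC-SHA256(key, H(m)) */

        BIGNUM *k = BN_bin2bn(mac, (int)mac_len, NULL);
        BN_CTX *ctx = BN_CTX_new();
        BN_mod(k, k, q, ctx);                         /* reduce into [0, q) */
        BN_CTX_free(ctx);
        return k;
    }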

> RSA was the obvious choice because it was (and is) believed that if
> you can break it, you can factor large numbers (which mathematicians
> have been trying to do for hundreds of years).  No other algorithm
> available at the time came with such a high pedigree.  As far as I
> know, none still does.

As far as I know even now nobody has proven that breaking RSA is
equivalent to factoring; there are results that suggest it, for
instance [http://eprint.iacr.org/2008/260] shows there is no 'generic'
attack that can break RSA without factoring - meaning such an the
attack would have to examine the bit representation of the modulus.  A
full proof of equivalence still seems to be an open problem.

If for some reason one really wanted to ensure their public key
primitive reduces to a hard problem, it would have made much more
sense to use Rabin-Williams, which does have a provable reduction to
factoring.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com