Re: deterministic random numbers in crypto protocols -- Re: Possibly questionable security decisions in DNS root management

2009-11-02 Thread Bill Frantz
zo...@zooko.com (Zooko Wilcox-O'Hearn) on Thursday, October 29, 2009 wrote:

>I'm beginning to think that *in general* when I see a random number  
>required for a crypto protocol then I want to either  
>deterministically generate it from other data which is already  
>present or to have it explicitly provided by the higher-layer  
>protocol.  In other words, I want to constrain the crypto protocol  
>implementation by forbidding it to read the clock or to read from a  
>globally-available RNG, thus making that layer deterministic.

One concern is that if the encryption key is deterministically generated
from the data, then the same plaintext will generate the same ciphertext,
and a listener will know that the same message has been sent. The same
observation applies to a DSA signature. If this leakage of information is
not a problem, e.g. if the signature is encrypted along with the data using
a non-deterministic key, then there doesn't seem to be anything obviously
wrong with the approach. (But remember, I'm far from an expert.)
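A toy sketch of the leak described above (not anyone's actual construction): when the key is derived deterministically from the plaintext, equal plaintexts encrypt to equal ciphertexts, so a listener can tell the same message was re-sent. The SHA-256 counter "keystream" here is purely illustrative, not a real cipher.

```python
import hashlib

def convergent_encrypt(plaintext: bytes) -> bytes:
    # Key derived from the message itself -- this is the deterministic step
    # that makes the scheme "convergent" and leaks message equality.
    key = hashlib.sha256(b"derive-key|" + plaintext).digest()
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    # XOR the plaintext with the keystream.
    return bytes(p ^ s for p, s in zip(plaintext, stream))

c1 = convergent_encrypt(b"attack at dawn")
c2 = convergent_encrypt(b"attack at dawn")
# c1 == c2: an eavesdropper sees that the same message repeated.
```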

Cheers - Bill

---
Bill Frantz|"After all, if the conventional wisdom was working, the
408-356-8506   | rate of systems being compromised would be going down,
www.periwinkle.com | wouldn't it?" -- Marcus Ranum

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


deterministic random numbers in crypto protocols -- Re: Possibly questionable security decisions in DNS root management

2009-11-01 Thread Zooko Wilcox-O'Hearn

On 2009 Oct 19, at 9:15, Jack Lloyd wrote:

> On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:
>
>> DSA was (designed to be) full of covert channels.
>
> one can make DSA deterministic by choosing the k values to be
> HMAC-SHA256(key, H(m))


I've noticed people tinkering with (EC) DSA by constraining that  
number k.  For example, Wei Dai's Crypto++ library generates k by  
hashing in the message itself as well as a timestamp into an RNG:


http://allmydata.org/trac/cryptopp/browser/c5/pubkey.h?rev=324#L1036

Wei Dai's motivation for this is to deal with the case that there is
a rollback of the random number generator, which has always been
possible and nowadays seems increasingly likely because of the rise
of virtualization.  See also Scott Yilek: http://eprint.iacr.org/2009/474
which appears to be a formal argument that this technique is
secure (but I suspect that Scott Yilek and Wei Dai are unaware of one
another's work).  Yilek's work is motivated by virtual machines, but
one should note that the same issues have bedeviled normal old
physical machines for years.


Since the Dai/Yilek approach also uses an RNG it is still a covert  
channel, but one could easily remove the RNG part and just use the  
hash-of-the-message part.
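The hash-of-the-message variant can be sketched as follows. The per-signature nonce k is derived as HMAC-SHA256(private key, H(m)) and reduced into [1, q-1]; the group order q below is an arbitrary illustrative value, not a real DSA domain parameter.

```python
import hashlib
import hmac

# Illustrative stand-in for a 160-bit DSA group order; NOT real parameters.
q = (1 << 160) - 47

def deterministic_k(private_key: bytes, message: bytes) -> int:
    # k = HMAC-SHA256(key, H(m)), forced into the range [1, q-1].
    digest = hashlib.sha256(message).digest()
    mac = hmac.new(private_key, digest, hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % (q - 1) + 1
```

The same key and message always yield the same k, so an RNG rollback can no longer hand out one k for two different messages, which is the failure that exposes a DSA private key.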


I'm beginning to think that *in general* when I see a random number  
required for a crypto protocol then I want to either  
deterministically generate it from other data which is already  
present or to have it explicitly provided by the higher-layer  
protocol.  In other words, I want to constrain the crypto protocol  
implementation by forbidding it to read the clock or to read from a  
globally-available RNG, thus making that layer deterministic.


This facilitates testing, which would help to detect implementation  
flaws like the OpenSSL/Debian fiasco.  It also avoids covert channels  
and can avoid relying on an RNG for security.  If the random numbers  
are generated fully deterministically then it can also provide  
engineering advantages because of "convergence" of the output -- that  
two computations of the same protocol with the same inputs yield the  
same output.


Now, Yilek's paper argues for the security of generating the needed  
random number by hashing together *both* an input random number (e.g.  
from the system RNG) *and* the message.  This is exactly the  
technique that Wei Dai has implemented.  I'm not sure how hard it  
would be to write a similar argument for the security of my proposed  
technique of generating the needed random number by hashing just the  
message.  (Here's a crack at it: Yilek proves that the Dai technique  
is secure even when the system RNG fails and gives you the same  
number more than once, right?  So then let's hardcode the system RNG  
to always give you the random number "4".  QED :-))


Okay, aside from the theoretical proofs, the engineering question  
facing me is "What's more likely: RNG failure or novel cryptanalysis  
that exploits the fact that the random number isn't truly random but  
is instead generated, e.g. by a KDF from other secrets?".  No  
contest!  The former is common in practice and the latter is probably  
impossible.


Minimizing the risk of the latter is one reason why I am so
interested in KDFs nowadays, such as the recently proposed HKDF:
http://webee.technion.ac.il/~hugo/kdf/kdf.pdf .
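For reference, the extract-then-expand shape of Krawczyk's HKDF proposal can be sketched with HMAC-SHA256. This is a simplified reading of the paper, not production code.

```python
import hashlib
import hmac

HASH_LEN = 32  # SHA-256 output size

def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int) -> bytes:
    # Extract: concentrate the input keying material into a pseudorandom key.
    prk = hmac.new(salt or b"\x00" * HASH_LEN, ikm, hashlib.sha256).digest()
    # Expand: stretch the PRK into `length` output bytes bound to `info`.
    okm, block = b"", b""
    for i in range((length + HASH_LEN - 1) // HASH_LEN):
        block = hmac.new(prk, block + info + bytes([i + 1]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]
```

The `info` argument is what lets one secret safely yield many independent keys: change the context string and the output bytes are unrelated.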


On Tuesday, 2009-10-20, at 15:45, Greg Rose wrote:

> Ah, but this doesn't solve the problem; a compliant implementation
> would be deterministic and free of covert channels, but you can't
> reveal enough information to convince someone *else* that the
> implementation is compliant (short of using zero-knowledge proofs,
> let's not go there). So a hardware nubbin could still leak
> information.


Good point!  But can't the one who verifies the signature also verify  
that the k was generated according to the prescribed technique?


Regards,

Zooko

P.S.  If you read this letter all the way to the end then please let  
me know.  I try to make them short, but sometimes I think they are  
too long and make too many assumptions about what the reader already  
knows.  Did this message make sense?




Re: Possibly questionable security decisions in DNS root management

2009-10-25 Thread Bill Stewart

At 12:14 PM 10/22/2009, David Wagner wrote:

> Back to DNSSEC: The original criticism was that "DNSSEC has covert
> channels".  So what?  If you're connected to the Internet, covert
> channels are a fact of life, DNSSEC or no.  The added risk due to any
> covert channels that DNSSEC may enable is somewhere between negligible
> and none, as far as I can tell.  So I don't understand that criticism.


I thought it was also that DSA had covert channels,
but I also don't see why that's as relevant here,
and I share Dave's skepticism about threat models.
It's unlikely that DNSSEC will let you do anything any more heinous
than Dan Kaminsky's streaming-video-over-DNS hacks have already done.

There are two obvious places that data can be leaked -
the initial key signature process, and the DNS client/server process.
If the people who certify the root or TLDs can't be trusted,
the number of those people is small enough that they can simply
send the secret data to their unindicted co-conspirators
without all the trouble of hiding it in a covert channel on a very public 
DNS server.


And if Bad Guys have compromised the software used in a DNS server,
while they could be subtle and hide data in DSA signatures of DNS records,
it would be much easier to just send it as data if the query
has the evil bit set or asks for covertchannel1.com or whatever.
There's plenty of room in the formats even without DSA.





Re: Possibly questionable security decisions in DNS root management

2009-10-23 Thread Stephan Neuhaus


On Oct 22, 2009, at 16:12, Perry E. Metzger wrote:

> I don't think anyone is smart enough to understand all the
> implications of this across all the systems that depend on the DNS,
> especially as we start to trust the DNS because of the authentication.


"We" trust the DNS already. As far as I can follow the discussion,  
that's part of the problem.


Fun,

Stephan

PS: If your point is that DNSSEC will not solve the problem, I agree.



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread David Wagner
Florian Weimer  wrote:
> And you better randomize some bits covered by RRSIGs on DS RRsets.
> Directly signing data supplied by non-trusted source is quite risky.
> (It turns out that the current signing schemes have not been designed
> for this type of application, but the general crypto community is very
> slow at realizing this discrepancy.)

Could you elaborate?  I'm not sure what you're referring to or why it
would be quite risky to sign unrandomized messages.  Modern, well-designed
signature schemes are designed to resist chosen-message attack.  They do
not require the user of the signature scheme to randomize the messages
to be signed.  I'm not sure what discrepancy you're referring to.

Back to DNSSEC: The original criticism was that "DNSSEC has covert
channels".  So what?  If you're connected to the Internet, covert
channels are a fact of life, DNSSEC or no.  The added risk due to any
covert channels that DNSSEC may enable is somewhere between negligible
and none, as far as I can tell.  So I don't understand that criticism.



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread Perry E. Metzger

Florian Weimer  writes:
> * Perry E. Metzger:
>
>> Actually, there are routine attacks on DNS infrastructure these days,
>> but clearly they're not cryptographic since that's not
>> deployed. However, a large part of the point of having DNSSEC is that we
>> can then trust the DNS to be accurate so we can insert things like
>> cryptographic keys into it.
>
> As far as I know, only the following classes of DNS-related incidents
> have been observed:

You're not correct. Among other things, I've personally been the subject
of deliberate DNS cache contamination attacks, and people have observed
deployed DNS response forgery in the field.

>> I'm particularly concerned about the fact that it is difficult to a
>> priori analyze all of the use cases for DNSSEC and what the incentives
>> may be to attack them.
>
> Well, this seems to be rather constructed to me.

Feel free to find it "constructed". From my point of view, if I can't
analyze the implications of a compromise, I don't want to leave the
ability for it to happen in a system. I don't think anyone is smart
enough to understand all the implications of this across all the systems
that depend on the DNS, especially as we start to trust the DNS because
of the authentication.

Perry



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread Florian Weimer
* John Gilmore:

> So the standard got sent back to the beginning and redone to deal with
> the complications of deployed servers and records with varying algorithm
> availability (and to make DSA the "officially mandatory" algorithm).
> Which took another 5 or 10 years.

And it's still not clear that it works.  No additional suite of
algorithms has been approved for DNSSEC yet.  Even the upcoming
SHA-256 change is, from an implementor's perspective, a minor addition
to NSEC3 support because it has been tied to that pervasive protocol
change for political reasons.

> forcibly paid by every domain owner

Not really, most ccTLDs only pay out of generosity, if they pay at all
(and if you make enough fuss at your favorite TLD operator's annual
general meeting, they are likely to cease to pay, too).

> So the total extra data transfer for RSA (versus other) keys won't
> be either huge or frequent.

Crap queries are one problem.  DNS is only efficient for regular DNS
resolution.  Caching breaks down if you use non-compliant or
compliant-to-broken-standards software.  There's also the annoying
little twist that about half of the client (resolver) population
unconditionally requests DNSSEC data, even if they are incapable of
processing it in any meaningful way (which means, in essence, no
incremental deployment on the authoritative server side).

There are some aspects of response sizes for which no full impact
analysis is publicly available.  I don't know if the 1024 bit decision
is guided by private analysis.  (It is somewhat at odds with my own
conclusions.)

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread Florian Weimer
* Victor Duchovni:
> The optimization is for DDoS conditions, especially amplification via
> forged source IP DNS requests for ". IN NS?". The request is tiny,
> and the response is multiple KB with DNSSEC.

There's only one required signature in a ". IN NS" response, so it
isn't as large as you suggest.  (And the priming response is already
larger than 600 bytes due to IPv6 records.)

DNSKEY RRsets are more interesting.  But in the end, this is not a DNS
problem, it's a lack of regulation of the IP layer.

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread Florian Weimer
* Jack Lloyd:

> On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:
>
>> DSA was (designed to be) full of covert channels.
>
> True, but TCP and UDP are also full of covert channels.

And you better randomize some bits covered by RRSIGs on DS RRsets.
Directly signing data supplied by non-trusted source is quite risky.
(It turns out that the current signing schemes have not been designed
for this type of application, but the general crypto community is very
slow at realizing this discrepancy.)

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: Possibly questionable security decisions in DNS root management

2009-10-22 Thread Florian Weimer
* Perry E. Metzger:

> Actually, there are routine attacks on DNS infrastructure these days,
> but clearly they're not cryptographic since that's not
> deployed. However, a large part of the point of having DNSSEC is that we
> can then trust the DNS to be accurate so we can insert things like
> cryptographic keys into it.

As far as I know, only the following classes of DNS-related incidents
have been observed:

  (a) Non-malicious incorrect DNS responses from caches

  (a1) as the result of defective software
  (a2) due to misconfiguration
  (a3) as a means to generate revenue
  (a4) as a means to generate revenue, but informed consent
   of the affected party is disputed
  (a5) to implement local community standards

  (b) Compromised service provider infrastructure

  (b1) ISP caching resolvers
  (b2) ISP-provisioned routers/DNS proxies at customer sites
  (b3) authoritative name servers and networks around authoritative
   name servers
  (b4) as the result of registrar/registry data manipulation

  (c) DNS as a traffic amplifier, used for denial-of-service attacks
  both against DNS and non-DNS targets

  (d) in-protocol, non-spoofed DNS-based reflective attacks against
  authoritative servers

  (e) unclear incidents for which sufficient data is not available

The problem is that the "attacks" you mentioned fall into class (e), but
would likely turn out to belong to (a1) or (a2) if we had more insight
into them.  Certainly, bad data by itself is not proof of malicious intent.

(NB: (a1) does *not* include software using predictable query source
ports.  There does not appear to be corresponding attack activity.)

> I'm particularly concerned about the fact that it is difficult to a
> priori analyze all of the use cases for DNSSEC and what the incentives
> may be to attack them.

Well, this seems to be rather constructed to me.  You state that
DNSSEC is a game changer, and then it's indeed pretty unclear what
level of cryptographic protection is required.  But in reality, DNSSEC
adoption is not likely to change DNS usage patterns.  If there's an
effect, it will be due to the more rigid protocol specification and a
gradual phase-out of grossly non-compliant DNS implementations, and
not due to the cryptography involved.

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Greg Rose


On 2009 Oct 19, at 9:15, Jack Lloyd wrote:

> On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:
>
>> DSA was (designed to be) full of covert channels.
>
> And, for that matter, one can make DSA deterministic by choosing the k
> values to be HMAC-SHA256(key, H(m)) - this will cause the k values to
> be repeated, but only if the message itself repeats (which is fine,
> since seeing a repeated message/signature pair is harmless), or if one
> can induce collisions on HMAC with an unknown key (which seems a
> profoundly more difficult problem than breaking RSA or DSA).


Ah, but this doesn't solve the problem; a compliant implementation  
would be deterministic and free of covert channels, but you can't  
reveal enough information to convince someone *else* that the  
implementation is compliant (short of using zero-knowledge proofs,  
let's not go there). So a hardware nubbin could still leak information.


Greg.



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread John Gilmore
> 'Tis a fun story, but... RFC 4034 says RSA/SHA1 is mandatory and DSA is
> optional.

I was looking at RFC 2536 from March 1999, which says "Implementation
of DSA is mandatory for DNS security." (Page 2.)  I guess by March 2005
(RFC 4034), something closer to sanity had prevailed.

  http://rfc-editor.org/rfc/rfc2536.txt
  http://rfc-editor.org/rfc/rfc4034.txt

John



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread bmanning
On Tue, Oct 20, 2009 at 09:20:04AM -0400, William Allen Simpson wrote:
> Nicolas Williams wrote:
> >Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.
> >
> I agree.  Let's get something deployed, as that will lead to testing.
> 
> 
> >If 90 days for the 1024-bit ZSKs is too long, that can always be
> >reduced, or the ZSK keylength be increased -- we too can squeeze factors
> >of 10 from various places.  In the early days of DNSSEC deployment the
> >opportunities for causing damage by breaking a ZSK will be relatively
> >meager.  We have time to get this right; this issue does not strike me
> >as urgent.
> >
> One of the things that bothers me with the latest presentation is that
> only "dummy" keys will be used.  That makes no sense to me!  We'll have
> folks that get used to hitting the "Ignore" key on their browsers
> 
> http://nanog.org/meetings/nanog47/presentations/Lightning/Abley_light_N47.pdf


the use of dummy keys in the first round is to test things like 
key rollover - the initial keys themselves are unable to be validated
and state as much.  Anyone who tries validation is -NOT- reading 
the key or the deployment plan.

> 
> Thus, I'm not sure we have time to get this right.  We need good keys, so
> that user processes can be tested.

next phase.
> 
> 
> >OTOH, will we be able to detect breaks?  A clever attacker will use
> >breaks in very subtle ways.  A ZSK break would be bad, but something
> >that could be dealt with, *if* we knew it'd happened.  The potential
> >difficulty of detecting attacks is probably the best reason for seeking
> >stronger keys well ahead of time.
> >
> Agreed.



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Ben Laurie
On Sat, Oct 17, 2009 at 10:23 AM, John Gilmore  wrote:
>> Even plain DSA would be much more space efficient on the signature
>> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
>> 1024 bit RSA key, and the signatures would be half the size. And NIST
>> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
>> is really that important.
>
> DSA was (designed to be) full of covert channels.
>
>> Given that they are attempting to optimize for minimal packet size, the
>> choice of RSA for signatures actually seems quite bizarre.
>
> It's more bizarre than you think.  But packet size just isn't that big
> a deal.  The root only has to sign a small number of records -- just
> two or three for each top level domain -- and the average client is
> going to use .com, .org, their own country, and a few others).  Each
> of these records is cached on the client side, with a very long
> timeout (e.g. at least a day).  So the total extra data transfer for
> RSA (versus other) keys won't be either huge or frequent.  DNS traffic
> is still a tiny fraction of overall Internet traffic.  We now have
> many dozens of root servers, scattered all over the world, and if the
> traffic rises, we can easily make more by linear replication.  DNS
> *scales*, which is why we're still using it, relatively unchanged,
> after more than 30 years.
>
> The bizarre part is that the DNS Security standards had gotten pretty
> well defined a decade ago, when one or more high-up people in the IETF
> decided that "no standard that requires the use of Jim Bidzos's
> monopoly crypto algorithm is ever going to be approved on my watch".
> Jim had just pissed off one too many people, in his role as CEO of RSA
> Data Security and the second most hated guy in crypto.  (NSA export
> controls was the first reason you couldn't put decent crypto into your
> product; Bidzos's patent, and the way he licensed it, was the second.)
> This IESG prejudice against RSA went so deep that it didn't matter
> that we had a free license from RSA to use the algorithm for DNS, that
> the whole patent would expire in just three years, that we'd gotten
> export permission for it, and had working code that implemented it.
> So the standard got sent back to the beginning and redone to deal with
> the complications of deployed servers and records with varying algorithm
> availability (and to make DSA the "officially mandatory" algorithm).
> Which took another 5 or 10 years.

'Tis a fun story, but... RFC 4034 says RSA/SHA1 is mandatory and DSA is
optional. I wasn't involved in DNSSEC back then, and I don't know why
it got redone, but not, it seems, to make DSA mandatory. Also, the new
version differs from the old in many more ways than just the
introduction of DSA.



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread William Allen Simpson

Nicolas Williams wrote:

> Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.

I agree.  Let's get something deployed, as that will lead to testing.

> If 90 days for the 1024-bit ZSKs is too long, that can always be
> reduced, or the ZSK keylength be increased -- we too can squeeze factors
> of 10 from various places.  In the early days of DNSSEC deployment the
> opportunities for causing damage by breaking a ZSK will be relatively
> meager.  We have time to get this right; this issue does not strike me
> as urgent.

One of the things that bothers me with the latest presentation is that
only "dummy" keys will be used.  That makes no sense to me!  We'll have
folks that get used to hitting the "Ignore" key on their browsers

http://nanog.org/meetings/nanog47/presentations/Lightning/Abley_light_N47.pdf

Thus, I'm not sure we have time to get this right.  We need good keys, so
that user processes can be tested.

> OTOH, will we be able to detect breaks?  A clever attacker will use
> breaks in very subtle ways.  A ZSK break would be bad, but something
> that could be dealt with, *if* we knew it'd happened.  The potential
> difficulty of detecting attacks is probably the best reason for seeking
> stronger keys well ahead of time.

Agreed.



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread John Gilmore
> designed 25 years ago would not scale to today's load.  There was a  
> crucial design mistake: DNS packets were limited to 512 bytes.  As a  
> result, there are 10s or 100s of millions of machines that read *only*  
> 512 bytes.

Yes, that was stupid, but it was done very early in the evolution of
the Internet (when there were only a hundred machines or so).

Another bizarre twist was that the Berkeley "socket" interface to UDP
packets would truncate incoming packets without telling the user
program.  If a user tried to read 512 bytes and a 600-byte packet came
in, you'd get the first 512 bytes and no error!  The other 88 bytes
were just thrown away.  When this incredible 1980-era design decision
was revised for Linux, they didn't fix it!  Instead, they return the
512 bytes, throw away the 88 bytes, and also return an error flag
(MSG_TRUNC).  There's no way to either receive the whole datagram or
get an error and try again with a bigger read; by the time you see the
error, the kernel has already thrown away some of the data.
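The Linux behavior described above is easy to observe from userspace. A sketch using recvmsg(), which at least surfaces the MSG_TRUNC flag (assumes a Linux/POSIX kernel and Python's stdlib socket module):

```python
import socket

# Send a 600-byte UDP datagram to ourselves, then read it with a 512-byte
# buffer.  recvmsg() sets MSG_TRUNC in the returned flags, but the excess
# 88 bytes are gone for good -- there is no way to re-read them.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"x" * 600, rx.getsockname())

data, _ancdata, msg_flags, _addr = rx.recvmsg(512)
was_truncated = bool(msg_flags & socket.MSG_TRUNC)  # True: 88 bytes dropped
rx.close()
tx.close()
```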

When I looked into this in December '96, the BIND code (the only major
implementation of a name server for the first 20 years) was doing
512-byte reads (which the kernel would truncate without error).  Ugh!
Sometimes the string and baling wire holding the Internet together
becomes a little too obvious.

> It is possible to have larger packets, but only if there is prior  
> negotiation via something called EDNS0.

There's no prior negotiation.  The very first packet sent to a root
name server -- a query, about either the root zone or about a TLD --
now indicates how large a packet can be usefully returned from the
query.  See RFC 2671.  (If there's no "OPT" field in the query, then
the reply packet size is 512.  If there is, then the reply size is
specified by a 16-bit field in the packet.)
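As a concrete sketch of that wire format (hand-rolled per RFC 2671's layout, illustrative only): a root NS query whose OPT pseudo-record advertises a 4096-byte reply buffer. The query ID 0x1234 is arbitrary.

```python
import struct

def dns_query_with_edns0(qname: str, payload_size: int = 4096) -> bytes:
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0,
    # ARCOUNT=1 (the OPT pseudo-record lives in the additional section).
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # QNAME as length-prefixed labels; the root "." is a lone zero byte.
    labels = [l for l in qname.split(".") if l]
    qname_wire = b"".join(bytes([len(l)]) + l.encode() for l in labels) + b"\x00"
    question = qname_wire + struct.pack(">HH", 2, 1)  # QTYPE=NS, QCLASS=IN
    # OPT RR: root name, TYPE=41; the CLASS field carries the UDP payload
    # size the client accepts, TTL carries extended RCODE/flags (0 here),
    # and RDLENGTH is 0 (no options).
    opt = b"\x00" + struct.pack(">HHIH", 41, payload_size, 0, 0)
    return header + question + opt
```

Without that OPT record the responder must assume the 512-byte ceiling; with it, the 16-bit CLASS field is the advertised reply size.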

In 2007, about 45% of DNS clients (who sent a query on a given day to
some of the root servers) specified a reply size.  Almost half of
those specified 4096 bytes; more than 80% of those specified 2048 or
4096 bytes.  The other ~55% of DNS clients didn't specify, so are
limited to 512 bytes.

For a few years, there was a foolish notion from the above RFC that
clients should specify arbitrarily low numbers like 1280, even if they
could actually process much larger packets.  4096 (one page) is, for
example, the size Linux allows client programs to reassemble even in
the presence of significant memory pressure in the IP stack.  See:

  http://www.caida.org/research/dns/roottraffic/comparison06_07.xml

> That in turn means that there can be at most 13 root  
> servers.  More precisely, there can be at most 13 root names and IP  
> addresses.

Any client who sets the bit for "send me the DNSSEC signatures along
with the records" is by definition using RFC 2671 to tell the server
that they can handle a larger packet size (because the DNSSEC bit is
in the OPT record, which was defined by that RFC).

"dig . ns @f.root-servers.net" doesn't use an OPT record.  It returns
a 496 byte packet with 13 server names, 13 "glue" IPv4 addresses, and
2 IPv6 "glue" addresses.

"dig +nsid . ns @f.root-servers.net" uses OPT to tell the name server
that you can handle up to 4096 bytes of reply.  The reply is 643 bytes
and also includes five more IPv6 "glue" addresses.

Older devices can bootstrap fine from a limited set of root servers;
almost half the net no longer has that restriction.

> The DNS is working today because of anycasting;  
> many -- most?  all? -- of the 13 IP addresses exist at many points in  
> the Internet, and depend on routing system magic to avoid problems.   

Anycast is a simple, beautiful idea, and I'm glad it can be made to
work in IPv4 (it's standard in IPv6).

> At that, you still *really* want to stay below 1500 bytes, the Ethernet MTU.

That's an interesting assumption, but is it true?  Most IP-based
devices with a processor greater than 8 bits wide are able to
reassemble two Ethernet-sized packets into a single UDP datagram,
giving them a limit of ~3000 bytes.  Yes, if either of those datagrams
is dropped en route, then the datagram won't reassemble, so you've
doubled the likely failure rate.  But that's still much lower overhead
than immediately falling back to an 8-to-10-packet TCP connection,
particularly in the presence of high packet drop rates that would
also cause TCP to use extra packets.

> > As it is today, if NSA (or any major country, organized crime
> > group, or civil rights nonprofit) built an RSA key cracker, more
> > than 50% of the RSA keys in use would fall prey to a cracker that
> > ONLY handled 1024-bit keys.  It's probably more like 80-90%,
> > actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is just
> > plain stupid on our (the defenders') part; it's easy to automate
> > the fix.
>
> That's an interesting assumption, but is it true?

I've seen papers on the prevalence of 1024-bit keys, but don't have a 
ready URL.  It's a theory.  Any comments, NSA?

> In particular, is it really 

Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Steven Bellovin


On Oct 17, 2009, at 5:23 AM, John Gilmore wrote:

>> Even plain DSA would be much more space efficient on the signature
>> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
>> 1024 bit RSA key, and the signatures would be half the size. And NIST
>> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
>> is really that important.
>
> DSA was (designed to be) full of covert channels.


The evidence that it was an intentional design feature is, to my  
knowledge, slim.  More relevant to this case is why it matters: what  
information is someone trying to smuggle out via the DNS?  Remember  
that DNS records are (in principle) signed offline; servers are  
signing *records*, not responses.  In other words, it's more like a  
certificate model than the TLS model.


>> Given that they are attempting to optimize for minimal packet size, the
>> choice of RSA for signatures actually seems quite bizarre.


> It's more bizarre than you think.  But packet size just isn't that big
> a deal.  The root only has to sign a small number of records -- just
> two or three for each top level domain -- and the average client is
> going to use .com, .org, their own country, and a few others).  Each
> of these records is cached on the client side, with a very long
> timeout (e.g. at least a day).  So the total extra data transfer for
> RSA (versus other) keys won't be either huge or frequent.  DNS traffic
> is still a tiny fraction of overall Internet traffic.  We now have
> many dozens of root servers, scattered all over the world, and if the
> traffic rises, we can easily make more by linear replication.  DNS
> *scales*, which is why we're still using it, relatively unchanged,
> after more than 30 years.


It's rather more complicated than that.  The issue isn't bandwidth per  
se, at least not as compared with total Internet bandwidth.  Bandwidth  
out of a root server site may be another matter.  Btw, the DNS as  
designed 25 years ago would not scale to today's load.  There was a  
crucial design mistake: DNS packets were limited to 512 bytes.  As a  
result, there are 10s or 100s of millions of machines that read *only*  
512 bytes.  That in turn means that there can be at most 13 root  
servers.  More precisely, there can be at most 13 root names and IP  
addresses.  (We could possibly have one or two more if there was just  
one name that pointed to many addresses, but that would complicate  
debugging the DNS.)  The DNS is working today because of anycasting;  
many -- most?  all? -- of the 13 IP addresses exist at many points in  
the Internet, and depend on routing system magic to avoid problems.   
At that, anycasting works much better for UDP than for TCP, because it  
will fail utterly if some packets in a conversation go to one  
instantiation and others go elsewhere.


It is possible to have larger packets, but only if there is prior  
negotiation via something called EDNS0.  At that, you still *really*  
want to stay below 1500 bytes, the Ethernet MTU.  If you exceed that,  
you get fragmentation, which hurts reliability.  But whatever the  
negotiated maximum DNS response size, if the data exceeds that value  
the server will say "response truncated; ask me via TCP".  That, in  
turn, will cause massive problems.  Many hosts won't do TCP properly  
and many firewalls are incorrectly configured to reject DNS over TCP.   
Those problems could, in principle, be fixed.  But TCP requires a
3-way handshake to set up the connection, then a 2-packet exchange for
the data and response (more if the response won't fit in a single
packet), plus another 3 packets to tear down the connection.  It also
requires a lot of state -- and hence kernel memory -- on the server.
There are also reclamation issues if the TCP connection stops -- but
isn't torn down -- in just the proper way (where the server is in
FIN-WAIT-2 state), which in turn might happen if the routing system
happens to direct some anycast packets elsewhere.
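The size-negotiation and truncation fallback described above can be sketched as a small decision function. The function and its name are illustrative, not any real resolver's API; only the 512-byte default and the TC-then-TCP behavior come from the text.

```python
# Sketch of how a DNS server picks delivery for a response, given the
# classic 512-byte limit and an optional EDNS0-advertised buffer size.
# Illustrative only; real servers also weigh IP-fragmentation risk.

def response_transport(response_len, edns0_bufsize=None):
    """Return 'udp' if the answer fits the negotiated maximum, else
    'tcp-retry' (server sets TC=1; client must ask again over TCP).

    edns0_bufsize is None when the client advertised no EDNS0 support,
    in which case the classic 512-byte limit applies.
    """
    limit = edns0_bufsize if edns0_bufsize is not None else 512
    return "udp" if response_len <= limit else "tcp-retry"

assert response_transport(400, None) == "udp"
assert response_transport(1800, None) == "tcp-retry"   # pre-EDNS0 client
assert response_transport(1800, 4096) == "udp"         # fits, but >1500 risks fragmentation
assert response_transport(1800, 1232) == "tcp-retry"
```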


To sum up: there really are reasons why it's important to keep DNS  
responses small.  I suspect we'll have to move towards elliptic curve  
at some point, though there are patent issues (or perhaps patent FUD;  
I have no idea) there.


The bizarre part is that the DNS Security standards had gotten pretty
well defined a decade ago,


Actually, no; the design then was wrong.  It looked ok from the crypto  
side, but there were subtle points in the DNS design that weren't  
handled properly.  I'll skip the whole saga, but it wasn't until RFC  
4033-4035 came out, in March 2005, that the specs were correct.  There  
are still privacy concerns about parts of DNSSEC.



when one or more high-up people in the IETF
decided that "no standard that requires the use of Jim Bidzos's
monopoly crypto algorithm is ever going to be approved on my watch".
Jim had just pissed off one too many people, in his role as CEO of RSA
Data Security and the second most hated guy in crypto.  (NSA export
controls was the first reason you couldn't put decent crypto into your
product; Bidzos's patent, and the way he licensed it, was the second.)

Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Bill Stewart

At 12:31 AM 10/19/2009, Alexander Klimov wrote:

On Thu, 15 Oct 2009, Jack Lloyd wrote:
> Given that they are attempting to optimize for minimal packet size, the
> choice of RSA for signatures actually seems quite bizarre.

Maybe they try to optimize for verification time.

$ openssl speed


Verification speed for the root or TLD keys doesn't need to be fast,
because you'll be caching them.  Verification speed for every random
2LD.gTLD or 3TLD.2TLD.ccTLD can be important, but there are lots of
2LDs that are also important to sign securely.  I don't care whether
my disposable Yahoo mail account login connections are signed
securely, but I care a lot about whether I'm really connecting to my
bank or not.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Jerry Leichter

On Oct 17, 2009, at 5:23 AM, John Gilmore wrote:

Even using keys that have a round number of bits is foolish, in my
opinion.  If you were going to use about 2**11th bits, why not 2240
bits, or 2320 bits, instead of 2048?  Your software already handles
2240 bits if it can handle 2048, and it's only a tiny bit slower and
larger -- but a 2048-bit RSA cracker won't crack your 2240-bit key.
If this crypto community was serious about resistance to RSA key
factoring, the most popular key generation software would be picking
key sizes *at random* within a wide range beyond the number of bits
demanded for application security.  That way, there'd be no "sweet
spots" at 1024 or 2048.  As it is today, if NSA (or any major country,
organized crime group, or civil rights nonprofit) built an RSA key
cracker, more than 50% of the RSA keys in use would fall prey to a
cracker that ONLY handled 1024-bit keys.  It's probably more like
80-90%, actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is
just plain stupid on our (the defenders') part; it's easy to automate
the fix.
What factoring algorithms would be optimized for a fixed number of  
bits?  I suppose one could have hardware that had 1024-bit registers,  
which would limit you to no more than 1024 bits; but I can't think of  
a factoring algorithm that works for 1024-bit numbers whose top bit  
is 1, but not at least equally well when that top bit happens to be 0.


-- Jerry



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Victor Duchovni
On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:

> > Given that they are attempting to optimize for minimal packet size, the
> > choice of RSA for signatures actually seems quite bizarre.

> Each of these records is cached on the client side, with a very long
> timeout (e.g. at least a day).  So the total extra data transfer for
> RSA (versus other) keys won't be either huge or frequent.  DNS traffic
> is still a tiny fraction of overall Internet traffic.

Yes, normal DNS traffic is not the issue.

The optimization is for DDoS conditions, especially amplification via
forged source IP DNS requests for ". IN NS?". The request is tiny,
and the response is multiple KB with DNSSEC.
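The amplification worry is easy to put in numbers. The sizes below are rough illustrative assumptions consistent with "tiny request, multiple-KB signed response", not measurements:

```python
# Rough amplification arithmetic for the forged-source-IP scenario
# above.  Both byte counts are illustrative assumptions.

query_bytes = 45        # ". IN NS?" query with an EDNS0 OPT record, roughly
response_bytes = 3000   # signed root NS set plus RRSIGs: "multiple KB"

amplification = response_bytes / query_bytes
assert amplification > 60   # each spoofed byte buys the attacker ~66 bytes
```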

> We now have
> many dozens of root servers, scattered all over the world, and if the
> traffic rises, we can easily make more by linear replication.  DNS
> *scales*, which is why we're still using it, relatively unchanged,
> after more than 30 years.

Some (e.g. DJB, and I am inclined to take him seriously), are quite
concerned about amplification issues with DNSSEC. Packet size does matter.

> RSA was the obvious choice because it was (and is) believed that if
> you can break it, you can factor large numbers (which mathematicians
> have been trying to do for hundreds of years).  No other algorithm
> available at the time came with such a high pedigree.  As far as I
> know, none still does.

Well, most of the hundreds of years don't really matter, modern number
theory starts with Gauss in ~1800, and the study of elliptic curves begins
in the same century (also Group theory, complex analysis, ...).  It is
not clear that the pedigree of RSA is much stronger than that for ECC.

> The DNSSEC RSA RFC says:
> 
>  For interoperability, the RSA key size is limited to 4096 bits.  For
>particularly critical applications, implementors are encouraged to
>consider the range of available algorithms and key sizes.

Perhaps believed sufficiently secure, but insanely large for DNS over UDP.
Packet size does matter.

> If this crypto community was serious about resistance to RSA key
> factoring, the most popular key generation software would be picking
> key sizes *at random* within a wide range beyond the number of bits
> demanded for application security. 

There is no incentive to use keys smaller than the top of the range. An
algorithm that cracks k-bit RSA keys, will crack all keys with n That way, there'd be no "sweet spots" at 1024 or 2048. 

There is no sweet spot. These sizes are believed to approximately match
80-bit, 112-bit, 128-bit ... sizes for symmetric keys (for RSA 1024,
2048, and 3072).

Why should one bother with a random size between 1024 and 2048, if
everyone supports 2048, and 2048-bit signatures are practical in the
context of the given protocol?

-- 
Viktor.



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Jack Lloyd
On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:

> DSA was (designed to be) full of covert channels.

True, but TCP and UDP are also full of covert channels. And if you are
worried that your signing software or hardware is compromised and
leaking key bits, you have larger problems, no matter what algorithm
you use; for instance, with RSA, the signer could intentionally
miscalculate 1 in 2^32 signatures, which would immediately leak the
entire private key to someone who knew to watch for it. (I would have
said that using PSS also introduces a covert channel, but it appears
DNSSEC is using the scheme from PKCS1 v1.5.)
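The "miscalculate 1 in 2^32 signatures" scenario is essentially the classic RSA-CRT fault attack of Boneh, DeMillo, and Lipton: one signature with a faulty half-computation lets an observer factor the modulus with a single gcd. A toy sketch with textbook-sized primes (the parameters and the injected fault are mine, for illustration only):

```python
# Toy RSA-CRT fault attack: a single miscomputed CRT signature leaks a
# prime factor of n.  Parameters are deliberately tiny.
from math import gcd

p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 1234                           # "message" (already reduced mod n)

def crt_sign(m, fault=False):
    """Sign via CRT; optionally corrupt the mod-q half-computation."""
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq = (sq + 1) % q          # the single faulty step
    # Garner recombination: result is sp mod p and sq mod q
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h

good = crt_sign(m)
assert pow(good, e, n) == m        # sanity: a correct signature verifies

bad = crt_sign(m, fault=True)      # bad^e == m mod p, but not mod q...
leaked = gcd(abs(pow(bad, e, n) - m), n)
assert leaked == p                 # ...so one bad signature factors n
```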

And, for that matter, one can make DSA deterministic by choosing the k
values to be HMAC-SHA256(key, H(m)) - this will cause the k values to
be repeated, but only if the message itself repeats (which is fine,
since seeing a repeated message/signature pair is harmless), or if one
can induce collisions on HMAC with an unknown key (which seems a
profoundly more difficult problem than breaking RSA or DSA).
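A toy sketch of this derandomized DSA, with deliberately tiny parameters (p=23, q=11) and a retry counter folded into the HMAC input so a zero r or s can be escaped; both of those details are assumptions of this sketch, not part of the proposal above:

```python
# Deterministic DSA sketch: k = HMAC-SHA256(private key, H(m) || ctr).
# Toy parameters -- far too small for real use.  (p-1) = 2q, and g
# generates the order-q subgroup mod p.
import hashlib, hmac

p, q, g = 23, 11, 4

def Hq(msg):
    """Hash the message and reduce into Z_q."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def det_k(x, msg, counter):
    """Deterministic nonce in [1, q-1], keyed by the private key."""
    mac = hmac.new(x.to_bytes(4, "big"),
                   hashlib.sha256(msg).digest() + bytes([counter]),
                   hashlib.sha256).digest()
    return 1 + int.from_bytes(mac, "big") % (q - 1)

def sign(x, msg):
    for counter in range(256):             # retry if r or s comes out 0
        k = det_k(x, msg, counter)
        r = pow(g, k, p) % q
        if r == 0:
            continue
        s = (pow(k, -1, q) * (Hq(msg) + x * r)) % q
        if s != 0:
            return (r, s)
    raise RuntimeError("no usable k found")

def verify(y, msg, sig):
    r, s = sig
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1, u2 = (Hq(msg) * w) % q, (r * w) % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

x = 7                                      # private key
y = pow(g, x, p)                           # public key
sig1 = sign(x, b"hello")
sig2 = sign(x, b"hello")
assert sig1 == sig2                        # same message -> same k -> same signature
assert verify(y, b"hello", sig1)
assert not verify(y, b"hello", (0, sig1[1]))   # out-of-range r rejected
```

The same idea was later standardized (with a more careful HMAC-DRBG-style derivation) as RFC 6979.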

> RSA was the obvious choice because it was (and is) believed that if
> you can break it, you can factor large numbers (which mathematicians
> have been trying to do for hundreds of years).  No other algorithm
> available at the time came with such a high pedigree.  As far as I
> know, none still does.

As far as I know even now nobody has proven that breaking RSA is
equivalent to factoring; there are results that suggest it, for
instance [http://eprint.iacr.org/2008/260] shows there is no 'generic'
attack that can break RSA without factoring - meaning such an attack
would have to examine the bit representation of the modulus.  A
full proof of equivalence still seems to be an open problem.

If for some reason one really wanted to ensure their public key
primitives reduce to a hard problem, it would have made much more
sense to use Rabin-Williams, which does have a provable reduction to
factoring.

-Jack



Re: Possibly questionable security decisions in DNS root management

2009-10-19 Thread Nicolas Williams
Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.

If 90 days for the 1024-bit ZSKs is too long, that can always be
reduced, or the ZSK keylength be increased -- we too can squeeze factors
of 10 from various places.  In the early days of DNSSEC deployment the
opportunities for causing damage by breaking a ZSK will be relatively
meager.  We have time to get this right; this issue does not strike me
as urgent.

OTOH, will we be able to detect breaks?  A clever attacker will use
breaks in very subtle ways.  A ZSK break would be bad, but something
that could be dealt with, *if* we knew it'd happened.  The potential
difficulty of detecting attacks is probably the best reason for seeking
stronger keys well ahead of time.

Nico
-- 



Re: Possibly questionable security decisions in DNS root management

2009-10-19 Thread Alexander Klimov
On Thu, 15 Oct 2009, Jack Lloyd wrote:
> Even plain DSA would be much more space efficient on the signature
> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
> 1024 bit RSA key, and the signatures would be half the size. And NIST
> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
> is really that important.
>
> Given that they are attempting to optimize for minimal packet size, the
> choice of RSA for signatures actually seems quite bizarre.

Maybe they try to optimize for verification time.

$ openssl speed
[...]
                  sign    verify    sign/s verify/s
rsa  512 bits 0.000823s 0.000069s   1215.2  14493.7
rsa 1024 bits 0.004074s 0.000200s    245.4   5008.0
rsa 2048 bits 0.024338s 0.000663s     41.1   1507.5
rsa 4096 bits 0.159841s 0.002361s      6.3    423.6
                  sign    verify    sign/s verify/s
dsa  512 bits 0.000651s 0.000765s   1535.2   1306.6
dsa 1024 bits 0.001922s 0.002322s    520.3    430.7
dsa 2048 bits 0.006447s 0.007551s    155.1    132.4
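Reading the numbers above, RSA's verification advantage at equal key size is roughly an order of magnitude:

```python
# Verify-time ratio at 1024 bits, from the openssl speed output above.

rsa_1024_verify_s = 0.000200
dsa_1024_verify_s = 0.002322

ratio = dsa_1024_verify_s / rsa_1024_verify_s
assert 11 < ratio < 12     # RSA verification is ~11.6x faster here
```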


-- 
Regards,
ASK



Re: Possibly questionable security decisions in DNS root management

2009-10-19 Thread John Gilmore
> Even plain DSA would be much more space efficient on the signature
> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
> 1024 bit RSA key, and the signatures would be half the size. And NIST
> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
> is really that important.

DSA was (designed to be) full of covert channels.

> Given that they are attempting to optimize for minimal packet size, the
> choice of RSA for signatures actually seems quite bizarre.

It's more bizarre than you think.  But packet size just isn't that big
a deal.  The root only has to sign a small number of records -- just
two or three for each top level domain -- and the average client is
going to use .com, .org, their own country, and a few others.  Each
of these records is cached on the client side, with a very long
timeout (e.g. at least a day).  So the total extra data transfer for
RSA (versus other) keys won't be either huge or frequent.  DNS traffic
is still a tiny fraction of overall Internet traffic.  We now have
many dozens of root servers, scattered all over the world, and if the
traffic rises, we can easily make more by linear replication.  DNS
*scales*, which is why we're still using it, relatively unchanged,
after more than 30 years.

The bizarre part is that the DNS Security standards had gotten pretty
well defined a decade ago, when one or more high-up people in the IETF
decided that "no standard that requires the use of Jim Bidzos's
monopoly crypto algorithm is ever going to be approved on my watch".
Jim had just pissed off one too many people, in his role as CEO of RSA
Data Security and the second most hated guy in crypto.  (NSA export
controls was the first reason you couldn't put decent crypto into your
product; Bidzos's patent, and the way he licensed it, was the second.)
This IESG prejudice against RSA went so deep that it didn't matter
that we had a free license from RSA to use the algorithm for DNS, that
the whole patent would expire in just three years, that we'd gotten
export permission for it, and had working code that implemented it.
So the standard got sent back to the beginning and redone to deal with
the complications of deployed servers and records with varying algorithm
availability (and to make DSA the "officially mandatory" algorithm).
Which took another 5 or 10 years.

RSA was the obvious choice because it was (and is) believed that if
you can break it, you can factor large numbers (which mathematicians
have been trying to do for hundreds of years).  No other algorithm
available at the time came with such a high pedigree.  As far as I
know, none still does.  And if we were going to go to the trouble of
rewiring the whole world's DNS for security at all, we wanted real
security, not pasted-on crap security.

The DNSSEC RSA RFC says:

 For interoperability, the RSA key size is limited to 4096 bits.  For
   particularly critical applications, implementors are encouraged to
   consider the range of available algorithms and key sizes.

That's standard-speak for "don't use the shortest possible keys all
the time, idiot".  Yes, using 1024-bit keys is lunacy -- but of course
we're talking about Verisign/NSI here, the gold standard in crap
security.  The root's DNSSEC operational procedures should be designed
so that ICANN does all the signing, though the lord knows that ICANN
is even less trustworthy than NSI.  But at least we know what kind of
larceny ICANN is into, and it's a straightforward squeeze for their
own lavish benefit, forcibly paid by every domain owner; it doesn't
involve promising security and not delivering it.

Even using keys that have a round number of bits is foolish, in my
opinion.  If you were going to use about 2**11th bits, why not 2240
bits, or 2320 bits, instead of 2048?  Your software already handles
2240 bits if it can handle 2048, and it's only a tiny bit slower and
larger -- but a 2048-bit RSA cracker won't crack your 2240-bit key.
If this crypto community was serious about resistance to RSA key
factoring, the most popular key generation software would be picking
key sizes *at random* within a wide range beyond the number of bits
demanded for application security.  That way, there'd be no "sweet
spots" at 1024 or 2048.  As it is today, if NSA (or any major country,
organized crime group, or civil rights nonprofit) built an RSA key
cracker, more than 50% of the RSA keys in use would fall prey to a
cracker that ONLY handled 1024-bit keys.  It's probably more like
80-90%, actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is
just plain stupid on our (the defenders') part; it's easy to automate
the fix.
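A hypothetical key-size picker in the spirit of this paragraph; the function name and the 2048-2560 band are made up for illustration, and note that many real libraries only accept a few discrete modulus sizes, which is part of why this is not common practice:

```python
# Illustrative sketch: pick the RSA modulus length at random within a
# band *above* the strength actually required, so a fixed-width key
# cracker misses most keys.
import secrets

def random_rsa_bits(floor_bits=2048, band=512):
    # multiples of 8 keep the modulus byte-aligned
    return floor_bits + 8 * secrets.randbelow(band // 8 + 1)

size = random_rsa_bits()
assert 2048 <= size <= 2560 and size % 8 == 0
```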

John



Re: Possibly questionable security decisions in DNS root management

2009-10-19 Thread Ben Laurie
On Thu, Oct 15, 2009 at 12:39 AM, Jack Lloyd  wrote:
> On Wed, Oct 14, 2009 at 10:43:48PM -0400, Jerry Leichter wrote:
>> If the constraints elsewhere in the system limit the number of bits of
>> signature you can transfer, you're stuck.  Presumably over time you'd
>> want to go to a more bit-efficient signature scheme, perhaps using
>> ECC.
>
> Even plain DSA would be much more space efficient on the signature
> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
> 1024 bit RSA key, and the signatures would be half the size. And NIST
> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
> is really that important.
>
> Given that they are attempting to optimize for minimal packet size, the
> choice of RSA for signatures actually seems quite bizarre.

DSA can be used in DNSSEC - unfortunately it is optional, though.



Re: Possibly questionable security decisions in DNS root management

2009-10-16 Thread Jack Lloyd
On Wed, Oct 14, 2009 at 10:43:48PM -0400, Jerry Leichter wrote:
> If the constraints elsewhere in the system limit the number of bits of  
> signature you can transfer, you're stuck.  Presumably over time you'd  
> want to go to a more bit-efficient signature scheme, perhaps using  
> ECC.

Even plain DSA would be much more space efficient on the signature
side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
1024 bit RSA key, and the signatures would be half the size. And NIST
allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
is really that important.
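The size arithmetic behind this comparison:

```python
# A DSA signature is the pair (r, s), each the size of q; an RSA
# signature is the size of the modulus.

rsa_1024_sig_bytes = 1024 // 8          # 128
dsa_q256_sig_bytes = 2 * (256 // 8)     # 64
dsa_q224_sig_bytes = 2 * (224 // 8)     # 56

assert dsa_q256_sig_bytes * 2 == rsa_1024_sig_bytes      # "half the size"
assert dsa_q256_sig_bytes - dsa_q224_sig_bytes == 8      # "an extra 8 bytes"
```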

Given that they are attempting to optimize for minimal packet size, the
choice of RSA for signatures actually seems quite bizarre.

-Jack



Re: Possibly questionable security decisions in DNS root management

2009-10-16 Thread Perry E. Metzger

Jerry Leichter  writes:
>> Do we really believe we won't be able to
>> attack a 1024 bit key with a sufficiently large budget even in 10
>> years? ...
>
> Currently, the cryptographic cost of an attack is ... 0.  How many
> attacks have there been?  Perhaps the perceived value of owning part
> of DNS isn't as great as you think.

Actually, there are routine attacks on DNS infrastructure these days,
but clearly they're not cryptographic since that's not
deployed. However, a large part of the point of having DNSSEC is that we
can then trust the DNS to be accurate so we can insert things like
cryptographic keys into it. Once we've made the DNS trusted, we have the
problem that people will go off and trust it, you see.

I'm particularly concerned about the fact that it is difficult to a
priori analyze all of the use cases for DNSSEC and what the incentives
may be to attack them. If you can't analyze something, that's a warning
that you don't understand the implications. That makes me fear anything
that says "the key doesn't need to be more than strength X".

Sure, perhaps it is true that the expense of DNSSEC isn't worth it -- we
limp along without it now, as you point out -- but if that is true, what
do we gain by deploying a system which could be compromised in so
straightforward a way, with money being the only constraint? Why deploy
at all if we aren't going to be able to use it as we want? If we can't
trust the data very well, we've spent lots of time and money and gained
nothing?

I'm doubly questioning because it seems pointless anyway -- the point of
the shorter keys is to avoid needing TCP connections to DNS servers, but
so far as I can tell that will end up becoming rapidly necessary anyway,
at which point one has to ask what one is gaining by lowering key length.

BTW, I've come across some (old) estimates from Shamir et al. that
indicate a TWIRL machine that could break 1024 bit keys in a year would
have cost about $10M something like 5 years ago using a 90nm process. At
this point, with 32nm processes available, they'd be substantially
cheaper, and thus with a serious budget it seems like we're really quite
on the edge here.

Even $10M may now be enough to break them fast enough if you can come up
with a clever speedup of only a small factor, and I don't like trusting
security to the idea that no one with a large budget is clever enough to
find a small constant factor speedup. I presume that in another 10 years
we'll have a quite serious reduction in cost, which is yet worse. All in
all, that's too close for comfort, especially since I can see the point
in a Large Bad Actor spending orders of magnitude more on this than just
$10M.

Perry
-- 
Perry E. Metzger    pe...@piermont.com



Re: Possibly questionable security decisions in DNS root management

2009-10-14 Thread Jerry Leichter

On Oct 14, 2009, at 7:54 PM, Perry E. Metzger wrote:

> ...We should also recognize that in cryptography, a small integer safety
> margin isn't good enough. If one estimates that a powerful opponent
> could attack a 1024 bit RSA key in, say, two years, that's not even a
> factor of 10 over 90 days, and people spending lots of money have a good
> record of squeezing out factors of 10 here and there. Finding an
> exponential speedup in an algorithm is not something one can do, but
> figuring out a process trick to remove a small constant is entirely
> possible.
>
> Meanwhile, of course, the 1024 bit "short term" keying system may end up
> staying in place far longer than we imagine -- things like this often
> roll out and stay in place for a decade or two even when we imagine we
> can get rid of them quickly.

As I read it, "short term" refers to the lifetime of the *key*, not
the lifetime of the *system*.

> Do we really believe we won't be able to
> attack a 1024 bit key with a sufficiently large budget even in 10
> years? ...

Currently, the cryptographic cost of an attack is ... 0.  How many
attacks have there been?  Perhaps the perceived value of owning part
of DNS isn't as great as you think.


If the constraints elsewhere in the system limit the number of bits of  
signature you can transfer, you're stuck.  Presumably over time you'd  
want to go to a more bit-efficient signature scheme, perhaps using  
ECC.  But as it is, the choice appears to be between (a) continuing  
the current completely unprotected system and (b) *finally* rolling  
out protection sufficient to block all but very well funded attacks  
for a number of years.


Should we let the best be the enemy of the good here?

-- Jerry



Re: Possibly questionable security decisions in DNS root management

2009-10-14 Thread Paul Hoffman
At 7:54 PM -0400 10/14/09, Perry E. Metzger wrote:
>There are enough people here with the right expertise. I'd be interested
>in hearing what people think could be done with a fully custom hardware
>design and a budget in the hundreds of millions of dollars or more.

What part of owning a temporary private key for the root zone would be worth 
even 10% of that much? There are attacks, and there are motivations. Until we 
know the latter, we cannot put a price on the former.

Related question: if all the root keys were 2048 bits, who do you think would 
change the way they rely on DNSSEC?

--Paul Hoffman, Director
--VPN Consortium



Re: Possibly questionable security decisions in DNS root management

2009-10-14 Thread Perry E. Metzger

bmann...@vacation.karoshi.com writes:
> er... there is the root key and there is the ROOT KEY.
> the zsk only has a 90 day validity period.  ... meets the
> "spec" and -ought- to be good enough.   that said, it is
> currently a -proposal- and if credible arguments can be made
> to modify the proposal, I'm persuaded that VSGN will do so.

Well, you might look at Ekr's argument, which I largely agree with. I
think the two key observations are that 1024 bit keys are already
considered iffy, that large sums (perhaps hundreds of millions of
dollars or even more) may be thrown by opponents at this particular
key, and that
technology for factoring will only get better. Given the sums that could
be spent, very specialized hardware could be built -- far more
specialized than ordinary PCs on which the problem doesn't scale that
well in its most expensive steps.

Security is usually not limited by cryptography in the modern
world. Crypto systems are usually far stronger than what opponents are
willing to spend to break them, and bugs are the more obvious way to
attack things.  However, if
you're talking about a really high value target and "weak enough"
crypto, the economics change, and with them so does everything else.
Crypto being a potential weak spot is an exceptionally rare situation,
but the DNS root key is insanely high value.

We should also recognize that in cryptography, a small integer safety
margin isn't good enough. If one estimates that a powerful opponent
could attack a 1024 bit RSA key in, say, two years, that's not even a
factor of 10 over 90 days, and people spending lots of money have a good
record of squeezing out factors of 10 here and there. Finding an
exponential speedup in an algorithm is not something one can do, but
figuring out a process trick to remove a small constant is entirely
possible.

Meanwhile, of course, the 1024 bit "short term" keying system may end up
staying in place far longer than we imagine -- things like this often
roll out and stay in place for a decade or two even when we imagine we
can get rid of them quickly. Do we really believe we won't be able to
attack a 1024 bit key with a sufficiently large budget even in 10 years?

Again, normally, crypto isn't where you attack an opponent, but in this
case, I'd suggest that key length might not be a silly thing to worry
about.

There are enough people here with the right expertise. I'd be interested
in hearing what people think could be done with a fully custom hardware
design and a budget in the hundreds of millions of dollars or more.

Perry
-- 
Perry E. Metzger    pe...@piermont.com



Re: Possibly questionable security decisions in DNS root management

2009-10-14 Thread bmanning
On Wed, Oct 14, 2009 at 07:22:27PM -0400, Perry E. Metzger wrote:
> 
> bmann...@vacation.karoshi.com writes:
> > On Wed, Oct 14, 2009 at 06:24:06PM -0400, Perry E. Metzger wrote:
> >> Ekr has a very good blog posting on what seems like a bad security
> >> decision being made by Verisign on management of the DNS root key.
> >>
> >> http://www.educatedguesswork.org/2009/10/on_the_security_of_zsk_rollove.html
> >>
> >> In summary, a decision is being made to use a "short lived" 1024 bit key
> >> for the signature because longer keys would result in excessively large
> >> DNS packets. However, such short keys are very likely crackable in short
> >> periods of time if the stakes are high enough -- and few keys in
> >> existence are this valuable.
> >
> > however - the VSGN proposal meets current NIST guidelines.
> 
> That doesn't say anything about how good an idea it is, any more than an
> architect can make a building remain standing in an earthquake by
> invoking the construction code.
> 
> We are the sort of people who write these sorts of guidelines, and if
> they're flawed, we can't use them as a justification for designs.
> 
> (Well, a bureaucrat certainly can use such documents as a form of CYA,
> but we're discussing technology here, not means of evading blame.)
> 
> The fact is, the DNS root key is one of the few instances where it is
> actually worth someone's time to crack a key because it provides
> enormous opportunities for mischief, especially if people start trusting
> it more because it is authenticated. Unlike your https session to view
> your calendar or the password for your home router, the secrets involved
> here are worth an insane amount of money.


er... there is the root key and there is the ROOT KEY.
the zsk only has a 90 day validity period.  ... meets the
"spec" and -ought- to be good enough.   that said, it is
currently a -proposal- and if credible arguments can be made
to modify the proposal, I'm persuaded that VSGN will do so.



> Perry



Re: Possibly questionable security decisions in DNS root management

2009-10-14 Thread Perry E. Metzger

bmann...@vacation.karoshi.com writes:
> On Wed, Oct 14, 2009 at 06:24:06PM -0400, Perry E. Metzger wrote:
>> Ekr has a very good blog posting on what seems like a bad security
>> decision being made by Verisign on management of the DNS root key.
>>
>> http://www.educatedguesswork.org/2009/10/on_the_security_of_zsk_rollove.html
>>
>> In summary, a decision is being made to use a "short lived" 1024 bit key
>> for the signature because longer keys would result in excessively large
>> DNS packets. However, such short keys are very likely crackable in short
>> periods of time if the stakes are high enough -- and few keys in
>> existence are this valuable.
>
>   however - the VSGN proposal meets current NIST guidelines.

That doesn't say anything about how good an idea it is, any more than an
architect can make a building remain standing in an earthquake by
invoking the construction code.

We are the sort of people who write these sorts of guidelines, and if
they're flawed, we can't use them as a justification for designs.

(Well, a bureaucrat certainly can use such documents as a form of CYA,
but we're discussing technology here, not means of evading blame.)

The fact is, the DNS root key is one of the few instances where it is
actually worth someone's time to crack a key because it provides
enormous opportunities for mischief, especially if people start trusting
it more because it is authenticated. Unlike your https session to view
your calendar or the password for your home router, the secrets involved
here are worth an insane amount of money.

Perry



Re: Possibly questionable security decisions in DNS root management

2009-10-14 Thread bmanning
On Wed, Oct 14, 2009 at 06:24:06PM -0400, Perry E. Metzger wrote:
> 
> Ekr has a very good blog posting on what seems like a bad security
> decision being made by Verisign on management of the DNS root key.
> 
> http://www.educatedguesswork.org/2009/10/on_the_security_of_zsk_rollove.html
> 
> In summary, a decision is being made to use a "short lived" 1024 bit key
> for the signature because longer keys would result in excessively large
> DNS packets. However, such short keys are very likely crackable in short
> periods of time if the stakes are high enough -- and few keys in
> existence are this valuable.


however - the VSGN proposal meets current NIST guidelines.

--bill


> 
> Perry
> -- 
> Perry E. Metzger  pe...@piermont.com
> 
