Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
Thanks, Paul. At 11:32 AM -0400 4/24/09, Paul Wouters wrote: So it seems to me that using 1024 bit RSA keys for ZSK, and 2048 bit keys for KSK, assuming RFC 4641 rollover periods, are still many orders of magnitude safe for our use within the DNSSEC realm. In fact, it seems RFC4641, as written in 2006, is still extremely conservative in its estimates two and a half years after its publication date. That is fine, but so is 1024 bit KSKs. The text in RFC 4641bis makes it clear that KSKs should be rollable in case of an emergency; the effort to do so is greater, but not that much greater, than rolling a ZSK. The WG should decide which seems better to recommend: a) KSKs longer than ZSKs because KSKs are thought of as needing to be stronger b) KSKs the same strength as ZSKs because neither should be weak enough to be attacked I prefer (b), but (a) keeps coming up in this discussion. Note that the same does not apply for DSA. As I understood it, DSA requires the use of some randomness for each signature, and the errors in the random number generator are cumulative when attempting to crack this key. In other words, the more data you sign, the more vulnerable you become to the tiniest imperfection in your HWRNG. That's not the problem. If the per-message random number used in signing is found, the private key is disclosed; there is no requirement for a cumulative error. That has not proven to be a problem for DSA in its current uses, but the random number generator *must* always be a concern. --Paul Hoffman, Director --VPN Consortium ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
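[Editor's note: Hoffman's point about the per-message random number can be made concrete with a toy DSA computation. The tiny parameters (p = 23, q = 11, g = 2) and the specific key/nonce values below are purely illustrative, not from the thread. Given one signature (r, s) on hash h, anyone who learns the nonce k can solve s = k^-1 (h + x*r) mod q for the private key x:]

```python
# Toy DSA sketch: leaking the per-signature nonce k reveals the private key x.
# Deliberately tiny, illustrative parameters: g = 2 has order q = 11 modulo p = 23.
p, q, g = 23, 11, 2
x = 7   # private key
k = 5   # per-message random number (must stay secret)
h = 6   # message hash, already reduced mod q

# Sign: r = (g^k mod p) mod q,  s = k^-1 * (h + x*r) mod q
r = pow(g, k, p) % q
s = (pow(k, -1, q) * (h + x * r)) % q

# An attacker who learns k recovers x from the public values (r, s, h) alone:
x_recovered = ((s * k - h) * pow(r, -1, q)) % q
print(x_recovered)  # prints 7 -- the private key, from a single signature
```

No cumulative error is needed: one fully known (or guessable) nonce on one signature is enough, which is why the RNG must always be a concern for DSA.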
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
Paul Hoffman wrote: At 11:32 AM -0400 4/24/09, Paul Wouters wrote: So it seems to me that using 1024 bit RSA keys for ZSK, and 2048 bit keys for KSK, assuming RFC 4641 rollover periods, are still many orders of magnitude safe for our use within the DNSSEC realm. In fact, it seems RFC4641, as written in 2006, is still extremely conservative in its estimates two and a half years after its publication date. That is fine, but so is 1024 bit KSKs. The text in RFC 4641bis makes it clear that KSKs should be rollable in case of an emergency; the effort to do so is greater, but not that much greater, than rolling a ZSK. The WG should decide which seems better to recommend: a) KSKs longer than ZSKs because KSKs are thought of as needing to be stronger b) KSKs the same strength as ZSKs because neither should be weak enough to be attacked I prefer (b), but (a) keeps coming up in this discussion. The dimension I'm usually missing in these discussions is the lifetimes of keys and the lifetimes of the signatures created with those keys (although it is mentioned above). I always understood the reason for having two key types is so that one of them can be rolled more often, and have shorter signature lifetimes, while the other one lives longer, and is needed less often. So the first one would not need to be as strong as the second one. So on the one hand, neither key should be weak enough to be attacked at all. But on the other hand, if they are equally strong, they're going to attack the one that has the longest lifetime and/or the longest signature lifetimes. It seems to me that it would then make sense to roll/resign with the KSK at least as often as you roll/resign with the ZSK (since they are equally strong). It's probably the Friday talking, but in that case, why even have a KSK at all?
Jelte
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
That is fine, but so is 1024 bit KSKs. The text in RFC 4641bis makes it clear that KSKs should be rollable in case of an emergency; the effort to do so is greater, but not that much greater, than rolling a ZSK. Considering the necessity of getting new DS/DLV records into the parent/DLV zones and/or getting public keys properly distributed to everyone who needs them (either directly or via ITAR or other key repositories) it sure seems to me that the effort to roll a KSK is that much greater. Rolling a ZSK doesn't require coordination with anyone else. The WG should decide which seems better to recommend: a) KSKs longer than ZSKs because KSKs are thought of as needing to be stronger b) KSKs the same strength as ZSKs because neither should be weak enough to be attacked I prefer (b), but (a) keeps coming up in this discussion. It's a little imprecise, but I'm inclined to think of key lifetime as an aspect of key strength. A 1024-bit key that rolls over every week may be stronger, in a sense, than a 2048-bit key that stays around for twenty years--the second one could be broken within its lifetime, the first one probably not. IMHO it's reasonable to make recommendations with that tradeoff in mind; a ZSK may be as long as a KSK, or it may be shorter if it's rolled over more frequently. (I think 4641bis already says something along those lines.) -- Evan Hunt -- e...@isc.org Internet Systems Consortium, Inc.
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
At 9:42 -0700 4/24/09, Paul Hoffman wrote: a) KSKs longer than ZSKs because KSKs are thought of as needing to be stronger b) KSKs the same strength as ZSKs because neither should be weak enough to be attacked I prefer (b), but (a) keeps coming up in this discussion. The reason (a) is held as conventional wisdom by some set of folks... In the beginning we had keys. In the early workshops we realized that there were two kinds of delegations where keys mattered. Delegations we made to ourselves and delegations made to us from someone else. (In the workshops the leader was the DNS parent to all of the participants' child zones.) We realized when we tried to do key rolls (because we knew that keys wore out) that it was much easier to change keys when it was us delegating to our self than when we had to run down the instructor (who was generally looking at a bug rooted in the specification, code, or configuration). Stirring this around in the primordial soup, the ZSK and KSK were born. (Eliding the rationale.) And along the way the SEP flag was born. In modern times this has translated into this issue. When it comes time to design our ZSK roll, we realize all of the steps involve internal matters. Running a key generation task first. Moving the private key files around is the next task. Removing the old key is the last step. We can predict the time it takes to do all of the steps, we can do all this unannounced, we can even have all the normal ops started via cron. When it comes to a KSK, we have to involve an external entity. When it comes to planning and writing the tasks, we (at this time) have no idea of the interface our parent will use (if they ever do sign) and what the response time will be - will it be 5 minutes or 5 days? As architect, I really don't care if the KSK is longer than the ZSK, I don't care if the KSK is stronger than the ZSK.
What matters is that I just have to roll the KSK much less and have it be a less important operation in my activities. I can be more specific about the ZSK life cycle; it's the KSK life cycle that has a huge unknown (the relationship with the parent). As someone who has been developing DNSSEC I certainly understand (b). There was a time the KEY RR had a signatory field which was thought to be a reflection of strength - RFC 2065, section 3.3: Bits 12-15 are the signatory field. If non-zero, they indicate that the key can validly sign RRs or updates of the same name. If the owner name is a wildcard, then RRs or updates with any name which is in the wildcard's scope can be signed. Fifteen different non-zero values are possible for this field and any differences in their meaning are reserved for definition in connection with DNS dynamic update or other new DNS commands. Zone keys always have authority to sign any RRs in the zone regardless of the value of this field. The signatory field, like all other aspects of the KEY RR, is only effective if the KEY RR is appropriately signed by a SIG RR. We realized over time that all keys are equal when it comes to strength because the validation chain caused fate sharing. I.e., if the zone above you has a 512 bit key, your 1024 and 2048 bit keys aren't all that valuable if you are optimizing for crypto-mathematical wizardry. I.e., KSKs are no more special than ZSKs in the validation chain. The conventional wisdom to make the KSK longer is that we just want to avoid frequently having to bug our parent. It wasn't rooted in thinking the KSK is any more special cryptographically. -- Edward Lewis, NeuStar. You can leave a voice message at +1-571-434-5468. Getting everything you want is easy if you don't want much.
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
On Fri, 24 Apr 2009, Evan Hunt wrote: a) KSKs longer than ZSKs because KSKs are thought of as needing to be stronger b) KSKs the same strength as ZSKs because neither should be weak enough to be attacked I prefer (b), but (a) keeps coming up in this discussion. It's a little imprecise, but I'm inclined to think of key lifetime as an aspect of key strength. A 1024-bit key that rolls over every week may be stronger, in a sense, than a 2048-bit key that stays around for twenty years--the second one could be broken within its lifetime, the first one probably not. But the essential part is that once you see, say after 10 years, that the 2048 bit KSK comes into the feasible but impractical scale of things due to advancements in math or technology, you can decide to roll it over fairly quickly. IMHO it's reasonable to make recommendations with that tradeoff in mind; a ZSK may be as long as a KSK, or it may be shorter if it's rolled over more frequently. (I think 4641bis already says something along those lines.) What I was told by my lunching cryptographer was that 1024 bits were more than enough even for multi-year KSKs, assuming signing only, no encryption. From a cryptography strength point of view, more is always better, but then you need to consider the work done by you and resolvers, and the space constraints of the DNS packet size. I don't see a cryptographic reason for Paul Hoffman's I'd like the keys to be of equal size. Unless you'd argue that the KSK could easily also be 1024 bit, and that the additional 11 months of validity of the KSK is negligible compared to the time, now up to 3 years from now, to break a 1024 bit RSA key. Paul
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
On 24 Apr 2009, at 16:03, Paul Wouters wrote: I don't see a cryptographic reason for Paul Hoffman's I'd like the keys to be of equal size. Unless you'd argue that the KSK could easilly also be 1024bit, and that the additional 11 months of validity of the KSK is negligable compared to the time now upto 3 years from now, to break a 1024 bit RSA key. What benefit is there of keeping the KSK small (e.g. 1024 bits) instead of just choosing the maximum that your signing software permits (e.g. 4096 bits, I think, with dnssec-signzone)? EDNS0 is a prerequisite for DNSSEC, if memory serves, so it's presumably not TCP fallback we're worried about with larger DNSKEY RDATA. A 4096 bit key only represents 384 more bytes of RDATA in a DNSKEY resource record on the wire than a 1024 bit key, and 384 bytes doesn't sound (naively, no science) like it's going to break the bank. Is the root concern the computational expense of dealing with larger keys in a validator? Or something else? Whatever the root concern is, what are the boundaries? It seems fruitless to debate whether 1024 bits is sufficient if there's no real cost to just choosing (say) 4096 bits and avoiding the discussion. Joe
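[Editor's note: Joe's 384-byte figure is the difference in modulus length alone. A rough size estimate (a sketch based on the RFC 4034 DNSKEY RDATA layout and the RFC 3110 RSA key encoding; the helper name and the 3-byte exponent are my assumptions) confirms the arithmetic:]

```python
def rsa_dnskey_rdata_bytes(modulus_bits, exponent_bytes=3):
    """Approximate DNSKEY RDATA size in bytes for an RSA key (illustrative).

    DNSKEY RDATA = flags (2) + protocol (1) + algorithm (1) + public key.
    The RFC 3110 RSA public key field is: exponent length (1 byte, for
    short exponents) + exponent + modulus.
    """
    return 2 + 1 + 1 + 1 + exponent_bytes + modulus_bits // 8

delta = rsa_dnskey_rdata_bytes(4096) - rsa_dnskey_rdata_bytes(1024)
print(delta)  # prints 384: the extra wire bytes are entirely modulus (512 - 128)
```

All the fixed fields cancel out, so going from 1024 to 4096 bits costs exactly the 384 extra modulus bytes per key (plus correspondingly larger RRSIGs, which pay the same per-signature penalty).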
Re: [DNSOP] dns data exchanged between host and local dns-sever
Ed, On Apr 23, 2009, at 9:52 AM, Edward Lewis wrote: I figure stub resolvers were needed when cpu/bandwidth/memory were a bit more expensive than now. It seems a shame to constrain our architecture to the '80s... OTOH, for one of the same reasons DHCP is so popular, that is centralized local management, it is desirable (in some instances) to have the validation done in the commons and not in end hosts. Sorry, not sure I see a valid use case where _security_ (as opposed to configuration, as would be the case in DHCP) is centralized. As I mentioned, centralization of cached data would be a reasonable optimization, but the primary pragmatic reason for stub resolvers talking over the wire has been made obsolete by Moore's law. Regards, -drc
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
At 7:53 PM -0400 4/24/09, Joe Abley wrote: It seems fruitless to debate whether 1024 bits is sufficient if there's no real cost to just choosing (say) 4096 bits and avoiding the discussion. Please read the text: larger keys incur a real cost to people validating. --Paul Hoffman, Director --VPN Consortium
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
On 24 Apr 2009, at 21:17, Paul Hoffman wrote: At 7:53 PM -0400 4/24/09, Joe Abley wrote: It seems fruitless to debate whether 1024 bits is sufficient if there's no real cost to just choosing (say) 4096 bits and avoiding the discussion. Please read the text: larger keys incur a real cost to people validating. The text contains the following sentence: In a standard CPU, it takes about four times as long to sign or verify with a 2048-bit key as it does with a 1024-bit key. I couldn't find any other discussion of the cost to validators of large key sizes. If there is a practical limit to key size due to concerns about peoples' validators running out of steam, then I think it needs to be stated clearly. Otherwise as a zone administrator my instinct will be to use keys that are as large as possible, since the costs incurred by doing so are going to be borne by other people and all I see is benefit (in the form of increased comfort level and a better story for upper management, even if there is no practical improvement in security). On the flip side, how can the real cost for validator-operators that you assert be quantified? I have a hand in running a couple of non-validating resolvers for a local ISP. 35,000 customers are served by two machines running BIND 9.5.x on FreeBSD 7.1, and the CPUs are 96% idle at peak load. That's a fair amount of headroom, even ignoring the fact that the ISP in question is in the process of replacing each machine with an ECMP/OSPF cluster of two machines in order to simplify ad-hoc maintenance. I'm not arguing with the assertion that there is a limit to what validators can tolerate. However, it seems reasonable to ask if it's the kind of limit that we need to worry about, or the kind of limit that is always going to fit in that headroom as validator hardware gets upgraded on a typical cycle and DNSSEC deployment proceeds over time. Joe
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
At 9:46 PM -0400 4/24/09, Joe Abley wrote: On 24 Apr 2009, at 21:17, Paul Hoffman wrote: At 7:53 PM -0400 4/24/09, Joe Abley wrote: It seems fruitless to debate whether 1024 bits is sufficient if there's no real cost to just choosing (say) 4096 bits and avoiding the discussion. Please read the text: larger keys incur a real cost to people validating. The text contains the following sentence: In a standard CPU, it takes about four times as long to sign or verify with a 2048-bit key as it does with a 1024-bit key. I couldn't find any other discussion of the cost to validators of large key sizes. 2048 is large. :-) As I said earlier in this thread, 'openssl speed' is your friend.

                    sign       verify     sign/s   verify/s
 rsa 1024 bits   0.007343s   0.000322s    136.2     3107.6
 rsa 2048 bits   0.042943s   0.001093s     23.3      914.8
 rsa 4096 bits   0.275508s   0.003920s      3.6      255.1

At least on this box with this build of OpenSSL, the multiplier is about three times for each doubling of the key size. If there is a practical limit to key size due to concerns about peoples' validators running out of steam, then I think it needs to be stated clearly. Otherwise as a zone administrator my instinct will be to use keys that are as large as possible, since the costs incurred by doing so are going to be borne by other people and all I see is benefit (in the form of increased comfort level and a better story for upper management, even if there is no practical improvement in security). That's certainly your option. Another option is to listen to cryptographers about what is possible with a reasonable amount of money and time, and stop there. On the flip side, how can the real cost for validator-operators that you assert be quantified? Exactly. I have a hand in running a couple of non-validating resolvers for a local ISP. 35,000 customers are served by two machines running BIND 9.5.x on FreeBSD 7.1, and the CPUs are 96% idle at peak load.
That's a fair amount of headroom, even ignoring the fact that the ISP in question is in the process of replacing each machine with an ECMP/OSPF cluster of two machines in order to simplify ad-hoc maintenance. I'm not arguing about the assertion that there is a limit to what validators can tolerate. However, it seems reasonable to ask if it's the kind of limit that we need to worry about, and not, the kind of limit that is always going to fit in that headroom as validator hardware gets upgraded on a typical cycle and DNSSEC deployment proceeds over time. How will you know? Why not stop when enough is enough? --Paul Hoffman, Director --VPN Consortium
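[Editor's note: the pattern in those `openssl speed` numbers can be reproduced without OpenSSL. An RSA private-key operation is essentially one modular exponentiation with an exponent and modulus as long as the key, so timing Python's built-in three-argument pow (a stdlib-only sketch, not a benchmark of any real signer) shows the same several-fold cost jump per doubling of key size:]

```python
import random
import time

def time_private_op(bits, reps=10):
    """Time a bits-sized modular exponentiation, the core of RSA signing."""
    rng = random.Random(bits)  # deterministic values, purely for the demo
    n = rng.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-length "modulus"
    d = rng.getrandbits(bits) | (1 << (bits - 1))      # full-length "private exponent"
    m = rng.getrandbits(bits - 1)                      # "message" smaller than n
    start = time.perf_counter()
    for _ in range(reps):
        pow(m, d, n)
    return (time.perf_counter() - start) / reps

t1024 = time_private_op(1024)
t2048 = time_private_op(2048)
t4096 = time_private_op(4096)
print(f"2048 vs 1024: {t2048 / t1024:.1f}x, 4096 vs 2048: {t4096 / t2048:.1f}x")
```

The exact ratios vary by machine and bignum implementation, but each doubling of the operand size multiplies the cost several times over, which is the tradeoff the validator-cost discussion is about.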
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
On 24 Apr 2009, at 22:18, Paul Hoffman wrote: If there is a practical limit to key size due to concerns about peoples' validators running out of steam, then I think it needs to be stated clearly. Otherwise as a zone administrator my instinct will be to use keys that are as large as possible, since the costs incurred by doing so are going to be borne by other people and all I see is benefit (in the form of increased comfort level and a better story for upper management, even if there is no practical improvement in security). That's certainly your option. Another option is to listen to cryptographers about what is possible with a reasonable amount of money and time, and stop there. My point is that given the choice between doing what is currently considered safe and exceeding what is currently considered safe by a factor of four with no additional cost to you I think many otherwise uninformed zone administrators are conditioned to choose the latter. On the flip side, how can the real cost for validator-operators that you assert be quantified? Exactly. So your point is that you don't know how to quantify it? I have a hand in running a couple of non-validating resolvers for a local ISP. 35,000 customers are served by two machines running BIND 9.5.x on FreeBSD 7.1, and the CPUs are 96% idle at peak load. That's a fair amount of headroom, even ignoring the fact that the ISP in question is in the process of replacing each machine with an ECMP/OSPF cluster of two machines in order to simplify ad-hoc maintenance. I'm not arguing with the assertion that there is a limit to what validators can tolerate. However, it seems reasonable to ask if it's the kind of limit that we need to worry about, or the kind of limit that is always going to fit in that headroom as validator hardware gets upgraded on a typical cycle and DNSSEC deployment proceeds over time. How will you know? Why not stop when enough is enough?
Because there's no incentive for a zone administrator to choose anything other than the largest key her tools let her create. So what is enough? Joe
Re: [DNSOP] Key sizes
Yo Joe, many moons back, it was pointed out to me by some crypto folks that there is an interesting relationship between key length and signature duration. One could make the argument that for persistent delegations, you might want to ensure longer length keys and possibly longer duration signatures than you might have for a DHCP lease whose lifetime is 20 minutes. e.g. a leaf assignment that lasts no longer than 20 minutes might not justify the operational cost of a 4096-bit key generation/propagation, while a well-known TLD (.JOE) might well justify a 4096-bit key. You might say that key length should/could be inversely proportional to the delegation's placement in the namespace. But you knew this. --bill
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
At 10:25 PM -0400 4/24/09, Joe Abley wrote: My point is that given the choice between doing what is currently considered safe and exceeding what is currently considered safe by a factor of four with no additional cost to you I think many otherwise uninformed zone administrators are conditioned to choose the latter. ...which is a good reason why we give actual numbers in this draft. I don't see where you are going with this. Do you want us to give hard numbers and not justify them so admins won't pick anything else? Or? On the flip side, how can the real cost for validator-operators that you assert be quantified? Exactly. So your point is that you don't know how to quantify it? Correct. How can you know how many other zone admins waste cycles on validator boxes? How can you know how many cycles are being used on those boxes for other things? How will you know? Why not stop when enough is enough? Because there's no incentive for a zone administrator to choose anything other than the largest key her tools let her create. So what is enough? An attack that would cost hundreds of millions of dollars and take longer than your key will be valid. This was covered earlier in this thread. --Paul Hoffman, Director --VPN Consortium
Re: [DNSOP] Key sizes was Re: I-D Action:draft-ietf-dnsop-rfc4641bis-01.txt
On Fri, 24 Apr 2009, Joe Abley wrote: What benefit is there of keeping the KSK small (e.g. 1024 bits) instead of EDNS0 is a prerequisite for DNSSEC, if memory serves, so it's presumably not TCP fallback we're worried about with larger DNSKEY RDATA. I might run into some really strange networks with my dnssec resolver on my laptop. At least, the chance of broken strange networks is much higher than the chance of receiving a notice that RSA 1024 can be broken within one year. (I am not advocating 1024 bits for the KSK, I am just answering with a possible benefit of keeping a small KSK) only represents 384 more bytes of RDATA in a DNSKEY resource record on the wire than a 1024 bit key, and 384 bytes doesn't sound (naively, no science) like it's going to break the bank. Is the root concern the computational expense of dealing with larger keys in a validator? Or something else? Whatever the root concern is, what are the boundaries? It seems fruitless to debate whether 1024 bits is sufficient if there's no real cost to just choosing (say) 4096 bits and avoiding the discussion. I believe the main issue is packet size, not CPU power on the (stub) resolver. Paul