On Mon, 10 Jul 2017, Shumon Huque wrote:

> Perhaps we didn't explain it clearly enough, so let me give you a concrete
> example:
>
> My zone is currently signed with 2048-bit RSASHA256. I want to offer
> signatures with Ed448 (or some other new algorithm) also, so that newer
> validators can take advantage of it. However, I want to be able to continue
> to support the current population of validators that don't support Ed448
> until a sufficient amount of time has passed where they have all been
> upgraded - this could be some number of years.

> I also don't want to double sign the zone and return multiple signatures in
> the responses, because they might be fragmented and cause timeouts and
> retransmissions at the client (validator) end. I could truncate those
> responses and prompt them to re-query over TCP, but then again I have caused
> an unnecessary failed roundtrip and have incurred additional processing
> costs associated with TCP, and maybe I haven't scaled up my authoritative
> infrastructure sufficiently to deal with that.

> I also don't want to deploy only Ed448 and cause my zone to be instantly
> treated as unsigned by the vast majority of resolvers. Obviously, because
> I've nullified the security benefit of DNSSEC, but also because I have
> application security protocols, like DANE, that critically depend on DNSSEC
> authentication, for which this would pose a grave security risk.

> So the goal is not to have them "permanently" signed with multiple
> algorithms, but for a defined transition period, which may not be very
> short. At that point, the older algorithm would be withdrawn -- so algorithm
> rollover, but over an extended period.

Okay, that explains it better, but it does also confirm you basically want
to be permanently in this state, because every few years you will have
new fancy algorithms. As a community we should really roll out updated
algorithms faster and deprecate obsoleted algorithms faster.

> Of course there are initial costs. The goal is longer term - the benefits
> will increase with more adoption over time. There will be a lot of large
> responses initially, which will decrease over time.

But then why not depend on adoption of (and deprecation of) old signing
algorithms?

> > One would hope zones are migrated from "strong" to "even stronger"
> > algorithms, and not from "weak" to "strong enough", so I don't think
> > algorithm downgrade is ever an issue.

> Really? RSA1024 is still widely deployed, and is frequently why DNSSEC is
> the butt of jokes in the larger security community.

You are mixing up key size and algorithm rollover :P

RSA is not broken, and is safe to use, preferably at a 2048-bit key size.
No double signing or algorithm signaling is needed to go from RSA1024 to RSA2048 :)
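The distinction is visible right in the DNSKEY record: the algorithm field (per the IANA DNS Security Algorithm Numbers registry) names only the algorithm, while key size is a property of the key material. A minimal sketch (the helper function is mine, for illustration):

```python
# Sketch: why RSA-1024 -> RSA-2048 is an ordinary key rollover, not an
# algorithm rollover. DNSSEC algorithm numbers identify the algorithm
# only; key size lives in the key material itself.

DNSSEC_ALG = {
    5: "RSASHA1",
    8: "RSASHA256",
    10: "RSASHA512",
    13: "ECDSAP256SHA256",
    15: "ED25519",
    16: "ED448",
}

def needs_algorithm_rollover(old_alg: int, new_alg: int) -> bool:
    """True only when the DNSKEY algorithm number actually changes."""
    return old_alg != new_alg

# RSA-1024 and RSA-2048 under RSASHA256 both carry algorithm number 8:
print(needs_algorithm_rollover(8, 8))   # False -> plain key rollover
print(needs_algorithm_rollover(8, 16))  # True  -> RSASHA256 to ED448
```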

also: https://www.internetsociety.org/doc/state-dnssec-deployment-2016

        "The overwhelming majority (~99%) of TLDs use 2048 bit keys"

I can't find a statistics URL for all domains (secspider seems to have
changed and I can no longer find the RSA key sizes per domain) but I'm
pretty sure 1024 is on the decline.
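Anyone who wants to measure this themselves can read the modulus size straight out of a DNSKEY's public key field, which RFC 3110 encodes as exponent length, exponent, then modulus. A small sketch (the function name is mine; the sample key is synthetic, built just for illustration):

```python
# Sketch: derive the RSA modulus size from a DNSKEY public key field,
# using the RFC 3110 wire encoding (exponent length, exponent, modulus).

def rsa_dnskey_modulus_bits(pubkey: bytes) -> int:
    """Return the RSA modulus size in bits for an RSA DNSKEY public key."""
    e_len = pubkey[0]
    offset = 1
    if e_len == 0:  # long form: the next two bytes hold the exponent length
        e_len = int.from_bytes(pubkey[1:3], "big")
        offset = 3
    modulus = pubkey[offset + e_len:]
    return int.from_bytes(modulus, "big").bit_length()

# Synthetic 2048-bit key: exponent 65537, 256-byte modulus with top bit set.
exponent = (65537).to_bytes(3, "big")
modulus = b"\x80" + b"\x00" * 255
key = bytes([len(exponent)]) + exponent + modulus
print(rsa_dnskey_modulus_bits(key))  # 2048
```

Run over the base64-decoded public key of each RSA DNSKEY in a zone file, this gives exactly the per-domain key-size statistics being discussed.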

Anyway, my point is that we should not have 20 sexy algorithms, but only
a very limited set of a few algorithms that we all agree are strong
enough. We shouldn't enter into negotiations of preferred crypto
algorithms between authoritative and recursive servers.

And once OpenDNSSEC algorithm rollover is available from stable
distributions, I think we will see most zones go from RSASHA1 to
RSASHA256 or ECDSA.

Paul

_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop
