On 2/16/24, 11:13, "DNSOP on behalf of Petr Špaček" <dnsop-boun...@ietf.org on 
behalf of pspa...@isc.org> wrote:
> should resolvers suffer from more complex code & work, or should signers 
> suffer if they do something very unusual?

Coming at it from this perspective, finding a solution may be difficult.

At the core, the DNS is extremely flexible, overly so.  DNSSEC arrived as a 
layer to add protection to the DNS without crimping functionality.  The DNSSEC 
approach is to present to the validator the "proof of work" that was done to 
arrive at the response given, proving that the DNS protocol was followed at 
each step.

Because the DNS is so permissive, there's a lot of work to do in validation.  
To make validation easier, what's allowed in the DNS would have to be 
constrained.  If that is unacceptable, validation has to be constrained within 
some performance budget.

I think colliding key tags are a red herring.  In KeyTrap, the role colliding 
key tags play is to sneak as many cryptographic operations as possible into 
the validation of each signature.  I.e., instead of just having lots of RRSIG 
resource records, key tag collisions provide a multiplier on those lots of 
RRSIG resource records.  The underlying issue is resource consumption; key tag 
collision is just one ingredient in scaling.  For any data set, I could have 
many temporally overlapping RRSIG resource records signed by the same key 
(tag).  One RRSIG might run from Feb 1 to Feb 29, another Feb 2 to Feb 28, 
another Feb 1 to Feb 28, and so on.  Each would be accepted today (Feb 16), 
thus eligible to be cryptographically computed.  And if all fail - as designed 
- the validator would be tied up.
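To make the multiplier concrete: a key tag is only a 16-bit checksum of the 
DNSKEY RDATA (RFC 4034, Appendix B), so colliding tags are cheap to 
manufacture, and a validator that tries every (key, signature) pairing does 
#colliding-keys x #RRSIGs verifications.  A sketch of the tag computation, in 
Python for illustration (the toy byte strings below are not real DNSKEY 
RDATA, which starts with flags/protocol/algorithm fields):

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B: a 16-bit checksum over the DNSKEY RDATA."""
    ac = 0
    for i, b in enumerate(rdata):
        # Even-offset octets weigh the high byte, odd-offset the low byte.
        ac += (b << 8) if i % 2 == 0 else b
    ac += (ac >> 16) & 0xFFFF
    return ac & 0xFFFF

# Two different RDATA values, same tag -- collisions are trivial to make:
assert key_tag(b'\x01\x00') == key_tag(b'\x00\x00\x01\x00') == 256

# The multiplier: 10 colliding keys x 10 overlapping RRSIGs means up to
# 100 signature verifications for a single RRset, all of which may fail.
colliding_keys, rrsigs = 10, 10
assert colliding_keys * rrsigs == 100
```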

It would be good to enforce sanity on configurations.  (In poking at something, 
I found a zone with DS resource records covering 13 different key tags.  The 
zone had just one DNSKEY resource record.  Trialing a validation in the zone, 
it worked because the signing was working - apparently they forgot to contact 
the parent to remove DS resource records that were no longer needed.  I don't 
have history on this case, just a spot check.)  But that's not likely to happen.
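That kind of sanity check could at least be automated.  A hypothetical 
first-pass lint (the function name and numbers are mine, not any tool's) that 
flags DS records at the parent with no matching DNSKEY tag at the child:

```python
def stale_ds_tags(ds_tags, dnskey_tags):
    """Return DS key tags with no matching DNSKEY -- likely leftovers to
    remove at the parent.  A tag match alone proves nothing either way,
    since tags can collide; this is only a cheap first-pass screen."""
    return sorted(set(ds_tags) - set(dnskey_tags))

# The case above: DS records covering 13 tags, one actual DNSKEY.
ds = list(range(13))   # 13 placeholder tag values
dnskey = [ds[4]]       # the one key actually published
assert len(stale_ds_tags(ds, dnskey)) == 12
```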

While I can't argue for key tag collisions' continued existence, I can't see a 
practical way to enforce a rule against them.  Discouraging collisions would be 
beneficial to key management crews; I can't see that it would be all that 
important to validators.

I can see encouraging validators to run within a time-resource envelope, using 
"timing out" as a valid excuse to fail a validation.  If a zone admin has a 
complex setup that exceeds validation budgets, their relying parties will let 
them know they can't get through.  I think this "crude" approach is the only 
one that is fair to all failure modes - it would even have to consider NSEC3 in 
the IPv6 reverse map, mindful of the closest encloser proof.  (Is NSEC3 
beneficial in the IPv6 reverse map?  That would take some thinking.)
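Such an envelope might look like a counter plus a deadline, charged before 
every signature verification; everything here (class name, limits, the 
SERVFAIL wording) is a hypothetical sketch, not any resolver's actual 
mechanism:

```python
import time


class BudgetExceeded(Exception):
    """Raised when a query's validation envelope is exhausted."""


class ValidationBudget:
    """Per-query envelope: cap both crypto operations and wall-clock
    time, failing the validation when either limit trips first."""

    def __init__(self, max_crypto_ops=16, max_seconds=0.5):
        self.ops_left = max_crypto_ops
        self.deadline = time.monotonic() + max_seconds

    def charge(self, ops=1):
        """Call before each signature verification (or NSEC3 hash)."""
        self.ops_left -= ops
        if self.ops_left < 0 or time.monotonic() > self.deadline:
            raise BudgetExceeded("validation budget exhausted; SERVFAIL")
```

A validator would create one budget per query and call charge() ahead of 
every expensive step, so colliding keys, piles of RRSIGs, and NSEC3 
iteration work all draw down the same pool.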

IMHO, the reason this discussion is raging is that it's not a simple matter.  
What makes the DNS great has forced the design of DNSSEC to be a lot of work, 
given that DNSSEC was designed to keep all private keys air-gapped from the 
network.  The "I have to show my work so you can trust me" approach is 
computationally hard on the relying party.



_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
