Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Paul Wouters
On Feb 29, 2024, at 20:33, Arnold DECHAMPS  wrote:
> 
> 
> Is it still a concern enough that they justify continuing using those tags 
> instead of the full key?

The full key is not there. There is only a key tag. Are you proposing a wire
format change to DNSSEC that puts the full key there? That would be hard and
slow to deploy, and it would use up valuable bytes of the roughly 1400-byte
practical packet limit.
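
For reference, the key tag is not a cryptographic hash at all. RFC 4034
Appendix B defines it as a 16-bit checksum over the DNSKEY RDATA; a minimal
Python sketch, ignoring the special case for the obsolete algorithm 1:

    def key_tag(rdata: bytes) -> int:
        """Key tag per RFC 4034 Appendix B: a 16-bit checksum over the
        DNSKEY RDATA (flags, protocol, algorithm, public key)."""
        acc = 0
        for i, octet in enumerate(rdata):
            acc += octet << 8 if i % 2 == 0 else octet  # even offsets are high octets
        acc += (acc >> 16) & 0xFFFF  # fold the carry back into 16 bits
        return acc & 0xFFFF

With only 65536 possible values, occasional collisions between distinct keys
are a statistical certainty at scale.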

> Wouldn't that limit the risk of collision?

At a price, yes.

Paul


Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
>The full key is not there. There is only a key tag. Are you proposing a wire
>format change to DNSSEC that puts the full key there? That would be hard and
>slow to deploy, and it would use up valuable bytes of the roughly 1400-byte
>practical packet limit.
>
>> Wouldn't that limit the risk of collision?
>
>At a price, yes.

Technically only a SHA-2 hash of the key would need to be there. If somebody
can create a SHA-2 hash collision then the world has bigger problems than
a DoS on DNSSEC validation.
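
As a sketch of what that could look like, assuming the key tag field were
simply replaced by a truncated SHA-256 digest of the DNSKEY RDATA (the
function name and truncation length here are illustrative, not taken from
any draft):

    import hashlib

    def sha2_key_id(rdata: bytes, length: int = 16) -> bytes:
        """Hypothetical key identifier: a truncated SHA-256 digest of the
        DNSKEY RDATA. At 16 octets, accidental collisions are negligible
        and even a deliberate birthday collision costs ~2**64 hashes."""
        return hashlib.sha256(rdata).digest()[:length]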

However, changing RRSIG is probably not practical unless there are other
reasons to change it.



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Joe Abley
On 1 Mar 2024, at 16:44, Philip Homburg  wrote:

>>> Wouldn't that limit the risk of collision?
>> 
>> At a price, yes.
> 
> Technically only a SHA-2 hash of the key would need to be there. If somebody
> can create a SHA-2 hash collision then the world has bigger problems than
> a DoS on DNSSEC validation.

So really what you're suggesting is that we change the keytag algorithm to 
something that has a lower chance of collisions.

It's a shame that the design of keytags didn't anticipate a need for algorithm 
agility. 


Joe



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
>First, forbidding key tag collisions is not controversial, the
>trouble is that forbidding them is not feasible and, more
>importantly, does not prevent them from happening.  Validators
>still need to guard themselves.  Forbidding is what I'm objecting
>to - discouraging them, limiting them is fine, but forbidding
>is beyond feasibility.
> 
> 
>Second, directing validators to fail at the first sign of failure
>increases the brittleness of the protocol.  

It has been shown many times that, certainly in a security context, "be
liberal in what you accept" leads to all kinds of problems later on. It
results in fragile systems, because they have to create security out of
chaos.

If there is more than one way to do something, then it becomes harder to
reason about the system. At the same time, cryptography is brittle by nature:
either you get the details right, or you have a failure. There is very little
room for graceful degradation.

As far as I can tell, there are three places where a duplicate key tag
can show up:
1) In the DS RR set
2) In the DNSKEY RR set
3) In a set of RRSIGs associated with an RR set.

The first two cannot happen by accident, nor can they be the result of a
temporary inconsistency: both the DS and the DNSKEY RR sets have to be signed.
If we were to change the rules on duplicate key tags, then all signers could
be fixed to never sign a DS or DNSKEY RR set that has a duplicate key tag.
That would prevent duplicates from showing up on the wire.

This would allow validators to reject any DS or DNSKEY RR set that has a
duplicate key tag.

Note that we expect both signers and validators to do a lot of complicated
things that have to be exactly right, otherwise DNSSEC validation fails.
Requiring key tags to be unique is not a particularly hard requirement on a
signer.
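
A minimal sketch of that signer-side check, reusing the RFC 4034 key_tag
checksum sketched earlier in the thread:

    def has_duplicate_tags(dnskey_rdatas: list[bytes]) -> bool:
        """Signer-side guard: detect whether any two keys in a candidate
        DNSKEY RR set share a key tag, so the signer can refuse to sign."""
        tags = [key_tag(rdata) for rdata in dnskey_rdatas]
        return len(tags) != len(set(tags))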

Duplicate key tags in RRSIGs are a harder problem. RRSIGs do not come in
sets. However, if every DNSKEY in an RR set has a unique key tag, then there
is no reason for an authoritative server to have RRSIGs with duplicate key
tags in an answer, which in turn allows recursors to reject such answers when
received, which in turn allows validators to reject such answers when
validating.

Now all of this is at the DNSSEC protocol level. At this level guaranteeing
unique key tags is doable and quite easy.

The hard part starts when (in a multi signer setup) a signer wants to insert
a DNSKEY into an RR set where that would lead to a duplicate key tag. Then
the signer has to back off and come back with a different key.
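
A sketch of that back-off, where generate_key is an assumed callable that
produces fresh DNSKEY RDATA; on average it succeeds on the first try, since
a collision against n existing keys has probability about n/65536:

    def generate_noncolliding_key(existing_tags: set[int], generate_key,
                                  max_tries: int = 64) -> bytes:
        """Multi-signer back-off: keep generating candidate keys until one
        has a key tag not already used in the effective DNSKEY set."""
        for _ in range(max_tries):
            rdata = generate_key()
            if key_tag(rdata) not in existing_tags:
                return rdata
        raise RuntimeError("no collision-free key tag after %d tries" % max_tries)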

The trade-off is that key tag collisions are only a small part of all
possible DoS attacks on a validator. If we require all signers to avoid
duplicate key tags only to get rid of one possible DoS while many other
attacks continue to exist, then it may not be worth the effort.

But to the simple question of whether requiring unique key tags in DNSSEC
would be doable without significant negative effects, I think the answer
is yes.




Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> So really what you're suggesting is that we change the keytag
> algorithm to something that has a lower chance of collisions.
> 
> It's a shame that the design of keytags didn't anticipate a need
> for algorithm agility.

Even if key tags had been MD5 hashes, that would have been enough for
statistical uniqueness.
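
The birthday bound makes the contrast concrete; the numbers below assume
uniformly distributed tags:

    import math

    def collision_probability(n_keys: int, bits: int) -> float:
        """Birthday bound: chance that n uniformly distributed tags of
        the given width contain at least one collision."""
        return 1.0 - math.exp(-n_keys * (n_keys - 1) / (2.0 * 2.0 ** bits))

    # 16-bit key tags: two keys collide with p ~ 1/65536; ten keys, p ~ 0.07%.
    # A 128-bit digest: even a billion keys give p ~ 1.5e-21.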

But that's water under the bridge, unless we have plans to redesign DS and
RRSIG.



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Edward Lewis
On 3/1/24, 11:13, "pch-b538d2...@u-1.phicoh.com on behalf of Philip Homburg" 
 wrote:

I removed a lot of logic, as it seems dead on.  But...

>This would allow validators to reject any DS or DNSKEY RR set that has a
>duplicate key tag.

"This" refers to barring keys from having duplicate key tags.  My knee-jerk 
response is that validators are already permitted to reject anything they want 
to reject.  (We used to talk about the catch-all "local policy" statements in 
the early specs.)  You don't have to bar duplicate key tags to allow validators 
to dump them, validators already have that "right."

>Duplicate key tags in RRSIGs is a harder problem

I'm not clear on what you mean.

I could have RRSIGs generated by the same key (binary-ily speaking, not key
tag-speaking) that have different, overlapping temporal validities.  If you
want to draw a malicious use case, I could take an RRSIG resource record signed
in January with an expiration in December for an address record that is changed
in March, and replay that along with a new signature record, signed in April
and valid until December.  One would validate and the other not.  But this
isn't a key tag issue, it's a bad signing process issue.

Not a completely fictitious one: there was a TLD whose signatures always
expired on New Year's Eve.  I'm not sure whether the TLD in question still
does this, but for a number of years (at least 3), all signatures they
generated expired on the next New Year's Eve.

>But to the simple question of whether requiring unique key tags in DNSSEC
>would be doable without significant negative effects, I think the answer
>is yes.

Heh, heh, if you make the problem simpler, then solving it is possible.

Seriously, while I do believe in the need for a coherent DNSKEY resource record
set, there are some multi-signer proposals that do not.  If the key set has to
be coherent, then someone can guard against two keys being published with the
same key tag.  The recovery may not be easy, as you'd have to determine which
key needs to be kicked, who does it, and where (physically in HSMs or
process-wise).  I have some doubt that key tag collisions can be entirely
avoided.

Even if you could - you still have the probability that someone intentionally
concocts a key tag collision.  Not everyone plays by the rules, especially when
they don't want to.

So - to me - it keeps coming back to - a validator has to make reasonable 
choices when it comes to using time/space/cpu to evaluate an answer.  No matter 
whether or not the protocol "bars" duplicate key tags and whether or not 
signers are instructed to avoid such duplication.



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> I removed a lot of logic, as it seems dead on.  But...
> 
> >This would allow validators to reject any DS or DNSKEY RR set that has a
> >duplicate key tag.
> 
> "This" refers to barring keys from having duplicate key tags.  My
> knee-jerk response is that validators are already permitted to
> reject anything they want to reject.  (We used to talk about the
> catch-all "local policy" statements in the early specs.)  You don't
> have to bar duplicate key tags to allow validators to dump them,
> validators already have that "right."

A basic premise of protocol design is that parties that want the protocol to
work follow the protocol. Of course there will be random failures, and in
the case of security protocols, also attackers.

If we have a protocol where validators are allowed to discard RR sets with
duplicate key tags but we place no restriction on signers, then we have a 
protocol with a high chance of failure even if all parties follow the 
protocol.

So we essentially have two options for a successful protocol:
1) the current one, where validators tolerate key tag collisions
2) a potential one, where signers ensure that key tag collisions do not
   happen.

If validators violate the protocol then all kinds of things can happen. They
just place themselves outside the protocol and cannot rely on the properties
of the protocol.

At the end of the day, following the protocol is voluntary. But if we want
to be able to reason about the protocol, then we have to assume that all
interested parties try to follow the protocol.

> >Duplicate key tags in RRSIGs is a harder problem
> 
> I'm not clear on what you mean.
> 
> I could have RRSIGs generated by the same key (binary-ily speaking,
> not key tag-speaking) that have different, overlapping temporal
> validities.  If you want to draw a malicious use case, I could take
> an RRSIG resource record signed in January with an expiration in
> December for an address record that is changed in March, and replay
> that along with a new signature record, signed in April and valid
> until December.  One would validate and the other not.  But this isn't
> a key tag issue, it's a bad signing process issue.

Indeed. But the question is, if a validator finds both RRSIGs associated with
an RR set and we have guarantees about the uniqueness of key tags for public
keys, can the validator then discard those signatures?

> >But to the simple question of whether requiring unique key tags in DNSSEC
> >would be doable without significant negative effects, I think the answer
> >is yes.
> 
> Heh, heh, if you make the problem simpler, then solving it is
> possible.
> 
> Seriously, while I do believe in the need for a coherent DNSKEY
> resource record set, there are some multi-signer proposals that do
> not.  If the key set has to be coherent, then someone can guard
> against two keys being published with the same key tag.  The recovery
> may not be easy, as you'd have to determine which key needs to be
> kicked, who does it, and where (physically in HSMs or process-wise).
> I have some doubt that key tag collisions can be entirely avoided.

So now we have moved the problem away from the core DNSSEC protocol to the
realm of multi-signer protocols.

The first step is to conclude that for the core DNSSEC protocol, requiring
unique key tags is doable, even without a lot of effort (other than the usual
effort of coordinating changes to the protocol).

Then the question becomes how hard it will be to adapt multi-signer protocols
to ensure that the effective set of DNSKEYs has unique key tags.

> Even if you could - you still have the probability that someone
> intentionally concocts a key tag collision.  Not everyone plays by
> the rules, especially when they don't want to.

That is not a problem. If we modify the core DNSSEC protocol and 
direct validators to just discard anything that has duplicate key tags,
then the attack would go nowhere.

> So - to me - it keeps coming back to - a validator has to make
> reasonable choices when it comes to using time/space/cpu to evaluate
> an answer.  No matter whether or not the protocol "bars" duplicate
> key tags and whether or not signers are instructed to avoid such
> duplication.

But the protocol also has to take reasonable measures to limit the amount
of time a validator has to spend on normal (including random exceptional)
cases.

For example, without key tags, validators would have to try all keys in
a typical DNSKEY RR set or face high random failures.

Going a step further, we have to decide where to place complexity. Unique key
tags simplify validator code in many ways, but they increase the complexity
of signers, in particular multi-signer setups.

So the question is, does requiring unique key tags significantly reduce the
attack surface for a validator?

Are there other benefits (for example in diagnostic tools) of unique key
tags that outweigh the downside of making multi-signer protocols more
complex?


Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread John R Levine

>> Technically only a SHA-2 hash of the key would need to be there. If somebody
>> can create a SHA-2 hash collision then the world has bigger problems than
>> a DoS on DNSSEC validation.
>
> How hard would it be to add a possibility for another key algorithm?

Beyond the change to the specs, it would require significant software
changes to every piece of software in the world that signs or validates
DNSSEC.  I figure we could have it widely adopted by the 2050s.

We have established that the benefit would be negligible, and caches would
still need to have defensive checks against excessive or duplicate keys
and signatures.  Let's talk about something else.


R's,
John



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
>Remember that the keytags are just a hint to limit the number of keys
>you need to check for each signature. If I have a zone with 300
>signatures per key, it's still going to take a while to check them all
>even with no duplicate tags. It won't be as bad as the quadratic
>keytrap but it'll still be annoying.

If key tags are unique, then a validator can just discard anything that
has multiple signatures with the same key tag.

So to reach the 300 signatures on a single RR set, you would need to have
300 keys in the DNSKEY RR set. In that case, we can assume that the 
validator will just discard the DNSKEYs. So the validation effort would
be zero. Not a very good attack.




Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Edward Lewis
On 3/1/24, 13:45, "pch-b538d2...@u-1.phicoh.com on behalf of Philip Homburg" 
 wrote:

>If we have a protocol where validators are allowed to discard RR sets with
>duplicate key tags but we place no restriction on signers, then we have a 
>protocol with a high chance of failure even if all parties follow the 
>protocol.

From what I gather, from what I've measured, and from what I've heard from
others, key tag collisions generally don't happen often in natural operations.
(They may begin in malicious operations.)

If a validator chooses to discard all signatures for which there are multiple
DNSKEY resource records matching the key tag in the RRSIG resource record,
there'll be SERVFAILs across the population that cares about the data involved.
From past observations, when there's a widespread "I can't get to that", it
bubbles up to the service provider, who then takes steps to fix it.

This kind of feedback loop seems to be the state of the art in the Internet
today.  I'm not sure we need to take on what would be a large effort to do
better, at least given the anecdotal evidence to date.

>At the end of the day, following the protocol is voluntary. But if we want
>to be able to reason about the protocol, then we have to assume that all
>interested parties try to follow the protocol.

To use an anecdote - "When crossing a street with a cross-walk signal you 
should still look for vehicles.  While no vehicle ought to be entering the 
cross-walk against the signal, don't bet your life on it."  Something like this 
was on a police safety poster.

In designing a protocol, you can't assume that the remote end will do anything 
sensible.  You need to focus on what you can control locally.

>Indeed. But the question is, if a validator finds both RRSIGs associated
>with an RR set and we have guarantees about the uniqueness of key tags for
>public keys, can the validator then discard those signatures?

What if both signatures were generated by the same key (private of the pair) 
but the data changed between the inception time of one and the inception time 
of another?  One signature may be over a stale copy of the data, not from a 
different key.

>The first step is to conclude that for the core DNSSEC protocol, requiring
>unique key tags is doable, even without a lot of effort (other than the usual
>effort of coordinating changes to the protocol).

Back in the day, prefacing because it may no longer be true, BIND would
generate keys and place them in a default directory.  Each key would be in a
file whose name included the owner name, the DNSSEC security algorithm number,
and the key tag.  A key tag collision would be detected if the file name about
to be used was already present in the directory.  This strategy only worked,
though, if the user of BIND did not move the keys elsewhere, which is something
the strategy couldn't control.

I'm not sure it's doable, even for "simple" DNSSEC, if you have to account for
the myriad ways signer processes are implemented.  Perhaps I'm being obstinate
about the ease with which collisions can be detected because I still maintain
it just doesn't matter.  Validators still need to protect themselves, and when
something that matters breaks, it'll light up the social media sphere.  (As it
has in the past.)

>But the protocol also has to take reasonable measures to limit the amount
>of time a validator has to spend on normal (including random exceptional)
>cases.
>
>For example, without key tags, validators would have to try all keys in
>a typical DNSKEY RR set or face high random failures.

For the most part, zones don't have many keys.  Usually only one ZSK and one 
KSK unless there is a roll happening.  There are some zones with lots of keys, 
but that doesn't seem the norm.  I don't know if there is a study that finds 
the average number of keys in zones weighted by the use of the data in the 
zone.  (Meaning, TLDs would be weighted more highly than an ill-managed hobby 
zone.)

>So the question is, does requiring unique key tags significantly reduce the
>attack surface for a validator?
>
>Are there other benefits (for example in diagnostic tools) of unique key
>tags that outweigh the downside of making multi-signer protocols more
>complex?

Key tag collisions are not desirable, we know that.

My diagnostic tool has crashed the two times it came across them.  In one case
I could differentiate by assuming the role (KSK vs. ZSK); in the other I shot
off a (possibly futile) message to the operator, the collision cleared quickly,
and I was able to smudge my code a bit.  Sooner or later, I know I won't be
able to distinguish a collision unless I grab more data.

The question isn't about the goodness of collisions.  It's about the best way
to address the resource consumption problem that they can exacerbate.  Ruling
them out of bounds doesn't mean they can't come back on the field and cause
problems.  Treat the problem - resource consumption - that can be done.


Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread John R Levine

>> Remember that the keytags are just a hint to limit the number of keys
>> you need to check for each signature. If I have a zone with 300
>> signatures per key, it's still going to take a while to check them all
>> even with no duplicate tags. It won't be as bad as the quadratic
>> keytrap but it'll still be annoying.
>
> If key tags are unique, then a validator can just discard anything that
> has multiple signatures with the same key tag.

No, that's not how it works.  The signatures might have different time
ranges or otherwise be different but still plausibly valid.  Please review
RFCs 4034 and 4035.


R's,
John



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Mark Andrews
You don't perform a verify if the time window is invalid, the same as you
don't perform a verify if the tag doesn't match.  Mind you, it's completely
pointless to have multiple time ranges: the RRset and its signatures travel as
pairs, and all the key rollover rules depend on that.

-- 
Mark Andrews

> On 2 Mar 2024, at 06:43, John R Levine  wrote:
> 
> 
>> 
>>> Remember that the keytags are just a hint to limit the number of keys
>>> you need to check for each signature. If I have a zone with 300
>>> signatures per key, it's still going to take a while to check them all
>>> even with no duplicate tags. It won't be as bad as the quadratic
>>> keytrap but it'll still be annoying.
>> 
>> If key tags are unique, then a validator can just discard anything that
>> has multiple signatures with the same key tag.
> 
> No, that's not how it works.  The signatures might have different time ranges 
> or otherwise be different but still plausibly valid.  Please review RFCs 4034 
> and 4035.
> 
> R's,
> John
> 



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread John R Levine

> You don't perform a verify if the time window is invalid, the same as you
> don't perform a verify if the tag doesn't match.  Mind you, it's completely
> pointless to have multiple time ranges: the RRset and its signatures travel
> as pairs, and all the key rollover rules depend on that.

I agree it doesn't make much sense to have two signatures with overlapping
time windows but the spec allows it.

For about the hundredth time, the way you deal with any of this is
resource limits, not trying to invent new rules about stuff we might have
forbidden if we'd thought of it 20 years ago.


R's,
John



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> For about the hundredth time, the way you deal with any of this is
> resource limits, not trying to invent new rules about stuff we
> might have forbidden if we'd thought of it 20 years ago.

There are a number of problems with resource limits:
1) We haven't written them down (in an RFC). So we got to the point that many
   validators got it wrong. A bit of hand waving that implementations should
   do resource limiting doesn't magically make it happen.
2) Operators of validators don't want customer-facing errors due to resource
   limit constraints. So they set the limits generously enough that they work
   for real traffic. Nobody knows what happens during a new attack.
3) Some content providers are quite creative with the way they use DNS.
   So the limits need to be high enough to accommodate them.
4) Because there are no standards for those limits, we cannot really reason
   about them.
5) It is tricky for researchers, because they first have to figure out how
   popular software works in order to exploit it. But if it is a tunable
   resource limit, it doesn't result in a lot of credit.

So from a validator point of view, it is better to move some of those resource
limits into the protocol. Even if the DNSSEC spec only said that you have to
validate with at most two public keys and two signatures per RR set, that
would be a massive improvement over the vagueness in the current specs.
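
A sketch of what such a budgeted validator loop could look like; the
MAX_SIG_CHECKS value, the rrsig objects' key_tag field, and crypto_verify
are all assumptions for illustration, not anything taken from the current
RFCs:

    MAX_SIG_CHECKS = 4  # assumed budget; no RFC specifies this number

    def validate_rrset(rrsigs, dnskeys, crypto_verify) -> bool:
        """Budgeted validation: cap the number of expensive signature
        checks instead of trying every plausible (RRSIG, DNSKEY) pair."""
        checks = 0
        for sig in rrsigs:
            # The key tag narrows the candidate keys; tag collisions widen
            # this list, which is what KeyTrap-style attacks exploit.
            for key in (k for k in dnskeys if key_tag(k) == sig.key_tag):
                if checks >= MAX_SIG_CHECKS:
                    return False  # budget exhausted: treat the answer as bogus
                checks += 1
                if crypto_verify(sig, key):
                    return True
        return False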



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> If a validator chooses to discard all signatures for which there
> are multiple DNSKEY resource records matching the key tag in the
> RRSIG resource record, there'll be SERVFAILs across the population
> that cares about the data involved.  From past observations, when
> there's a widespread "I can't get to that", it bubbles up to the
> service provider, who then takes steps to fix it.

I don't think that would fly.

If the major vendors of validating software, together with the big public
resolvers, were to announce a flag day after which key tags would have to be
unique or SERVFAIL would be the result, that would put a sizable group of
people in a very bad position.

If it were that easy, we would not be having this discussion; we could just
publish an update to DNSSEC that requires key tags to be unique.



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
In your letter dated Fri, 1 Mar 2024 15:42:49 -0500 you wrote:
>Offlist because I don't want to feed the flames, but:
>
>> 2) Operators of validators don't want customer-facing errors due to resource
>>    limit constraints. So they set the limits generously enough that they
>>    work for real traffic. Nobody knows what happens during a new attack.
>> 3) Some content providers are quite creative with the way they use DNS.
>>    So the limits need to be high enough to accommodate them.
>
>Why do you give operators and content providers a freebie but not signers?

That's not my intent. I think it is more that signers are less visible.
A validator does not see how a zone is signed; it only sees the contents
of the zone, not where the keys are located.

So any resource constraints probably don't reflect what signers do, other
than to accommodate whatever shows up as the output.
