On Oct 20, 2025, at 09:42, Erik Nygren <[email protected]> wrote:
> 
> How do we justify when it is safe to use smaller values?

By describing the threat model where it doesn't work. Note that such a threat 
model would necessitate updating the use case description in Section 5.1.

> Is typability really a requirement (vs copy-and-paste or automation via APIs)?

It is useful if you care about accessibility.

> In most cases where random tokens are used it is desirable to have automation.

Sure, but what does that have to do with a MUST-level requirement of 128 bits 
of randomness?
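
For concreteness, here is a minimal sketch (my own illustration, not anything 
prescribed by the draft) of what tokens at different entropy levels look like, 
which is where the typability question comes from:

```python
# Sketch: token sizes at different entropy levels for a DV random token.
# The 128-bit figure is the one under discussion; 64 bits is a hypothetical
# smaller value, not a recommendation.
import secrets

token_128 = secrets.token_hex(16)      # 16 bytes = 128 bits -> 32 hex chars
token_64 = secrets.token_hex(8)        # 8 bytes = 64 bits -> 16 hex chars
token_b64 = secrets.token_urlsafe(16)  # 128 bits in base64url -> 22 chars

print(len(token_128), len(token_64), len(token_b64))
```

A human typing the token by hand faces 32 hex characters at 128 bits versus 
16 at 64 bits, which is part of why the threat model matters when picking a 
floor.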

> Looking at the survey 
> (https://github.com/ietf-wg-dnsop/draft-ietf-dnsop-domain-verification-techniques/blob/main/DomainControlValidation-Survey.md) 
> the vast majority of existing random token DV schemes use at least 128 bits 
> of randomness. 

Sure, but is it required everywhere? Why? It makes sense for CAs, but this 
document is not only about CAs.

Again, this discussion would be a lot easier if you described the threat model 
and showed that the model applies to all users of this specification. I suspect 
the reason you haven't is that there are plenty of users who don't meet the 
implied model.

--Paul Hoffman

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]