Hello, 

Following up on the discussion on Meetecho.

My understanding is that the issue is not really about TXT records being large. 
The operational problem seems to arise when large validation records are served 
via wildcards: every distinct query name synthesized from the wildcard is a 
separate cache entry, so normal caching is effectively bypassed and more 
traffic may reach the authoritative servers. So the concern is less about TXT 
specifically, or about large responses in general, than about any record type 
that can produce a large response under a wildcard owner name.
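To make the caching point concrete, here is a minimal sketch (example.com and the label length are placeholders, not the zone from my slides): query names randomized under a single wildcard all match the same record, but each one is a distinct owner name and therefore a distinct cache key at the resolver.

```python
import random
import string

def random_qname(label_len=12):
    """A random query name that matches *.example.com (placeholder zone)."""
    label = ''.join(random.choices(string.ascii_lowercase, k=label_len))
    return f"{label}.example.com."

# 1000 queries, 1000 distinct owner names: each is a separate cache entry,
# so every one of them can be a cache miss forwarded to the authoritative
# servers, even though a single wildcard record answers them all.
names = {random_qname() for _ in range(1000)}
print(len(names))
```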

BIND and Unbound have mechanisms to mitigate large responses, but these may not 
be available to cloud DNS providers that run other software, or that are simply 
not aware of the operational impact of large records under wildcard names.
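For BIND specifically, a minimal named.conf sketch of the knob in question (assuming BIND 9.18.28 or later; 100 is the documented default, shown here only for illustration):

```
options {
    // Cap the number of resource records in any single RRset.
    // Introduced in BIND 9.18.28 in response to CVE-2024-1737.
    max-records-per-type 100;
};
```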

For reference, draft-fujiwara-dnsop-dns-upper-limit-values-05 mentions:
"CVE-2024-1737: "BIND's database will be slow if a very large number
of RRs exist at the same name". BIND 9.18.28 introduced limits to
address this. BIND provides the 'max-records-per-type' parameter,
which limits the number of resource records in an RRset. The default
value is 100."

In the example shown in my slides today, there were 34 answers in the ANSWER 
section for *.dnslab6.xyz TXT. This is still well below the default 
'max-records-per-type' limit of 100, which is what makes TXT somewhat special 
in this case (large response + wildcard + fewer than 100 RRs). I don't expect 
most cloud providers to change such defaults unless they clearly understand the 
operational implications.
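A back-of-the-envelope sketch of why 34 RRs can still be a problem (the per-record sizes below are my assumptions for illustration, assuming each TXT RR carries close to the 255-byte single-string maximum, with name compression; they are not measurements from the slides):

```python
HEADER = 12        # fixed DNS header
QUESTION = 30      # rough question-section size for a name like *.dnslab6.xyz
RR_OVERHEAD = 12   # compressed name pointer + type/class/TTL/RDLENGTH
TXT_DATA = 256     # one length byte + up to 255 bytes of text (assumed)

def estimated_response_size(num_rrs: int) -> int:
    """Rough wire size of a response whose ANSWER section holds num_rrs TXT RRs."""
    return HEADER + QUESTION + num_rrs * (RR_OVERHEAD + TXT_DATA)

# 34 RRs stays under the default max-records-per-type of 100, yet the
# response is far beyond the classic 512-byte UDP limit and the common
# 1232-byte EDNS buffer size, forcing truncation and TCP retries.
print(estimated_response_size(34))
```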

Bashan Zuo

https://datatracker.ietf.org/doc/draft-avoid-large-wildcard-records/

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]