Standardizing protocol elements for public distribution of zone files over HTTP 
(e.g. defining content-types, providing guidance on cache header usage) seems 
like a grand idea.  Let's make it easy to mirror the root zone over HTTP, sure.

Suggesting that LocalRoot resolvers use HTTP, on the other hand, seems like a 
dangerous shortcut.  HTTP is enormously complicated, and normally relies on DNS 
in many different ways.  Incorporating it as a formal dependency of DNS, even 
optionally, adds a lot of complexity, probably including the need for a 
non-HTTP fallback path to break the cyclic dependency.

LocalRoot resolvers are welcome to fetch the zone using HTTP, BitTorrent, or 
however else they like.  But when setting requirements for LocalRoot resolvers, 
or standardizing publication points, we should focus on making DNS a 
self-contained system that bootstraps from TCP/IP in the simplest achievable 
way.

If there are real concerns about the efficiency of zone distribution in DNS, I 
would prefer to invest in correcting them within the DNS protocol.  (IXFR seems 
well-suited to LocalRoot, but we could pretty easily layer on ZSTD or something 
if needed.)

I do see the appeal of taking this opportunity to break out of the 13-letter 
paradigm.  However, I think caution is also warranted there.  The proposed 
"root zone publication points" system effectively introduces a hard dependency 
on HTTP, to accomplish the equivalent of what DNS Priming does in-band.  I 
would prefer some form of in-band priming, perhaps encoding the list of "AXFR 
root servers" into the root zone.

--Ben
________________________________
From: George Michaelson <[email protected]>
Sent: Saturday, January 24, 2026 3:53 AM
To: dnsop WG <[email protected]>
Subject: [DNSOP] Re: DNSOP4 documents for consideration about the future of 
LocalRoot behavior.

One of the principal advantages of http to a file is that the world has built 
out CDN to be efficient at back and front end delivery of this content. I can 
name many many entities who could do this with zero code simply by running cron 
job on a frequency chosen to fit timely update, and then content placement in 
their normal distribution method.
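To make the "zero code, just a cron job" point concrete, here is a minimal 
/bin/sh sketch of what such a mirror script might look like. The transfer 
server name is the one John mentioned later in this thread; the web-root path, 
script name, and cron schedule are purely illustrative, not a recommendation:

```shell
#!/bin/sh
# Hypothetical root-zone mirror script; paths and schedule are illustrative.
# A crontab entry might look like:
#   0 */6 * * * /usr/local/bin/mirror-root.sh
set -eu

# serial_of ZONEFILE: print the SOA serial (7th field of the SOA record).
serial_of() {
    awk '$4 == "SOA" { print $7; exit }' "$1"
}

# fetch_zone OUTFILE: pull the root zone via AXFR from a public transfer
# server (named elsewhere in this thread); requires dig and network access.
fetch_zone() {
    dig +onesoa AXFR . @xfr.cjr.dns.icann.org > "$1"
}

# publish NEWFILE LIVEFILE: replace the published copy only when the SOA
# serial has advanced, so a failed or stale transfer is never published.
publish() {
    new=$1
    live=$2
    if [ ! -f "$live" ] || [ "$(serial_of "$new")" -gt "$(serial_of "$live")" ]; then
        mv "$new" "$live"
    else
        rm -f "$new"
    fi
}
```

The CDN then serves the resulting file like any other static object, which is 
the "normal distribution method" step above.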

This is very attractive to me. It means I can say to people who want to help 
distribute the root that it's next to zero public-facing code change to be able 
to do it.

Please do not misunderstand this as disrespecting in-band, in-protocol AXFR. My 
point is that this increases the surface of participation in broadcasting the 
root, as a public good, in ways that demanding competency and literacy in DNS 
does not.

G

On Sat, 24 Jan 2026, 1:56 pm Wes Hardaker, <[email protected]> wrote:
"John Levine" <[email protected]> writes:

> ICANN has two public AXFR servers at xfr.cjr.dns.icann.org and
> xfr.lax.dns.icann.org.  How about asking them what their experience has
>  How about asking them what their experience has
> been, how's the load, how hard is it to manage, how have they dealt
> with the sorts of attacks that people make on public servers.

If they have that information, of course it would be helpful.

I can tell you from the perspective of b.root-servers.net what our load
has been like: it's been growing since 2022 or so (and we haven't really
noticed any issues):

https://ant.isi.edu/~hardaker/tmp/xfr-counts-by-date.png

https://ant.isi.edu/~hardaker/tmp/xfr-counts-uniq-srcs.png

https://ant.isi.edu/~hardaker/tmp/xfr-counts-by-ASN.png

(each horizontal data point is one sample day, taken every 3 months since late 2016)

I'll mention again that the current documents state we should have
multiple protocol transfer options available for implementations and
operators to choose from.  This is sort of already the case in existing
implementations, and we should support those.  IMHO, AXFR should
definitely be one choice.  But a zonefile-over-HTTPS makes sense to me too.

--
Wes Hardaker
Google

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]