On 2/4/26 10:26, Libor Peltan wrote:

Hi,

As a DNS nerd, I also favor AXFR/IXFR for local root updates. However, the public AXFR service needs to be provided by *different* nameservers than those answering normal root queries, because AXFR is easy to DoS and can struggle even under a heavy load of legitimate traffic. So we need to take care that it doesn't disrupt normal root DNS answering (even over TCP).

And yes, the root zone signing process should be modernized to support incremental signing in any case. But that's not critical.

As an alternative, I have also thought that, since the root zone is signed with NSECs, resolvers could actually fill their caches by simply iterating the zone with normal queries :) But then I realized that simply enabling aggressive negative caching is more efficient.
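The zone-walking idea can be sketched offline: each NSEC record carries the next owner name in canonical order, so a walker follows that chain until it wraps back to the apex. A minimal plain-Python sketch, with a dict standing in for the real lookups (in practice you would query each name and read the NSEC record from the authenticated denial of existence):

```python
# Sketch of NSEC zone walking: follow each NSEC record's "next owner"
# field until the chain wraps around to the apex.  The `nsec_lookup`
# dict stands in for real DNS queries (e.g. asking for a nonexistent
# type at `name` and reading the NSEC in the NXDOMAIN/NODATA response).

def walk_nsec_chain(apex, nsec_lookup):
    """Return all owner names in the zone by following the NSEC chain."""
    names = [apex]
    current = apex
    while True:
        nxt = nsec_lookup[current]   # "next owner" from the NSEC record
        if nxt == apex:              # chain wrapped around: walk complete
            return names
        names.append(nxt)
        current = nxt

# Toy chain for a zone with apex "." and three delegations.
chain = {".": "com.", "com.": "net.", "net.": "org.", "org.": "."}
print(walk_nsec_chain(".", chain))   # ['.', 'com.', 'net.', 'org.']
```

This is why aggressive negative caching (RFC 8198) ends up more efficient: the resolver gets the same NSEC ranges as a side effect of normal traffic, without a deliberate walk.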

Anyway, what are the main benefits of a local root over aggressive negative caching?

One benefit of transfer is that ZONEMD can be verified and validated, which provides DNSSEC-grade protection for all the referrals (i.e. NS RRsets and glue).

It would also provide DNSSEC protection for the root server addresses, but that's immaterial, since in theory they would no longer be used; besides, Knot Resolver specifically already has DNSSEC protection for them, by revalidating them during the priming process ;-)


I have a clarifying question about Knot Resolver: I read in the documentation that Knot Resolver's current implementation of RFC 8806 works through "cache prefilling" from a root zone file downloaded over HTTPS (https://knot-resolver.readthedocs.io/en/stable/modules-rfc7706.html and https://knot-resolver.readthedocs.io/en/stable/modules-prefill.html#mod-prefill ). Does that mean the data could get evicted from the cache if more space is needed?
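For reference, the prefill setup in the linked module documentation is a short Lua configuration along these lines (the URL and interval are the documented example values, not a recommendation):

```lua
-- Knot Resolver (kresd) cache prefilling, roughly as shown in the
-- linked prefill module documentation.
modules.load('prefill')
prefill.config({
      ['.'] = {
              url = 'https://www.internic.net/domain/root.zone',
              interval = 86400,  -- refresh once per day (seconds)
      }
})
```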


-- Willem

I concur with the caution that transferring zone files over HTTP(S) looks weird, and care must be taken not to fall into a circular dependency (an HTTPS TLS certificate requiring working DNS?). But one argument for it is that it is already implemented and running.

Libor


On 26. 01. 26 17:57, Ben Schwartz wrote:
Standardizing protocol elements for public distribution of zone files over HTTP (e.g. defining content-types, providing guidance on cache header usage) seems like a grand idea.  Let's make it easy to mirror the root zone over HTTP, sure.

Suggesting that LocalRoot resolvers use HTTP, on the other hand, seems like a dangerous shortcut.  HTTP is enormously complicated, and normally relies on DNS in many different ways.  Incorporating it as a formal dependency of DNS, even optionally, adds a lot of complexity, probably including the need for a non-HTTP fallback path to break the cyclic dependency.

LocalRoot resolvers are welcome to fetch the zone using HTTP, BitTorrent, or however else they like.  But when setting requirements for LocalRoot resolvers, or standardizing publication points, we should focus on making DNS a self-contained system that bootstraps from TCP/IP in the simplest achievable way.

If there are real concerns about the efficiency of zone distribution in DNS, I would prefer to invest in correcting them within the DNS protocol.  (IXFR seems well-suited to LocalRoot, but we could pretty easily layer on ZSTD or something if needed.)

I do see the appeal of taking this opportunity to break out of the 13-letters paradigm.  However, I think caution is also warranted there.  The proposed "root zone publication points" system effectively introduces a hard dependency on HTTP, to accomplish the equivalent of what DNS Priming does in-band.  I would prefer some form of in-band priming, perhaps encoding the list of "AXFR root servers" into the root zone.
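Purely as a hypothetical sketch of that last idea (the owner name, record type usage, and targets below are invented for illustration, not proposed or existing records), in-band priming of transfer endpoints might look like publishing them in the root zone itself:

```
; HYPOTHETICAL zone-file sketch only -- the name, SVCB usage, and
; targets are invented for illustration, not a real or proposed RRset.
_xfr.root-servers.net.   86400  IN  SVCB  1 xfr1.example.net. alpn=dot
_xfr.root-servers.net.   86400  IN  SVCB  2 xfr2.example.net. alpn=dot
```

The RRset would be DNSSEC-signed like the rest of the zone, so a resolver that has primed once could discover and authenticate transfer endpoints without any out-of-band channel.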

--Ben
------------------------------------------------------------------------
*From:* George Michaelson <[email protected]>
*Sent:* Saturday, January 24, 2026 3:53 AM
*To:* dnsop WG <[email protected]>
*Subject:* [DNSOP] Re: DNSOP4 documents for consideration about the future of LocalRoot behavior.

One of the principal advantages of HTTP to a file is that the world has built out CDNs to be efficient at back-end and front-end delivery of this content. I can name many, many entities who could do this with zero code, simply by running a cron job on a frequency chosen to fit timely updates, and then placing the content in their normal distribution method.
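The cron-plus-CDN pattern described here can indeed be close to zero code; as a sketch, a single crontab entry would keep a local copy fresh for the normal CDN publication pipeline (example.net and the filesystem paths below are placeholders, not real endpoints):

```
# HYPOTHETICAL crontab entry: refresh a local root zone copy every 6 hours.
# example.net and the paths are placeholders; fetch-then-rename keeps the
# published file atomic for the CDN pickup.
17 */6 * * * curl -fsS -o /srv/www/root.zone.new https://example.net/root.zone && mv /srv/www/root.zone.new /srv/www/root.zone
```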

This is very attractive to me. It means I can say to people who want to help distribute the root that it's a next-to-zero public-facing code change to be able to do it.

Please do not misunderstand this as disrespecting in-band, in-protocol AXFR. My point is that this broadens the surface of participation in broadcasting the root, as a public good, in ways that demanding competency and literacy in DNS does not.

G

On Sat, 24 Jan 2026, 1:56 pm Wes Hardaker, <[email protected]> wrote:

    "John Levine" <[email protected]> writes:

    > ICANN has two public AXFR servers at xfr.cjr.dns.icann.org
    > and xfr.lax.dns.icann.org. How about asking them what their experience has
    > been, how's the load, how hard is it to manage, how have they dealt
    > with the sorts of attacks that people make on public servers.

    If they have that information, of course it would be helpful.

    I can tell you from the perspective of b.root-servers.net what our
    load has been like: it's been growing since 2022 or so (and we
    haven't really noticed any issues):

    https://ant.isi.edu/~hardaker/tmp/xfr-counts-by-date.png

    https://ant.isi.edu/~hardaker/tmp/xfr-counts-uniq-srcs.png

    https://ant.isi.edu/~hardaker/tmp/xfr-counts-by-ASN.png

    (the horizontal axis shows one sampled day every 3 months since
    late 2016)

    I'll mention again that the current documents state we should have
    multiple protocol transfer options available for implementations and
    operators to choose from.  This is sort of already the case in
    existing implementations, and we should support those.  IMHO, AXFR
    should definitely be one choice.  But zonefile-over-HTTPS makes
    sense to me too.

-- Wes Hardaker
    Google

    _______________________________________________
    DNSOP mailing list -- [email protected]
    To unsubscribe send an email to [email protected]

