>> On Apr 4, 2018, at 4:01 AM, Shane Kerr <sh...@time-travellers.org> wrote:
>> One issue is that the algorithm proposed requires that the recipient who
>> is generating a digest has to store basically the entire zone before
>> beginning the digest calculation, since it has no way to know if the
>> zone will be delivered in any kind of canonical order.
> Yes. Certainly for large zones this could be an issue. We will have to
> find the right balance between complexity and efficiency here.
> I think it makes the most sense to perform a verification when the zone is
> loaded or received by a name server, but it could also be done in an external
> process, such as named-checkzone or ldns-verify-zone.
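For illustration, the whole-zone approach under discussion could be sketched roughly like this (a simplified sketch, not the draft's actual scheme: records here are plain presentation-format strings, and sorting them merely stands in for DNSSEC canonical RR ordering):

```python
import hashlib

def zone_digest(records):
    """Digest a zone's records in a canonical order.

    'records' is an iterable of presentation-format RR strings; sorting
    them stands in (loosely) for DNSSEC canonical ordering.  Note that
    the entire zone must be held before hashing can start, since the
    records may arrive in any order -- the storage concern raised above.
    """
    h = hashlib.sha256()
    for rr in sorted(records):  # canonical order requires the complete zone
        h.update(rr.encode("ascii"))
    return h.hexdigest()

zone = [
    "ns1.example.  3600 IN A    192.0.2.1",
    "example.      3600 IN NS   ns1.example.",
    "example.      3600 IN SOA  ns1.example. admin.example. 2018040401 7200 3600 1209600 3600",
]

# The digest is independent of the order records were delivered in:
assert zone_digest(zone) == zone_digest(list(reversed(zone)))
```

The sort is exactly why a recipient cannot digest incrementally during transfer unless the sender happens to ship the zone in canonical order.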
>> As for specific questions in the draft:
>> * I think that it should be allowed for unsigned zones. If nothing else,
>> it provides the equivalent of a checksum to help detect truncations or
>> improperly updated records (missing deletions or additions).
>> * I don't see much benefit in multiple ZONEMD. We generally expect
>> masters & slaves to co-operate, so they should be able to find
>> software that has an algorithm that both sides support.
> Yes, but master/slave transfers can already be protected with TSIG. Perhaps
> the more interesting use case is out-of-band zone distribution.
I have seen zones get broken by transfer in production. I'd say it is
useful any time you transfer zones.
I think an interesting use case is detecting intentional modification by
the master (or a case where the primary master is compromised in any way).
If you're just trying to verify the integrity of a file during transfer,
then yes TSIG is sufficient, but then so is a simple sha256sum on a file
that you copy out-of-band.
>> * I think the serial in ZONEMD is helpful. Doesn't it also help prevent
>> replay attacks?
> You might have to convince me that it does...
Actually, you're right. Since the SOA is included in the digest, the
serial in the ZONEMD is not necessary. It should probably be removed.
>> * I would rather not special-case ZONEMD regarding responding. Even
>> though it is bigger than most RDATA, large responses seem like a
>> general problem. OTOH, I won't object too loudly if someone feels very
>> strongly that this should be an option.
>> A final possible concern is that generating a digest on a large zone
>> might be computationally quite expensive. Indeed it could be used as an
>> attack vector on a hosting provider by using small IXFR to cause large
>> zone digests to be repeatedly calculated. Is it worth exploring the
>> possibility of including multiple ZONEMD in a zone at different names,
>> which digest the part of the zone up to that point? So something like:
> Yes, as currently specified it probably only works well with relatively
> static zones.
> Again, we'll have to decide where we want to be in the complexity vs
> efficiency tradeoff.
I don't think that the complexity of having a ZONEMD that covers part of
a zone is very high. On either a master or a slave, you basically just
need to reinitialize your hash context and continue rolling.
Probably, given our new camel-busting revolution, I should implement the
draft before I make any such claims though...
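To make the "reinitialize and continue rolling" idea concrete, here is a rough Python sketch of segmented digests (entirely hypothetical: the segment boundaries, record serialization, and where the intermediate digests would live in the zone are all invented for illustration and are not in the draft):

```python
import hashlib

def segmented_digests(records, segment_size):
    """Digest a canonically ordered zone in fixed-size segments.

    Each time a segment boundary is reached, the current digest is
    finalized and the hash context is reinitialized for the next
    segment.  A slave applying a small IXFR would then only need to
    re-verify the segments that actually changed, instead of re-hashing
    the entire zone.
    """
    digests = []
    h = hashlib.sha256()
    count = 0
    for rr in sorted(records):  # sorting stands in for canonical ordering
        h.update(rr.encode("ascii"))
        count += 1
        if count % segment_size == 0:
            digests.append(h.hexdigest())
            h = hashlib.sha256()  # reinitialize and continue rolling
    if count % segment_size != 0:  # flush a final partial segment
        digests.append(h.hexdigest())
    return digests
```

A change confined to one segment then invalidates only that segment's digest, which is what would bound the recomputation cost after a small IXFR.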
DNSOP mailing list