On 2/27/2020 12:46 PM, Wessels, Duane wrote:

On Feb 24, 2020, at 7:32 PM, Michael StJohns <[email protected]> wrote:
An improvement, but still:
Thanks Mike.

1.3 - general  - add something like "Specifically, ZONEMD covers the integrity of <your 
text here> records that are not otherwise covered by DNSSEC".
Sorry, I don't quite follow this.  There is currently no text between 1.3 and 1.3.1.  Are 
you saying that the text suggested above should go there?  Or did you mean 1.3.5?  Feels 
like "Specifically, ..." should follow something, so I'm not sure.


Sorry - that should have been section 1.1, sixth paragraph.  You mention two items as examples, but it would be better to list the complete set of existing RRs that you cannot depend on DNSSEC to protect.


1.3.5 - "zone digest calculation" not simply "zone digest" - this doesn't 
require a ZONEMD record.
Ok, changed.

1.3.2, 1.3.3 and 1.3.4 are mostly "off-line" (not DNS in-band) services.  It's 
still unclear to me what the value of a ZONEMD record is vs. something like a hash 
(signed or unsigned) published separately (a la root key downloads, or various 
software with published hashes).  1.3.1 may have some benefit... but still may be 
just another off-line service.
To me the value is quite clear.  The digest can be signed by the publisher, 
whose keys are already known and easily found.  By being part of the zone the 
digest is automatically propagated no matter how the data is transferred.

This is a question more of cost/benefit than of value.  Adding RRs requires changes to the set of zone data; simply adding a hash somewhere a consumer can pick it up does not.  I'm not convinced this is worth the cost to implement, but I'm not going to push back on publication for that reason... I may still for others :-)




I think you still missed the point on the Scheme/Digest mechanism.  For Scheme 1 - 
SIMPLE - the ancillary data is 1 byte of digest algorithm indicator and N bytes 
of the actual digest per that algorithm, and the digest is calculated per Section 
3.  For Schemes 2 through N, the ancillary data is not specified, and the RR may not 
have a digest indicator, or may have several indicators or fields.  Describing 
the RRSet as Scheme plus data, and then describing the format of the SIMPLE 
scheme's data field, is clearer for the extension model.  That also implies 
that any RRSet with an unknown scheme has to be digested as if the entire data 
field (e.g. including the digest field for SIMPLE) were zeros*****.

I disagree.  The scheme + algorithm + digest is, IMO, the better approach.  The 
presentation format is better and more understandable.  I believe there is 
significant value in having the RDATA fully parsable by libraries and such.  It 
strikes the right balance between extensibility and simplicity.
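For what it's worth, the fixed Serial + Scheme + Hash Algorithm prefix means the RDATA can be split into its fields with no scheme-specific knowledge, which is the parsability point here.  A rough Python sketch (field widths per the draft; the sample serial value and zeroed digest are made up):

```python
import struct

def parse_zonemd_rdata(rdata: bytes):
    """Parse ZONEMD RDATA as laid out in the draft:
    Serial (4 octets), Scheme (1), Hash Algorithm (1), Digest (variable)."""
    if len(rdata) < 6:
        raise ValueError("ZONEMD RDATA too short")
    serial, scheme, hash_alg = struct.unpack("!IBB", rdata[:6])
    return {"serial": serial, "scheme": scheme,
            "hash_algorithm": hash_alg, "digest": rdata[6:]}

# Example: serial 2020022500, scheme 1 (SIMPLE), hash alg 1 (SHA-384),
# with a zeroed placeholder digest.
rr = parse_zonemd_rdata(struct.pack("!IBB", 2020022500, 1, 1) + bytes(48))
```

Any library can recover all four fields this way, even for a scheme it does not implement.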

The Scheme space is large enough to accommodate a wide variety of approaches, and 
even to encode some "indicators" in the scheme value.  For example, if you wanted 
to have a Merkle-tree-based zone digest, one of the indicators might be how to 
partition the name space and another might be the tree depth.  The scheme could 
be MERKLE-DEPTH4-SHA2.

If the RDATA were scheme + ancillary then we would have to change all the rules 
about multiple ZONEMD RRs to support algorithm rollovers.  As currently 
written, multiple RRs must have unique scheme + algorithm tuples.  If that were 
to become just unique scheme, then you couldn't have an algorithm rollover 
since the data is opaque.

You can set whatever rules you want for the SIMPLE scheme as to acceptability of the data.  That includes doing an algorithm rollover for receivers that understand the SIMPLE scheme, and doing so without affecting any other scheme.

Basically, the rules would change to "A zone may contain any number of ZONEMD RRs with any number of schemes.  Each scheme may have its own restrictions for the number of RRs published using that scheme." and "no more than one SIMPLE scheme RR with a given Digest field" and "if you don't understand the scheme, ignore it".


Sure you can dream up some digest approaches that won't fit in the proposed 
ZONEMD RDATA but I think there are a number of workable approaches that will 
fit and will support large, dynamic zones.  IMO if there is some future 
approach that doesn't fit, then a new RRtype could be defined for it.

And here's where I push back.  Any future scheme that purports to digest a zone and that uses a new RR other than ZONEMD would have to be fully processed (e.g. digest calculated, DNSSEC signatures done) BEFORE you could do your scheme calculation.  And say there's a third: now that third one has to be done before the second, and before your scheme, etc.  You've provided no way for a future scheme to ignore your records, or for your scheme to ignore their records, so you automatically impose a set of processing orderings that may be difficult to implement.

I propose that ZONEMD be the only RR that includes a zone-wide digest or digesting scheme.  I propose that this document be ZONEMD (the RR format top level) + SIMPLE (the digest scheme and the ancillary data format), and define a digest of SHA-384 for SIMPLE.  I propose that, for all schemes, the description of ZONEMD digest calculation OMIT the ZONEMD RRs themselves.

If you don't do it this way, I strongly propose that this document not be placed on the standards track and instead be left as Experimental.




2.1 - "Included" is confusing here.  Instead: "the digest is calculated over the 
data as-is and the RR is not replaced by a placeholder RR."
Ok, new proposed text:

   During digest calculation, non-apex ZONEMD RRs are treated like any
   other RRs.  They are digested as-is and the RR is not replaced by a
   placeholder RR.
Yup.


3.1 - Could be misunderstood as written. In the second para "one or more placeholder 
ZONEMD RR(s) are added (one for each digest in the SIMPLE scheme and as many as are 
required to cover new schemes)".  Last paragraph becomes redundant (and it's 
actually multiple ZONEMD RRs not digests).
New proposed text:

   Prior to calculation of the digest, and prior to signing with DNSSEC,
   one or more placeholder ZONEMD records are added to the zone apex.
   This serves two purposes: (1) it allows the digest to cover the
   Serial, Scheme, and Hash Algorithm fields, and (2) ensures that
   appropriate denial-of-existence (NSEC, NSEC3) records are created if
   the zone is signed with DNSSEC.  When multiple ZONEMD RRs are
   published in the zone, e.g., during an algorithm rollover, each must
   specify a unique Scheme and Hash Algorithm tuple.
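To make the placeholder step concrete, here is a rough sketch of the SIMPLE digest over already-canonicalized records, with the Digest field of each apex ZONEMD placeholder zeroed before hashing.  The (header, rdata) record representation is a hypothetical simplification; RFC 4034 canonical ordering and wire-format details are omitted:

```python
import hashlib

def simple_digest(records, is_apex_zonemd):
    """Sketch of the draft's SIMPLE scheme digest step.

    records: canonically ordered RRs, each given here as a
    (header_bytes, rdata_bytes) pair (hypothetical simplification).
    is_apex_zonemd: parallel flags marking apex ZONEMD placeholders,
    whose Digest field (everything after the 6-octet Serial + Scheme +
    Hash Algorithm prefix) is zeroed before hashing."""
    h = hashlib.sha384()  # Hash Algorithm 1 in the draft's registry
    for (header, rdata), placeholder in zip(records, is_apex_zonemd):
        if placeholder:
            rdata = rdata[:6] + bytes(len(rdata) - 6)  # zero the Digest field
        h.update(header + rdata)
    return h.digest()
```

Zeroing only the Digest field is what lets the digest still cover the Serial, Scheme, and Hash Algorithm values.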

See below.  I think the right way is to omit the ZONEMD RR from the calculation. 
It doesn't add anything to the security.  I didn't say to omit the NSEC or 
NSEC3 records....

*****(and as I think about it - what would be the harm in not including ZONEMD 
placeholder records in the ZONEMD digest - e.g. just skip them?)
For signed zones DNSSEC is sufficient to detect tampering of ZONEMD RRs.

For unsigned zones, including the placeholder protects against unintentional or 
accidental modification of the non-digest parts of the RR.

Not really.  Say someone substitutes SHA-512 (Hash Algorithm 2) for SHA-384 but doesn't change the digest data - that's detectable, because the digest data doesn't match the length expected for SHA-512, or because the SHA-512 calculation doesn't match the digest data.

This is pretty much the same part of the record as the signature and algorithm fields of a certificate.  Neither of those parts is protected.
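The length argument can be sketched as a trivial consistency check (algorithm codes per the draft's registry):

```python
import hashlib

# Expected digest lengths per Hash Algorithm code in the draft's registry:
# 1 = SHA-384 (48 octets), 2 = SHA-512 (64 octets).
DIGEST_LEN = {1: hashlib.sha384().digest_size, 2: hashlib.sha512().digest_size}

def algorithm_matches_length(hash_alg: int, digest: bytes) -> bool:
    """Relabeling a SHA-384 digest as SHA-512 without recomputing it
    is detectable: the 48-octet digest fails the SHA-512 length check."""
    return DIGEST_LEN.get(hash_alg) == len(digest)
```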


4.1 - move the text up before the text for 4. and delete the subsection.  Style 
wise, RFCs try and avoid single subsections under a section.
New proposed format:

4.  Verifying Zone Digest

   The recipient of a zone that has a ZONEMD RR can verify the zone by
   calculating the digest as follows.  If multiple ZONEMD RRs are
   present in the zone, e.g., during an algorithm rollover, a match
   using any one of the recipient's supported Schemes and Hash
   Algorithms is sufficient to verify the zone.

   [enumerated list...]

   Note that when multiple ZONEMD RRs are present in the zone, the
   Digest field of each MUST be zeroed in step 8 above, even for
   unsupported Scheme and Hash Algorithm values.
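The "any one match suffices" rule might look something like this in a verifier (the compute_digest hook and SUPPORTED set are hypothetical names, not from the draft):

```python
SUPPORTED = {(1, 1)}  # (Scheme, Hash Algorithm) tuples this verifier knows

def verify_zone(zonemd_rrs, compute_digest):
    """Sketch of the Section 4 matching rule.

    zonemd_rrs: (scheme, hash_alg, digest) tuples from the zone apex.
    compute_digest: callable(scheme, hash_alg) -> bytes, the recipient's
    own digest calculation (hypothetical hook).  Unsupported tuples are
    skipped rather than treated as failures; any single match verifies
    the zone."""
    for scheme, hash_alg, published in zonemd_rrs:
        if (scheme, hash_alg) not in SUPPORTED:
            continue
        if compute_digest(scheme, hash_alg) == published:
            return True
    return False
```

Skipping unsupported tuples instead of failing on them is what makes gradual scheme and algorithm deployment possible.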



Section 5 - what's the requirement to add new entries to the registries being 
defined?   Expert Review, RFC?
Added:

    The IANA policy for assigning new values to the ZONEMD Scheme
    registry shall be Specification Required, as described in [RFC8126].

and

    The IANA policy for assigning new values to the ZONEMD Hash Algorithm
    registry shall be Specification Required, as described in [RFC8126].


6.2 assumes the ZONEMD record is NOT calculated on the fly or dynamically upon 
request.
Proposed text:

6.2.  Attacks Utilizing ZONEMD Queries

   Nothing in this specification prevents clients from making, and
   servers from responding to, ZONEMD queries.  Servers SHOULD NOT
   calculate zone digests dynamically (for each query) as this can be
   used as a CPU resource exhaustion attack.



6.3 last paragraph conflicts with 4.1 - e.g. if all you have is a private hash, 
then you can't verify ZONEMD and... well, you get the point.  Basically, you 
have no signalling mechanism for when you might be dependent on this record 
beyond NSEC/NSEC3.  I don't know how to resolve these two.
Sorry, I don't understand the concern here.  Yes, if a recipient is given only 
a private algorithm ZONEMD then it won't be able to verify it (unless it's in on 
the private arrangement).  This is similar to unsupported DNSSEC algorithms.


4.1 says:

    If multiple digests are present in the zone, e.g., during an
    algorithm rollover, a match using any one of the recipient's
    supported Hash Algorithm algorithms is sufficient to verify the zone.
6.3 says:

  Zone publishers may want to deploy ZONEMD gradually, perhaps by
    utilizing one of the private use hash algorithms listed in
    Section 5.3.  Similarly, recipients may want to initially configure
    verification failures only as a warning, and later as an error after
    gaining experience and confidence with the feature.

The problem here is that there's no benefit to using the private algorithm, given that there's no way to signal the requirement that a zone include a ZONEMD RR.  This is all hand-configured (and AFAICT required to be hand-configured) at the verifier.  Let's say as a publisher I do what you say, and the recipients haven't talked to me and have configured verification for the real algorithm.  The zone can't be verified.  Hence a verification warning or error.

This is why I harped so much on "please tell me what happens if a zone doesn't verify" for each of these cases.  What you've again told me is "it depends on what the receiver wants to do with the data."  That requires an OOB path to coordinate between the publisher and receiver for way too many things.  Which brings me back to: how is this better than just publishing a hash value somewhere the receiver can pull it?

I can't see how this scales if I have to have a separate configuration per zone at the receiver.



7.1 - add "This calculation does not take into account any time required to 
canonicalize a zone for processing".
The benchmarks do in fact take that into account.  New text:

   These benchmarks attempt to emulate a worst-case scenario and take
   into account the time required to canonicalize the zone for
   processing.  Each of the 800+ zones were measured three times, and
   then averaged, with a different random sorting of the input data
   prior to each measurement.
K- thanks.

DW


Mike


_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop
