> On Feb 24, 2020, at 7:32 PM, Michael StJohns <[email protected]> wrote:
> An improvement, but still:

Thanks Mike.

> 
> 1.3 - general  - add something like "Specifically, ZONEMD covers the 
> integrity of <your text here> records that are not otherwise covered by 
> DNSSEC".

Sorry, I don't quite follow this.  There is currently no text between 1.3 and 
1.3.1.  Are you saying that the text suggested above should go there?  Or did 
you mean 1.3.5?  It feels like "Specifically, ..." should follow something,
so I'm not sure.

> 
> 1.3.5 - "zone digest calculation" not simply "zone digest" - this doesn't 
> require a ZONEMD record.

Ok, changed.

> 
> 1.3.2, 1.3.3 and 1.3.4 are mostly "off-line" (not DNS in-band) services.  
> It's still unclear to me the value of a ZONEMD record vs something like a 
> hash (signed or unsigned) separately published (ala root key down loads, or 
> various software with published hashes)  1.3.1 may have some benefit... but 
> still may be just another off-line service.

To me the value is quite clear.  The digest can be signed by the publisher, 
whose keys are already known and easily found.  And because the digest is part 
of the zone, it is automatically propagated no matter how the data is 
transferred.


> 
> I think you still missed the point on Scheme/Digest mechanism.   For Scheme 1 
> - SIMPLE - the ancillary data is 1 byte of digest algorithm indicator and N 
> bytes of the actual digest per the digest, and the digest is calculated per 
> section 3.  For scheme 2 -N  - the ancillary data is not specified, and the 
> RR may not have a digest indicator, or may have several indicators or fields. 
>   Describing the RRSet as Scheme plus data, and then describing the format of 
> the SIMPLE scheme's  data field is clearer for the extension model.      That 
> also implies that any RRSet with a unknown scheme has to be digested as if 
> the entire data field (e.g. including the digest field for SIMPLE) is 
> zeros*****.
> 

I disagree.  The scheme + algorithm + digest layout is, IMO, the better 
approach.  The presentation format is clearer and more understandable.  I 
believe there is significant value in having the RDATA fully parsable by 
libraries and such.  It strikes the right balance between extensibility and 
simplicity.

The Scheme space is large enough to accommodate a wide variety of approaches, 
and even to encode some "indicators" in the scheme value.  For example, if you 
wanted to have a Merkle-tree based zone digest, one of the indicators might be 
how to partition the name space and another might be the tree depth.  The 
scheme could be MERKLE-DEPTH4-SHA2.
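To illustrate the "fully parsable" point: the SIMPLE RDATA is just fixed fields
plus a digest, so a library can decode it in a few lines.  A rough sketch (the
function and field names are mine, not draft text):

```python
import struct

# Hypothetical parser for the ZONEMD RDATA wire format:
# 4-byte Serial, 1-byte Scheme, 1-byte Hash Algorithm, then a
# variable-length Digest occupying the rest of the RDATA.
def parse_zonemd_rdata(rdata):
    if len(rdata) < 6:
        raise ValueError("RDATA too short for ZONEMD fixed fields")
    serial, scheme, hash_alg = struct.unpack("!IBB", rdata[:6])
    return {"serial": serial, "scheme": scheme,
            "hash_algorithm": hash_alg, "digest": rdata[6:]}
```

With the scheme + ancillary model, nothing past the Scheme octet could be
decoded generically like this.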

If the RDATA were scheme + ancillary data, then we would have to change all the 
rules about multiple ZONEMD RRs to support algorithm rollovers.  As currently 
written, multiple RRs must have unique scheme + algorithm tuples.  If that were 
to become just a unique scheme, then you couldn't have an algorithm rollover, 
since the ancillary data is opaque.

Sure, you can dream up some digest approaches that won't fit in the proposed 
ZONEMD RDATA, but I think there are a number of workable approaches that will 
fit and will support large, dynamic zones.  IMO, if some future approach 
doesn't fit, then a new RRtype could be defined for it.


> 
> 2.1 - "Included" is confusing here.  Instead "the digest is calculated over 
> the data as-is and the RR is not replaced by a placeholder RR.

Ok, new proposed text:

  During digest calculation, non-apex ZONEMD RRs are treated like any
  other RRs.  They are digested as-is and the RR is not replaced by a
  placeholder RR.



> 3.1 - Could be misunderstood as written. In the second para "one or more 
> placeholder ZONEMD RR(s) are added (one for each digest in the SIMPLE scheme 
> and as many as are required to cover new schemes)".  Last paragraph becomes 
> redundant (and it's actually multiple ZONEMD RRs not digests).

New proposed text:

  Prior to calculation of the digest, and prior to signing with DNSSEC,
  one or more placeholder ZONEMD records are added to the zone apex.
  This serves two purposes: (1) it allows the digest to cover the
  Serial, Scheme, and Hash Algorithm fields, and (2) it ensures that
  appropriate denial-of-existence (NSEC, NSEC3) records are created if
  the zone is signed with DNSSEC.  When multiple ZONEMD RRs are
  published in the zone, e.g., during an algorithm rollover, each must
  specify a unique Scheme and Hash Algorithm tuple.
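The uniqueness rule is cheap to enforce at signing/publication time.  A sketch
(field names are mine):

```python
# Hypothetical check for the rule that each apex ZONEMD RR must
# carry a distinct (Scheme, Hash Algorithm) tuple.
def check_unique_tuples(zonemd_rrs):
    seen = set()
    for rr in zonemd_rrs:
        key = (rr["scheme"], rr["hash_algorithm"])
        if key in seen:
            raise ValueError("duplicate Scheme/Hash Algorithm tuple: %r" % (key,))
        seen.add(key)
```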



> *****(and as I think about it - what would be the harm in not including 
> ZONEMD placeholder records in the ZONEMD digest -e.g. just skip them?)

For signed zones, DNSSEC is sufficient to detect tampering with ZONEMD RRs.

For unsigned zones, including the placeholder protects against unintentional or 
accidental modification of the non-digest parts of the RR.

> 
> 4.1 - move the text up before the text for 4. and delete the subsection.  
> Style wise, RFCs try and avoid single subsections under a section.

New proposed format:

4.  Verifying Zone Digest

  The recipient of a zone that has a ZONEMD RR can verify the zone by
  calculating the digest as follows.  If multiple ZONEMD RRs are
  present in the zone, e.g., during an algorithm rollover, a match
  using any one of the recipient's supported Schemes and Hash
  Algorithms is sufficient to verify the zone.

  [enumerated list...]

  Note that when multiple ZONEMD RRs are present in the zone, the
  Digest field of each MUST be zeroed in step 8 above, even for
  unsupported Scheme and Hash Algorithm values.
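To make the "match any one" and "zero even unsupported digests" points
concrete, here is a rough sketch of a verifier (not the draft's pseudocode;
it assumes Scheme 1 = SIMPLE, Hash Algorithm 1 = SHA-384, and that the input
is the canonical zone with every apex ZONEMD Digest field already zeroed,
including those with unsupported tuples):

```python
import hashlib

# Hypothetical verifier sketch; names and dict layout are mine.
def verify_zone_digest(canonical_zone_data, zonemd_rrs):
    for rr in zonemd_rrs:
        if rr["scheme"] != 1 or rr["hash_algorithm"] != 1:
            continue  # unsupported tuple: skipped, but its Digest was still zeroed
        computed = hashlib.sha384(canonical_zone_data).digest()
        if computed == rr["digest"]:
            return True  # a match on any one supported tuple suffices
    return False
```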



> Section 5 - what's the requirement to add new entries to the registries being 
> defined?   Expert Review, RFC? 

Added:

   The IANA policy for assigning new values to the ZONEMD Scheme
   registry shall be Specification Required, as described in [RFC8126].

and

   The IANA policy for assigning new values to the ZONEMD Hash Algorithm
   registry shall be Specification Required, as described in [RFC8126].


> 
> 6.2 assumes the zonemd record is NOT calculated on the fly or dynamically 
> upon request. 

Proposed text:

6.2.  Attacks Utilizing ZONEMD Queries

  Nothing in this specification prevents clients from making, and
  servers from responding to, ZONEMD queries.  Servers SHOULD NOT
  calculate zone digests dynamically (for each query) as this can be
  used as a CPU resource exhaustion attack.
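The intent is that a server does the digest work once, at zone load or signing
time, and serves queries from that result.  A hypothetical sketch (class and
method names are mine):

```python
# Sketch of the SHOULD NOT above: precompute the ZONEMD answer and
# serve queries from a cache; never digest per query.
class ZonemdResponder:
    def __init__(self):
        self._cache = {}  # zone name -> precomputed ZONEMD RRset

    def load_zone(self, zone_name, zonemd_rrset):
        # Any digest calculation happens before this call, once.
        self._cache[zone_name] = zonemd_rrset

    def answer(self, zone_name):
        # No per-query digest work: return the cached RRset or nothing.
        return self._cache.get(zone_name)
```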


> 
> 
> 6.3 last paragraph conflicts with 4.1 - e.g. if all you have is a private 
> hash, then you can't verify ZONEMD and .... well you get the point.   
> Basically, you have no signalling mechanism for when you might be dependent 
> on this record past NSEC/NSEC3.   I don't know how to resolve these two.

Sorry, I don't understand the concern here.  Yes, if a recipient is given only 
a private-algorithm ZONEMD, then it won't be able to verify it (unless it's in 
on the private arrangement).  This is similar to unsupported DNSSEC algorithms.


> 
> 7.1 - add "This calculation does not take into the account any time required 
> to canonicalize a zone for processing".

The benchmarks do in fact take that into account.  New text:

  These benchmarks attempt to emulate a worst-case scenario and take
  into account the time required to canonicalize the zone for
  processing.  Each of the 800+ zones were measured three times, and
  then averaged, with a different random sorting of the input data
  prior to each measurement.
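For clarity, the methodology looks roughly like this (a sketch, not the actual
benchmark code; it assumes records are digested in canonically sorted order):

```python
import hashlib
import random
import time

# Hypothetical benchmark sketch: shuffle the input so canonical
# ordering is paid for inside the measured region, run three times,
# and average the timings.
def benchmark_digest(records, runs=3):
    timings = []
    for _ in range(runs):
        random.shuffle(records)          # different random sorting per run
        start = time.perf_counter()
        canonical = sorted(records)      # canonicalization time is counted
        h = hashlib.sha384()
        for rec in canonical:
            h.update(rec)
        h.digest()
        timings.append(time.perf_counter() - start)
    return sum(timings) / runs
```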


DW



_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop
