On Tue, Nov 05, 2013 at 05:40:43PM -0800, Matt McCutchen wrote:
> > My proposal is as follows:
> >
> > When a TLSA RRset contains multiple RRs of the form:
> >
> > _<port>._tcp.server.example.com. IN TLSA <U> <S> <M> <D>
> >
> > with the same values of "U" and "S" but different values of the
> > matching type, a client MAY ignore a "weaker" matching type
> > (deprecated digest algorithm) when a "stronger" matching type for
> > the same usage and selector is present. Which matching types are
> > considered "weaker" is generally at the client's discretion.
>
> > - TLSA records that specify multiple certificates or public
> > keys for a single (U,S) combination (e.g. multiple trust
> > anchors, or multiple EE certificates during key roll-over)
> > MUST use the same set of matching types for all of them!
> >
> > Otherwise, clients may fail to support one of the desired
> > certificates, when they choose to support only the RRs with
> > the strongest matching type.
>
> I.e., the same solution that is de facto used by DNSSEC DS records
> (https://www.ietf.org/mail-archive/web/dnsext/current/msg11008.html).
>
> I believe in the need for algorithm agility and proposed three possible
> solutions including the above during the original design process
> (https://trac.tools.ietf.org/wg/dane/trac/ticket/22), but got no
> traction.
Thanks, yes, so my proposal requires no changes to the DANE RRset;
rather, it requires sensible DNS operator practice. For each
(usage, selector) combination the mapping:
"data" -> SET OF mtype for RRs that correspond to "data"
must be the same for all "data" that use the same (usage, selector).
I think this is the "Cartesian Product" option in your ticket, but
the set of mtypes may differ from one (usage, selector) pair to another.
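To make the constraint concrete, here is a hypothetical zone fragment (names and digest placeholders are illustrative, in the same `<...>` notation as above) for a server publishing two EE keys during roll-over, each with the same set of matching types:

```
; Two keys during roll-over; each key is published with the same set
; of matching types (1 = SHA2-256, 2 = SHA2-512), so clients that
; keep only the strongest mtype still see both keys.
_25._tcp.mail.example.com. IN TLSA 3 1 1 <sha2-256 digest of key #1>
_25._tcp.mail.example.com. IN TLSA 3 1 2 <sha2-512 digest of key #1>
_25._tcp.mail.example.com. IN TLSA 3 1 1 <sha2-256 digest of key #2>
_25._tcp.mail.example.com. IN TLSA 3 1 2 <sha2-512 digest of key #2>
```

A client that discards mtype 1 in favour of mtype 2 then still matches either key; had only key #2 carried an mtype 2 record, such a client would reject key #1 entirely.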
This is now queued for inclusion in the next Postfix snapshot. Details
that the group may want to comment on:
- Records with mtype "0" are always considered stronger than any
  hash function, since for any hash function collisions are
  unavoidable in principle, even if believed computationally
  infeasible.
- Before choosing the "best" (client selected, not specified by RFC)
digest for a particular (usage, selector) pair found in the TLSA
RRset, any plainly unusable RRs are discarded:
* With "X 0 0", discard RR if the associated data is not an
ASN.1 encoding of an X.509 certificate.
* With "X 1 0", discard RR if the associated data is not an
ASN.1 encoding of an X.509 SPKI structure.
  * Otherwise, with "X [01] Y" for non-zero "Y", discard the RR
    when the length of the associated data is not a valid digest
    length for algorithm "Y".
Basically, when all the RRs for a particular mtype (that would otherwise
be considered the strongest present) are a priori unusable due to
malformed data, we can't be sure that even the "selector" and "mtype"
fields are valid; after all, the record is junk. So it seems reasonable
not to impute any meaning to such a record's meta-data in the face of
broken data.
Otherwise, we commit to failing all clients that support the
apparently stronger algorithm, without in fact knowing where the
problem lies.
I must admit that this is of course an incomplete solution: some
particularly elite domain administrators could replace all the hex
E's in their association data with 3's, and clearly some clients
will again fail. So the strategy does not catch all problems, just
the most obvious ones, where we would otherwise have to fail when
all the records with the (apparently) best mtype are unusable.
If the majority feel strongly that one must take as much of a broken
record at face value as one can, we should include this in the OPS
BCP, and perhaps in any future DANE-bis. My preference is not to
read any meaning into unusable records.
--
Viktor.
_______________________________________________
dane mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dane