On Mon, 17 Mar 2014, Viktor Dukhovni wrote:

  * It should be possible for servers to publish TLSA records
    employing multiple digest algorithms allowing clients to
    choose the best mutually supported digest.

Isn't that already possible?

Not based on RFC 6698 alone.  With RFC 6698 the client trusts all
TLSA records, whether "weak" or "strong".

4.1 states:

      A TLSA RRSet whose DNSSEC validation state is secure MUST be used
      as a certificate association for TLS unless a local policy would
      prohibit the use of the specific certificate association in the
      secure TLSA RRSet.

Can that not be used to reject a weak digest?

My proposal is essentially the same.  The client uses the strongest
acceptable digest algorithm.  The *client* decides what "strongest"
means.  It never chooses an unsupported algorithm.
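The proposed selection could be sketched roughly as follows. This is a minimal illustration, not anything from RFC 6698: the record layout and the preference list are assumptions, with matching types 1 (SHA2-256) and 2 (SHA2-512) taken from the RFC 6698 registry.

```python
# Hypothetical sketch of digest algorithm agility (DAA): the client
# keeps only the TLSA records using the strongest digest algorithm it
# supports, and ignores every other record in the RRset.

# Client's preference order, strongest first (illustrative:
# 2 = SHA2-512, 1 = SHA2-256 matching types from RFC 6698).
DIGEST_PREFERENCE = [2, 1]

def select_records(tlsa_rrset):
    """Return only the records using the strongest mutually
    supported digest; never an unsupported algorithm."""
    for mtype in DIGEST_PREFERENCE:
        chosen = [r for r in tlsa_rrset if r["matching_type"] == mtype]
        if chosen:
            return chosen
    return []  # no mutually supported digest algorithm

rrset = [
    {"matching_type": 1, "data": "weak-sha2-256-digest"},
    {"matching_type": 2, "data": "strong-sha2-512-digest"},
]
print(select_records(rrset))  # only the matching_type 2 record survives
```

A client that only supports SHA2-256 would instead select the matching type 1 record, which is why the weak records stay published during the transition.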

But you want to fail if that one selected record fails to match. I don't
think that is the right decision.

If a certain digest is so weak it is basically broken, it should not be
left in a published TLSA record.

Weak digests (say SHA2-256 if/when broken) cannot be easily removed
from RRsets until all clients support stronger ones.  The idea is
to publish stronger digests and deploy stronger clients, then remove
weak digests later.  Stronger clients will never use the published
weak records.  Otherwise there's an Internet-wide flag-day.

I don't think we disagree. The server publishes a new strong digest, and
clients that support it and consider sha2-256 weak will not use
sha2-256. If the admin messes up the new strong digest, then new clients
will fail to get a usable TLSA record, and old clients will use an unsafe one.

If the most preferred TLSA record fails validation, the client should try
another TLSA record.
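That fallback behaviour could be sketched as follows; note how a botched strong digest is silently masked by the weak record, which is the crux of the disagreement below. The record layout, the preference order, and the `peer_matches` callback are all illustrative assumptions.

```python
# Illustrative preference order, strongest first (2 = SHA2-512,
# 1 = SHA2-256 matching types from RFC 6698).
DIGEST_PREFERENCE = [2, 1]

def authenticate_with_fallback(tlsa_rrset, peer_matches):
    """Try records strongest-first; fall back across digest
    algorithms until one matches the peer's certificate chain."""
    for mtype in DIGEST_PREFERENCE:
        for rec in tlsa_rrset:
            if rec["matching_type"] == mtype and peer_matches(rec):
                return True
    return False

# A mis-generated strong digest is masked: the client quietly
# authenticates via the weak SHA2-256 record instead.
rrset = [
    {"matching_type": 2, "data": "corrupt-sha2-512-digest"},
    {"matching_type": 1, "data": "good-sha2-256-digest"},
]
ok = authenticate_with_fallback(rrset, lambda r: r["data"].startswith("good"))
print(ok)  # True: the weak record rescued the connection
```

Under the fail-closed variant being proposed, the client would stop after the strongest supported group, so the same RRset would fail to authenticate.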

This works poorly.  While the weak algorithm is being phased out
(which takes years), even clients that support stronger algorithms are at risk.

New clients can have a local policy that states never to accept weak
digests. I don't see a problem with agility. The weak TLSA records
are only left in for clients that support nothing stronger.

This also gives the server admin some more protection. If they publish
digests using SHA2-256 and SHA1, and it turns out their tool generates
a bad SHA2-256, then the clients still have a valid SHA1 record to fall back to.

They could also publish a bogus CU or selector, or mess up in many other
ways.  I don't think that the intent of multiple algorithms in 6698 is
to mask bogus data.

Maybe I don't understand what you think the problem is?

Perhaps there is text in the DS record RFC to look at that describes
this better than I just did.

Perhaps Wes can chime in.  His comment to me was that the proposed
DAA (digest algorithm agility) is essentially the only approach
possible, and is largely analogous to the DNSSEC one.

So aren't we all agreeing?

Paul

_______________________________________________
dane mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dane
