On Thu, Jun 05, 2014 at 06:40:54PM -0700, John Gilmore wrote:
> > Finally, the WG could decide that servers which publish U/S/M
> > combinations in their TLSA RRset that contain only future or only
> > past keys are "misconfigured", and that servers SHOULD NOT do that.
>
> I don't understand how any crypto protocol can succeed when it
> authenticates only past or future keys, not the keys in present use.
Not *only*: rather, both present keys and either past or future
keys concurrently (possibly all three, though that's generally unnecessary).
> Can you explain why you think such a server is NOT misconfigured?
During administrative changes in the TLSA RRset, given the asynchronous
nature of DNS data propagation, one needs to insert future data before
removing present data. Once the new keys are in place, one can remove
past data. At various stages clients see one of:
* (old) present
* (old) present + (new) future
* (old) past + (new) present
* (new) present
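The stage sequence above can be sketched as follows (a toy model, not
a real DANE implementation; the "old"/"new" labels are placeholders,
and membership testing stands in for real TLSA digest matching):

```python
# Each stage a client may observe during a rollover: the published
# RRset (as key labels) and the key the server currently serves.
stages = [
    ("present",          ["old"],        "old"),
    ("present + future", ["old", "new"], "old"),
    ("past + present",   ["old", "new"], "new"),
    ("present (new)",    ["new"],        "new"),
]

# At every stage the *complete* RRset contains at least one record
# matching the served key, so clients using the whole RRset succeed.
for name, rrset, served in stages:
    assert served in rrset, f"stage {name!r} would fail authentication"
```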
No *proper* subset of the TLSA RRs is currently mandated to contain
a record matching the server chain. Only the *complete* RRset of
a correctly configured server must contain *at least one* matching
record. If the usage/selector/mtype combinations for the "old" and
"new" chains are not identical (in one-to-one correspondence), then
for a client that employs a proper subset of the TLSA RRs (say
because RPK can only match "3 1 X" records), that subset may contain
only past or only future keys, and authentication fails.
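A minimal sketch of that failure mode (hypothetical helper names;
the digest strings are placeholders, and `rr[3] in chain` stands in
for real digest comparison against the server's chain):

```python
# Toy model: a TLSA RR is (usage, selector, mtype, data); a server is
# authenticated if at least one RR the client can process matches.
def authenticate(rrset, chain, accept=lambda rr: True):
    # Clients may only inspect the subset of RRs they can process.
    return any(rr[3] in chain for rr in rrset if accept(rr))

# Mid-transition RRset: old "2 0 1" trust anchor plus future "3 1 1" leaf.
rrset = [(2, 0, 1, "old-ta-digest"), (3, 1, 1, "new-leaf-digest")]
old_chain = {"old-ta-digest", "old-leaf-digest"}  # what the server serves

# The complete RRset still authenticates the old chain ...
assert authenticate(rrset, old_chain)

# ... but a client limited to the "3 1 X" subset sees only the future
# key, and authentication fails.
rpk_only = lambda rr: rr[0] == 3 and rr[1] == 1
assert not authenticate(rrset, old_chain, accept=rpk_only)
```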
> Or
> perhaps you will agree that this is misconfiguration, but you think
> that for some reason that you can state, many people will foolishly
> misconfigure their servers this way, such that the protocol should
> gracefully handle that "common misconfiguration" case?
No, it is not a misconfiguration as currently defined. We could
publish operational requirements that make it one, or at least a
violation of best practice.
Is that enough for clients to throw caution to the wind and assume
consistent BCP TLSA RR practices on the server side? Perhaps it
is safer for the client to forgo an optimization and just negotiate
X.509 in some edge cases; these will be rare, but still likely
enough to cause people real support issues.
The best example for RPK is a transition from a current "2 0 1" TLSA
RR to a future "3 1 1" RR. The administrator needs to know to do:
Initial:
IN TLSA 2 0 1 {old TA}
Intermediate:
IN TLSA 3 1 1 {switch to old leaf}
IN TLSA 3 1 1 {new leaf}
Final:
IN TLSA 3 1 1 {new leaf}
rather than the more naively obvious:
Initial:
IN TLSA 2 0 1 {old TA}
Intermediate:
IN TLSA 2 0 1 {old TA}
IN TLSA 3 1 1 {new leaf}
Final:
IN TLSA 3 1 1 {new leaf}
because here, until the new leaf is deployed after the original
TTL expires, RPK clients might conclude that it is safe to use RPK,
when it is not: the server's current (old) key can't be matched
via an EE SPKI association.
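For concreteness, a sketch of the "EE SPKI association" matching
involved: for "3 1 1" the RR data is the SHA2-256 digest of the
server's DER-encoded SubjectPublicKeyInfo. The byte strings below
are placeholders, not real keys:

```python
import hashlib

def tlsa_3_1_1_data(spki_der: bytes) -> str:
    # Selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA2-256.
    return hashlib.sha256(spki_der).hexdigest()

old_spki = b"placeholder: old server key SPKI"  # not a real key
new_spki = b"placeholder: new server key SPKI"  # not a real key

# Naive intermediate state: only the *new* leaf has a "3 1 1" record.
published = {tlsa_3_1_1_data(new_spki)}

# The key the server still serves has no matching EE SPKI association.
assert tlsa_3_1_1_data(old_spki) not in published
```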
Asking the server operator to disable RPK (which might be negotiated
automatically in some toolkits whenever the client asks, just because
the server also has the feature) when the published TLSA records
might not be RPK-friendly is, I think, a greater burden than asking
for care in how DNS records are updated.
This is an operational issue for both client and server; someone
has to do the heavy lifting. I think the client should not be
entirely off the hook, because RPK, for clients that support both
X.509 and RPK, is an optional optimization.
The impact will be low, because servers that support "3 1 X" will
most often only support "3 1 X", and the "mixed" RRsets will be
very rare.
--
Viktor.
_______________________________________________
dane mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dane