Katriel,
I’ve also been thinking about a threat model, though your writing is much
clearer than mine!
Thanks.
tl;dr: (a) There’s a lot of text about the details of revocation that I thought
was out of scope for CT. (b) A client specification may answer many of the
questions you raise about Monitors.
Well, revocation, per se, may not be in scope for CT, but unless we can
specify what actions can/will be taken in response to the detection
afforded by CT, this seems like a hard sell.
On Thursday, 11 September 2014 at 19:08, Stephen Kent wrote:
Certificate Transparency (CT) is intended to detect and mitigate...
Is it intended to mitigate problems? (-04) says so but immediately follows it
with “the logs do not themselves prevent misissue, but they ensure that
interested parties (particularly those named in certificates) can detect such
misissuance”
Fair point. I'll revise that text if there is general agreement to do so.
A fairer introduction might be “CT is intended to allow subjects to detect…"
Well, it's not necessarily Subjects, in the PKI sense. 6962-bis suggests
that a Subject can ask a Monitor to look for certs that have been issued
without the authorization of the Subject. Only if every Subject were to
perform Monitoring for itself would your text be accurate. Still, this
will be easy to fix.
A certificate is characterized as mis-issued if the certificate is issued to an
entity that is not authorized to represent the host (web server) named in the
certificate Subject field or Subject Alternative Name extension.
Or doesn’t follow certain regulations, or some other definition (e.g. comes from a
root that the CA can but does not use to issue certificates). I think it should be
up to individual Monitors to define what they consider to be mis-issued, though it
makes sense to give a minimal criterion, and hence that the text should say
something like “In the following, a certificate is characterised as mis-issued if…
but any Monitor may choose to use an alternative definition, since CT is
definition-agnostic."
"Regulations"? We need to be precise. Ben replied with a much more
general characterization of the term, which I believe is derived from
CABF docs. I agree that we don't want each Monitor to make up its own
criteria; I don't agree that a "minimal" set of criteria is a good idea.
I would like to see uniformity in the criteria, based on published
standards, so that everyone knows what will be OK, and what is not OK.
Certificate mis-issuance may arise in one of several ways. The ways that CT helps detect and remedy mis-issuance depend on the context of the mis-issuance.
I think the key distinction to be made is between contexts that cause the
certificate to be submitted to a non-malicious Log Server, and those that do
not. As you say, in the former case a Monitor tracking the targeted Subject
will detect the mis-issued certificate and set off the out-of-scope revocation
process. In the latter, TLS clients which are not backwards compatible will
refuse to accept the certificate, while TLS clients which do not yet enforce CT
will act as they do today.
Submission to a log server, or lack thereof, is part of the analysis, as
noted later. We may disagree on what is the best way to structure the
taxonomy.
Once the certificate is detected, is the revocation process (e.g. what data the
CA requires) in scope for this document?
I included the topic of what info is needed to help justify what is
covered by an SCT. The (initial) analysis I performed noted that the
serial number is not required in all cases, and it may not suffice in
others (a malicious CA). So, to the extent that we are discussing what
data needs to be covered by an SCT, this part of the analysis is needed,
and thus in scope.
1. If a CA submits the bogus certificate to logs, but these logs are not watched by a Monitor that is tracking the targeted Subject, CT will not mitigate a mis-issuance attack. It is not clear whether every Monitor MUST offer to track every Subject that requests its certificates be monitored. Absent such a guarantee, how do TLS clients and CAs know which set of Monitors will provide "sufficient" coverage? Unless these details are addressed, use of CT does not mitigate mis-issuance even when certificates are logged.
I understood Monitors to be run by Subjects (or paid by Subjects to do their
tracking for them), so I see no reason why they MUST offer to track anybody.
It’s log servers for which you need sufficient coverage, though, not Monitors:
one of the latter will suffice as long as it watches all the former. I don’t
know what the current definition of “the set of all log servers” is (*); for
the moment I assume it is defined by Google’s list for Chrome.
I don't recall that the text made it clear who was running Monitors. If
every Subject had to run its own Monitor, it might be hard to argue that
CT will provide very wide coverage in the near term. Yet, near term,
broad coverage is precisely the argument made to justify the need for
pre-certs. So, there is an important question here. I could easily
envision Monitors that are paid by Subjects to watch over certs. But, if
that is part of the model, the doc needs to say so. One Monitor, with a
"pay for protection" business model, probably would NOT suffice. Also,
relying on one company's list to define the set of log servers is not
the sort of approach we follow in IETF standards, so ...
I agree it’s important to specify how Monitors can find out the set of log
servers. (e.g. it’s Very Bad if I run my own monitor but don’t update its list,
since then a mis-issued certificate submitted to a new log server will pass me
by.)
3. If a TLS client is to reject a certificate that lacks an embedded SCT, or is not accompanied by a post-issuance SCT, this behavior needs to be defined in a way that is compatible with incremental deployment. Issuing a warning to a (human) user is probably insufficient, based on experience with warnings displayed for expired certificates, lack of certificate revocation status information, and similar errors that violate RFC 5280 path validation rules.
What’s the precedent for defining incremental deployment in this type of document? I
suppose the wording would be something like “TLS clients wishing to respect
incremental deployment MAY choose to accept certificates without embedded SCTs,
but…"
The IETF hates flag days; we almost never approve a protocol design that
relies on all instances of something switching to a new something in
unison. We're very big on backwards compatibility :-).
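To make the incremental-deployment idea concrete, here is a minimal sketch (purely illustrative, not from any draft) of how a TLS client might phase in SCT enforcement without a flag day: certs issued before a vendor-chosen cutover date are accepted without an SCT, while newer certs require one. The cutover date and function names are my own assumptions.

```python
# Hypothetical sketch of phased SCT enforcement; not from any draft.
from datetime import date

# Assumed cutover date, chosen by the client vendor; illustrative only.
ENFORCEMENT_DATE = date(2015, 1, 1)

def accept_certificate(cert_not_before: date, has_valid_sct: bool) -> bool:
    """Require SCTs only for certs issued after the cutover date."""
    if has_valid_sct:
        return True
    # Backwards compatibility: older certs are accepted without an SCT,
    # so deployment can proceed incrementally rather than via a flag day.
    return cert_not_before < ENFORCEMENT_DATE
```

Under such a policy, enforcement tightens naturally as pre-cutover certificates expire.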
4. The targeted Subject might request the parent or the offending CA to revoke the certificate of the non-cooperative CA. However, a request of this sort may be rejected, e.g., because of the potential for significant collateral damage.
Again, is this in scope for CT? This issue exists in today's infrastructure
just as much (cf. when people removed DigiNotar from trust lists).
The analysis needs to address what happens in response to detection of
mis-issuance, so that the community can decide if the burden of
deploying a big system of this sort is justified by the benefits.
Absent a protocol (a so-called "gossip" protocol) that enables Monitors to verify that data from logs are consistent, CT does not provide protection against logs that may conspire with, or be victims of, attackers effecting certificate mis-issuance. Even when a gossip protocol is deployed, it is necessary to describe how the CT system will deal with a misbehaving or compromised log. For example, will there be a mechanism to alert all TLS clients to reject SCTs issued by such a log?
I believe this is the same question as at (*): who controls the list of trusted
logs? One assumes that anybody doing so (e.g. Chrome) would also run a Monitor,
and thus detect misissuances and remove said logs from their list the same way
they would have removed DigiNotar.
Chrome is a browser; Google is the vendor. If a vendor operates a
Monitor, it can detect some types of mis-issuance where knowledge of
Subject cert info is not needed. For example, if the cert issuance
criteria require a minimum of 1024-bit RSA keys, any Monitor can detect
a cert with a 512-bit key, and flag it as "bad." But, if one is trying
to detect that a cert for foo.com was issued to the wrong entity, one
needs to have authoritative info about the real foo.com cert, or
knowledge that foo.com has never requested a cert. A browser vendor
would have that info for itself, but not for web sites in general. So, I
disagree with your analysis that anyone who runs a log, such as a
browser vendor, is in a position to detect the form of mis-issuance that
I had assumed was the primary motivation for CT. I agree that any entity
running a log could choose to run a Monitor, but by itself, one Monitor
cannot necessarily discover a misbehaving log.
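The Subject-independent check described above is easy to picture in code. This is a sketch only (no real Monitor API is implied; the data model and 1024-bit threshold are just the example figures from this discussion): a Monitor scans logged certs and flags any RSA key below the policy minimum, with no knowledge of the Subject.

```python
# Sketch of a Subject-independent Monitor check; names are illustrative.
from dataclasses import dataclass

MIN_RSA_BITS = 1024  # example policy threshold from the discussion

@dataclass
class LoggedCert:
    subject: str   # e.g. "foo.com" -- NOT needed for this particular check
    key_type: str  # "RSA", "EC", ...
    key_bits: int

def flag_weak_keys(entries):
    """Return entries any Monitor could flag as 'bad' without Subject info."""
    return [e for e in entries
            if e.key_type == "RSA" and e.key_bits < MIN_RSA_BITS]
```

Note what the sketch does not do: deciding whether foo.com's cert went to the wrong entity would require authoritative Subject info that only foo.com (or its delegated Monitor) holds.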
I think a section specifying the actions of TLS clients should answer a lot of
these questions. Such a section would say in part that clients must keep lists
of CAs and log servers, maintained either by themselves or by a trusted third
party, and that the signatures on the cert and the SCT should be from entities
in the respective lists. Whoever maintains the list undertakes to remove both
bad CAs and bad log servers if they are detected.
Yes, these are topics that should be addressed in the description of TLS
client behavior.
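For the record, the client behavior proposed above can be sketched in a few lines. This is a toy illustration under my own assumptions (list contents and function names are invented): the client keeps lists of trusted CAs and trusted log servers, accepts a cert only when both the cert's and the SCT's signers appear on those lists, and whoever maintains the lists removes detected bad actors.

```python
# Toy sketch of the proposed TLS client checks; all names illustrative.
TRUSTED_CAS = {"Example CA 1", "Example CA 2"}
TRUSTED_LOGS = {"Example Log A", "Example Log B"}

def client_accepts(cert_issuer: str, sct_log_id: str) -> bool:
    """Both the cert signer and the SCT signer must be on the lists."""
    return cert_issuer in TRUSTED_CAS and sct_log_id in TRUSTED_LOGS

def remove_bad_actor(name: str) -> None:
    """The list maintainer's undertaking: drop detected bad CAs and logs."""
    TRUSTED_CAS.discard(name)
    TRUSTED_LOGS.discard(name)
```

The open question from earlier in the thread remains: who maintains these lists, and how clients learn of updates.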
Monitors also play a critical role in detecting certificate mis-issuance, for Subjects that have requested monitoring of their certificates. Thus Monitors represent another target for adversaries who wish to effect certificate mis-issuance. If a Monitor is compromised by, or is complicit with, an attacker, it will fail to alert a Subject to a mis-issued certificate targeting the Subject. This raises the question of whether a Subject needs to request certificate monitoring from multiple sources to guard against such failures.
Subjects must choose a trusted Monitor because the Monitor is by definition the
thing that watches the logs; they can do this themselves or they can delegate
to a third party, in which case they rely on that party not to be compromised.
This doesn’t seem particular to CT: if I install a burglar alarm either I have
to monitor it, or I have to pay someone else to monitor it and trust them to
respond if it goes off, or I have to accept that the alarm might be ignored.
Your analogy is reasonable. Nonetheless, the topic of trust in Monitors
has a place in the attack analysis.
Steve
_______________________________________________
Trans mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/trans