Frank Hecker wrote:
"CA knows otherwise"
This case seems like it's uniformly a bad thing; after all, a CA
operating under a "knows otherwise" policy would deliberately and
knowingly issue me or anyone else an SSL server cert for
www.paypal.com or www.citibank.com or whatever. Thus a policy that
rejected "knows otherwise" CAs would be consistent with protecting
typical users against a plausible threat, namely phishing attacks.
I would agree in general, but let's see...
This case is also pretty straightforward to craft policy language for;
the key is the "knowing" nature of the CA's actions, which in this
case would presumably be disclosed in the Certificate Policy or
Certification Practice Statement. (Otherwise the CA would be acting
contrary to the CP/CPS, and should not be able to pass an
audit/evaluation.)
Well, you might be lucky in that the audit/evaluation
would pick it up, but I wouldn't rely on that. In some
large proportion of cases, if the CA operates in this
mode, it will also conceal it from the auditor/evaluator.
Now, what would we do with a (hypothetical) request to add this
organization's CA cert to Mozilla/Firefox/etc? Well, we could
certainly reject it as not being relevant to typical Mozilla users,
being an intranet CA and not a public CA. Even if it were an
Internet-based service, it's still arguably not relevant to
typical users, since the only people who would really need the cert
are people configured to use the proxy in question.
However we'll go an extra step and assume for the sake of argument
that the CA in question issues "real" certs of interest to typical
users, in addition to the "fake" certs described here. If we still
wanted to reject including this CA cert (and I presume we would, based
on the phishing-related concerns) then we'd need either "catch all"
language as I've previously described (i.e., to allow rejection based
on general security concerns) or a specific policy provision applying
to the "CA knows otherwise" case.
One problem with this whole discussion is that you are trying
to predict a future case, *and* you are assuming that
someone is up to no good, so your conclusion is that once
found out, it is easy to deal with. Shaky ground.
Are there other possible "CA knows otherwise" use cases that at least
some people might consider legitimate, and that a policy might have to
allow for? I don't know -- my imagination is probably too limited. But
I'll assume for now that the answer is "no", and that having our
policy specifically address the "knows otherwise" case is both
possible and desirable.
Yes, consider the Verisign conflict of interest with respect
to their usage of the Lawful Intercept Service:
http://www.financialcryptography.com/mt/archives/000206.html
http://www.financialcryptography.com/mt/archives/000332.html
In that situation, one can say that Verisign could engage in
fake cert issuance knowingly. And we can pretty much guess
that the auditor would fall into line here. Further, it is quite
likely that we may have a suspicion of this activity, but the
company itself will be unlikely (due to the court's decree) to
permit any opening or discussion of said activity, so we would
never have sufficient proof of anything to support any policy
determination.
Now, is that legitimate? It's not in the interests of the user,
I think we can make that case. But, it is in accordance with
the processes of the law and courts, at some level. So making
a decision based on this is not easy.
"CA knows nothing"
Now let's turn to the case where the CA does not vet subscribers at
all, so, for example, subscribers are free to apply for an SSL server
cert under any domain name whatsoever.
I think as a technical cryptographic issue, the purpose of the cert
is to establish the domain name. This was the pure crypto reason
for its existence, as this closed the alleged MITM hole, whereby
some bad Mallory would sit at another place and pretend to be
both endpoints.
So, if the domain name is not being checked as controlled, then
the cert is no longer performing a cryptographic role. This does
seem to be at least something to pay slightly more than lip
service to.
Is that included in your "CA knows nothing" case? Or does the
CA literally hand out certs for Amazon.com to whoever asks?
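To make the "controlled" check concrete, here is a rough sketch of the
kind of minimal, automatable domain-control check a CA could run: the
CA hands the applicant a random token, the applicant publishes it at an
agreed URL on the domain, and the CA fetches it back. The URL path and
function names here are illustrative only, not any real CA's interface.

```python
import secrets
import urllib.request

def issue_challenge() -> str:
    """Generate a random token the applicant must publish on the domain."""
    return secrets.token_hex(16)

def verify_domain_control(domain: str, token: str, timeout: int = 10) -> bool:
    """True if the token is being served at the agreed URL on the domain."""
    # Hypothetical well-known path; a real CA would fix its own convention.
    url = f"http://{domain}/.well-known/ca-challenge.txt"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode().strip() == token
    except OSError:
        # DNS failure, connection refused, HTTP error: control not shown.
        return False
```

A check like this proves only that the applicant can publish content on
the domain -- nothing about who they are -- which is exactly the
"minimal sense" of control being discussed.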
The practical effect in terms of enabling potential phishing attacks
is pretty much the same, but the underlying CA intent may be
different, so I've classified this as a different case.
As far as phishing is concerned, even if the CA knows nothing,
the cert gives that all-important relationship hook.
(Not that I'm considering legal issues here, but this distinction is
reminiscent of the distinctions that people draw in the case of P2P
systems between a P2P network provider knowingly infringing copyrights
and such a provider simply providing a service with no detailed
knowledge about what people are using it for. Some people -- like
lawyers for the RIAA -- claim that this is a distinction without a
difference, but I and others would disagree, given the potential for
substantial non-infringing uses of P2P networks -- like distributing
copies of Firefox.)
Given the implications for phishing, one could argue that we should
also reject a CA whose policies permit "knows nothing" issuance of certs.
No, not at all. With classical phishing, the key to the defence
is to compare the real cert, already seen, against any other cert
presented later. Just the change is enough to base a warning on.
If MinimalCA issues an amazon look-alike, then the browser still
detects it as being different.
But, there are two phases in browsing, and they are both
distinct:
1. Introduction
2. Repeat visit.
Phishing is primarily a weakness in 2, and this can be addressed
with techniques of comparing one cert already known against
another not known.
Yet, if there exist MinimalCerts for given domains, the Introduction,
phase 1, is now wide open to an MITM. That's not as serious a
problem as phishing, but I can't see the point in winding back the
model so far that Mozilla transparently permits anyone to pretend
to be anyone.
(Such certs are logically equivalent to self-signed certs. There
is nothing wrong with self-signed certs, as long as they are
presented to the user for what they are. "This cert is not
signed by anyone important!" So in a sense, there is no point
in permitting MinimalCerts, because those are self-signed
certs and we already have those.)
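The repeat-visit comparison above can be sketched in a few lines:
remember the fingerprint of the cert seen at Introduction, and warn
when a later visit presents a different one. This is an illustrative
trust-on-first-use sketch, not Mozilla's actual certificate handling;
all names are made up.

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """Fingerprint of a DER-encoded certificate (SHA-1, as browsers display it)."""
    return hashlib.sha1(cert_der).hexdigest()

class CertMemory:
    """Remember the cert seen per domain; flag any change on repeat visits."""

    def __init__(self) -> None:
        self._seen: dict[str, str] = {}

    def check(self, domain: str, cert_der: bytes) -> str:
        fp = fingerprint(cert_der)
        known = self._seen.get(domain)
        if known is None:
            self._seen[domain] = fp   # phase 1: Introduction -- trust on first use
            return "first-visit"
        if known == fp:
            return "ok"               # phase 2: repeat visit, same cert
        return "warn-changed"         # cert differs: grounds for a warning
```

Note that this defence works regardless of who signed the cert, which
is why a "knows nothing" CA doesn't break the repeat-visit phase --
only the Introduction remains exposed.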
How could one do so in terms of the policy language (considering only
the SSL server case for now)? Perhaps we could require that CAs
implement "reasonable measures" to verify that applicants own the
domains associated with the certs (or are acting as authorized agents
for the owners). What exactly are "reasonable measures"? That's
subjectivity creeping in again. To go beyond that we really move into
the area of implementation and what is "enough" vs. "not enough", so
I'll postpone that discussion to the next and final case.
OK.
(Incidentally, this is another reason why I'm treating the "CA knows
otherwise" case as different from the "CA knows nothing" case, because
the relevant policy language would be different.)
If we decided to reject CAs with "knows nothing" policies, could that
negatively impact legitimate use cases relevant to typical users? One
possible legitimate use case I can think of is a free Internet-based
"test CA" service that crypto developers can use to get arbitrary certs
generated for use in testing their PKI-enabled software. We could
certainly reject such a CA's application (assuming we wished to do so)
based on its not being relevant to typical Mozilla users. What if
this test CA were combined with a more typical CA under the same root?
Then we're in the same situation I described with the "CA knows
otherwise" case: we'd have to take advantage of "catch all" policy
language allowing rejection for general reasons having to do with
security risk, or we'd have to have specific policy language
addressing this.
Are there other possible "CA knows nothing" use cases that at least
some people might consider both legitimate and of interest to typical
users, and that a policy might have to allow for? Again, I don't know,
but I'll assume for now that the answer is "no", and that having the
policy specifically address the "CA knows nothing" case is
desirable, although doing so is not as straightforward as in the
"knows otherwise" case. So on to the final case...
There are huge untapped security uses for MinimalCerts:
* all of email should start out using MinimalCerts,
because most email users already know each other
and already have as much trust in each other as
they are likely to ever want or can get.
* code signing can quite happily use MinimalCerts
just to protect downloads from external hacking
attacks (a known and rare but actual threat).
* anyone putting together a test or low volume site
can use a MinimalCert and later upgrade if it is
shown to need any additional protection.
(I'm using the term MinimalCert there to encompass
all three levels of certs discussed, interchangeably.)
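On the download-protection point, note the check only has to answer
"is this the same file the publisher vouched for?", not "who is the
publisher?". As a hedged, stdlib-only sketch (Python's standard
library has no asymmetric signing), here is just the
integrity-comparison core; in a real MinimalCert scheme the published
digest would itself be signed with the publisher's key. Names are
illustrative.

```python
import hashlib
import hmac

def digest(data: bytes) -> str:
    """SHA-256 digest of the downloaded file."""
    return hashlib.sha256(data).hexdigest()

def download_untampered(data: bytes, published_digest: str) -> bool:
    """True if the download matches the digest the publisher advertised.

    A real deployment would verify a signature over this digest with
    the publisher's (self-signed or minimally vetted) key; here we show
    only the integrity comparison, done in constant time.
    """
    return hmac.compare_digest(digest(data), published_digest)
```

This is enough to detect external tampering with a download mirror,
which is the rare-but-actual threat mentioned above, without any
identity vetting at all.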
"CA doesn't know enough"
Last (but not least) we consider the case of CAs whose policies
involve some sort of vetting of subscribers, but where in our opinion
we don't believe that the vetting is "good enough". This is an area
where I believe subjectivity is inevitable; let me be absolutely clear
on what I mean by this:
Certainly we can come up with some set of specific and "objective"
requirements on what CAs should do to vet subscribers: "provide full
name, address, date of birth, etc.", "show up in person with passport
or other national identity card", "provide evidence of organizational
affiliation on organization letterhead", and so on. That is what lots
of people have done, including ETSI and the Electronic Authentication
Partnership. But that is most emphatically *not* the point I am
making; the point of our proposed CA cert policy is *not* to be a "how
to prove your identity" checklist.
Rather the point is: how do we decide that a given set of measures to
vet CA subscribers -- the set of measures that we presumably want to
enshrine in our policy -- is the minimal set that is "good enough",
and that dropping even one minor element from that minimal set makes
the vetting "not good enough"? IMO we can make that determination only
in the context of a specific threat (or set of threats) as applied to
particular use cases.
Correct, such a determination can only be made with
broad consideration of the users, the application, the
security required, and the threats as validated as being
out there.
And that is where I think subjectivity is inevitable: you have to make
a somewhat subjective assessment on what the threats are, how likely
they are, and how serious they are. You also have to make a somewhat
subjective assessment of what use cases are legitimate and relevant,
and thus should be provided for in the policy, and what use cases are
illegitimate and/or irrelevant, and hence can be ignored as far as the
policy is concerned. IMO you can't completely apply analytical and
deductive processes here, even if you have some hard data regarding
threats, etc., because people may be proceeding from different axioms
in terms of their values and beliefs regarding the threats and use
cases -- for example, what one person considers to be an illegitimate
or irrelevant use case another person might consider to be a (or even
the) major reason to use the product.
Yes.
So, to summarize, here are the lines along which I'm thinking at the
moment:
1. It is desirable and possible to have policy language allowing
rejection of CAs with "knows otherwise" policies and practices.
OK. Just hypothetically, if we want to go down that
path, are you then prepared to proceed given the
example I gave above? Is everyone?
And if not, then what are we saying? We'd *like*
to reject "knows otherwise" behaviour, but we can't
quite stick to the policy; it's an ideal only?
(I do think we have to have this debate. Just how far
can we go being the policeman for the world's CAs
and the world's users?)
2. It is desirable to have policy language allowing rejection of CAs
with "knows nothing" policies and practices, with the exact language
depending on how we approach the "not good enough" case. (Since the
instant that we say "CAs must vet subscribers" then we immediately
raise the question of which types of vetting are "good enough" and
which are not.)
I think I would say that logically, based on the
cryptographic security properties of TLS, there
should be the case where each CA does enough
to show the domain as being controlled in some
minimal sense at least. The only reason for this
is that if we don't do this, there is no crypto point
in having CAs at all.
Now, that is a completely distinct point to whether
a CA does check the domain with high reliability.
I'm not saying anything towards the question of
whether the controls are good enough or otherwise,
and in fact, I think the browser has to be capable
of defending itself against exactly that, because
as this debate shows, we cannot figure out a way
to reliably control what the CAs do in every case.
3. It is possible to have policy language addressing the question of
whether CA's vetting of subscribers is "good enough", but it likely
will prove to be impossible to completely eliminate subjectivity in
the implementation of such policy language (i.e., in determining
whether a particular CA passes the test or not).
I agree. There's all these CAs doing good stuff out
there finding good markets and helping people use
TLS for new good purposes. It's not up to us to say
what's good or not.
But, it does mean that the user needs some way of
differentiating the options available to her.
4. Any policy language needs to take into account and separately
address the possible use cases and the relevant threats for those use
cases. For this policy we have three overall categories of use cases
-- for email certs, SSL server certs, and object signing certs -- and
then multiple use cases within those overall categories (e.g., for SSL
server certs we have HTTP/SSL for web sites vs. IMAP/SSL for email
servers).
Now let's see if I can crank out the next message right away, and not
keep you all in suspense :-)
iang
--
News and views on what matters in finance+crypto:
http://financialcryptography.com/
_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto