Frank Hecker wrote:
> Nelson B wrote:
>> Very little of this has happened historically because the existing CAs
>> now in mozilla's list have been very very good at not issuing "duff"
>> certs.
>> However, mozilla is now considering changing its standards for admission
>> to mozilla's trusted CA list. I think there is substantial risk of
>> increased "duff" certs (especially SSL certs) from this plan.
> This is a serious question, and I think it's worth looking in more depth
> into the extent to which this might be true; otherwise some might
> mistake your statement as simply FUD, and I don't believe you intended
> it that way.
You're right, I didn't. I appreciate your confidence in that.
You asked a very pointed set of questions, well written, pretty
concise. Good work. Wish I could do that as fast as you seem to!
> First, what's a "duff cert" in this context?
I apologize for using that term. I used it merely because the message
to which I replied used it.
I consider a duff cert to be any of the kinds of certs that have caused
me (and the NSS group) trouble over the years, including (but not limited
to) these:
1. certs with technical flaws, e.g.
- duplicate issuer names and serial numbers.
- invalid public keys (e.g. DSA cert with 2kbit primes,
RSA certs with public exponent == 1).
- incorrect extensions (e.g. SSL certs that exclude SSL usage, or
authority key IDs that include BOTH the key ID *AND* the
issuer's issuer name and serial number).
- invalid dates
- ASN.1 DER encoding errors.
2. Certs with false (and therefore inadequately verified) information
about the identity of the party to whom the cert is issued, info
that was verifiably false at the time of issuance.
3. Certs with NO INFORMATION about the party to whom the cert is
issued, except an email address or a domain name, or other
info that doesn't identify the party.
[see "binary UI security model" below for more on this.]
4. (recent addition) certs used with phishing, including:
- certs with names confusingly similar to other domains, e.g.
paypal-security.com
- certs with IDN/punycode names that look like well known names
but aren't exactly.
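As an aside, the first class of problems (technical flaws) is mechanically
checkable at issuance time. Here is a rough sketch of such a lint pass in
Python; the dict field names and the particular checks are illustrative
stand-ins for real DER-parsed X.509 structures, not any existing tool:

```python
def lint_cert(cert, seen_serials):
    """Flag 'duff' conditions on a parsed certificate.

    `cert` is an illustrative dict of already-decoded fields; a real
    linter would operate on DER-parsed X.509 structures.  `seen_serials`
    accumulates (issuer, serial) pairs across all certs from one CA.
    """
    problems = []

    # The (issuer, serial) pair must be unique: it is how relying
    # parties identify a certificate, e.g. in CRLs.
    pair = (cert["issuer"], cert["serial"])
    if pair in seen_serials:
        problems.append("duplicate issuer+serial")
    seen_serials.add(pair)

    # An RSA public exponent of 1 makes "encryption" the identity map,
    # so the key offers no protection at all.
    if cert.get("key_type") == "RSA" and cert.get("exponent") == 1:
        problems.append("invalid RSA public exponent (e == 1)")

    # An SSL server cert whose extendedKeyUsage omits serverAuth
    # excludes the very usage it was issued for.
    eku = cert.get("eku")
    if eku is not None and "serverAuth" not in eku:
        problems.append("EKU excludes SSL server usage")

    # Validity dates must be ordered.
    if cert["not_before"] >= cert["not_after"]:
        problems.append("invalid validity period")

    return problems
```

A CA that runs even this crude a check over every cert before signing it
would have avoided most of the problems in item 1 above.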
All these problems were present in the certs at the time they were
issued. A CA who does adequate technical validity checking and
adequate due diligence about the requestor's credentials will
pass all these tests.
IMO, a CA that issues certs that are technically invalid, or that
lack all credible identity info, should never be admitted to mozilla's
list ever, neither as "high assurance" nor as "low assurance".
You will note that points 2-3 relate to the issue of "high assurance"
vs. "low assurance". This is a crucial part of the issue, and I
will address it below.
> Second, you seem to be saying that under the past and current policies
> for adding certs to the Mozilla default set (as implemented by Netscape
> and then myself) there has been minimal issuance of "duff certs"
> (however defined), but that under the proposed new policy the issuance
> of "duff certs" could be substantially increased. That implies (at least
> to me) that such an increase in "duff certs" would be specifically
> caused by adopting the new policy, to the exclusion of other factors. Am
> I reading you right here?
Yes. Now before I respond to any more questions, I must speak to the
"binary UI security model", because it is crucial to my answers.
Sorry if this is a bit long winded.
Today, the mozilla products have a binary UI security model. The padlock
is either open or closed. Period. And for SSL, users understand the
closed padlock to mean "good enough for banking". In other words,
high assurance *and* strong crypto.
As long as that remains true, as long as the padlock is either open or
closed, and no other info is presented to the user IN THE MAIN WINDOW
on which the user can judge the quality of the cert/CA, then IMO the
standard for closing the lock on a web page is, and must remain,
"good enough for banking", and "high assurance". It would be UNETHICAL
for us to allow low assurance CAs to be treated identically to high
assurance CAs (appearing to be "good enough for banking"), yet the
present UIs provide no way for a distinction to be presented.
WHEN the UI can effectively represent in a way that's obvious at a
glance to all users, including the color blind, that a cert is
"good enough for banking", or merely "good enough for writing to your
old high school friend Joe, but not good enough for banking", *then*
(and not until then) it will be sensible to allow the root CA list to
include both high and low assurance CAs for SSL server usage.
MAJOR POINT, that binary UI model is strictly the result of UI decisions
made by the browsers, and not the result of NSS. The UI people MUST
be willing to devote more screen real-estate to security info before
the binary security model can be eliminated.
History: The model has not always been binary. In Netscape Navigator 3,
the browser used a key icon that had 3 states:
- broken
- short, with one tooth
- long, with two teeth.
Two teeth meant "good enough for banking", and one tooth meant
"better than nothing, but not good enough for banking".
Users really understood that distinction. Netscape users outside the
USA, who had to use "export" browsers that were not permitted to use
strong crypto, and hence never showed two teeth, were alarmed at the
lack of teeth, and refused to use the browsers for banking when the
bank's web site showed only one tooth.
In response to that, Netscape invented SSL Step-Up, which allowed
export browsers to use strong crypto (and show two teeth) for banking
web sites. This was a HUGE win for everyone. SSL Step-Up first
appeared in Communicator 4.02 (IIRC), but about that time, the UI people
decided that the key was too big, and too complicated (3 states instead
of merely two), so they replaced it with a lock that looked JUST LIKE
IE's LOCK (oh Joy!). And we've been stuck with that damned lock and
its two states ever since.
There are a variety of ways that this problem could be solved in the UI.
The UI could:
a) go back to an iconic model that has clearly 3 or more states, which
are OBVIOUS to users in the chrome of the MAIN WINDOW, without the user
needing to click or move the mouse to see the info,
b) go to a model that identifies the CA, and allow the user to decide
for him/her self whether the CA is high or low assurance. The CA could
be identified by text (a name) and/or a "FavIcon" as web sites are now.
The browser could help the user remember his decision, and allow him to
change it.
The first method is visually the most simple, but requires someone to
make a judgment call on the user's behalf concerning the assurance
level of the CA. I think this is least confusing for the users, but
more work for MF and NSS.
The second method requires the users to grow their awareness of CAs
(of which they know nothing today). It also requires more window
real-estate. But it keeps MF out of the judgment business, a
business that MF seems particularly loath to enter.
Next point: The problems described above have seldom ever been a
problem with the CAs already in the list, the ones who had to pay
$$$$ (either to Netscape or to WebTrust) to get it. I do not know
exactly why that is, (I'm sure someone here will be glad to tell us)
but here are some observations about it.
Most of the problems with technically erroneous certs have come from
the CAs that are free, mostly users of OpenSSL, many from universities.
The CAs with money don't have these problems, not very often.
CAs with money have not (in my experience) tried to issue certs that
completely hide the identity of the party to whom the certs are issued.
Yet many free CAs seem to want to do exactly that. It seems that they want
to provide anonymity more than assurance.
CAs with money have generally not (with one notable exception, in my
experience) issued certs with invalid public keys in them. People
trying to run free CAs seem to have this problem a lot.
In the last month, I have seen certs from one free CA that:
- had an invalid key
- had NO name but the DNSName, which technically belongs not in
the cert subject name, but in the "subject alt name". Had that
DNSName been where it belonged, the cert's subject name would have
been empty!
In that time, I have seen no certs from any other CAs with these failings.
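That naming problem is also mechanically checkable. A sketch, in Python,
of where a server cert's DNS identity is allowed to live; the argument
shapes are illustrative stand-ins for parsed X.509 fields:

```python
def check_server_cert_names(subject_dn, san_dns_names):
    """Check where a server cert carries its DNS identity.

    `subject_dn` is a list of (attribute, value) pairs from the cert
    subject; `san_dns_names` is the list of dNSName entries from the
    subjectAltName extension.  Both shapes are illustrative.
    """
    problems = []
    # The DNS name belongs in the subjectAltName extension, as a
    # dNSName.  A subject CN holding the DNS name is a fallback at
    # best; with the name moved to the SAN, an empty subject is legal.
    dns_in_subject = [v for (a, v) in subject_dn if a == "CN" and "." in v]
    if dns_in_subject and not san_dns_names:
        problems.append("DNS name in subject CN but no subjectAltName")
    if not subject_dn and not san_dns_names:
        problems.append("no identity at all: empty subject and no SAN")
    return problems
```

The cert I describe above trips the first check: its only name was the
DNSName, and it was sitting in the subject rather than the SAN.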
Now, why is this? Why is the distribution of problem certs so one sided?
Perhaps the CA operators with money at risk want to take the time to
get it right, and the freebies want to cut corners to remain low cost.
Perhaps the CAs with money are able to hire people who have learned
the standards and appreciate the value of standards conformance,
while the freebies think that OpenSSL *IS* the standard. :(
My point is this: the past policies of charging money or requiring a
stamp of approval that costs money have (IMO) served the mozilla community
well, in terms of keeping "duff" certs out. MF has not historically
set forth my "duff" criteria above, nor required that CAs avoid them,
yet magically, mozilla's trusted CAs have managed to rarely issue
duff certs!
Now, MF is apparently doing everything in its power to take away the
one thing that (IMO) has kept the standards of certs high, usably high.
IMO, MF needs to raise standards in other areas to keep the duff certs out.
If money is no longer a barrier, then the standards of quality must be
stated in other ways, perhaps technically detailed ways, to keep the
issuers of duff certs out.
It will simply be a disaster if more duff certs start to be encountered
by mozilla users in large numbers. In the past I have replied to
hundreds of bugs reports by ignorant OpenSSL users who accused mozilla
of being buggy because their invalid certs didn't work with mozilla.
But I'm all done with that. I simply won't do it any more.
If issuers of duff certs are admitted, the flood gates will open.
Products associated with a list that includes duff issuers will be a
laughingstock.
Frank, will *YOU* answer all those bug reports, explaining what's wrong
with their certs? If not, who will? Not me.
Now, I want to summarize this. IMO, it turns out that COST has been the
only factor responsible for the apparent success of PKI, having kept the
issuers of duff certs out, and having allowed the issuers of good certs
in. It wasn't the WebTrust criteria that kept them out, it wasn't the
ETSI TS 101 456, it wasn't nosy auditors, it was cost.
Perhaps cost has also kept out some potential issuers of good certs.
That is unfortunate. But in the security business, I think we have to
err on the side of caution, on the side of security. This isn't
"innocent until proven guilty". This is "untrusted until established
as trustworthy". The only value that PKI/crypto offers is trustworthiness.
If we lose that, we've lost the war.
If you eliminate cost as a barrier, you MUST erect another barrier that
will be as good as cost in keeping the duff issuers out (while the
binary model remains).
At present, Draft 11 doesn't seem to keep out issuers of certs that
have invalid keys, empty cert names, etc. Those things were kept out
by COST in the past. What will keep them out in the future?
Will Draft 11 keep out an issuer of certs with names empty except for
DNS names? Will draft 11 keep out issuers of certs with invalid keys?
> 1. I suspect that adding the other criteria (X9.79 and ETSI TS 101 456
> and 102 042) is not at issue,
Certainly those criteria are no weaker than WebTrust. Maybe even
stronger. But will they keep out issuers of certs with invalid keys,
with empty names? with invalid combinations of cert extensions?
> 2. Permitting "non-traditional" evaluation of CAs is certainly something
> that one could imagine increasing risks, if the people doing the
> evaluation don't do a good job.
Doing a "good job" has to be defined in such a way that a good job
keeps out duff issuers. I'm less worried that a non-traditional
evaluator will be dishonest than that he will not know what to look
for to keep out the duff. Certainly no financial auditor would.
> But this may or may not be at the root of your concern, so let me ask
> directly: In your opinion, would permitting such "non-traditional"
> evaluations contribute to the increased risk you perceive? If so, is it
> the only factor increasing risk? A major factor? Just a contributing
> factor relative to other factors (like the one I'll discuss next)?
> If you do believe that permitting non-traditional evaluations would
> increase the risk of "duff certs" being issued, what would you recommend
> we do? Tighten the requirements on when we'd allow such evaluations? Or
> drop the idea entirely, and require that all evaluations be done by
> authorized auditors and test labs?
Well, my first goal is: keep duff cert issuers out.
Now, if we can accomplish that by tighter requirements (and I think
that is possible), then that seems very desirable. That would be
my first choice.
If we cannot or are unwilling to take that approach, then yes, I'd
advocate continuing to require that all evaluations be done by
authorized auditors and test labs. We'd be falling back on the old
tried and true method of keeping duff issuers out. But that's not my
first choice.
> The consequence of the latter would be to keep CAcert.org out of the
> list permanently, or at least until they could afford a WebTrust or
> other audit. It would also keep other CAs out that haven't undergone
> WebTrust or equivalent audits; for example, I think there's a CA
> associated with a German university/research consortium that would be
> affected by this, and I think a couple of others as well.
It seems to me that the parties who would be CAs fall loosely into
two camps:
Camp a) the people who understand that being a CA isn't merely about
running OpenSSL to make certs, but that it's about trustworthiness,
about telling the truth to the people who would rely on them.
These people take the time to do it right.
From the outset their efforts appear to be well done, technically done
right, causing no problems for browsers, email clients, whathaveyou, etc.
They promote themselves on the good job they do, and the due diligence
in the verification of subjects' identities, in the trustworthiness of
the certs they issue, and (eventually) on their good track record.
Camp b) the people who don't understand that the value of a CA is its
trustworthiness. They think that CAs who charge money are just taking
money for nothing more than running OpenSSL. They think anyone who
can run OpenSSL can and should be a CA, making certs, and their certs
should be universally accepted, etc. Offering assurance is less
important to them than having and offering free certs.
From the outset, these people's certs have lots of problems. Invalid
keys, invalid extensions, invalid cert names, duplicated serial
numbers. They demonstrate unfamiliarity with and unawareness of the
standards. They seem unaware of the kinds of attacks that might
easily undermine their lax authentication, etc. But they're determined
to make their certs and be accepted alongside the folks in group a above.
They say "I haven't yet read the cert standards, but I demand to be
treated as an equal with Verisign, Thawte, Comodo, etc.".
They promote themselves on the basis that they don't charge money,
they can run OpenSSL as well as anyone else, they're not corporations,
they're low cost, they're non-profit, (did I mention free?), and
even on their egalitarian ideals, but NOT on their expertise, nor
their taking of care, nor their due diligence, etc.
The ideal is to let all persons in group a in, regardless of the money
they have or the money they charge (or don't charge), and keep all
persons in group b out, also regardless of the money they have (or
don't have) and charge (or don't).
> 3. Regarding requirements on CA validation of their customers, I
> certainly don't have any such requirements in my current "WebTrust or
> equivalent" policy, so in that sense the proposed new policy is more
> stringent than the old policy.
Somewhat, yes.
> Did Netscape have such requirements?
No, they required $$$$. Ever notice how the supply of PSM developers
ended at about the same time as the trusted CA money stopped? Hmmm.
> However with regard to SSL certs in particular I will note
> that there are already CAs issuing "domain-validated" certs, e.g., the
> Thawte ssl123 and Go Daddy TurboSSL services, and to my knowledge such
> CAs are already in the default Mozilla set and usable with Firefox, etc.
I don't know what you mean by "domain-validated". Sorry. So I cannot
speak to the worthiness of "domain validated" cert issuers. However,
I will say that I think domain registrars are almost ideal candidates
to be SSL CAs. They need to do due diligence, but they're already
getting the registrant's info. If they can verify it, and if
registering the domain isn't a problem (e.g. with phishing), then
who better than they to certify that so-and-so owns the XXX.XXX domain?
> (There may also be other CAs like this as well, but I haven't had time
> to do a thorough check.) Has this resulted in significant issuance of
> "duff certs"? Apparently not, or I presume you would have mentioned this.
The only problem with GoDaddy certs that has been reported to me is
that many of their cert holders have not been instructed on the
necessity of installing their full server cert chains into their
servers, so they serve incomplete cert chains, and get cert chain
validation errors. This is merely a user education issue, not a
security vulnerability.
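That failure mode is easy to illustrate with a toy chain builder over
(subject, issuer) pairs; note this does name chaining only, whereas a
real validator also verifies each signature in the chain. All names
here are made up for the example:

```python
def build_chain(leaf, pool, trusted_roots):
    """Try to link `leaf` up to a trusted root by issuer name.

    `leaf` and the members of `pool` are (subject, issuer) tuples;
    `pool` is what the server actually sent alongside its leaf cert.
    `trusted_roots` is a set of root subject names.
    """
    by_subject = {subject: (subject, issuer) for subject, issuer in pool}
    chain = [leaf]
    current = leaf
    while current[1] not in trusted_roots:
        issuer_cert = by_subject.get(current[1])
        if issuer_cert is None or issuer_cert is current:
            return None  # chain cannot be completed
        chain.append(issuer_cert)
        current = issuer_cert
    return chain
```

A server that serves only its leaf cert leaves the client unable to
reach the root unless the client happens to already have the
intermediate cached, which is exactly why these setups fail with
cert chain validation errors.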
> So, I ask you: In your opinion, would explicitly permitting such
> "domain-validated" certs (as opposed to the implicit permission we've
> apparently had previously) contribute to the increased risk you
> perceive?
I cannot answer that question without knowing all the implications
of the term "domain-validated certs". Sorry. You asked lots of
good questions about the relevance of domain-validated certs, and
I just don't know what that term means. So, I'm deleting the
paragraphs of questions about that term.
> Finally, let me consider some larger issues. Forgive me if I'm missing
> something, but I'm getting contradictory impressions here. On the one
> hand I get the impression that you and others believe that historically
> there have been minimal problems (and hence minimal risks to users)
> resulting from CA practices and the selection of CAs to be included in
> the Mozilla default set. Now that I'm proposing this new policy I get
> the impression that you and others believe it will result in significant
> problems and significant risks to users, even though the sorts of CAs
> and CA practices permitted under the new policy are AFAICT basically the
> same sorts of CAs and CA practices that were permitted under the old
> policy.
I understand. I submit that the COST of the old policies had the
mysterious and wonderful, desirable and yet coincidental, side effect
of admitting only serious rather-competent CAs, even though the old
policy lacked technical specifics related to identifying and prohibiting
duff behavior.
In the absence of that protective factor (which was undesirable in
many ways), MUCH MORE needs to be done (IMO) to draw a clear line
that duff cert issuers may not cross.
> (The major exception to this is the possibility of including CAs like
> CAcert.org based on non-traditional evaluations; but I'm not clear there
> whether non-traditional evaluations are the sorts of your concern or
> whether it's domain-validated SSL certs and other low-assurance certs.)
I have nothing in principle against free CAs, nor against CAs who use
web of trust (if done right and taken seriously). Indeed, I would
hold up Thawte as an example of such a CA that is apparently done
very well. I would very much like to see another free CA do as well.
> The only way I can personally resolve this contradiction is to conclude
> that it's not the policy in isolation which is at issue, it's the policy
> in combination with the new environment in which protecting against
> phishing has become a top priority. In other words, while existing
> policies (which in practice permitted things like domain-validated SSL
> certs) may have been good enough way back when,
Existing policies have succeeded largely (IMO) due to a component they
did not explicitly state: cost. The rest of the requirements were
less important than that one. With that removed, the rest of the
remaining (previously stated) requirements are not (IMO) good enough
to assure that a cert is "good enough for banking".
Regarding phishing: Phishing is merely one aspect of the problem of
lack of authentication with network peers. A victim of phishing is
dealing with someone over the net, while having a mistaken understanding
of who he is dealing with. This is the same problem as the MITM attack
presents, exactly the same. You think you're dealing with Alice, but
you're dealing with Evil Eve.
At the same time that phishing (one form of attack based on inadequate
authentication) is getting so much press, some who tout SSL to be a big
part of the solution have also called for allowing self signed certs to
be treated as equals with certs from high assurance CAs. Oy!
I'm not interested in SSL as a solution to phishing so much as in SSL
authentication as an ongoing assurance provider for banking at home.
In fact, I'm more concerned about the latter.
In any case, as long as the binary UI model remains, I want mozilla users
to have no doubt about using SSL with their bank.
> any new policy has to be
> more stringent in order to counter the increased threat; it's not good
> enough for us to just continue and formalize existing de facto
> practices, we have to add some more requirements over and above what has
> been done in the past.
Yes, I agree with that. At least as long as the binary model remains
the only model in the UI.
> Is this what you and others are arguing? If so, it's a perfectly
> legitimate argument to make. But if you and others want to make this
> argument (i.e., that policy needs to be more stringent in this new
> environment) then I believe it's incumbent on you and others to a)
> propose in some level of details what more stringent requirements we
> should actually impose, and b) explain how those requirements will
> actually reduce the risks experienced by ordinary users.
The users have benefited from the high levels of assurance offered
by the existing trusted CAs. They have come to expect that level of
assurance. To lower that level of assurance without providing any
notice to the users of same is IMO unethical, and will hurt users.
> I think such a detailed justification is necessary because there are
> consequences to adopting more stringent policies: We're talking about
> eliminating (i.e., no longer accepting as valid) uses of things like
> domain-validated certs that are currently in use (with apparently no
> problems resulting), and closing off the possibility of such uses in the
> future.
I think you're assuming that my position is anti-domain-validated certs.
Since I don't know what those are, I am not (presently) against them. :)
> We're in essence saying that use of SSL in e-commerce and
> financial applications is our primary concern, that the risks associated
> with SSL in such applications require us to adopt as stringent a set of
> rules as we can, and that all other uses of SSL have to play by the same
> rules, whether they make sense in other contexts or not.
The binary UI model causes that. We're (er, I'm) saying that as long as
the UI model remains binary, the standard must be "good enough for banking",
because that is what all users understand it to be. When the UI model
allows a clear indication of "not good enough for banking", then let the
low-assurance CAs in.
> I know you're busy and don't have time to create such a detailed
> justification, but I certainly wish someone would.
The person who is missing from these debates is the person responsible
for mozilla browser crypto security, the PSM guy, who doesn't exist.
Sigh.
It's late, I'm tired, eyes aren't really focusing here. I apologize
for typos.
Disclaimer: All descriptions of persons or groups of persons above
are based on composites of descriptions of many people. Any similarity
to any particular person, living or dead, is purely coincidental.
--
Nelson B
_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto