David Ross wrote:
> After reviewing the discussion in this thread (and other threads),
> I must conclude that the whole approach to developing a policy is
> flawed. A policy should represent specifics based on a more
> general philosophy, but I don't think the philosophy itself is
> clear in this case.
This is an excellent comment which I'm going to take to heart. I have concluded that it would be very useful for me to write and post a "meta-policy" document that clarifies the underlying type of policy I personally want to see us develop, and why that policy has the features that it does; this would in essence outline the more general philosophy behind the policy itself.
The first question that must be answered is: Why continue developing Mozilla? I would hope the answer does NOT revolve around an exercise in computer science but instead reflects a desire to create a high-quality software application for personal and commercial use -- an application for the real world.
Yes, but additional background is useful here: With the founding of the Mozilla Foundation the explicit focus of the project is now indeed to produce an end user software product. (Prior to that the nominal focus was to produce a developer product from which others would create an end user product.) So, yes, we do want to create an "application for the real world".
However, although Mozilla is an end user product, it is not a commercial proprietary product but rather a non-commercial open source product. IMO that has implications for what users' expectations are, or at least should be, both in general and in the area of security in particular.
Note carefully: I am *not* saying that users should have lower expectations regarding the quality and security of non-commercial open source products like Mozilla. Rather I am saying that users do (or should) have different expectations about how that quality and security is going to be maintained in practice.
For a commercial proprietary product a user's expectations are (or should be) something like this:
* I've paid a vendor good money for this product (whether directly or indirectly, e.g., for a bundled product like IE).
* The vendor has total control over this product and how it's developed (since it's a proprietary closed source product).
* If the product has bugs, including security flaws, then I expect that the vendor will take the money that I and others have given it and through its own efforts (and no one else's) will provide the necessary resources (people, systems, etc.) to fix the bugs and provide me with a better product in the future.
* If this proves not to be the case then I will lose faith in the product and the vendor, and will look for an alternative vendor and product.
On the other hand, for a non-commercial open source product like Mozilla a user's expectations are (or should be) something like this:
* I've paid nothing for this product, and the licensing terms are such that I can do pretty much anything with it, including modifying it using the source code, redistributing it, and so on.
* The organization (or individual) distributing the product doesn't own or control all the resources (people or otherwise) used to develop the product.
* If the product has bugs, including security flaws, then I expect that the product's distributor and/or others involved with the product will have established processes that maximize the probability that the bugs will be fixed and that I will be provided with a better product in the future.
* If this proves not to be the case then I may lose faith in the product, the processes, and the distributor and/or others that are involved with them, and I may look for an alternative product. On the other hand, I may decide to try to fix my own problems (which is possible since I have the source code and necessary rights to that source), or I may decide to participate in the processes myself and help make them more effective at fixing the bugs that I and possibly others have found.
Now, you may say: "So what? What does this difference, if indeed it is real, have to do with anything, including the policy we're discussing?" I'll come back to this question further on in my comments.
> If Mozilla is intended for real use, the next question is: Who
> uses Mozilla? Given my hope for the answer to the first question,
> the answer to this question should be: Anyone who uses the
> Internet. This means that most Mozilla users are not truly
> sophisticated software experts.
Agreed, and more specifically most Mozilla users are not security experts.
> The answer to the second question raises the next question: In
> that context, how are (not how should) CA certificates used?
> Clearly (at least to me), the answer is: The primary and most
> important use of a CA certificate is to provide the Mozilla user
> with assurance that (1) a critical Web site is indeed what it
> purports to be and (2) sensitive data communicated to a Web server
> travels across the Internet securely.
This is true for web server certificates. With CA certificates issued for email use (e.g., for S/MIME) we have the somewhat different expectation that the certificate will provide assurance that the entity signing a signed email message is in fact the entity who controls that email account. (In other words, if I receive signed email with an accompanying certificate that lists "[EMAIL PROTECTED]" as the email address, that the message really came from whoever uses and controls the [EMAIL PROTECTED] email account.) And we have yet other expectations for CA certificates issued for use in signing downloadable executable code, etc.
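To make the S/MIME expectation concrete, here's a minimal sketch of the check a mail client performs when it receives a signed message. The cert dictionary and function name are hypothetical simplifications: a real client would parse the rfc822Name entries from the signer certificate's subjectAltName extension rather than read them from a dict.

```python
def smime_address_matches(cert, from_address):
    """Return True if the email address bound into the signer's
    certificate matches the From: address of the message.

    `cert` is a simplified stand-in for a parsed X.509 certificate;
    a real client would extract the rfc822Name values from the
    certificate's subjectAltName extension.
    """
    # Email addresses are compared case-insensitively here for
    # simplicity; real clients apply more careful comparison rules.
    cert_addresses = {a.lower() for a in cert.get("email_addresses", [])}
    return from_address.lower() in cert_addresses

# A certificate that really binds the sender's address verifies:
good_cert = {"email_addresses": ["alice@example.org"]}
assert smime_address_matches(good_cert, "Alice@Example.org")

# A certificate issued for some other account does not:
other_cert = {"email_addresses": ["mallory@example.org"]}
assert not smime_address_matches(other_cert, "alice@example.org")
```

The point of the sketch is simply that the assurance rests entirely on the CA having verified, before issuance, that the certificate's email address really belongs to the key holder.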
> If this chain of questions and answers is valid, then the Mozilla
> Foundation has an obligation to those who use its products to
> authenticate not only the validity of each CA certificate in the
> default database but also the integrity of the CA's process of
> issuing and signing Web server certificates with that CA
> certificate.
I pretty much agree. I think the responsibility is in practice divided among multiple parties, since the Mozilla Foundation doesn't own and control all aspects of Mozilla development. But the Mozilla Foundation is indeed responsible for the product that it distributes.
> This requires specific, objective, and verifiable criteria for
> authenticating both validity and integrity.
Ah, here's where I think opinions might begin to diverge. (Actually, based on Ian Grigg's comments here and elsewhere I suspect his opinions may have diverged a comment or two back -- but I'll let him speak for himself.)
Let's take a moment to discuss this supposed need for "specific, objective, and verifiable" criteria. In particular, recall that I claimed in another message (and have not yet been contradicted) that CA cert-related "bugs" (e.g., including a cert for a CA that did not perform its proper functions) are simply a special class of security vulnerabilities in general, and are formally equivalent to other security vulnerabilities in the sense that the effects on the user may be equally serious, and in some cases identical or nearly so.
As a concrete example, recall the recent vulnerability in IE -- and to some extent Mozilla -- regarding display of URLs to a user. The net effect of this vulnerability was that a user thinking they were accessing one web site (e.g., http://www.onlinebank.com) ended up accessing another site (e.g., http://www.badguys.org) instead, with little or no indication that this had happened. This is basically the same situation that could be caused by a CA issuing a "www.onlinebank.com" server certificate to the wrong person/entity. (And IIRC use of SSL/TLS would not have protected the user here, since the attackers could have gotten a valid cert for "www.badguys.org", and the browser would be checking that cert against the "real" URL -- i.e., the one being accessed -- as opposed to the URL as falsely displayed to the user.)
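The parenthetical point above, that TLS checks the certificate against the URL actually being accessed rather than the URL shown to the user, can be sketched as follows. The data structures are hypothetical simplifications of real certificate validation; a real browser matches the connected hostname against the certificate's subjectAltName entries.

```python
def tls_hostname_check_ok(cert, host_actually_accessed):
    """TLS hostname verification: the certificate is checked against
    the host the browser actually connected to. Nothing in the
    protocol knows what URL was *displayed* to the user."""
    return host_actually_accessed in cert["hostnames"]

# The attackers obtain a perfectly valid certificate for their own domain...
attackers_cert = {"hostnames": ["www.badguys.org"]}

# ...the browser *displays* the spoofed bank URL, but it actually
# connects to the attackers' server, so hostname verification passes
# and SSL/TLS raises no alarm:
displayed_host = "www.onlinebank.com"   # what the user sees (spoofed)
actual_host = "www.badguys.org"         # what the browser connects to
assert tls_hostname_check_ok(attackers_cert, actual_host)
```

This is why the URL-display bug and a CA mis-issuing a "www.onlinebank.com" certificate have essentially the same effect on the user: in both cases the machinery works as designed, but the user's belief about which site they are talking to is wrong.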
So, if CA cert-related vulnerabilities are formally equivalent to non-CA related security vulnerabilities and vice versa, and if decisions on including CA certs require "specific, objective, and verifiable" criteria, then logically we should also specify and apply such criteria for everything else in Mozilla related to user security.
But in fact we don't do this, even though such criteria exist (e.g., the Common Criteria and related standards). Instead we depend on the "three P's": people, processes, and publicity. The Mozilla project (under the ultimate direction of the Mozilla Foundation) puts its trust in designated "module owners" responsible for particular code areas, requires that those module owners and others follow particular processes in developing and maintaining Mozilla (e.g., use of Bugzilla, review and super-review, etc.), and does all that in a public manner, where the details of the code and processes are open to public review.
As it happens, handling security vulnerabilities doesn't fully follow this model, since the process isn't totally open at all times and in all aspects. This was not for lack of trying -- the actual processes recommended by mozilla.org policy were the result of a compromise between the "full disclosure" position and the "fix in private" position. But that doesn't change my essential point -- the Mozilla project has never applied specific, objective, and verifiable criteria to all aspects of Mozilla security, and doesn't seem to have especially suffered for not doing so.
> I advocate third-party audits because those criteria already exist
> and are already being applied through such audits.
But as I mentioned earlier, mandating independent audits 1) imposes other costs (really externalities in the economic sense) that are borne by the Mozilla project and Mozilla users, and 2) may not actually be an appropriate form of security risk mitigation in all cases.
Rather than repeat my previous comments addressing these issues in the context of CAs and CA auditing, let's turn to a similar issue in another closely-related context, namely independent auditing of cryptographic implementations according to FIPS 140-x and related standards.
As it happens the Mozilla project was the beneficiary of a fortunate historical accident: It was able to take advantage of a high-quality field-proven open source cryptographic implementation, namely NSS, that had also been FIPS 140-1 validated.
But let's turn back the clock a few years and suppose that NSS never existed, and that the only available open source crypto library were OpenSSL, which at the time was not FIPS validated. Let's further suppose that there were another alternative choice, a proprietary crypto library (call it "ClosedSSL") whose vendor had made it available in binary form on the main Mozilla platforms (Windows, Mac OS, and Linux), with license terms permitting it to be included in Mozilla and redistributed at no charge.
If you had to pick which crypto library to include in Mozilla, which would it have been: OpenSSL, a product with source code available and a fairly public development process, but no formal validation against specific, objective, and verifiable criteria, or ClosedSSL, a product formally validated against specific, objective, and verifiable criteria but developed behind closed doors with source code not available?
I think reasonable people could decide either way and justify the choice. However I can tell you what I would have done: I would have recommended use of OpenSSL instead of ClosedSSL, for at least two reasons:
First, use of an open source product that could be reviewed in the public eye would have been consistent with practices and processes in the rest of the Mozilla project. Otherwise we would have been able to take advantage of public review and distributed bug detection and fixing for the rest of Mozilla, but would have been hampered in attempting to find and fix potential bugs in the crypto library. This would mean that we couldn't leverage the distributed nature of open source bug fixing with regard to the crypto library, and that the reputation of Mozilla as a whole could be compromised by problems with a product (ClosedSSL) over which we had no control or oversight.
Second, use of an open source product would help enable Mozilla to be ported to more platforms, including platforms that the vendor of ClosedSSL did not support and might not be interested in supporting. This list of otherwise "deprived" platforms might have included OS/2, the various *BSD distributions, non-Red Hat distributions of Linux, Solaris, HP-UX, AIX, Irix, and others. Most people may not care whether Mozilla is available on, say, OS/2, but I can guarantee that the users of OS/2 care a lot, and the widespread availability of Mozilla on lots of different platforms has been a major factor in its popularity and success thus far.
So in this case the informal "validation" made possible by public review of open source code would trump the formal validation of closed code against specific, objective, and verifiable criteria, at least for me. Based on the market success of OpenSSL over the years I think a lot of people hold the same opinion as I do. As it happens OpenSSL is now being validated against the FIPS 140-2 criteria, but note the cause and effect: OpenSSL is being validated because it became so popular that its user base came to include users for which FIPS validation was important, but the popularity of OpenSSL had nothing to do with whether it was FIPS validated or not.
This ties back to Ian Grigg's comments about "markets" in this context. I don't agree with everything Ian writes, but I think this line of thinking can be fruitful, particularly with regard to the role and value of independent auditors:
If we look at why we have independent auditors in the case of public companies, it's in large part because most of what goes on in any company is closed to public view. Investors don't have access to detailed internal sales forecasts, or customer lists, or development plans, or other things that they might use to evaluate a company. So we have independent auditors who are in a sense "stand-ins" for investors, and who have access to information that investors are denied.
But at the same time independent auditors can't be complete stand-ins for investors. For one thing, the auditors are paid by the company, and so their interests are not 100% aligned with investors: Although the vast majority of individual auditors and audit firms may act in a manner beyond reproach, there is always at some level the temptation to "fudge" the results, and there is almost always someone somewhere who succumbs to that temptation, at least to some extent.
Besides whatever other virtues it might have, the requirement for specific, objective, and verifiable criteria can be seen in one light as a response to the issues raised by the temptations inherent in the role of paid independent auditor: By tightly restricting the "degrees of freedom" available to auditors, we make it more difficult for auditors to "bend the rules" to help a company obtain a favorable evaluation.
However in public markets like the stock exchange investors still don't put complete trust in the results of corporate audits, no matter how carefully conducted. They also take into account any other information available to them, and the final value assigned to a company is based on the totality of information known about a company, of which the audited results are only a part. If a company's operations were significantly more transparent than they typically are today (and a number of people have recommended that companies do this), then IMO the audited results would be an even smaller factor in determining perceived company value.
If you substitute "users" for "investors" and "CAs" for "companies" (the "auditors" are still "auditors") then I think you pretty much capture the essence of what Ian Grigg is saying (or at least what I take him to be saying).
So, to turn once again back to the case of deciding which CA certs to include, a possible alternative policy would be for the Mozilla Foundation to assign this task to a particular "module owner" and require that they follow normal Mozilla project processes when making their decisions: track requests and comments on them in Bugzilla, supplement with discussions in public forums, and take public comments and publicly-available information into account when making the decisions. There would be no specific, objective, and verifiable criteria outlined as part of the original policy; any such criteria would emerge as part of the public decision process, and any particular decision might apply some criteria but not others.
Now I suspect that whatever policy I end up proposing will in fact include a large dose of specific, objective, and verifiable criteria for CAs. That's because any policy, including this one, is a product of compromise, and there are a lot of people who think formal criteria are important in this context. I think it will be much easier to get a policy completed if we include enough formal criteria to satisfy most people concerned about this.
> In the end, the real question is: Can we trust and rely on the CA
> certificates in the Mozilla default database to protect our
> privacy and our assets?
I respectfully disagree. The real question is: Can we trust and rely on the Mozilla project to produce a product that properly protects the security of its users? The whole CA cert scheme is but one aspect of that.
> The answer to that question will
> determine whether we can trust the Mozilla Foundation, which needs
> to clarify the underlying philosophy upon which the proposed
> policy should be based.
I agree that we need to clarify the underlying philosophy, which is why my next task is to create the "meta-policy" I mentioned above. Only then will I feel comfortable creating a new revision of the proposed policy and FAQ.
Frank
--
Frank Hecker
hecker.org

_______________________________________________
mozilla-crypto mailing list
[EMAIL PROTECTED]
http://mail.mozilla.org/listinfo/mozilla-crypto
