Dear Ryan,

Thanks very much for this insightful email. There really is a lot that
I and others don't know about how these decisions are made.

The silver lining here is that we agree on where some of the gaps are in
this process, and that Mozilla, Google and others are working on filling in
these gaps, as you say. I would argue that the existence of so many
conflicts of interest, intricacies and complexities among the multiple
stakeholders in such decisions makes it all the more urgent to fill in
these gaps quickly and completely.

If the existing documentation is insufficient to provide a full set of
distinguishers on the intricacies of this process, then it stands to reason
that it should be improved. If a certain terminology is too broad, it
stands to reason that it can be made less broad. If layman's terms are
deployed for non-layman concepts, it stands to reason that this should be
modified and the underlying concept elucidated. If incompetent auditors
cannot be differentiated from competent auditors, it stands to reason that
this can be addressed. If areas exist where conflicts of interest are
likely, it stands to reason that policies can be expanded to prevent these
conflicts of interest from occurring.

So long as we can continue to point to specific problems and shortcomings,
which you do masterfully and to great public service in your email, it will
always stand to reason that we can improve our policies such that the gaps
are filled. And again, it's wonderful that Mozilla, Google etc. are working
on this.

Many times in this discussion, we have all been offered a choice between
two paths. The first path would be to examine difficult problems and
shortcomings together and attempt to present incremental--often
onerous--improvements. The second path would be to just say that someone
should trust us based on years of subjective experience. In many, many
cases, the latter really is a wise thing to say and a correct thing to say
(and I truly mean this); it offers a path through which judicious decisions
are often made. Furthermore, it is often a necessary path to take when time
is of the essence. But it is seldom the rigorous path to take, seldom the
path that serves future engineers and practitioners in the field, and
seldom the path that gives institutions the foundation and the standing
that they will need in the decades to come.

I sincerely appreciate the formidable passion with which you argue for your
positions, and am glad that someone like you holds the responsibility that
you do.

Thank you,

Nadim Kobeissi
Symbolic Software

On Wed, Jul 10, 2019 at 8:42 PM Ryan Sleevi <> wrote:

> On Wed, Jul 10, 2019 at 2:15 PM Nadim Kobeissi via dev-security-policy <
>> wrote:
>> Indeed I would much rather focus on the rest of the elements in the
>> Mozilla
>> Root Store Policy (
>> )
>> which are less vapidly authoritarian than the single clause you quote, and
>> which focus more on a set of audits, confirmations and procedures that
>> give
>> everyone a fair chance at proving the honesty of their role as a
>> certificate authority. For example, I find policy points 2.2 (Validation
>> Practices), 3.1.1 (Audit Criteria) and 3.1.4 (Public Audit Information) to
>> be much more of a fertile ground for future discussion.
> I appreciate that attempt to focus. However, it does again fundamentally
> misunderstand things in ways that are critical in demonstrating why this
> discussion is not productive or fruitful, and your suggestions are quite
> misguided.
> For example, judging by your replies, it seems you may not understand
> audits, what they are, or how they work.
> During an audit, someone who agrees to a voluntary set of professional
> standards, such as a Chartered Public Accountant, agrees to perform an
> audit using a specific set of Principles and Criteria. The Principles are
> very broad - for example, the three principles are "CA Business Practices
> Disclosure", "Service Integrity", and "CA Environmental Controls". These
> don't tell you very much at all, so then there are individual Criteria.
> However, the Criteria are very broad: for example: "The CA maintains
> controls to provide reasonable assurance that its Certification Practice
> Statement (CPS) management processes are effective."
> Now, you may not realize, but "reasonable assurance" and "effective" are
> not layman's terms, but refer to specific procedures that vary by country
> and professional standards (e.g. AICPA standards like the AT series or CPA
> Canada standards like CSAE).
> During the process of an audit, the auditor's role is primarily to look
> at things and say "Yeah, that sounds right". It is not, for example,
> adversarial, nor does it look for counterfactuals. It does not, for
> example, include
> specific steps the auditor must perform; those steps are merely
> illustrative. A CA may actually significantly fail in its management
> processes, but the auditor might determine that, even despite those
> failures, the assurance provided was still "reasonable" so as to be
> effective.
> The process starts with the auditor assuming the CA is doing nothing, and
> the CA showing positive evidence that supports each claim. Negative
> evidence can be, and is, overlooked if there are other positive controls
> to be used. Mozilla, in collaboration with Google and others, has been working
> for years to address this gap, but I believe it's reasonable to say that
> the existence of an audit is by no means a positive sign for a CA; it
> merely serves as a filtering function for those too bad to operate, and
> even then, only barely.
> You might expect that the auditor has skill and familiarity with PKI.
> However, that's a very subjective measurement itself. The WebTrust
> licensing body may or may not perform an examination as to the skills of
> the auditor. Like some participants here, the auditor might feel they're
> skilled in PKI and have a well-formed opinion, based solely on reading
> m.d.s.p. and thinking they understand stuff. It's very much a wild west.
> It's important to note that, at the end of this process, in which the
> auditor has been shown all this evidence, they make a subjective
> determination about whether or not they think it was "reasonable". Auditors
> are not required to reach the same conclusion, and indeed, professional
> standards discourage auditors from "checking each other's work". Unskilled
> auditors, of which there are many, are indistinguishable from skilled
> auditors. In all cases, their fiduciary relationship is with the CA, and
> thus they are bound by confidentiality and professionally prohibited from
> disclosing knowledge of adverse events, such as the CA "misleading" the
> public, provided that the CA made sure to exclude such things from the
> scope of the engagement.
> I mention all of this, because it seems you have a mistaken belief that
> PKI rests on objective criteria. It does not, nor has it ever. It has
> simply been "someone" (in this case, chosen by the CA) offering their
> opinion on
> whether it's likely that the CA will end up doing what they said they would
> do. It does not measure that what the CA says they'll do is what people
> trusting the CA may expect. It does not permit the auditor to disclose
> deception. And, at the end of the day, it's merely the auditor's
> "professional judgement", and with a whole host of disclaimers so that
> they're not personally responsible, should someone rely on that judgement.
> Perhaps, if you've read this far, you've come to realize that the thing
> you're taking unnecessary and unfounded umbrage over, which is the
> 'subjectivity' based on 'reasonable evidence', is and has always been the
> foundation for CA assessments. Perhaps, if you look deeper, you'll realize
> that there are a number of reasons not to trust such professional
> judgements, and why significant effort has been put to bring transparency
> to the /why/ the auditor is making that judgement, as well as consistency
> between two auditors. Perhaps, by now, you've realized that the fiduciary
> duty of the auditor to the CA means that there are significant conflicts of
> interest, which arguably should be inverted in order to serve a public
> good, or should be using criteria and controls developed and performed
> directly, rather than outsourced.
> In any event, I hope you'll realize that the process described, of
> examining evidence and looking to see whether you can be reasonably
> confident that things will work out, is exactly what's being proposed here.
> While I've described WebTrust, and ETSI would be its own thing to
> summarize, it similarly relies on an element of subjectivity in assessment
> that is fundamental to the establishment, or undermining, of trust.
> This may be unsatisfying, but it is hardly the deep affront to Mozilla's
> principles as has been suggested, and it's hardly an inconsistent
> standard. This is the same standard applied to all CAs, despite your
> suggestions otherwise.
> While you may choose to ignore People Magazine's exposé on a dentist whose
> patients keep dying, and you may choose to ignore multiple credible claims
> of sexual harassment and misconduct by a variety of sources, such actions
> would be misguided, at best. Someone looking to stay safe would do better
> by avoiding that dentist, avoiding professionally or personally engaging
> with that sexual predator, and, similarly, avoiding engaging with an
> organization with a concerning pattern of issues, on the sole basis that
> someone they hired said they were probably doing or going to do what they
> said they would, even if that may not be what you want them to do.
dev-security-policy mailing list