(Writing in a personal capacity)

I want to preemptively apologize for the length of this message. Despite
multiple rounds of editing, there's still much to be said, and I'd prefer
to say it in public, in the spirit of past discussions, so that these
remarks can be both referred to and (hopefully) critiqued.

These discussions are no easy matter, as shown by the past conversations
regarding both TeliaSonera [1] and CNNIC [2][3][4][5]. There have been
related discussions [6][7], some of which even discuss the UAE [8]. If you
go through and read those threads, you will find many messages similar to
those this thread has provoked, and from many similar organizations.

In looking at these older discussions, as well as this thread, common
themes begin to emerge. These themes highlight fundamental questions about
what the goals of Mozilla are, and how best to achieve those goals. My hope
is to explore some of these questions, and their implications, so that we
can ensure we're not overlooking any consequences that may result from
particular decisions. Whatever decision is made - to trust or distrust -
we should at least make sure we're going in with eyes wide open as to what
may happen.

1) Objectivity vs Subjectivity

Wayne's initial message calls it out rather explicitly, but you can see it
similarly in positions from past Mozilla representatives - Gerv Markham,
Sid Stamm, Jonathan Nightingale - and current ones, such as Kathleen
Wilson. The "it" I'm referring to is the tension between Mozilla's Root
Program, which provides a number of ideally objective criteria for CAs to
meet for inclusion, and the policy itself, which provides significant
leeway for Mozilla to remove CAs - for any reason or no reason - and to
take steps to protect its users and its mission, an arguably subjective
decision. This approach goes back to the very first versions of the policy,
written by Frank Hecker for the then-brand-new Mozilla Foundation [9][10].
Frank struggled with the issue then [11], so perhaps it is unsurprising
that we still struggle with it now. Thankfully, Frank also documented quite
a bit of his thinking in drafting that policy, so we can have some insight
[12].

The arguments for the application of a consistent and objective policy have
ranged from being a way to keep Mozilla accountable to its principles and
mission [13] to being a way to reduce the liability that might be
introduced by trusting a given CA [14]. In many ways, the aim of having an
objective policy was to provide transparent insight into how the decisions
are made - something notably absent from programs such as Microsoft's or
Apple's. For example, you will not find any public discussion as to why
Microsoft continues to add some CAs years ahead of other root programs,
even organizations whose operational failures have disqualified them from
recognition by Mozilla, yet took years to add Let's Encrypt, even after it
was trusted by other programs and had gone through public review. Mozilla's
policy provides that transparency and accountability, by providing a set of
principles against which the decisions made can be evaluated. In theory, by
providing this policy and transparency, Mozilla can serve as a model for
other root stores and organizations - where, if they share those
principles, then the application of the objective policy should lead to the
same result, thus leading to a more interoperable web and fulfilling some
of Mozilla's core principles.

In the discussions of CNNIC and TeliaSonera, there was often a rigid
application of policy. Unless demonstrable evidence of non-adherence to the
stated policies could be provided - for example, misissued certificates -
the presumption of both innocence and trustworthiness was afforded to these
organizations by Mozilla. Factors that were not covered by policy - for
example, participation in providing interception capabilities, the
distribution of malware, or support for censorship - were discarded from
the consideration of whether or not to trust these organizations and
include their CAs.

During those discussions, and in this one, there's a counterargument that
holds that these behaviours - or, more commonly, widely reported instances
of these behaviours (which lack the benefit of being cryptographically
verifiable in the way certificates are) - undermine the trustworthiness of
the organization. This argument goes that even if the audit is clean
(unqualified, no non-conformities), such an organization could still behave
inappropriately towards Mozilla users. Audits can be and are being gamed in
such a way as to disguise some known-improper behaviours. CAs themselves
can go from trustworthy to untrustworthy overnight, as we saw with
StartCom's acquisition by WoSign [15]. The principals involved may be
authorizing or engaging in the improper behaviour directly, as we saw with
StartCom/WoSign, or may themselves rely on ambiguous wording to provide an
escape should they be caught, as we saw with Symantec. Because of this,
the argument goes, it's necessary to consider the whole picture of the
organization, and for the Module Owner and Peers to make subjective
evaluations based on the information available, which may or may not be
reliable or accurate.

The reality is that the policy needs both. As Frank called out in [12], a
major rationale for the policy was not to provide an abstract checklist,
but to provide a transparent means of decision making, which can consider
and balance the views of the various stakeholders, with the goal of
fulfilling Mozilla's mission. Yet much of that objective policy inherently
rests on something frighteningly subjective - the audit regime that's come
to make up the bulk of the inclusion process. While the early drafts of the
policy considered schemes other than just WebTrust and ETSI, as it's
evolved, we've become largely dependent on those two. And both of those
rely on the subjective opinion of auditors - who may or may not meet the
Mozilla community's definition of skilled - to assess how well the CA meets
certain abstract audit criteria, for which there may or may not have been
technical guidance provided. As we've seen with the disqualification of
some auditors, those subjective judgements can cause real harm to Mozilla
users.

2) Government CAs and Jurisdictional Issues

As the later messages in this thread have shown, as did the early messages
in both TeliaSonera's and CNNIC's cases, there is a general unease about
the legal implications of CAs and where they operate. This generally
manifests as a concern that a CA will be compelled by force - the
instrument of governments - to do something that violates policy. Whether
it's through the rule of law or a "friendly" stop-in by a government
official, the risk is still there. At times, these discussions can be
informative, but not uncommonly, they devolve into "X country is better
than Y country" on some axis - whether it be rule of law, human rights,
encryption policies, or some other dimension. I don't say that to dismiss
the concerns, but to highlight that they come up every time, and often have
a certain element of subjectivity attached to them.

At times, there's a subtle component to these discussions, which may or
may not get called out: the motivations the organization may have for
violating the policy. Commercial CAs may be motivated to violate the
policy for financial gain. Government CAs may be motivated to do so to
support their government functions. Quasi-governmental CAs may do so to
ensure their government contracts are maintained. This is a more useful
question, as it tries to unpack what might make the particular risk - a
policy violation - manifest.

It's not at all unreasonable to consider these issues - but in doing so,
the discussion often masks something deeper, and thus it's necessary to
continue digging into it.

3) Detection vs Prevention

Another common theme, which ties in with the above two, is the debate over
whether detection is a suitable substitute for prevention. In this
discussion, we see DarkMatter commenting that they've committed to log
certificates via Certificate Transparency. There are fundamental technical
issues with that - most notably, that such a decision does not protect
Mozilla users unless and until Mozilla implements CT - but there's also a
thorny policy issue as well: whether or not detection should be a
substitute for prevention. The CNNIC and TeliaSonera discussions equally
wrestled with this, although they lacked the benefit of CT - and a number
of users argued that, in the absence of detection mechanisms, prevention by
not trusting was better than the risk of trusting and not being able to
detect. With DarkMatter's application, there's a theoretical ability to
detect - but a host of practical issues that prevent that, both in the
ecosystem and specifically with Mozilla Firefox.

The counterargument would highlight that this risk exists with literally
every CA trusted by Mozilla today - any one of them could go rogue, and
detection is not guaranteed. Both CT and the audit regimes serve as ways to
try to determine after the fact if that happened, but they're both far from
guaranteed. For example, even in a perfect CT ecosystem of client auditing
and checking and no misbehaving logs, the ability to detect MITM still
wholly relies on site operators scanning those logs for their domain. As
such, CT doesn't necessarily mitigate the risk of MITM - it redistributes
it (somewhat), while providing a more concrete signal.
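
To make concrete what "scanning those logs" entails: in practice, detection
means the site operator (or a monitor acting on their behalf) polling the
CT ecosystem for every certificate logged for their domains, and then
deciding whether each one was authorized. As a minimal sketch, in Python,
using crt.sh's public JSON interface (the endpoint, its field names, and
its rate limits are assumptions of this sketch, not guarantees, and
"example.com" is a placeholder; a real deployment would use a dedicated
monitor rather than ad hoc polling):

  import json
  import urllib.request

  def certs_logged_for(domain):
      """Fetch CT-logged certificates matching a domain from crt.sh."""
      # %25 is a URL-encoded '%', crt.sh's wildcard for subdomains.
      url = "https://crt.sh/?q=%25." + domain + "&output=json"
      with urllib.request.urlopen(url) as response:
          return json.load(response)

  # Only the operator can judge whether each entry was authorized -
  # CT provides the signal, not the judgement.
  for entry in certs_logged_for("example.com"):
      print(entry["issuer_name"], entry["not_before"], entry["name_value"])

Nothing in that loop prevents misissuance; it surfaces a certificate only
after it exists, and only for the operators who actually run such checks.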

4) Sub-CAs

Common to these discussions, and certainly shared between CNNIC and
DarkMatter, is the fact that these organizations are already in possession
of publicly trusted intermediate CAs and key material. In some ways,
whatever risk is present, the community has already been living with it,
due to the CAs that provided these cross-signs. Whatever may be felt about
the present organizations, it's not difficult to imagine a 'worse'
organization equally being granted such access and key material, and the
community might not discover that for 15 to 18 months after it occurred -
roughly the length of an annual audit period plus the time allowed to
deliver and review the resulting report. Whatever decisions are made here,
there's a further challenge in ensuring those decisions are consistently
applied. This isn't hypothetical - unfortunately, we saw Certinomis do
exactly this after StartCom was removed from Mozilla's program [16].

5) Investigative Reporting

Another common theme in these discussions is the role that investigative
reports should play. There are a number of concerning reports describing
conduct that, as highlighted by Alex Gaynor, is arguably inconsistent with
the principles and values of Mozilla. DarkMatter is not the first CA to
have such reports published - as the discussions of TeliaSonera and CNNIC
show, there were similar discussions and reports at the time. It's very
tempting to lend credence to these reports, but at the same time, we've
also seen reports, such as those by Bloomberg regarding SuperMicro
implants, which appear more likely to manipulate markets than to be
responsible journalism [17].

It's not clear what role such reports should play in decisions to trust or
distrust a CA, yet it would seem, based on past evidence, foolish to ignore
them in the name of policy absolutism. At the same time, as the world
collectively struggles with how to weigh media reporting, both in
perception and in execution, care should be taken as to how much weight to
give such reports.

6) Recommendations, Risks, and Suggestions

Mozilla could decide to rigidly apply the current policy, which it would
seem DarkMatter is nominally on track to meet, much as it did in the cases
of TeliaSonera and CNNIC. However, as those CAs showed, the application of
such policy ultimately resulted in significant risk being introduced to end
users. If we are to use past precedents as future predictors, then I would
suggest that we don't have a good predictive model to suggest it will go
well.

It should come as no surprise that I'm highly skeptical of the audits and
the audit regime, and thus do not find them to be objective in the least.
I'm hugely appreciative of the continued collaboration in this community on
how to improve them, but I think there's still significant work to do to
get there. The way we've been tackling this issue to date in this community
is centered around transparency - transparency in operations and
transparency through incident reports.

Regarding incident reports, I'm deeply concerned about DarkMatter's
engagement regarding non-compliance, which seems targeted to excuse more
than to explain. There's a risk that, in saying this, it will be suggested
that this is somehow a ploy to exclude new CAs - but as others have
highlighted, I think ALL CAs, both present and pending, need to operate
beyond reproach and with meaningful evaluation. Because of the risk to the
Internet that any and every CA poses, every one of them should be
aggressively looking for best practices. This means that they absolutely
should be evaluating the public discussions that happen in the CA/B Forum
to understand the intent of ballots, just as they should absolutely be
examining every CA incident - both closed and open - for similar things.
The BRs are not a guidebook for "how to be a CA" - they're the minimum
floor below which there should be no trust.

Regarding transparency of operations, I think DarkMatter has missed a
number of opportunities to meaningfully engage in a way that provides some
reassurance. Transparency is not "We promise we don't do [bad thing]", but
describing the system and the controls around how it is designed. This is
far more extensive than a CP/CPS. Auditable versions of this continue to be
discussed with WebTrust (borrowing concepts and ideas from reporting such
as SOC 2) and ETSI, but there's no reason not to be more transparent now.

Regarding sub-CAs, prior to this thread, I had already filed an issue for
future consideration as a means of ensuring an appropriate level of
transparency before new sub-CAs are introduced [18]. This would
significantly reduce the risk of, say, an organization issuing a sub-CA and
then exiting the CA business, as QuoVadis has done, which I think provides
the wrong incentives for both the CA and the community.

As Frank Hecker's CA metapolicy [12] - the document that ultimately
informed the policy that was developed - called out, we should treat audits
as simply one signal regarding the CA's operations and, potentially, not a
particularly strong one. Mozilla would be entirely justified under its
current policy to reject the inclusion request and remove the current (and
any future) sub-CAs. Mozilla could also, arguing that it would be unfair to
hold DarkMatter to a higher standard, accept them and bear any perceived
risks, only acting in the case of demonstrable or repeated bad action.
Neither decision would be unprecedented, as I've hopefully highlighted.

One area of consequence I've struggled with most is this: if DarkMatter is
rejected, will that encourage other root programs to apply subjective,
non-transparent criteria, and in doing so, undermine some of Mozilla's
manifesto? In theory, Mozilla is the shining light of transparency and
consistency here, and I'd love to believe that has influenced other root
programs, such as those of Apple and Microsoft. Yet I struggle to see
evidence that this is the case - certainly in trust decisions, and at times
in distrust decisions. More importantly, the more time I spend working with
audits - both with the frameworks that led to the audit regimes [19][20]
and with how the audits function - the clearer it is to me that these were
designed very much to allow subjectivity, business relationships, and risks
both technical and legal to be determinants in trust. I think the past
decision-making process has perhaps relied too dogmatically on policy and
the existence of audits, rather than on their content, level of detail, and
the qualifications of the auditors.

With respect to CT, I don't believe any weight should be given to
DarkMatter's commitment to log certificates to CT. While it's true that it
theoretically provides detection, that detection is not at all available to
Mozilla users, and even if it were, it's fundamentally after-the-fact and
works if and only if the site operator is monitoring. For the set of concerns
that have been raised here, regardless of whether they are seen as
legitimate, the proposal for CT does not address them in any meaningful
way, nor can it. Even if I'm overlooking something here, Frank's advice in
[12] regarding reliance on new technology not yet implemented should be a
clear warning about the risk such reliance can pose.

Regardless of the decision, this thread has elicited a number of
well-intentioned suggestions for how to improve the policy in a way that
clearly states various expectations that the community may have.
Documenting beneficial ownership is one that I think is critically
necessary, as there are a number of CAs already participating in Mozilla's
program that have arguably complicated relationships. Until recently, for
example, Sectigo's primary ownership was through Francisco Partners, who
also owned the extremely controversial and problematic NSO Group [21][22].
Similarly, when Symantec was a recognized CA in Mozilla's program, their
acquisition of Blue Coat (and the subsequent appointment of its executives
to a number of key leadership positions) was seen as concerning [23].

Similarly, I think that as long as we allow new organizations (not already
present in the hierarchy) to be introduced as sub-CAs, much of this
discussion is largely moot, because there will be an easy policy loophole
to exploit. I believe that it's in the best interests of users that,
regardless of the organization, new independently operated sub-CAs undergo
the same scrutiny and application process as roots. Once they're in, I
think there should be paths to trust - both for including them as roots and
for allowing them to switch to cross-signatures from other organizations -
but I am concerned about the ability to introduce new and arbitrary players
at will, especially with incentive structures that may shift all of the
burden and risk to Mozilla and its users.

[1] https://groups.google.com/d/msg/mozilla.dev.security.policy/mirZzYH5_pI/5LJ-X-XfIdwJ
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=542689
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=476766
[4] https://groups.google.com/d/msg/mozilla.dev.security.policy/QEwyx6TQ5TM/qzX_WsKwvIgJ
[5] https://groups.google.com/d/msg/mozilla.dev.security.policy/xx8iuyLPdQk/OvrUtbAkKRMJ
[6] https://groups.google.com/d/msg/mozilla.dev.security.policy/WveW8iWquxs/S2I8yC36y2EJ
[7] https://groups.google.com/d/msg/mozilla.dev.security.policy/kRRBCbE-t5o/8gYiB_B1D1AJ
[8] https://groups.google.com/d/msg/mozilla.dev.security.policy/OBrPLsoMAR8/J4B0CU3JGdoJ
[9] http://hecker.org/mozilla/cert-policy-submitted
[10] http://hecker.org/mozilla/cert-policy-approved
[11] http://hecker.org/mozilla/cert-policy-draft-12
[12] http://hecker.org/mozilla/ca-certificate-metapolicy
[13] https://www.mozilla.org/en-US/about/manifesto/
[14] https://bugzilla.mozilla.org/show_bug.cgi?id=233453
[15] https://wiki.mozilla.org/CA:WoSign_Issues
[16] https://groups.google.com/d/msg/mozilla.dev.security.policy/RJHPWUd93xE/6yhrL4nXAAAJ
[17] https://www.businessinsider.com/security-community-voicing-increasing-doubts-about-bombshell-bloomberg-chinese-chip-hacking-2018-10
[18] https://github.com/mozilla/pkipolicy/issues/169
[19] https://www.americanbar.org/content/dam/aba/events/science_technology/2013/pki_guidelines.pdf
[20] http://www.oasis-pki.org/
[21] https://groups.google.com/d/msg/mozilla.dev.security.policy/AvGlsb4BAZo/OicDsN2rBAAJ
[22] https://www.franciscopartners.com/news/nso-group-acquired-by-its-management
[23] https://groups.google.com/d/msg/mozilla.dev.security.policy/akOzSAMLf_k/Y1D4-RXoAwAJ