On Sat, May 30, 2015 2:47 pm, Brian Smith wrote:
>  It seems reasonable to assume that governments that have publicly-trusted
>  roots will provide essential government services from websites secured
>  using certificates that depend on those roots staying publicly-trusted.
>  Further, it is likely that, especially in the long run, they will do
>  things, including pass legislation, that would make it difficult for them
>  to offer these services using certificates issued by CAs other than
>  themselves, as being in full control will be seen as being a national
>  security issue. Further, governments may even pass laws that make it
>  illegal for browser makers to take any enforcement action that would
>  reduce or eliminate access to these government services. In fact, it
>  might already be illegal to do so in some circumstances.

While your first two claims are backed by fact and precedent ("Government
CAs sometimes exist to provide government services" and "Governments
sometimes pass laws to require certain CAs"), I think you're wildly off
the mark on others.

It bears repeating that governments can pass laws of any sort. There
are already plenty of laws that require the use of certain TSPs, which are
entrusted to commercial entities rather than government CAs. Does becoming
a TSP make them a government CA? Of course not (unless you would like to
argue that, say, Symantec is a government CA by virtue of participating as
a TSP in the US FPKI).

However, your later suggestion that governments may pass laws making it
illegal for browsers to take enforcement action is, arguably, without
merit or evidence. If we accept that "governments may pass laws to do
X", then we can just as logically assume two related statements:

1) Governments may pass laws to compel a CA to issue a certificate to a
party other than a domain applicant.
2) Governments may pass laws to compel browser makers to include
particular roots as trusted.

The added tinge of uncertainty - "In fact, it might already be illegal
to do so in some circumstances" - adds to the fear and doubt already
sown here.

I appreciate the argument you're making, but I don't think it stands up
beyond a normal and healthy degree of paranoia. With respect to
distinguishing a "government CA" from a "commercial CA", however, it
doesn't actually advance your point, because the same arguments could be
made to sow the same fear, uncertainty, and doubt about any commercial
CA.

>  The main sticks that browsers have in enforcing their CA policies is the
>  threat of removal. However, such a threat seem completely empty when
>  removal means that essential government services become inaccessible and
>  when the removal would likely lead to, at best, a protracted legal battle
>  with the government--perhaps in a secret court.

Ah, but if we're worried about protracted legal battles in secret courts,
why aren't we worried about protracted legal battles in secret courts for
inclusion requests? After all, if we were to deny any applicant, who
knows before what secret courts the trust stores might be summoned!

I hope you can see that this argument is without merit, even if
well-intentioned, in that you can argue any position you want by reducing
it to these terms. As such, I don't think it bears consideration - it's a
null argument.

The other part - that removal disrupts government services - relies on an
artificial dichotomy between government and commercial lives. How often do
you pay your taxes each year? Now how often do you Tweet or email or
purchase something online? The risk of disruption to everyday activities
is unquestionably higher for commercial CAs, rather than lower, so doesn't
that mean we can't remove commercial CAs?

>  Instead, it is likely that browser makers would find that they cannot
>  enforce their CA policies in any meaningful way against government
>  CAs.

This is a hypothesis that cannot be established, and the arguments used to
establish it in this email can apply to any position one wants to take. As
a result, I find it hard to take it on its face.

After all, why wouldn't we argue that the risk of being sued for tortious
interference exists if a browser removes trust, ergo they can't enforce
their CA policies in any meaningful way?

>  Thus, government CAs' ability to create and enforce real-world laws
>  likely will make them "above the law" as far as browsers' CA policies
>  are concerned.
>
>  Accordingly, when a browser maker adds a government CA to their default
>  trust store, and especially when that government CA has jurisdiction over
>  them, the browser maker should assume that they will never be able to
>  enforce any aspect of their policy for that CA in a way that would affect
>  existing websites that use that CA. And, they will probably never be able
>  to remove that CA, even if that CA were to be found to mis-issue
>  certificates or even if that CA established a policy of openly
>  man-in-the-middling websites.

I find these arguments without supporting merit.

>  IIRC, in the past, we've seen CAs that lapse in compliance with
>  Mozilla's CA policies and that have claimed they cannot do the work
>  to become compliant again until new legislation has passed to
>  authorize their budget. These episodes are mild examples that show
>  that government legislative processes already have a negative impact
>  on government CAs' compliance with browsers' CA policies.

I agree, this is the strongest argument against government CAs presented
in this thread, and I wish this, rather than the musings of secret courts
and "maybe impossibles", was the core of your argument.

These arguments apply not just to government CAs (that may rely on
external controls for financing, such as budgets, as you mention) but also
to small commercial CAs (whose profit margins may be too thin to implement
controls).

The response to both should be the same - removal.

>  More generally, browsers should encourage CAs to agree to name
>  constraints, regardless of the "government" status of the CA.

Of this, I absolutely agree. But I think there's a fine line between
"encouraging" and "requiring", and how it's presented is key.

Most importantly, I don't believe for a second that constraints justify a
relaxation of security policy - they're an optional control for the CA to
opt-in to, as a means of reducing their exposure.

Name constraints can't replace compliance with the Mozilla Security
Policy, nor should they, in part or in whole.
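For readers less familiar with the mechanism: a dNSName entry in the
nameConstraints extension (RFC 5280, section 4.2.1.10) limits which
hostnames a CA's certificates may cover. A minimal, stdlib-only sketch
of the DNS matching rule, with a hypothetical "gov.example" constraint
(names are illustrative, not from this thread):

```python
# Illustrative sketch of RFC 5280 dNSName constraint matching: a host
# name satisfies a constraint if it equals the constraint or is a
# subdomain of it. Not a complete X.509 implementation.

def dns_name_matches(hostname: str, constraint: str) -> bool:
    """Return True if `hostname` falls within the dNSName `constraint`."""
    hostname = hostname.lower().rstrip(".")
    constraint = constraint.lower().rstrip(".")
    return hostname == constraint or hostname.endswith("." + constraint)

def permitted(hostname: str, permitted_subtrees: list[str]) -> bool:
    """A name is allowed only if it matches some permitted subtree."""
    return any(dns_name_matches(hostname, c) for c in permitted_subtrees)

# A hypothetical government CA constrained to gov.example:
subtrees = ["gov.example"]
print(permitted("tax.gov.example", subtrees))   # True
print(permitted("mail.example.com", subtrees))  # False
```

The point of the constraint is exactly that last line: a conforming
client rejects any certificate the constrained CA issues outside its
permitted subtree, regardless of what the CA signs.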

>  In general, it seems like CT or similar
>  technology is needed to deal with the fact that browsers have (probably)
>  admitted, and will admit, untrustworthy CAs into their programs.

Here again, we find ourselves in agreement in the midst of remarkable
disagreement.

To be clear, my words are strong here because your argument is so
appealing and so enchanting in a post-Snowden world, in which "trust no
one" is the phrase du jour, in which the NSA is the secret puppet master
of all of NIST's activities, and in which vanity crypto sees a great
resurgence. Your distinctions of government CAs, and their supposed
abilities, while well intentioned, rest on arguments that are logically
unsound and suspect, though highly appetizing to those who aren't
following the matter closely.

As such, I want to concretely and directly refute them, at the risk of
sounding rude or dismissive, rather than couch the disagreement in softer
language that may suggest some part of me agrees with your position.
Unquestionably, I appreciate you making these arguments, and I hope you'll
continue to engage in the discussion with the same depth of knowledge and
expertise, and I hope you find this email "spirited" rather than
"dismissive".

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
