On Sun, March 8, 2015 11:53 am, Eric Mill wrote:
>  That comes down to how this program is implemented. The intent seems
>  pretty clearly to identify the space CAs are already issuing in.
>  Perhaps newer gTLDs merit some unrestrained time in the wild before
>  they're constrained in this way -- or perhaps it's simpler to make the
>  gradations more black and white (e.g. "unrestricted" vs "niche" CAs,
>  and avoiding "somewhat unrestricted" or "nearly unrestricted").
>
>  For CAs whose business model is designed for a specific subset of the
>  web, a name constraint program could clear a path to entry without
>  endangering domains that are not designed to be served by that CA.

This is a dangerous line of reasoning, I think.

One reason is that it encourages calcifying the trust space ("If you
were there already, you can stay there; if you weren't, you're now kept
out").

Another reason is that it encourages the trust store to be used for
regional or otherwise arbitrary distinctions ("not designed to be served
by that CA"). This amounts to nothing more than recognizing borders on
the Internet - a problematic practice, to be sure. That is, for every
"constrained" CA you can imagine that ONLY wants to issue for a .ccTLD,
you can also imagine the inverse, where ONLY a given CA is allowed to
issue for that .ccTLD. The reasoning behind the two is identical, but
the implications of the latter - for online trust - are far more
devastating.
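
For concreteness, here is roughly what such a constraint looks like in a
certificate - a minimal sketch using the pyca/cryptography library,
where the CA name and the "example" TLD are entirely hypothetical:

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         "Example ccTLD CA")])

    builder = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed purely for illustration
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        # The constraint itself: a dNSName constraint of "example"
        # permits "example" and any name formed by adding labels to its
        # left (RFC 5280, section 4.2.1.10), and nothing else.
        .add_extension(x509.NameConstraints(
                           permitted_subtrees=[x509.DNSName("example")],
                           excluded_subtrees=None),
                       critical=True)
    )
    cert = builder.sign(key, hashes.SHA256())

Note the symmetry: nothing in the extension's syntax prevents the
inverse arrangement described above; which of the two gets deployed is
purely a policy decision.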

>  This is a great point, and suggests that name constraint updates
>  should either a) have a clear and defined update path, or b) only be
>  implemented when the chances of updates are low.

It's nigh impossible to quantify (b), given the rate of gTLD adoption.
It also favours incumbents who already have the ability to issue for
those domains, since any upstart needs to demonstrate a need or desire
to issue for the new gTLDs.

As for (a), the lack of a clear and defined update path is already an
issue today, so why would or should we believe it to be solved now?

>  * Add friction to applicants that claim in their initial application
>  to serve a specific subset of the web, and then wish to expand their
>  issuance surface area after their inclusion.

Why is this a good thing, and why should it be seen as such?

>  * Reduce the friction for niche CAs to be included in the first place.
>  For tightly constrained CAs, it's plausible to imagine that the
>  operational complexity they need to demonstrate can be reduced.

I strongly disagree with this sentiment. The holders of a .ccTLD domain
have just as much desire for, and reason to expect, strong security as
the holders of a .com domain. The idea that we can somehow be less
stringent with a .ccTLD-constrained CA is downright dangerous, because
it presents the balkanization of web security as a desirable outcome.

>  By contrast, name constraints protect *everyone*, even if the domain
>  owner has never heard of them, or heard of CT, CAA, or PKP.

I'm well aware of the distinctions between CT/CAA/PKP. The issue here is
simply that the existing measures exist for site operators to deploy
themselves. The argument for why we need "yet another" measure - one
that is centrally managed, slow to update, inherently political, and
lacking firm criteria - is somewhat problematic.
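
To make that distinction concrete: CAA, for instance, is a control the
site operator deploys themselves, in their own DNS, on their own
schedule. A minimal sketch of consulting it with the dnspython library
(2.x) - the domain and CA identifier below are placeholders, and a
complete check must also walk up the DNS tree per RFC 6844:

    import dns.resolver

    def caa_permits(domain: str, ca_identifier: str) -> bool:
        """True if `domain`'s CAA "issue" records authorize the CA."""
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            # No CAA records at this label: RFC 6844 treats that as
            # "any CA may issue" (a full check climbs to the parent).
            return True
        issuers = [r.value.decode() for r in answers
                   if r.tag.decode() == "issue"]
        return ca_identifier in issuers

    print(caa_permits("example.com", "ca.example.net"))

The point being: that record is published, scoped, and updated by the
domain owner, not centrally managed by a root program.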

>  While this is not finalized, and the specific constrained domains in
>  the application are not accurate (.gov.us is not a public suffix, or
>  in use at all), name constraints seem to be a highly practical way of
>  bringing government CAs into the trusted root program.

Stop right here.

Why is this a good thing?
Why is it in the interest of Mozilla's users?
Why is it in the interest of the Web at large?

There's a fundamental mistake in assuming that bringing the government
CAs in (of which the US FPKI is but one example) is somehow a good
thing. The closest it comes is "Well, the government said users must use
the Federal PKI, ergo it's nice that they CAN use the Federal PKI", but
that's simply an argument that private industry should exist to enable
governments' legislative whims on technology.

We've already seen the impact the past decades' legislative whims have
had on security. While FREAK is perhaps a modern example, the complexity
and security implications of FIPS 140-(1/2/3) remain a matter of active
discussion. That's to say nothing of the complexity involved with, say,
Fortezza, which has been exploitable in NSS in the past.

As it relates to the online trust ecosystem, we can see these government
CAs have either botched things quite spectacularly (India CCA) or been
highly controversial (CNNIC). The argument around CNNIC isn't "Well, if
they're only MITMing .cn users, that's OK"; it's "Well, they could MITM
anyone".

That's why name constraints are misleading. They exist because we lack
confidence that these government CAs (often audited under ETSI) are
competent to operate the technology necessary to be stewards of Internet
trust. The solution shouldn't be to find ways to limit the damage they
can do; the solution should be to make it impossible for them to do
damage in the first place.

Name constraints, as presented, give tacit approval for the constrained
CAs to botch things, as long as they do so only in their little
fiefdoms. But when those fiefdoms easily represent millions to billions
of Internet users, especially in emerging markets, do we really believe
that their needs are being served?

That is, in essence, why I think a change like this is so dangerous. It
strives to draw borders around the (secure) Internet, and to acknowledge
that what you do within your own borders, to your own users, is an issue
between you and them. I don't think that's a good state for anyone to be
in.

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
