On Tue, May 23, 2017 at 9:45 AM, Jakob Bohm via dev-security-policy <
[email protected]> wrote:

> * TCSCs can, by their existing definition, be programmatically
>  recognized by certificate validation code e.g. in browsers and other
>  clients.
>

In theory, true.
In practice, not even close.


> * If TCSCs are limited, by requirements on BR-compliant unconstrained
>  SubCAs, to lifetimes that are the BR maximum of N years + a few months
>  (e.g. 2 years + a few months for the latest CAB/F requirements), then
>  any new CAB/F requirements on the algorithms etc. in SubCAs will be
>  phased in as quickly as for EE certs.
>

I'm not sure what you're trying to say here, but the lifetime limits for
EE certs are substantially different from those for unconstrained subCAs.


> * If TCSCs cannot be renewed with the same public key, then TCSC issued
>  EEs are also subject to the same phase in deadlines as regular EEs.
>

Renewing with the same public key is a problematic practice that should be
stopped.


> * When issuing new/replacement TCSCs, CA operators should (by policy) be
>  required to inform the prospective TCSC holders which options in EE
>  certs (such as key strengths) will not be accepted by relying parties
>  after certain phase-out dates during the TCSC lifetime.  It would then
>  be foolish (and of little consequence to the WebPKI as a whole) if any
>  TCSC holders ignore those restrictions.
>

This seems to be operating on an ideal world theory, not a real world
incentives theory.

First, there's the obvious problem that "required to inform" is
fundamentally problematic, as Gerv has pointed out to you in the past. CAs
have been required to inform for a variety of things - but that doesn't
change market incentives. For that matter, "required to inform" can be met
by white text on a white background, a click-through box, or a
default-checked "opt-in to future communications" checkbox. The history of
human-computer interaction (and the gamification of regulatory compliance)
shows this is a toothless, not meaningful, measure.

I understand your intent is for this to work like the Surgeon General's
Warning on cigarettes (in the US), or the more substantive warnings in
other countries. While that is well-intentioned as a deterrent - and works
in some cases - it otherwise ignores the public health risk, or tries to
sweep it under the rug under the auspices of "doing something".

Similarly, the market incentives are such that the warning will ultimately
be ineffective for some segment of the population. Chrome's own warnings
with SHA-1 - warnings that CAs felt were unquestionably 'too much' - still
showed how many sites were ill-prepared for the SHA-1 breakage (read: many).

Warnings feel good, but they don't do (enough) good. So the calculus comes
down to those making the decision - Gerv and Kathleen on behalf of Mozilla,
or folks like Andrew and me on behalf of Google - of whether or not to
'break' sites that worked yesterday and won't work tomorrow. When that
breakage is low, it can fit within acceptable tolerances -
https://www.chromium.org/blink/removing-features and
https://www.chromium.org/blink try to spell out how we do this in the Web
Platform - but when it is too large, it becomes a game of chicken.

So even though you say "it would be foolish," every bit of history suggests
it will be done anyway. And since we know this, we also have to consider
what the impact will be afterwards. No browser manufacturer - or its
employees, more specifically - wakes up each morning and says "Gee, I
wonder what I can break today!", so we shouldn't trivialize the significant
risk that breaking a ton of sites would impose.


> * With respect to initiatives such as CT-logging, properly written
>  certificate validation code should simply not impose this below TCSCs.
>

"properly written"? What makes it properly written? It just means what you
want as the new policy.


> With the above and similar measures (mostly) already in place, I see no
> good reason to subject TCSCs to any of the administrative burdens
> imposed on public SubCAs.


While I hope I've laid out the issues for you in a way that can convince
you, I also suspect the substance will be disregarded because of the
source. That said, the decision to risk breaking something is not taken
lightly, and while you may feel it's the site operator's fault - perhaps
even rightfully so - the cost is not borne by the site operator (even when
users can't get to their site!) or by the CA (which didn't warn "hard
enough"), but by the user. And systems that externalize cost onto the end
user are not good systems.