On Fri, Aug 2, 2019 at 9:59 AM Doug Beattie <doug.beat...@globalsign.com>
wrote:

> Ryan,
>
> GlobalSign has been thinking along these lines, but it's not clear how
> browsers build their path when a cross certificate is presented to them in
> the TLS handshake.
>

Excellent! Happy to help in any way to make that possible and easier :)


> Can you explain how chrome (windows and Android)  builds a path when a
> cross
> certificate is delivered?  What about the case when the OS (Microsoft
> specifically) has cached the cross certificate, is it different?
>

I'm not sure I follow the objective of the question. That is, are you trying
to figure out what happens when both paths are valid, how edge cases are
handled, or something else?

At present (and this is changing), Chrome uses the CryptoAPI
implementation, which is the same as IE, Edge, and other Windows
applications.

You can read a little bit about Microsoft's logic here:
-
https://blogs.technet.microsoft.com/pki/2010/05/12/certificate-path-validation-in-bridge-ca-and-cross-certification-environments/


And a little about how the IIS server selects which intermediates to
include in the TLS handshake here:
-
https://support.microsoft.com/en-us/help/2831004/certificate-validation-fails-when-a-certificate-has-multiple-trusted-c

The "short answer" is that, assuming both are trusted, either path is
valid, and the preference for which path is going to be dictated by the
path score, how you can influence that path score, and how ties are broken
between similarly-scoring certificates.

Android's selection logic is somewhat simpler, but it supports trying
multiple variations of an intermediate in its attempt to find a valid path.
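To make the "more than one valid path" situation concrete, here's a purely
illustrative sketch (not CryptoAPI's or Android's actual algorithm) of a
verifier preferring the shorter chain to a new root when both candidate
chains end in a trusted anchor:

    # Toy illustration only -- not CryptoAPI's or Android's real algorithm.
    # When more than one valid chain exists for the same leaf (e.g.
    # leaf -> issuing CA -> new root, and leaf -> issuing CA -> cross
    # certificate -> legacy root), the verifier has to pick one; a common
    # heuristic is to prefer a chain that ends in a local trust anchor and
    # is shortest.

    from dataclasses import dataclass

    @dataclass
    class CandidateChain:
        names: list                 # subjects, leaf first
        ends_in_trust_anchor: bool  # does the last cert match a local root?

    def score(chain):
        # Higher is better: trusted anchor first, then fewer certificates.
        return (1 if chain.ends_in_trust_anchor else 0, -len(chain.names))

    candidates = [
        CandidateChain(["leaf", "Issuing CA", "Cross cert", "Legacy Root"], True),
        CandidateChain(["leaf", "Issuing CA", "New Root"], True),
    ]

    best = max(candidates, key=score)
    print(" -> ".join(best.names))  # picks the shorter path via the new root

The real implementations weigh many more signals (validity, key usage,
revocation status, etc.), but the "pick one of several valid chains"
shape is the same.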

> With this approach, we'd require our customers to configure their web
> servers to always send down the extra certificate which:
>   * complicates web server administration,
>

I'm not sure I understand this; that is, what's different from the existing
need to configure the issuing intermediate? I can understand the challenges
faced with, say, IIS (which attempts to automatically send the chain), but
that's only an issue because of how the CA's certificates influence the
scoring, and even that can be overridden.
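For what it's worth, for servers that take a concatenated PEM chain file
(nginx's ssl_certificate, Apache 2.4.8+'s SSLCertificateFile, HAProxy),
"sending the extra certificate" is just one more block appended to that
file. A rough sketch, with hypothetical file names:

    # Sketch only: building the concatenated PEM "chain file" that servers
    # such as nginx (ssl_certificate) or Apache 2.4.8+ (SSLCertificateFile)
    # send in the handshake. File names here are hypothetical placeholders.

    from pathlib import Path

    parts = [
        "leaf.pem",               # end-entity certificate
        "issuing-ca.pem",         # the normal issuing intermediate
        "cross-signed-root.pem",  # the extra cross-certificate discussed above
    ]

    Path("fullchain.pem").write_text(
        "".join(Path(p).read_text() for p in parts))

    # Dropping the last entry later (once the new root is widely distributed)
    # is the "omit the cross-certificate and rely on AIA / preloading"
    # option described below.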


>   * increases TLS handshake packet sizes (or extra packet?), and
>   * increases the certificate path from 3 to 4 certificates (SSL, issuing
> CA, Cross certificate, Root), which increases the path validation time and
> is typically seen as a competitive disadvantage
>

I'm surprised and encouraged to hear CAs think about client performance.
That certainly doesn't align with how their customers are actually
deploying things, based on what I've seen from the httparchive.org data
(suboptimal chains requiring AIA, junk stapled OCSP responses, CAs putting
entire chains in OCSP responses).

As a practical matter, there are understandably tradeoffs. Yet you can
give your customers the control to optimize for their use case and make
the decision that's best for them, which helps localize some of those
tradeoffs. For example, when you (the CA) are first rolling out such a new
root, you're right that your customers will likely want to include the
cross-signed version chaining back to the root already present in root
stores. Yet as root stores update (which, in the case of browsers, can be
quite fast), your customer could choose to begin omitting that
intermediate, and rely on intermediate
preloading (Firefox) or AIA (everyone else). In this model, the AIA for
your 'issuing intermediate' would point to a URL that contained your
cross-signed intermediate, which would then allow them to build the path to
the legacy root. Clients with your new root would build and prefer the
shorter path, because they'd have a trust anchor matching that (root, key)
combination, while legacy clients could still build the legacy path.


> Do you view these as meaningful issues?  Do you know of any CAs that have
> taken this approach?


Definitely! I don't want to sound dismissive of these issues, but I do want
to suggest it's good if we as an industry start tackling these a bit
head-on. I'm particularly keen to understand more about how and when we can
'sunset' roots. For example, if the desire is to introduce a new root in
order to transition to stronger cryptography, I'd like to understand more
about how and when clients get the 'strong' chain or the 'weaker' chain and
how that selection may change over time. I'm sympathetic to 4K roots -
while I'd rather we were in a world where 2K roots were viable because we
were rotating roots more frequently (hence the above), 4K roots may make
sense given the pragmatic realities that these end up being used much
longer than anticipated. If that's the case, though, it's reasonable to
think we'd retire roots <4K, and it's reasonable to think we don't need
multiple 4K roots. That's why I wanted to flesh out these considerations
and have that discussion, because I'm not sure that just allowing folks to
select '2K vs 4K' for a particular CA really helps move the needle far
enough in user benefit (it does, somewhat, but not as much as 'all 4K', for
example).
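(If it's useful for that discussion: one quick way to see which chain, and
which key strengths, a server is actually handing out is to save the served
chain, e.g. via "openssl s_client -showcerts", and print each subject with
its key size. A rough sketch, with a hypothetical input file name:)

    # Dump subject and key strength for each certificate in a saved PEM
    # chain, to see whether clients are getting the "strong" or "weaker"
    # path. "served-chain.pem" is a placeholder for the captured chain.

    import re
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa, ec

    pem = open("served-chain.pem").read()
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", pem, re.S)

    for b in blocks:
        cert = x509.load_pem_x509_certificate(b.encode())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            strength = f"RSA-{key.key_size}"
        elif isinstance(key, ec.EllipticCurvePublicKey):
            strength = f"EC-{key.curve.name}"
        else:
            strength = type(key).__name__
        print(cert.subject.rfc4514_string(), strength)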

My understanding is that both Symantec / DigiCert and Sectigo have pursued
paths like this, and can speak to it in more detail. ISRG / Let's Encrypt
pursued something similar-but-different, with the functional goal of
reducing their dependency on the IdenTrust root in favor of the ISRG root.
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
