On 17/01/2018 23:03, Jonathan Rudenberg wrote:

On Jan 17, 2018, at 16:24, Jakob Bohm via dev-security-policy 
<[email protected]> wrote:

On 17/01/2018 21:14, Jonathan Rudenberg wrote:
On Jan 17, 2018, at 14:27, Jakob Bohm via dev-security-policy 
<[email protected]> wrote:

On 17/01/2018 16:13, Jonathan Rudenberg wrote:
On Jan 17, 2018, at 09:54, Alex Gaynor via dev-security-policy 
<[email protected]> wrote:

Hi Wayne,

After some time thinking about it, I struggled to articulate what the right
rules for inclusion were.

So I decided to approach this from a different perspective: which is that I
think we should design our other policies and requirements for CAs around
what we'd expect for organizations operating towards a goal of securing the
Internet as a global public resource.

Towards that goal we should continue to focus on things like transparency
(how this list is run, visibility of audit statements, certificate
transparency) and driving technical improvements to the WebPKI (shorter
certificate lifespans, fewer allowances for non-compliant certificates or
use of deprecated formats and cryptography). If organizations wish to hold
themselves to these (presumably higher) standards for what could equally
well be a private PKI, I don't see that as a problem. On the flip side, we
should not delay improvements because CAs with limited impact on the public
internet struggle with compliance.

I would say that, to limit the danger of such an essentially unused CA 
operator turning rogue, only CAs that provide certificates for public trust 
should be admitted in the future; more on that in another post.

I like this concept a lot. Some concrete ideas in this space:
- Limit the validity period of root certificates to a few years, so that the 
criteria can be re-evaluated, updated, and re-applied on a rolling basis.

This may be fine for TLS root CAs that are distributed in frequently
updated browsers (such as Firefox and Chrome).

It is absolutely fatal for roots that are also used for any of the
following:

- Distributed in browsers that don't get frequent updates (due to
  problems in that distribution channel), such as many browsers
  distributed in the firmware of mobile devices, TVs etc.
Distributing WebPKI roots in infrequently updated software is a bad idea and 
leads to disasters like the issues around the SHA-1 deprecation.

But what should then be done when that infrequently updated software is
in fact a general end user web browser (as opposed to the previously
discussed special cases of certain payment terminals)?  Remove TLS
support?  Trust all certificates without meaningful checks?  Pop up
certificate warnings for every valid certificate?

Don’t ship browsers that don’t get updates. The surface area is huge and there 
are many security bugs that are fixed regularly. Shipping browsers in a way 
that prevents them from being updated frequently is deeply irresponsible.


Unfortunately, people do that, in the billions.  There is nothing we can
do to stop them.

The way the SHA-1 deprecation was done, with no widely implemented way
for TLS clients to signal their ability to support stronger algorithms,
has in fact created a situation where unreliable hacks are needed to
support older mobile browsers, including feeding unencrypted pages to
some of them.  The public stigma attached to this means it is rarely
discussed openly, but it is quietly done by webmasters who need to
communicate with those systems.

This is false. TLS clients absolutely signal their ability to support specific 
algorithms, and there are several implementations of serving SHA-1 certificates 
to insecure clients.
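For TLS 1.2 clients that signaling is the signature_algorithms extension
(RFC 5246, section 7.4.1.4.1).  The dispatch logic for serving a SHA-1
fallback chain to legacy clients can be sketched as follows (a minimal
illustration; the function name and chain file names are invented):

```python
# Sketch of SHA-1 fallback chain selection based on the hash algorithms
# a TLS 1.2 client advertises.  Hash ids per RFC 5246: 4 = SHA-256, 2 = SHA-1.
SHA256 = 4
SHA1 = 2

def select_chain(offered_hashes):
    """Pick a certificate chain file based on the hashes the client offered.

    A client that sends no signature_algorithms extension (i.e. a
    pre-TLS-1.2 client) is assumed to support only SHA-1.
    """
    if offered_hashes and SHA256 in offered_hashes:
        return "chain-sha256.pem"
    return "chain-sha1.pem"
```

Real deployments (e.g. the large CDN implementations of this technique)
hook equivalent logic into the ClientHello callback of the TLS stack.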


Interesting that there are no published instructions in the usual guides
on TLS configuration best practices, nor much accommodation by this root
program for SHA-1-forever CAs.


- Used to (indirectly) sign items of long validity, such as e-mails
  (Thunderbird!), timestamps and software.
I don’t know much about S/MIME, but this doesn’t sound right. Of course 
certificates used to sign emails expire! That’s obviously expected, and I’d 
hope that the validation takes that into account.

The mechanisms vary by recipient software.  But a typical technique uses
a known-unmodified-since date (such as the date of reception or a date
certified by a cryptographic timestamp) against which to compare the
relevant validity dates in the certificates.

This obviously requires continued trust in the root certificates
that were relevant at that earlier time, including the ability
of the corresponding CAs to publish and update revocation
information after the end certificate's expiry date.  (Consider the
case where an e-mail sender's personal certificate was
compromised one day before expiry, but that fact was not reported
to the CA until later, thus requiring the CA to publish changed
revocation information for an already expired certificate in
order to protect relying parties (recipients) from trusting
fraudulent signatures made with the compromised key.)
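The validation rule described above can be sketched as follows (a minimal
illustration; the function name and parameters are invented, and real
clients also verify the chain, CRL/OCSP reason codes, etc.):

```python
from datetime import datetime

def signature_trustworthy(not_before, not_after, signing_time, revoked_at=None):
    """Decide whether a signature made at signing_time should be trusted.

    signing_time is a known-unmodified-since date (reception time or a
    cryptographic timestamp).  revoked_at models revocation information
    that the CA must keep publishing even after certificate expiry: a
    key compromised shortly before expiry, but reported only afterwards,
    must still invalidate signatures made after the compromise.
    """
    # The signature must fall inside the certificate's validity window.
    if not (not_before <= signing_time <= not_after):
        return False
    # A signature made at or after the revocation date is untrusted.
    if revoked_at is not None and signing_time >= revoked_at:
        return False
    return True
```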

If this is correct, I don’t see anything that precludes the regular replacement 
of root certificates.

Because any e-mail client (or equivalent for other long-validity
certificate systems) would then need to ship with an ever-growing list
of historic trust anchors.

Time-stamping servers also need to use certificates with near-infinite
lifespans (for as long as the statement about the existence of data at
specific times needs to remain trusted).  So either the timestamping EKU
would need its own roots that are not in the Mozilla root program, or
Mozilla needs to accept that actual root CA certificates normally have
long validity periods (unlike most of the issuing intermediate certificates).


- Apply for inclusion in other root programs with slow/costly/
  inefficient distribution of new root certificates (At least one
  major software vendor has these problems in their root program).
This isn’t Mozilla’s problem, and one can come up with a variety of 
straightforward workarounds.

The big problem is that the formats for communicating the certificate
chain from the certificate holder to the relying parties are quite
limited in how they can accommodate different relying parties trusting
different roots from each CA.  Requiring CAs to set up extra workarounds
just to satisfy an arbitrary policy like yours is an unneeded
complication for everybody but Mozilla and Chrome.

The workarounds are for the root programs that move slowly, not the other way 
around.


But the root programs that move slowly are, by definition, too slow to
incorporate such workarounds quickly.


- Make all certificates issued by CAs under a root that is trusted for TLS in 
scope for the Baseline Requirements, and don’t allow issuing things like client 
certificates that have much more relaxed requirements (if any). This helps 
avoid ambiguity around scope.

Again this would be a major problem for roots that are used outside web
browsers, including roots used for e-mail certificates (Thunderbird).

The ecosystem already suffers from the need to keep multiple trusted
root certificates per CA organization due to artifacts of existing
rules; no need to make this worse by multiplying that count by the
number of certificate end uses.
I’m having trouble seeing how sharding roots based on compliance type is a 
problem. Not doing so complicates reasoning about compliance unnecessarily.

Open the root certificate management interface in most non-Mozilla
software.  It's typically a flat list which is already too long for most
people to work with.  Recent Mozilla products put the certificates into
a hierarchy by organization, but with some mistakes (for example, Google
is not part of GeoTrust; GeoTrust is part of the Symantec/DigiCert
portfolio).

Most people never see that list, and it’s not designed to be useful for people 
that don’t understand what’s going on, so this point is irrelevant.


Understanding what's going on doesn't help with the manual work of
having to scroll past masses of redundant roots when trying to do
specific adjustments.


- Limit the maximum validity period of leaf certificates issued to a sane upper 
bound like 90 days. This will help ensure that we don’t rust old crypto and 
standards in place and makes it less likely that a CA is “too big to fail” due 
to a set of customers that are not expecting to replace their certificates 
regularly.

This would be a *major* problem for any end users not using Let's
Encrypt, and would seemingly seek to destroy a major advantage of using
a real CA over Let's Encrypt.
Obviously this is completely false. Ridiculous diversions about “real” CAs 
aside, many other CAs issue certificates to automated management systems and 
this is obviously the way forward. Humans should not be managing certificate 
lifecycles.

The fact is that human site operators DO manage certificates manually.
Outside Let's Encrypt and certain other large automated environments
this is the normal situation.  If automation had already been the norm,
there would have been no need for Let's Encrypt and its sponsors to
develop the ACME protocol and tools, as they could simply have reused
the existing tools you seem to think everybody is using.

This is faulty reasoning. There are many non-standard tools and APIs in use, 
ACME exists to allow consolidation and optimization around a single protocol 
instead of the proliferation of obscure bespoke APIs and tools with a lot of 
potential for security flaws.
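The core of the automated lifecycle ACME clients consolidate around can
be sketched as a simple renewal-threshold check (a minimal sketch; the
30-day window is an illustrative choice, comparable to certbot's default
of renewing 90-day certificates about a third of the way before expiry):

```python
from datetime import datetime, timedelta

# Renew well before expiry so transient failures leave time for retries.
# 30 days on a 90-day certificate is an illustrative, common choice.
RENEWAL_WINDOW = timedelta(days=30)

def needs_renewal(not_after, now=None):
    """Return True when the certificate is within the renewal window."""
    now = now or datetime.utcnow()
    return not_after - now <= RENEWAL_WINDOW
```

An ACME client runs this check from a daily timer and, when it returns
True, performs the order/challenge/finalize protocol steps unattended.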


However, those non-standard tools were obviously not widely enough
deployed to be included with typical web server distributions, nor
prominently offered by CAs to subscribers.

Humans also manage their personal client/e-mail certificates, unless
tricked into using key-escrowed services.

Given that the primary users of S/MIME are enterprises, this is incorrect.
Such certificates are typically centrally managed by an IT department
using some level of automation.


The limited use of personal S/MIME in your environment is mostly an
artifact of the lack of affordable public CAs.

A number of national citizen certificate programs in fact issued (and
might still issue) S/MIME certificates to the majority of the citizenry,
but only for their "main/official" e-mail address (so not for any
throw-away addresses used for mailing lists etc.).


Furthermore, certificates that are fully validated (OV, EV etc.)
generally involve humans at the subscriber end in the validation
process.  The EV BRs give some key examples of such procedures.

This is yet another reason why EV/OV is bad, not an argument against automation.


This is only an argument against EV/OV if you want the world to be ruled
by AI robots instead of humans.

An additional manual process is the "simple" act of paying for a
certificate application, which involves not just the transaction done
during the ordering process, but also the subsequent bookkeeping job of
putting the transaction into the correct part of the company accounts.

Luckily computer programs are very good at tracking transactions and doing 
bookkeeping.

But bookkeeping systems are generally security-isolated from minor
technical systems and IT department technical work.  So the only
transactions they generally automate are the daily sales channels and
the payroll.  Everything else has to be approved by an internal
accountant.


You are assuming a level of automation that just isn't there in the real
world.

I’m not assuming anything. It’s a fact that this automation already exists, and 
it’s very clear from the benefits that it will be even more common in the 
future.


You seem to be stuck inside some kind of ivory tower world where
computers are king and everything is done by robots.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy
