I think requiring publication of profiles for certs is a good idea. It's 
something I've wanted to publish as part of our CPS. You can see most of our 
profiles here: 
 but it doesn’t include ICAs right now. That was an oversight that we should 
fix. Publication of profiles probably won't prevent issues related to 
engineering snafus or more manual procedures. However, publication may 
eliminate a lot of the disagreement over BR/Mozilla policy wording. That's a 
lot more work for the policy owners, though, so the community would probably 
need to be more actively involved in reviewing profiles. Requiring publication 
at least gives the public a chance to review the information, which may not 
exist today.

The manual component definitely introduces a lot of risk in sub CA creation, 
and the explanation I gave is broader than renewals. It's more about the risks 
currently associated with Sub CAs. The difference between renewal and new 
issuance doesn't exist at DigiCert – we got caught on that issue a long time 
ago.

From: Ryan Sleevi <r...@sleevi.com>
Sent: Tuesday, October 8, 2019 5:49 PM
To: Jeremy Rowley <jeremy.row...@digicert.com>
Cc: Wayne Thayer <wtha...@mozilla.com>; Ryan Sleevi <r...@sleevi.com>; 
mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: Mozilla Policy Requirements CA Incidents

On Tue, Oct 8, 2019 at 6:42 PM Jeremy Rowley 
<jeremy.row...@digicert.com> wrote:
Tackling Sub CA renewals/issuance from a compliance perspective is difficult 
because of the number of manual components involved. You have the key ceremony, 
the scripting, and all of the formal process involved. Because the root is 
stored in an offline state and only brought out for a very intensive procedure, 
there is a lot that can go wrong compared to end-entity certs, including bad 
profiles and bad coding. These events also happen rarely enough that many CAs 
might not have well-defined processes around them. A couple of things we've 
done to eliminate issues include:

  1.  Two-person review over the profile, plus a formal sign-off from the policy 
  2.  A standard scripting tool for generating the profile to ensure only the 
subject info in the cert changes. This includes some basic linting.
  3.  We issue a demo cert. This cert is exactly the same as the one we want 
to issue, except it's not publicly trusted and has a different serial number. 
We then review the demo cert to ensure profile accuracy. We should also run 
this cert through a linter (added to my to-do list).
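To make step 2 concrete, here's a minimal sketch of the kind of check a profile-generation tool can enforce: diff a candidate profile against the approved template and fail if anything other than the subject info changed. This is purely illustrative (the field names and structure are hypothetical, not DigiCert's actual tooling):

```python
# Hypothetical sketch: verify a generated sub-CA profile differs from the
# approved template only in subject fields. Field names are illustrative.
APPROVED_TEMPLATE = {
    "subject": {"CN": "Example Sub CA 1", "O": "Example Org", "C": "US"},
    "key_usage": ["keyCertSign", "cRLSign"],
    "basic_constraints": {"ca": True, "path_len": 0},
    "ekus": ["serverAuth", "clientAuth"],
}

# Only the subject is allowed to vary between the template and a new profile.
ALLOWED_TO_CHANGE = {"subject"}

def profile_diff(template, candidate):
    """Return the set of top-level fields that differ between two profiles."""
    keys = set(template) | set(candidate)
    return {k for k in keys if template.get(k) != candidate.get(k)}

def check_profile(template, candidate):
    """Raise if any field outside the allow-list changed."""
    unexpected = profile_diff(template, candidate) - ALLOWED_TO_CHANGE
    if unexpected:
        raise ValueError(f"unexpected profile changes: {sorted(unexpected)}")
    return True

# A new sub-CA profile that only changes the subject passes the check.
candidate = dict(APPROVED_TEMPLATE,
                 subject={"CN": "Example Sub CA 2", "O": "Example Org", "C": "US"})
check_profile(APPROVED_TEMPLATE, candidate)

# One that also drops an EKU is rejected.
bad = dict(candidate, ekus=["serverAuth"])
try:
    check_profile(APPROVED_TEMPLATE, bad)
except ValueError as e:
    print(e)
```

The same comparison can obviously be run over the parsed fields of the demo cert in step 3, rather than over the profile definition.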

We used to treat renewals separately from new issuance. I think there's still a 
sense that they “are” different, but that’s been changing. I’m definitely 
looking forward to hearing what other CAs do.

It's not clear: are you suggesting that the configuration of sub-CA profiles is 
more, less, or equally as risky as for end-entity certificates? It would seem that, 
regardless, the need for review and oversight is the same, so I'm not sure that 
#1 or #2 would be meaningfully different between the two types of certificates?

That said, of the incidents, only two of those were potentially related to the 
issuance of new versions of the intermediates (Actalis and QuoVadis). The other 
two were new issuance.

So I don't think we can explain it as entirely around renewals. I definitely 
appreciate the implicit point you're making: that every manual action of a 
CA, or more generally, every action that requires a human to be involved, is an 
opportunity for failure. It seems that we should replace all the humans, then, 
to mitigate the failure? ;)

To go back to your transparency suggestion, would we be better off if:
1) CAs were required to strictly disclose every single certificate profile for 
everything "they sign"
2) Demonstrate compliance by updating their CP/CPS to the new profile, by the 
deadline required. That is, requiring all CAs update their CP/CPS prior to 

Would this prevent issues? Maybe - only to the extent CAs view their CP/CPS as 
authoritative, and strictly review what's in them. I worry that such a solution 
would lead to the "We published it, you didn't tell us it was bad" sort of 
situation (as we've seen with audit reports), which then further goes down a 
rabbit-hole of requiring CP/CPS be machine readable, and then tools to lint 
CP/CPS, etc. By the time we've added all of this complexity, I think it's 
reasonable to ask if the problem is not the humans in the loop, but the wrong 
humans (i.e. going back to distrusting the CA). I know that's jumping to 
conclusions, but it's part of what taking an earnest look at these issues 
requires: how do we improve things, what are the costs, and are there cheaper 
solutions that provide the same assurances?
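For what it's worth, a tool to lint a machine-readable profile disclosure wouldn't need to be elaborate to catch the common sub-CA profile mistakes. A rough sketch, with entirely illustrative field names and rules (not an actual CCADB or BR schema):

```python
# Hypothetical sketch of linting a machine-readable CA profile disclosure.
# Field names and rules are illustrative only, not a real disclosure format.
REQUIRED_FIELDS = {"subject", "key_usage", "basic_constraints", "ekus"}

def lint_disclosed_profile(profile):
    """Return a list of human-readable findings for a disclosed CA profile."""
    findings = []
    missing = REQUIRED_FIELDS - set(profile)
    if missing:
        findings.append(f"missing fields: {sorted(missing)}")
    bc = profile.get("basic_constraints", {})
    if not bc.get("ca"):
        findings.append("basicConstraints must assert CA:TRUE for an intermediate")
    if "anyExtendedKeyUsage" in profile.get("ekus", []):
        findings.append("anyExtendedKeyUsage should not appear in a TLS intermediate")
    return findings

# A complete, well-formed disclosure produces no findings.
good = {
    "subject": {"CN": "Example Sub CA"},
    "key_usage": ["keyCertSign", "cRLSign"],
    "basic_constraints": {"ca": True, "path_len": 0},
    "ekus": ["serverAuth"],
}
print(lint_disclosed_profile(good))

# A sloppy one produces several.
bad = {"subject": {"CN": "Example Sub CA"}, "ekus": ["anyExtendedKeyUsage"]}
print(lint_disclosed_profile(bad))
```

Of course, this only moves the problem: someone still has to write, review, and trust the rules, which is exactly the complexity-versus-assurance trade-off above.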