The way that we currently handle these types of issues is about as good as
we're going to get. We have a [recently relaxed but still] fairly stringent
set of rules around revocation in the BRs. This is necessary and proper
because slow/delayed revocation can clearly harm our users. It was
difficult to gain consensus within the CAB Forum on allowing even 5 days in
some circumstances - I'm confident that something like 28 days would be a
non-starter. I'm also confident that CAs will always take the entire time
permitted to perform revocations, regardless of the risk, because it is in
their interest to do so (that is not meant to be a criticism of CAs so much
as a statement that CAs exist to serve their customers, not our users). I'm
also confident that any attempt to define "low risk" misissuance would just
incentivize CAs to stop treating misissuance as a serious offense and we'd
be back to where we were prior to the existence of linters.

CAs obviously do choose to violate the revocation time requirements. I do
not believe this is generally based on a thorough risk analysis, but in
practice it is clear that they do have some discretion. I am not aware of a
case (yet) in which Mozilla has punished a CA solely for violating a
revocation deadline. When that happens, the violation is documented in a
bug and should appear on the CA's next audit report/attestation statement.
From there, the circumstances (how many certs? what was the issue? was it
previously documented? is this a pattern of behavior?) have to be
considered on a case-by-case basis to decide a course of action. I realize
that this is not a very satisfying answer to the questions that are being
raised, but I do think it's the best answer.

- Wayne

On Wed, Nov 28, 2018 at 1:10 PM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Mon, 26 Nov 2018 18:47:25 -0500
> Ryan Sleevi via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
> > CAs have made the case - it was not accepted.
> >
> > On a more fundamental and philosophical level, I think this is
> > well-intentioned but misguided. Let's consider that the issue is one
> > that the CA had the full power-and-ability to prevent - namely, they
> > violated the requirements and misissued. A CA is only in this
> > situation if they are a bad CA - a good CA will never run the risk of
> > "annoying" the customer.
>
> I would sympathise with this position if we were considering, say, a
> problem that had caused a CA to issue certs with the exact same mistake
> for 18 months, rather than, as I understand here, a single certificate.
>
> Individual human errors are inevitable at a "good CA". We should not
> design systems, including policy making, that assume all errors will be
> prevented because that contradicts the assumption that human error is
> inevitable. Although it is often used specifically to mean operator
> error, human error can be introduced anywhere. A requirements document
> which erroneously says a particular Unicode codepoint is permitted in a
> field when it should be forbidden is still human error. A department
> head who feels tired and signs off on a piece of work that actually
> didn't pass tests is still human error.
>
> In true failure-is-death scenarios like fly-by-wire aircraft controls
> this assumption means extraordinary methods must be used in order to
> minimise the risk of inevitable human error resulting in real world
> systems failure. Accordingly the resulting systems are exceptionally
> expensive. Though the Web PKI is important, we should not imagine for
> ourselves that it warrants this degree of care and justifies this level
> of expense even at a "good CA".
>
> What we can require in policy - and as I understand it Mozilla policy
> does require - is that the management (also humans) take steps to
> report known problems and prevent them from recurring. This happened
> here.
>
> > This presumes that the customer cannot take steps to avoid this.
> > However, as suggested by others, the customer could have minimized or
> > eliminated annoyance, such as by ensuring they have a robust system
> > to automate the issuance/replacement of certificates. That they
> > didn't is an operational failure on their fault.
>
> I agree with this part.
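
(As an aside: the "robust system to automate the issuance/replacement of
certificates" mentioned above can be sketched in a few lines. The helper
below only decides *when* a certificate should be replaced; the 14-day
margin and the function name are illustrative assumptions, not anything
the BRs or any ACME client mandates.)

```python
import datetime

# Hypothetical renewal margin: replace well before expiry, never at it.
RENEW_MARGIN_DAYS = 14

def expires_within(not_after: str, margin_days: int,
                   now: datetime.datetime) -> bool:
    """Return True if a cert's notAfter (OpenSSL text form,
    e.g. 'Jun 01 12:00:00 2025 GMT') falls within margin_days of now."""
    expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return expiry - now < datetime.timedelta(days=margin_days)

# A cron job would call this and trigger reissuance (e.g. via an ACME
# client) whenever it returns True.
```

A subscriber with something like this in place can absorb an early
revocation as routine maintenance rather than an emergency.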
>
> > This presumes that there is "little or no risk to relying parties."
> > Unfortunately, they are by design not a stakeholder in those
> > conversations
>
> It does presume this, and I've seen no evidence to the contrary. Also I
> think I am in fact a stakeholder in this conversation anyway?
>
> > I agree that the increasingly implausible "important" revocations are
> > entirely worthless. I think a real and meaningful solution is
> > what is being more consistently pursued, and that's to distrust CAs
> > that are not adhering to the set of expectations.
>
> I don't think root distrust is an appropriate response, in the current
> state, to a single incident of this nature; this sort of thing is,
> indeed, why you may remember me suggesting that Mozilla needs other
> mechanisms short of distrust in its arsenal.
>
> Nick.
> _______________________________________________
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>