On 27/11/2018 00:54, Ryan Sleevi wrote:
> On Mon, Nov 26, 2018 at 12:12 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> 1. Having a spare certificate ready (if done with proper security, e.g.
>>     a separate key) from a different CA may unfortunately conflict with
>>     badly thought out parts of various certificate "pinning" standards.
>>
> 
> You blame the standards, but that seems an operational risk that the site
> (knowingly) took. That doesn't make a compelling argument.
> 

I blame those standards for forcing every site to choose between two 
unfortunate risks: in this case, either the risks that those "pinning" 
mechanisms prevent or the risks associated with having only one 
certificate.

The fact that sites are forced to make that choice makes it unfair to 
presume they should always choose to prevent whichever risk is discussed 
in a given context.  Groups discussing other risks could just as unfairly 
blame sites for not using one of those "pinning" mechanisms.

> 
>> 2. Being critical from a society perspective (e.g. being the contact
>>     point for a service to help protect the planet), doesn't mean that the
>>     people running such a service can be expected to be IT superstars
>>     capable of dealing with complex IT issues such as unscheduled
>>     certificate replacement due to no fault of their own.
>>
> 
> That sounds like an operational risk the site (knowingly) took. Solutions
> for automation exist, as do concepts such as "hiring multiple people"
> (having a NOC/SOC). I see nothing to argue that a single person is somehow
> the risk here.
> 

The number of people in the world who can do this is substantially 
smaller than the number of sites that might need them.  We must 
therefore, by necessity, accept that some such sites will not hire such 
people, let alone multiple of them for their own exclusive use.

Automating certificate deployment (as you often suggest) lowers 
operational security, as it necessarily grants read/write access to 
the certificate data (including the private key) to an automated, 
online, unsupervised system.
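
To illustrate the point (a minimal sketch only, not any particular 
ACME client; the paths are hypothetical): the renewal automation has 
to generate the key pair itself and store it, unencrypted, where the 
web server can load it, so that online, unsupervised component 
necessarily holds the private key.

    # Sketch of what an automated renewal job typically does before
    # talking to the CA.  The paths are hypothetical; the point is that
    # the unsupervised job creates and writes the unencrypted key.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # No passphrase: there is no human present to type one.
    with open("/etc/tls/example.com.key", "wb") as f:
        f.write(key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.NoEncryption(),
        ))

    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name(
               [x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
           .sign(key, hashes.SHA256()))
    # The CSR is then submitted to the CA (e.g. over ACME) and the
    # returned certificate installed, all without human review.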

Allowing multiple persons to replace the certificates also lowers 
operational security, as it (by definition) grants multiple persons 
read/write access to the certificate data.

Under the current and past CA model, certificate and private key 
replacement is a rare (once every two years) operation that can be done 
manually and scheduled weeks in advance, except for unexpected 
failures (such as a CA messing up).


> 
>> 3. Not every site can be expected to have the 24/7 staff on hand to do
>>     "top security credentials required" changes, for example a high-
>>     security end site may have a rule that two senior officials need to
>>     sign off on any change in cryptographic keys and certificates, while a
>>     limited-staff end-site may have to schedule a visit from their outside
>>     security consultant to perform the certificate replacement.
>>
> 
> This is exactly describing a known risk that the site took, accepting the
> tradeoffs. I fail to see a compelling argument that there should be no
> tradeoffs - given the harm presented to the ecosystem - and if sites want
> to make such policies, rather than promoting automation and CI/CD, then it
> seems that's a risk they should bear and make an informed choice.
> 

The trade-off would have been made against the risk of the site itself 
mishandling its private key (e.g. a site breach), not against force 
majeure situations such as a CA recalling a certificate out of turn.

It is generally not fair to say that "the possibility that we may 
impose a difficult situation on them is a risk that the site took".

>> Thus I would be all for an official BR ballot to clarify/introduce
>> that 24 hour revocation for non-compliance doesn't apply to non-
>> dangerous technical violations.
>>
> 
> As discussed elsewhere, there is no such thing as "non-dangerous technical
> violations". It is a construct, much like "clean coal", that has an
> appealing turn of phrase, but without the evidence to support it.
> 

That is simply not true.  The case at hand is a very good example: the 
problem is that a text field which current software uses only for 
display purposes, and which generally requires either human 
interpretation or parsing rules yet to be defined, was given an 
out-of-range value.

Unless someone can point out a real-world piece of production software 
which causes security problems when presented with the particular out-
of-range value, or show that the particular out-of-range value would 
reasonably mislead human relying parties, the dangers are entirely 
hypothetical and/or political.

> 
>> Another category that would justify a longer CA response time would be a
>> situation where a large batch of certificates need to be revalidated due
>> to a weakness in validation procedures (such as finding out that a
>> validation method had a vulnerability, but not knowing which if any of
>> the validated identities were actually fake).  For example to recheck a
>> typical domain-control method, a CA would have to ask each certificate
>> holder to respond to a fresh challenge (lots of manual work by end
>> sites), then do the actual check (automated).
> 
> 
> Like the other examples, this is not at all compelling. Solutions exist to
> mitigate this risk entirely. CAs and their Subscribers that choose not to
> avail themselves of these methods - for whatever the reason - are making an
> informed market choice about these. If they're not informed, that's on the
> CAs. If they are making the choice, that's on the Subscribers.
> 

You have yet to point out methods that work in practice, and without 
risk, for organizations not dedicated to a large-scale DevOps model 
like your employer's.

For example, every BR-permitted automated domain validation method 
involves a challenge-response interaction with the site owner, who 
must not (to prevent rogue issuance) respond to such challenges except 
during planned issuance.
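
As a rough illustration of that interaction (a sketch of the ACME 
HTTP-01 flavour from RFC 8555; the paths and the operator-set 
"issuance planned" flag are my own hypothetical example): the CA 
supplies a token, and the site proves control by publishing the 
matching key authorization at a well-known URL, which a cautious site 
should only do while it has itself requested issuance.

    # Sketch: answer an ACME HTTP-01 challenge only inside a planned
    # issuance window.  WEBROOT and ISSUANCE_FLAG are hypothetical.
    import os

    WEBROOT = "/var/www/html/.well-known/acme-challenge"
    ISSUANCE_FLAG = "/etc/acme/issuance-planned"  # touched by the operator

    def publish_challenge(token: str, account_key_thumbprint: str) -> None:
        """Write the RFC 8555 key authorization (token.thumbprint) for
        the CA to fetch, refusing if no issuance is planned so that an
        unsolicited challenge cannot be abused for rogue issuance."""
        if not os.path.exists(ISSUANCE_FLAG):
            raise RuntimeError("no issuance planned; refusing challenge")
        os.makedirs(WEBROOT, exist_ok=True)
        with open(os.path.join(WEBROOT, token), "w") as f:
            f.write(f"{token}.{account_key_thumbprint}")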

Thus any unscheduled revalidation of domain ownership would, by 
necessity, involve contacting the site owner and convincing them this is 
not a phishing attempt.

Some ACME protocols may contain specific authenticated ways for the CA 
to revalidate out-of-schedule, but this would be outside the norm.

> There's zero reason to change, especially when such revalidation can be,
> and is, being done automatically.
> 


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 