On 06/06/2017 07:45, Ryan Sleevi wrote:
> On Mon, Jun 5, 2017 at 6:21 PM, Jakob Bohm via dev-security-policy <
>> If you read the paper, it contains a proposal for the CAs to countersign
>> the computed super-CRL to confirm that all entries for that CA match the
>> actual revocations and non-revocations recorded by that CA. This is not
>> currently deployed, but is an example of something that CAs could safely
>> do using their private key, provided sufficient design competence by the
>> central super-CRL team.
> I did read the paper - and provide feedback on it.
>
> And that presumption that you're making here is exactly the reason why you
> need a whitelist, not a blacklist. "provided sufficient design competence"
> does not come for free - it comes with thoughtful peer review and community
> feedback. Which can be provided in the aspect of policy.
I am saying that an administrative policy for inclusion in a root
program is not the place to do technical reviews of security
protocols. And I proceeded to list places that *do* perform such peer
review at the highest level of competency, but had to note that the list
would be too long to enumerate in a stable root program policy.
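For concreteness, the countersigned super-CRL idea discussed above could look roughly like this. This is a hypothetical sketch: the fixed-width canonical encoding is invented for illustration, and `countersign` is a stand-in (HMAC here, only so the sketch runs) for whatever real signature primitive and published structure a specification would define:

```python
import hashlib
import hmac

def canonical_entry_digest(ca_name: bytes, entries: list[tuple[int, bool]]) -> bytes:
    """Deterministic digest over the super-CRL's view of one CA:
    (serial, revoked) pairs sorted by serial, so the CA and the central
    aggregator arrive at the same bytes independently."""
    h = hashlib.sha256()
    h.update(ca_name)
    for serial, revoked in sorted(entries):
        h.update(serial.to_bytes(20, "big"))   # fixed-width serial number
        h.update(b"\x01" if revoked else b"\x00")
    return h.digest()

def countersign(ca_key: bytes, digest: bytes) -> bytes:
    """Stand-in for the CA's real signature over a distinct, published
    countersignature structure (HMAC used only to keep the sketch runnable)."""
    return hmac.new(ca_key, digest, hashlib.sha256).digest()
```

The point of the sorted, fixed-width encoding is that the CA can recompute the digest from its own revocation records and countersign only when it matches the aggregator's value, confirming the super-CRL entries without having to trust the central team.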
>> Another good example could be signing a "certificate white-list"
>> containing all issued but not revoked serial numbers. Again, someone
>> (not a random CA) should provide a well thought out data format
>> specification that cannot be maliciously confused with any of the
>> current data types.
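Purely as an illustration of the quoted idea, a whitelist encoding might look like the following. Everything here is invented for the sketch (the format tag, field widths, and trailing checksum); a real specification would register a distinct OID and the CA's actual signature would cover the whole structure:

```python
import hashlib
import struct

# Hypothetical format tag; no certificate parser accepts bytes starting this way.
MAGIC = b"CT-WHITELIST/1\x00"

def encode_whitelist(serials: list[int]) -> bytes:
    """Serialize the set of issued-but-unrevoked serials as sorted,
    fixed-width entries under the format tag, plus an integrity checksum."""
    body = b"".join(s.to_bytes(20, "big") for s in sorted(set(serials)))
    header = MAGIC + struct.pack(">I", len(body) // 20)   # entry count
    return header + body + hashlib.sha256(header + body).digest()

def decode_whitelist(blob: bytes) -> set[int]:
    """Parse and verify a whitelist blob; reject anything else."""
    if not blob.startswith(MAGIC):
        raise ValueError("not a whitelist structure")
    count = struct.unpack(">I", blob[len(MAGIC):len(MAGIC) + 4])[0]
    end = len(MAGIC) + 4 + 20 * count
    if hashlib.sha256(blob[:end]).digest() != blob[end:end + 32]:
        raise ValueError("checksum mismatch")
    body = blob[len(MAGIC) + 4:end]
    return {int.from_bytes(body[i:i + 20], "big") for i in range(0, len(body), 20)}
```

The checksum only protects the encoding; the non-confusability property comes from the leading format tag, which is the sketch's analogue of the "published specification OID" discussed later in this thread.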
> Or a bad example. And that's the point - you want sufficient technical
> review (e.g. an SDO ideally, but minimally m.d.s.p review).
SDO? Unfamiliar with that TLA.

And why should Mozilla (and every other root program) be consulted to
unanimously preapprove such technical work? This will create a massive
roadblock for progress. I really see no reason to create another FIPS
140 style bureaucracy of meaningless rule enforcement (not to be
confused with the actual security tests that are also part of FIPS 140).
> Look, you could easily come up with a dozen examples of improved validation
> methods - but just because they exist doesn't mean keeping the "any other
> method" is good. And, for what it's worth, of those that did shake out of
> the discussions, many of them _were_ insecure at first, and evolved through
> review.
Interestingly, the list of revocation checking methods supported by
Chrome (and proposed to be supported by future Firefox versions) is
essentially _empty_ now. Which is completely insecure.
>> Here's one item no-one listed so far (just to demonstrate our collective
>> lack of imagination):
> This doesn't need imagination - it needs solid review. No one is
> disagreeing with you that there can be improvements. But let's start with
> the actual concrete matters at hand, appropriately reviewed by the
> Mozilla-using community, to ensure each serves a purpose consistent with
> the mission and doesn't pose risks to users.
Within *this thread*, proposed policy language would have banned that.
And neither I nor any other participant seemed to realize this specific
omission until my post this morning.
>> However the failure mode for "signing additional CA operational items"
>> would be a lot less risky and a lot less reliant on CA competency.
> That is demonstrably not true. Just look at the CAs who have had issues
> with their signing ceremonies. Or the signatures they've produced.
Did any of those involve erroneously signing non-certificates of a
wholly inappropriate data type?
>> It is restrictions for restrictions' sake, which is always bad policy.
> No it's not. You would have to reach very hard to find a single security
> engineer who would argue that a blacklist is better than a whitelist for
> security. It's not - you validate your inputs, you don't just reject the
> badness you can identify. Unless you're an AV vendor, which would explain
> why so few security engineers work at AV vendors.
I am not an AV vendor.

Technical security systems work best with whitelists wherever possible.
Human-to-human policy making works best with blacklists wherever possible.
Root inclusion policies are human-to-human policies.
>> If necessary, one could define a short list of technical characteristics
>> that would make a signed item non-confusable with a certificate. For
>> example, it could be a PKCS#7 structure, or any DER structure whose
>> first element is a published specification OID nested in one or more
>> layers of SEQUENCE or SET tags; perhaps further safe alternatives could
>> be added to this.
> You could try to construct such a definition - but that's a needless
> technical complexity with considerable ambiguity for a hypothetical
> situation that you are the only one advocating for, and using an approach
> that has repeatedly led to misinterpretations and security failures.
Indeed, and I was trying not to, until forced by posts rejecting the
simple rule that if it looks like a certificate, it counts as a
certificate issuance for policy purposes.
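The rule quoted above ("a published specification OID nested in one or more layers of SEQUENCE or SET tags") is mechanically checkable. A rough sketch, relying only on DER's universal tag bytes (0x30 SEQUENCE, 0x31 SET, 0x06 OBJECT IDENTIFIER) and deliberately ignoring indefinite lengths and high-tag-number forms:

```python
def _skip_length(data: bytes, i: int) -> int:
    """Return the index just past a DER length field starting at data[i]."""
    first = data[i]
    return i + 1 + (first & 0x7F if first & 0x80 else 0)

def is_non_confusable(blob: bytes) -> bool:
    """True if blob is a DER SEQUENCE/SET whose first non-container element
    is an OBJECT IDENTIFIER, i.e. it cannot parse as a Certificate (whose
    outer SEQUENCE starts with the tbsCertificate SEQUENCE, which in turn
    starts with a version tag or INTEGER serial, never an OID)."""
    if not blob or blob[0] not in (0x30, 0x31):      # SEQUENCE or SET
        return False
    i = _skip_length(blob, 1)
    # Descend through any extra nesting layers of SEQUENCE/SET, per the rule.
    while i < len(blob) and blob[i] in (0x30, 0x31):
        i = _skip_length(blob, i + 1)
    return i < len(blob) and blob[i] == 0x06         # OBJECT IDENTIFIER
```

So a structure that leads with its specification's OID passes, while anything shaped like a Certificate (or TBSCertificate) fails, which is the non-confusability property the proposal is after.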
>> An incorrect CRL is an incorrect CRL and falls under the CRL policy
>> requirements. An incorrect OCSP response is an incorrect OCSP response
>> and falls under the OCSP policy requirements.
> This is an unnecessary ontology split, because it leaves ambiguous where
> something that 'ends up in the middle' falls. Which is very much the risk
> from these things (e.g. SHA-1 signing of OCSP responses, even if the
> certificates signed are SHA-256).
Just trying to preserve existing ontologies. Prior to this thread,
failures in OCSP and CRL operations were never classified as
"mis-issuance", because they share nothing relevant with "mis-issuance".
For example, you cannot "revoke a mis-issued OCSP response" within 24
hours by adding it to CRLs etc. It's nonsense.
>> Those whitelists have already proven problematic, banning (for example)
>> any serious test deployment of well-reviewed algorithms such as non-NIST
>> curves, SHA-3, non-NIST hashes, quantum-resistant algorithms, perhaps
>> even RSA-PSS (RFC 3447; I haven't worked through the exact wording to
>> check for inclusion of this one).
> I suspect this is the core of our disagreement. It has prevented a number
> of insecure deployments or incompatible deployments that would pose
> security or compatibility risk to the Web Platform. Crypto is not about
> "everything and the kitchen sink" - which you're advocating both here and
> overall - it's about having a few, well reviewed, well-oiled joints. The
> unnecessary complexity harms the overall security of the ecosystem and
> harms interoperability - both key values in Mozilla's mission.
I think you are exaggerating my position here. What I am trying to
avoid is a frozen monoculture ecosystem that will fail spectacularly
when the single permitted security configuration is proven inadequate,
because every player in the ecosystem was forced, by policy, to not have
any alternatives ready.
> Rather than arguing for the sake of the hypothetical, what some CA "might"
> want to do, it's far more productive to have the actual use cases with
> actual interest in deployment (of which none of those things are) come
> forward to have the public discussion. Otherwise, we're just navel gazing,
> and it's unproductive :)
The attitudes on this newsgroup seem to strongly discourage any attempt
to express such interest. Thus I would not expect any CA wishing to
stay in the root program to risk expressing such interest here.

As a non-CA, I have the freedom to advocate that they be given a fair
hearing.
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded