Re: Symantec response to Google proposal

2017-06-05 Thread Peter Kurrasch via dev-security-policy
  Hi Gerv--

Is Mozilla willing to consider a simpler approach in this matter? For
example, it seems that much of the complexity of the Google/Symantec
proposal stems from this new PKI idea. I think Mozilla could obtain a
satisfactory outcome without it.

From: Gervase Markham via dev-security-policy
Sent: Friday, June 2, 2017 9:54 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Symantec response to Google proposal

https://www.symantec.com/connect/blogs/symantec-s-response-google-s-subca-proposal

Symantec have responded to the Google proposal (which Mozilla has
endorsed as the basis for further discussion) with a set of inline
comments which raise some objections to what is proposed.

Google will, no doubt, be evaluating these requests for change and
deciding to accept, or not, each of them. But Mozilla can make our own
independent decisions on these points if we choose. If Google and
Mozilla accept a change, it is accepted. If Google accepts it but we
decline to accept, we can add it to our list of additional requirements
for Symantec instead.

Therefore, I would appreciate the community's careful consideration of
the reasonableness of Symantec's requests for change to the proposal.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


New undisclosed intermediates

2017-06-05 Thread Alex Gaynor via dev-security-policy
Happy Monday!

Another week, another set of intermediate certs that have shown up in CT
without having been properly disclosed:
https://crt.sh/mozilla-disclosures#undisclosed

There are four intermediates here, and with the exception of the StartCom
one, they were all issued more than a year ago.

As I've expressed before, I find it baffling that this still happens. To
approach this more productively, I'd be very appreciative if someone from a
CA could describe how they approach disclosing intermediates, where it fits
into their process, how they track progress, etc.
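
One plausible shape for that tracking (a minimal sketch with hypothetical
fingerprints; not a real CCADB or crt.sh integration) is simple set
bookkeeping built into the issuance workflow:

```python
# Sketch of disclosure tracking (hypothetical data): record every issued
# unconstrained intermediate at signing-ceremony time, record what has been
# disclosed, and treat the set difference as the open disclosure backlog.

issued_intermediates = {
    "aa11",  # SHA-256 fingerprint, recorded when the intermediate was signed
    "bb22",
    "cc33",
}
disclosed_intermediates = {"aa11", "cc33"}  # what the disclosure list shows

# Anything issued but not disclosed is a policy violation waiting to happen.
undisclosed = issued_intermediates - disclosed_intermediates
print(sorted(undisclosed))  # the certs that still need disclosure
```

If this difference is checked on a schedule (or, better, as a gate in the
ceremony process itself), an intermediate can never sit undisclosed for a
year without anyone noticing.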

Cheers,
Alex


On remedies for CAs behaving badly

2017-06-05 Thread Matthew Hardeman via dev-security-policy
Hi all,

I thought it prudent, in light of the recent response from Symantec to the
Google Chrome proposal for remediation, to raise the question of what
remedies the community and the root programs actually have against a CA
behaving badly (mis-issuance, etc.).

Symantec makes a number of credible points in their responses.  It's hard
to refute their claim that a third-party managed CA environment, at a scale
that can handle Symantec's traffic, cannot be stood up in the proposed time
frames.

In the end, it seems inevitable that everyone will agree that a practical
time frame to accomplish the plan laid out could stretch to... maybe even a
year.

As soon as everyone buys into that, Symantec will no doubt come back with:
"Hmm... by that time, we'll have the new roots in the browser stores, so how
about we skip the third party and go straight to that?"

Even if that's not the way it goes, this Symantec case is certainly a good 
example of cures (mistrust) being as bad as the disease (negligence, bad 
acting).

Has there ever been an effort by the root programs to directly assess monetary 
penalties to the CAs -- never for inclusion -- but rather as part of a 
remediation program?

Obviously there would be limits and caveats.  A shady commercial CA propped up 
by a clandestine government program such that the CA seems eager to pay out for 
gross misissuance -- even in amounts that exceed their anticipated revenue -- 
could not be allowed.

I am curious, however, to know whether anyone has done any analysis on the
introduction of economic sanctions -- combined with proper remediation --
as a condition of remaining trusted, and as a mechanism for incentivizing
compliance with the rules.

Particularly in smaller organizations, it may be less necessary.  In larger 
(and especially publicly traded) companies, significant economic sanctions can 
get the attention and involvement of the highest levels of management in a way 
that few other things can.

Thanks,

Matt


Re: On remedies for CAs behaving badly

2017-06-05 Thread Ryan Sleevi via dev-security-policy
On Mon, Jun 5, 2017 at 11:52 AM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Has there ever been an effort by the root programs to directly assess
> monetary penalties to the CAs -- never for inclusion -- but rather as part
> of a remediation program?
>

The extent to which there can be meaningful discussion about this is going
to be, understandably, significantly limited, for non-technical reasons.

I can simply point you to the existing precedent and discussions around
such proposals:

1) Examine the DigiNotar case, both with respect to liability and with
respect to insurance
2) Examine the CA/Browser Forum's multiple discussions around CA liability
in the context of EV, with Browsers uniformly voting against imposing
additional liability due to the fact that no liability claim for
misissuance has ever been successfully claimed, and thus it merely
represents an artificial barrier to market entry that predominantly Western
CAs use to exclude those in other jurisdictions
3) Examine CAs' CP/CPS statements with respect to disclaiming liability.
4) Examine CA's Relying Party Agreements regarding the obligations of an RP
prior to having liability

While on paper the idea sounds quite good, it turns out to simply trade
technical complexity for complexity of the non-technical sort. As such,
it's best to focus on meaningful and actionable technical solutions.


Re: On remedies for CAs behaving badly

2017-06-05 Thread Moudrick M. Dadashov via dev-security-policy

+1

Thanks,
M.D.

On 6/5/2017 7:16 PM, Ryan Sleevi via dev-security-policy wrote:

On Mon, Jun 5, 2017 at 11:52 AM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

Has there ever been an effort by the root programs to directly assess
monetary penalties to the CAs -- never for inclusion -- but rather as part
of a remediation program?


The extent to which there can be meaningful discussion about this is going
to be, understandably, significantly limited, for non-technical reasons.

I can simply point you to the existing precedent and discussions around
such proposals:

1) Examine the DigiNotar case, both with respect to liability and with
respect to insurance
2) Examine the CA/Browser Forum's multiple discussions around CA liability
in the context of EV, with Browsers uniformly voting against imposing
additional liability due to the fact that no liability claim for
misissuance has ever been successfully claimed, and thus it merely
represents an artificial barrier to market entry that predominantly Western
CAs use to exclude those in other jurisdictions
3) Examine CAs' CP/CPS statements with respect to disclaiming liability.
4) Examine CA's Relying Party Agreements regarding the obligations of an RP
prior to having liability

While on paper the idea sounds quite good, it turns out to simply trade
technical complexity for complexity of the non-technical sort. As such,
it's best to focus on meaningful and actionable technical solutions.


Re: On remedies for CAs behaving badly

2017-06-05 Thread Peter Bowen via dev-security-policy
On Mon, Jun 5, 2017 at 9:16 AM, Ryan Sleevi via dev-security-policy
 wrote:
> On Mon, Jun 5, 2017 at 11:52 AM, Matthew Hardeman via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>>
>> Has there ever been an effort by the root programs to directly assess
>> monetary penalties to the CAs -- never for inclusion -- but rather as part
>> of a remediation program?
>
> The extent to which there can be meaningful discussion about this is
> going to be, understandably, significantly limited, for non-technical reasons.
>
> I can simply point you to the existing precedent and discussions around
> such proposals:
>
> 2) Examine the CA/Browser Forum's multiple discussions around CA liability
> in the context of EV, with Browsers uniformly voting against imposing
> additional liability due to the fact that no liability claim for
> misissuance has ever been successfully claimed, and thus it merely
> represents an artificial barrier to market entry that predominantly Western
> CAs use to exclude those in other jurisdictions

It is also worth noting that many CAs are fairly small companies.
Many CAs are privately held, or are small portions of much larger
companies, so estimating their sizes can be hard.  However, there are a
few data points:

Buypass posts total revenue
(https://www.buypass.no/om-buypass/selskapet/n%C3%B8kkeltall): they
reported revenue of 192 million Norwegian kroner in 2015; using
today's exchange rate, this is about $23 million US dollars.
WISeKey reported that QuoVadis (whom they acquired) had revenue of $18
million US dollars in 2016
(https://www.wisekey.com/press/wisekey-completes-acquisition-of-cybersecurity-company-quovadis-and-becomes-an-pki-internet-of-things-security-industry-leader/)

There are almost surely EV CAs that bring in even less revenue per year.
So what is small to one company may be huge to another.
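
The conversion above works out as follows. The post does not state the
exchange rate it used, so a ballpark figure of about 0.12 USD per NOK is
assumed here purely for illustration:

```python
# Rough sanity check of the Buypass revenue figure (assumed rate, not the
# actual "today's exchange rate" referenced in the post).
nok_revenue = 192_000_000   # Buypass 2015 revenue, in NOK
usd_per_nok = 0.12          # assumed rate for illustration
usd_millions = round(nok_revenue * usd_per_nok / 1_000_000)
print(usd_millions)         # approximately 23 (million USD)
```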

Thanks,
Peter


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-05 Thread Jakob Bohm via dev-security-policy

On 02/06/2017 17:12, Ryan Sleevi wrote:

On Fri, Jun 2, 2017 at 10:09 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 02/06/2017 15:54, Ryan Sleevi wrote:

On Fri, Jun 2, 2017 at 9:33 AM, Peter Bowen  wrote:


On Fri, Jun 2, 2017 at 4:27 AM, Ryan Sleevi  wrote:
Yes, my concern is that this could make SIGNED{ToBeSigned} considered
misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
could sign an ASN.1 sequence which had the following syntax:

TBSNotCertificate ::= SEQUENCE {
    notACertificate    UTF8String,
    COMPONENTS OF TBSCertificate
}

Someone could argue that this is mis-issuance because the resulting
"certificate" is clearly corrupt, as it fails to start with an
INTEGER.  On the other hand, I think that this is clearly not
mis-issuance of a certificate, as there is no sane implementation that
would accept this as a certificate.
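
The point that no sane implementation would accept this as a certificate
can be illustrated at the byte level. This is a simplified sketch, not a
real X.509 parser, and the sample encodings are hand-built toys:

```python
# In DER, a TBSCertificate's first inner element is either the
# [0] EXPLICIT version (tag 0xA0) or the serialNumber INTEGER (tag 0x02).
# The hypothetical TBSNotCertificate would instead start with a
# UTF8String (tag 0x0C), so a parser can reject it immediately.

def first_inner_tag(der: bytes) -> int:
    """Return the tag of the first element inside an outer DER SEQUENCE."""
    if der[0] != 0x30:
        raise ValueError("not a DER SEQUENCE")
    i = 1
    if der[i] & 0x80:            # long-form length: skip the length octets
        i += 1 + (der[i] & 0x7F)
    else:                        # short-form length: a single octet
        i += 1
    return der[i]

def plausible_tbs_certificate(der: bytes) -> bool:
    # version ([0] EXPLICIT, 0xA0) when present, else serialNumber (INTEGER).
    return first_inner_tag(der) in (0xA0, 0x02)

# Toy encodings (not real certificates):
tbs_like = bytes([0x30, 0x06,          # SEQUENCE, 6 content bytes
                  0x02, 0x01, 0x01,    # INTEGER 1 (serialNumber-like)
                  0x0C, 0x01, 0x41])   # UTF8String "A"
not_tbs  = bytes([0x30, 0x06,
                  0x0C, 0x01, 0x41,    # UTF8String "A" comes first
                  0x02, 0x01, 0x01])
```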



Would it be a misissuance of a certificate? Hard to argue, I think.

Would it be a misuse of key? I would argue yes, unless the
TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
WD, at the least).

As a practical matter, this largely only applies to the use of signatures
for which collisions are possible - since, of course, the TBSNotCertificate
might be constructed in such a way as to collide with the TBSCertificate.
As an "assume a jackass genie is interpreting the policy" matter, what about
situations where a TBSNotCertificate has the same structure as a
TBSCertificate? The fact that they are identical representations
on-the-wire could be argued as irrelevant, since they are non-identical
representations "in the spec". Unfortunately, this scenario has come up
once before already - in the context of RFC 6962 (and hence the
clarifications in the Baseline Requirements) - so it's not an unreasonable
scenario to expect.

The general principle I was trying to capture was one of "Only sign these
defined structures, and only do so in a manner conforming to their
appropriate encoding, and only do so after validating all the necessary
information. Anything else is 'misissuance' - of a certificate, a CRL, an
OCSP response, or a Signed-Thingy"



Thing is, there is still serious work involving the definition of new
CA-signed things, such as the recent (2017) paper on a super-compressed
CRL-equivalent format (available as a Firefox plugin).



This does not rely on CA signatures - but it also perfectly demonstrates
the point - that these things should be getting widely reviewed before
implementing.



If you read the paper, it contains a proposal for the CAs to countersign
the computed super-crl to confirm that all entries for that CA match the
actual revocations and non-revocations recorded by that CA.  This is not
currently deployed, but is an example of something that CAs could safely
do using their private key, provided sufficient design competence by the
central super-crl team.

Another good example could be signing a "certificate white-list"
containing all issued but not revoked serial numbers.  Again, someone
(not a random CA) should provide a well-thought-out data format
specification that cannot be maliciously confused with any of the
current data types.





Banning those by policy would be as bad as banning the first OCSP
responder because it was not yet on the old list {Certificate, CRL}.



This argument presumes technical competence of CAs, for which collectively
there is no demonstrable evidence.


In this case, it would presume that technical competence exists at
high-end crypto research / specification teams defining such items, not at
any CA or vendor.  For example, any such format could come from the
IETF, ITU-T, NIST, IEEE, ICAO, or any of the big crypto research centers
inside/outside the US (too many to enumerate in a policy).

Here's one item no-one listed so far (just to demonstrate our collective
lack of imagination):

Using the CA private key to sign a CSR to request cross-signing from
another CA (trusted or untrusted by Mozilla).



Functionally, this is identical to banning the "any other method" for
domain validation. Yes, it allowed flexibility - but at the extreme cost to
security.



However the failure mode for "signing additional CA operational items"
would be a lot less risky and a lot less reliant on CA competency.


If there are new and compelling things to sign, the community can review
them and the policy can be updated. I cannot understand the argument
against this basic security sanity check.



It is restrictions for restrictions sake, which is always bad policy
making.





Hence my suggested phrasing of "Anything that resembles a certificate"
(my actual wording a few posts up was more precise, of course).



Yes, and I think that wording is insufficient and dangerous, despite your
understandable goals, for the reasons I outlined.




If necessary, one could define a short list of technical characteristics
that would make a signed item non-confusable with a certificate.  

Re: On remedies for CAs behaving badly

2017-06-05 Thread Peter Kurrasch via dev-security-policy
  Consider, too, that removing trust from a CA has an economic sanction
built in: loss of business. For many CAs I imagine that serves as
motivation enough for good behavior, but for others... possibly not.

Either way, figuring out how to impose, fairly, an explicit financial toll
on bad CAs is likely to be as difficult as figuring out any of the other
remedies that are presently available. (For example, who gets to keep the
money collected?)

From: Matthew Hardeman via dev-security-policy
Sent: Monday, June 5, 2017 10:52 AM

[...]


Re: On remedies for CAs behaving badly

2017-06-05 Thread Matt Palmer via dev-security-policy
On Mon, Jun 05, 2017 at 08:25:22PM -0500, Peter Kurrasch via 
dev-security-policy wrote:
>Consider, too, that removing trust from a CA has an economic sanction
>built-in: loss of business. For many CA's I imagine that serves as
>motivation enough for good behavior but others...possibly not.

I think it's a strong motivator; it's just that CAs trust that the
collateral damage of broad distrust will prevent trust stores from
deploying the sanction.  Essentially, CAs use relying parties as a human
shield against having meaningful sanctions deployed against them.  Hence
"Too Big to Fail".

>(For example, who gets to keep the money collected?)

Me, of course.  

- Matt



Re: [EXT] Symantec response to Google proposal

2017-06-05 Thread Martin Heaps via dev-security-policy
As an aside, I am negatively influenced by reading Symantec's response:

On Friday, 2 June 2017 16:48:45 UTC+1, Steve Medin  wrote:

>  
>  https://www.symantec.com/connect/blogs/symantec-s-response-google-
>  s-subca-proposal
>  
>
>
> > Our primary objective has always been to minimize any potential business
> > disruption for our customers

So, Symantec's primary objective is not PKI security, PKI trust, best
practice, or even the Baseline Requirements?

> > Our CA business is led and staffed by experienced individuals around the 
> > world 
> > who serve our customers while ensuring our issuance practices comply with 
> > industry and browser requirements.  

This is fundamentally inaccurate; if it were true, the issues that Mozilla
and others have discovered wouldn't have been there to find.


> > As the largest issuer of EV and OV certificates in the industry according 
> > to 
> > Netcraft, Symantec handles significantly larger volumes of validation 
> > workloads across more geographies than most other CA’s. To our knowledge, 
> > no 
> > other single CA operates at the scale nor offers the broad set of 
> > capabilities 
> > that Symantec offers today.

So what if Symantec is the largest? If I am the busiest barman in the West,
serving thousands of drinks an hour, but those drinks are in fact watered
down, the VOLUME of drinks I serve does not make up for the QUALITY of the
drinks I serve.

Likewise, every time Symantec issues an EV or OV certificate, they are
paid; they make money. That's business, but if Symantec then decides not to
reinvest in their infrastructure to support that business, why on earth
should the rest of the PKI ecosystem have to give them some sort of special
leniency?
  
> > Google shared this new proposal for Symantec’s CA with the community on May
> > 15. We have since been reviewing this proposal and weighing its merits 
> > against feedback we’ve heard from the broader community, including our CA 
> > customers.

If Symantec customers (who DO NOT KNOW the technical or even broader
details of the issues at hand) have an influence on the way Symantec acts,
that will not be in the best interest of wider PKI security, because the
technical knowledge available to those influencers is doubtful.


This whole blog post unfortunately comes across as Symantec weasel-wording
its way out of self-improvement, or even real acceptance of the bad
practice that has been documented so far.

Disappointing, but unsurprising.

I feel Symantec needs the potential penalty of lost business (which I'm
sure they can afford, being the biggest EV and OV provider in the world) to
remind them of, and to underline, the importance of adhering to the
Baseline Requirements and keeping the PKI secure.




Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-05 Thread Ryan Sleevi via dev-security-policy
On Mon, Jun 5, 2017 at 6:21 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> If you read the paper, it contains a proposal for the CAs to countersign
> the computed super-crl to confirm that all entries for that CA match the
> actual revocations and non-revocations recorded by that CA.  This is not
> currently deployed, but is an example of something that CAs could safely
> do using their private key, provided sufficient design competence by the
> central super-crl team.
>

I did read the paper - and provide feedback on it.

And that presumption that you're making here is exactly the reason why you
need a whitelist, not a blacklist. "provided sufficient design competence"
does not come for free - it comes with thoughtful peer review and community
feedback. Which can be provided in the aspect of policy.


> Another good example could be signing a "certificate white-list"
> containing all issued but not revoked serial numbers.  Again, someone
> (not a random CA) should provide a well-thought-out data format
> specification that cannot be maliciously confused with any of the
> current data types.
>

Or a bad example. And that's the point - you want sufficient technical
review (e.g. an SDO ideally, but minimally m.d.s.p review).

Look, you could easily come up with a dozen examples of improved validation
methods - but just because they exist doesn't mean keeping the "any other
method" is good. And, for what it's worth, of those that did shake out of
the discussions, many of them _were_ insecure at first, and evolved through
community discussion.


> In this case, it would presume that technical competence exists at
> high-end crypto research / specification teams defining such items, not at
> any CA or vendor.  For example, any such format could come from the
> IETF, ITU-T, NIST, IEEE, ICAO, or any of the big crypto research centers
> inside/outside the US (too many to enumerate in a policy).
>

And so could new signature algorithms. But that doesn't mean there
shouldn't be a policy on signature algorithms.


> Here's one item no-one listed so far (just to demonstrate our collective
> lack of imagination):
>

This doesn't need imagination - it needs solid review. No one is
disagreeing with you that there can't be improvements. But let's start with
the actual concrete matters at hand, appropriately reviewed by the
Mozilla-using community that serves a purpose consistent with the mission,
or doesn't pose risks to users.


> However the failure mode for "signing additional CA operational items"
> would be a lot less risky and a lot less reliant on CA competency.


That is demonstrably not true. Just look at the CAs who have had issues
with their signing ceremonies. Or the signatures they've produced.


> It is restrictions for restrictions sake, which is always bad policy
> making.
>

No, it's not. You would have to reach very hard to find a single security
engineer who would argue that a blacklist is better than a whitelist for
security. It's not - you validate your inputs; you don't just reject the
badness you can identify. Unless you're an AV vendor, which would explain
why so few security engineers work at AV vendors.


> If necessary, one could define a short list of technical characteristics
> that would make a signed item non-confusable with a certificate.  For
> example, it could be a PKCS#7 structure, or any DER structure whose
> first element is a published specification OID nested in one or more
> layers of SEQUENCE or SET tags, perhaps more safe alternatives could be
> added to this.
>
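
For concreteness, the distinguishing characteristic proposed in the quote
(first element is a published-specification OID) can be sketched as
follows; the OID arc and payload here are made up purely for illustration:

```python
# A DER SEQUENCE whose first inner element is an OBJECT IDENTIFIER
# (tag 0x06) cannot be byte-confused with a TBSCertificate, whose first
# inner element is the [0] EXPLICIT version (0xA0) or an INTEGER (0x02).

def wrap_in_sequence(body: bytes) -> bytes:
    # Short-form DER length is enough for this sketch (body < 128 bytes).
    assert len(body) < 0x80
    return bytes([0x30, len(body)]) + body

# OID 1.3.6.1.4.1.99 (the final .99 arc is invented), DER-encoded by hand:
spec_oid = bytes([0x06, 0x06, 0x2B, 0x06, 0x01, 0x04, 0x01, 0x63])
payload  = bytes([0x04, 0x03]) + b"abc"    # OCTET STRING "abc"

blob = wrap_in_sequence(spec_oid + payload)
first_inner_tag = blob[2]   # tag right after the 2-byte SEQUENCE header
print(hex(first_inner_tag))  # 0x6: OBJECT IDENTIFIER, never a TBSCertificate
```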

You could try to construct such a definition - but that's needless
technical complexity with considerable ambiguity, for a hypothetical
situation that you are the only one advocating for, and using an approach
that has repeatedly led to misinterpretations and security failures.


> An incorrect CRL is an incorrect CRL and falls under the CRL policy
> requirements.
>
> An incorrect OCSP response is an incorrect OCSP response and falls under
> the OCSP policy requirements.


This is an unnecessary ontology split, because it leaves it ambiguous where
something that 'ends up in the middle' is. Which is very much the risk from
these things (e.g. SHA-1 signing of OCSP responses, even if the
certificates signed are SHA-256)


> Those whitelists have already proven problematic, banning (for example)
> any serious test deployment of well-reviewed algorithms such as non-NIST
> curves, SHA-3, non-NIST hashes, quantum-resistant algorithms, perhaps
> even RSA-PSS (RFC3447, I haven't worked through the exact wordings to
> check for inclusion of this one).


I suspect this is the core of our disagreement. It has prevented a number
of insecure deployments or incompatible deployments that would pose
security or compatibility risk to the Web Platform. Crypto is not about
"everything and the kitchen sink" - which you're advocating both here and
overall - it's about having a few, well reviewed, well-o