Re: Technically Constrained Sub-CAs and the BRs

2016-11-07 Thread Santhan Raj
> Certificates that are capable of being used to issue new certificates MUST 
> either be Technically Constrained in line with section 7.1.5 and audited in 
> line with section 8.7 only, or Unconstrained and fully audited in line with 
> all remaining requirements from this section
> 
> Section 8.7 reads:
> During the period in which a Technically Constrained Subordinate CA issues 
> Certificates, the CA which signed the Subordinate CA SHALL monitor adherence 
> to the CA’s Certificate Policy and the Subordinate CA’s Certification 
> Practice Statement. On at least a quarterly basis, against a randomly 
> selected sample of the greater of one certificate or at least three percent 
> of the Certificates issued by the Subordinate CA, during the period 
> commencing immediately after the previous audit sample was taken, the CA 
> shall ensure all applicable CP are met. 
> 
> 
> That is, according to the BRs, the issuer of a technically constrained 
> subordinate CA has a BR-obligation to ensure that their TCSCs are adhering to 
> the BRs and the issuing CA's policies and practices, as well as conduct a 
> sampling audit quarterly.
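As an aside on the arithmetic, the quoted 8.7 sampling rule works out as below (my sketch, not part of the BRs; rounding up is my assumption, since the text only says "at least three percent"):

```python
import math

def audit_sample_size(certs_issued: int) -> int:
    # BR 8.7: sample the greater of one certificate or at least three
    # percent of the certificates issued since the previous sample was
    # taken. Rounding up is an assumption; the text says "at least".
    return max(1, math.ceil(0.03 * certs_issued))

print(audit_sample_size(10))   # -> 1 (three percent would be under one cert)
print(audit_sample_size(200))  # -> 6
```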

Maybe I'm missing it, but I don't see 8.7 (or at least the lines quoted above) 
requiring TCSCs to be compliant with the BRs. I read it as: TCSCs must adhere 
to the Issuing CA's CP and to their own (TCSC's) CPS, adherence to which 
should be verified by the Issuing CA; however, it doesn't (explicitly) state 
TCSC compliance with the BRs. 

Is this how you arrived at "TCSCs should adhere to the BRs" (which, to me at 
least, personally, sounds fair and logical): 
The Issuing CA must be BR compliant 
  -> the Issuing CA's CP must be BR compliant (unless the CA gets creative) 
  -> the TCSC's CPS should adhere to the Issuing CA's CP 
  -> the TCSC's CPS should adhere to the BRs
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Distrusting New WoSign and StartCom Certificates -- Mozilla Security Blog

2016-11-07 Thread Percy
On Monday, October 24, 2016 at 6:09:50 PM UTC-7, Kathleen Wilson wrote:
> The security blog about Distrusting New WoSign and StartCom Certificates has 
> been published:
> 
> https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/
> 
> Chinese translations of it will be posted soon.
> 
> Thanks,
> Kathleen

StartCom finally posted an announcement publicly on Nov. 3 
https://startssl.com/NewsDetails?date=20161103


Re: Mozilla CT Policy

2016-11-07 Thread Ryan Sleevi
On Monday, November 7, 2016 at 9:02:37 AM UTC-8, Gervase Markham wrote:
> As in, their dishonesty would be carefully targeted and so not exposed
> by this sort of coarse checking?

(Continuing with Google/Chrome hat on, since I didn't make the previous reply 
explicit)

Yes. An 'evil log' can present a split view, targeting only a small number of 
affected users. Unless that SCT was observed and reported (via Gossip or some 
other means of exfiltration), the split view would not be detected.

Recall: in order to ensure a log is honest, you need to ensure it's providing 
consistent views of the STH *and* that SCTs are being checked. In the absence 
of the latter, an attacker doesn't even need to subvert the former - and 
today's monitoring infrastructure primarily focuses on STH consistency, with 
the assumption/expectation that clients are doing the SCT inclusion proof 
fetching.

So if I wanted to run an evil log that could hide misissued certificates, I 
could compel or coerce a quorum of acceptable logs to 'misissue' SCTs that 
they never incorporate into their STHs. So long as clients don't ask for an 
inclusion proof for such an SCT, there's no need for a split log - and no way 
for monitoring infrastructure to detect it. Such a certificate could then be 
used in targeted, user-specific attacks.

This is why it's vitally important that clients fetch inclusion proofs in some 
manner - either through gossip or through 'privacy' intermediaries, which is 
effectively what the Google DNS proposal is: using your ISP's DNS hierarchy as 
the privacy-preserving layer - and then check that the STH is consistent (in 
Chrome's case, clients checking Google's DNS servers effectively get an STH 
consistency proof against what Google sees).

In the absence of this implementation, checking the SCT provides only a 
limited guarantee that a certificate has actually been logged - in effect, 
you're stating that you trust the log to be honest. Google's goal for 
Certificate Transparency has been not to trust logs, but to verify them - yet 
as Chrome builds out its implementation, it has to 'trust someone'. Given our 
broader analysis of the threat model, the decision to "trust Google" (by 
requiring at least one SCT from a Google-operated log) is seen as no worse 
than the trust existing Chrome users are already asked to place in Google (for 
example, trusting that Chrome's autoupdate will not be compromised, and that 
Google will not deliver targeted malicious code). [1]

Thus, in the absence of SCT inclusion proof checking (whether temporarily, as 
implementations mature, or permanently, if you feel there can be no suitable 
privacy-preserving solution), you're trusting the logs not to misbehave, much 
as you trust CAs not to misbehave. You can explore technical solutions - such 
as inclusion proof checking - or policy solutions - such as requiring a 
Mozilla-operated log, or requiring logs to abide by criteria akin to WebTrust 
for CAs, or who knows what - but it's at least useful to understand the 
context for why that decision exists, and what the trust tradeoffs of such a 
decision are.


[1] As an aside, this "trust Google for binaries" question is being explored 
in concepts like Binary Transparency, a very nascent, early-stage exploration 
of how to provide reliable assurance that binaries aren't targeted. Similarly, 
the work on verifiable builds, such as that demonstrated by the Tor Browser 
Bundle, is meant to address the case of no 'obvious' backdoors, but the 
situation is more complex when non-open code is involved. I call this out to 
highlight that the computer industry has still not solved this; even if we did 
solve it for software, we would have compilers and hardware to contend with, 
and then we're very much into "Reflections on Trusting Trust" territory.


Re: Mozilla CT Policy

2016-11-07 Thread Gervase Markham
On 07/11/16 16:13, Ryan Sleevi wrote:
> Yes, particularly for logs that may be compelled to be dishonest for 
> geopolitical reasons.

As in, their dishonesty would be carefully targeted and so not exposed
by this sort of coarse checking?

Gerv


Re: Mozilla CT Policy

2016-11-07 Thread Ryan Sleevi
On Monday, November 7, 2016 at 1:59:31 AM UTC-8, Gervase Markham wrote:
> It is correct that there is not yet a plan for when Firefox might
> implement inclusion proof fetching.
> 
> One thing I have been pondering is checking the honesty of logs via
> geographically distributed checks done by infra rather than clients. Did
> Google consider that too easy to game?

Yes, particularly for logs that may be compelled to be dishonest for 
geopolitical reasons.


Re: Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Gervase Markham
On 07/11/16 15:34, Doug Beattie wrote:
> I'd prefer a requirement for long serial numbers over a total ban on
> SHA-1 Sub CAs. The BRs state 112 bits of entropy, so I'd recommend
> using that for non BR certificates (assuming client applications
> don't have issues with that).

Can you list some of the uses for which you'd still like to use SHA-1 in
publicly-trusted hierarchies?

Gerv


Re: Mozilla CA Policy 2.3 plan

2016-11-07 Thread Gervase Markham
On 07/11/16 14:34, Kurt Roeckx wrote:
> In my experience, pointing to a specific section of the BRs causes
> problems because things are moved, renumbered and so on. Other changes
> in the document also point to specific sections.

The BRs now follow RFC 3647, which AIUI specifies the title and
numbering of each section. So this is much less of a problem than it was
before we converted to using RFC 3647.

Gerv



Re: Mozilla CA Policy 2.3 plan

2016-11-07 Thread Kurt Roeckx

On 2016-11-07 15:08, Gervase Markham wrote:

https://github.com/mozilla/pkipolicy/compare/2.2...master


So one of the changes is that you now have:
-issuing certificates), as described in [CA/Browser Forum
-Baseline Requirement
-\#12](http://www.cabforum.org/documents.html)
+issuing certificates), as described in section 6.1.7 of the
+[CA/Browser Forum Baseline
+Requirements](https://cabforum.org/baseline-requirements-documents/);


In my experience, pointing to a specific section of the BRs causes 
problems because things are moved, renumbered and so on. Other changes 
in the document also point to specific sections.



Kurt



Re: Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Gervase Markham
On 07/11/16 13:11, Phillip Hallam-Baker wrote:
> Not long after I was sitting in a conference at NIST listening to a talk on
> how shutting down DigiNotar had shut down the port of Amsterdam and left
> meat rotting on the quays etc. Ooops.

Sounds like someone got a lesson in single points of failure, cert
agility and so on. Let's hope they took it.

I'm not sure I totally understand your point. You are saying that it's
not reasonable to eliminate SHA-1 from the publicly trusted hierarchies
entirely because there are devices out there which are not going to be
upgraded and which don't support SHA-256, and further that these devices
are not web devices and so we shouldn't be purporting to control their
crypto?

> None of the current browser versions support SHA-1. 

Yes, they do. They won't as of January 2017.

> If digest functions are so important, perhaps the industry should be
> focusing on deployment of SHA-3 as a backup in case SHA-2 is found wanting
> in the future.

https://yourlogicalfallacyis.com/black-or-white . This is not either/or.

Gerv




Re: Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Gervase Markham
On 07/11/16 10:52, Nick Lamb wrote:
> Where we don't have another way forward, I think one option is for
> CAs to replace an existing unconstrained intermediate with a newer
> one that is suitably constrained, and revoke the old one. This is
> subject to all the usual caveats about revocation and of course the
> constraints chosen must be practical for that particular CA in the
> chosen timeframe.

You mean EKU-constrained (e.g. to email, or OCSP only)?

> Another economic tactic would be to require CAs to use long random
> serial numbers even in non-BR certificates. 

How long would you say is long enough?
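For reference, one common answer is 16 bytes from a CSPRNG with the top bit cleared, which yields 127 bits of entropy - comfortably above both the BRs' 64-bit minimum and the 112-bit figure Doug mentioned - while keeping the DER INTEGER positive and well under RFC 5280's 20-octet limit. A minimal sketch (mine, not a proposal from the thread):

```python
import secrets

def random_serial(num_bytes: int = 16) -> int:
    """Generate a certificate serial with 127 bits of CSPRNG entropy."""
    raw = bytearray(secrets.token_bytes(num_bytes))
    raw[0] &= 0x7F  # clear the top bit: DER INTEGERs are signed, and
                    # RFC 5280 requires serial numbers to be positive
    serial = int.from_bytes(raw, "big")
    # Serials must also be non-zero; retry in the astronomically
    # unlikely case that all 16 bytes came back zero.
    return serial if serial else random_serial(num_bytes)

serial = random_serial()
assert 0 < serial < 1 << 127
```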

Gerv



Re: Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Phillip Hallam-Baker
Remember the DigiNotar incident? At the time, I thought that pulling the
DigiNotar roots was exactly the right thing to do. I didn't say so, as it
isn't proper to suggest putting one's competitors out of business. But I
thought it was the right thing to do.

Not long after I was sitting in a conference at NIST listening to a talk on
how shutting down DigiNotar had shut down the port of Amsterdam and left
meat rotting on the quays etc. Ooops.

The WebPKI is a complicated infrastructure that is used in far more ways
than any of us is aware of. And when it was being developed it wasn't clear
what the intended scope of use was. So it isn't very surprising that it has
been used for a lot of things like point of sale terminals etc.

It is all very well saying that people shouldn't have done these things
after the facts are known. But right now, I don't see any program in place
telling people in the IoT space what they should be doing for devices that
can't be upgraded in the field.

None of the current browser versions support SHA-1. Yes, people could in
theory turn it back on for some browsers but that isn't an argument because
the same people can edit their root store themselves as well. Yes people
are still using obsolete versions of Firefox etc. but do we really think
that SHA-1 is the weakest point of attack?

If digest functions are so important, perhaps the industry should be
focusing on deployment of SHA-3 as a backup in case SHA-2 is found wanting
in the future.


Re: SHA-1 issuances in 2016 That Chain to Mozilla Roots

2016-11-07 Thread Gervase Markham
On 05/11/16 13:49, Ryan Sleevi wrote:
> As noted elsewhere, the issuance of SHA-1 allows for an attacker to
> pivot the contents of the certificates, and the only mitigation is
> the EKU on the sub-CA.
> 
> Are you suggesting this is GA because it wasn't clear enough to CA
> members at the time this was issued?

It's GA because the Mozilla SHA-1 ban is currently (but see my other
message posted today) implemented via the BRs, and because these certs
have an EKU but don't have serverAuth, they are pretty clearly not in
the scope of the BRs. So we have no policy mechanism to complain.

I suspect there are a ton of such email certs out there; it's just that
only a few of them happen to make their way into CT and therefore crt.sh.

Gerv


Re: Mozilla CT Policy

2016-11-07 Thread Gervase Markham
On 05/11/16 19:33, Ryan Sleevi wrote:
> My understanding was that Mozilla's implementation status was similar
> to Chrome's a year ago - that is, that it doesn't implement inclusion
> proof fetching (in the background) and that work hadn't been
> scheduled/slated yet. In that case, it's a question for Mozilla about
> whether to trust that logs won't lie, or whether to verify.

It is correct that there is not yet a plan for when Firefox might
implement inclusion proof fetching.

One thing I have been pondering is checking the honesty of logs via
geographically distributed checks done by infra rather than clients. Did
Google consider that too easy to game?

Gerv


Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Gervase Markham
It has been noted that currently, Mozilla's SHA-1 ban is implemented via
the ban in the BRs, which we require all CAs to adhere to. At the
moment, Mozilla policy simply says:

"We consider the following algorithms and key sizes to be acceptable and
supported in Mozilla products:

SHA-1 (until a practical collision attack against SHA-1 certificates
is imminent);
"

Whether or not such an attack is imminent, we have not notified CAs that
it is, and so we cannot claim this clause is a ban. However,
implementing the ban via the BRs is problematic for a number of reasons:

* It allows the issuance of SHA-1 certs in publicly-trusted hierarchies
in those cases where the cert is not within scope of the BRs (e.g. email
certs).

* The scope of the BRs is a matter of debate, and so there are grey
areas, as well as areas clearly outside scope, where SHA-1 issuance
could happen.

* Even when the latest version of Firefox stops trusting SHA-1 certs in
January, a) that block is overrideable, and b) that doesn't address
risks to older versions.

Therefore, we would like to update Mozilla's CA policy to implement a
"proper" SHA-1 ban, which we would implement via a CA Communication, and
then later in an updated version of our policy. This message is intended
to start a discussion about how we can reasonably and safely define
"proper".

One option would be to say that there should be no signing of SHA-1
hashes in any circumstances within the hierarchies which chain up to
Mozilla-trusted roots. However, it's possible that such a total ban
would have disproportionate impact, and there are some circumstances
where SHA-1-hash-signing can safely continue (e.g. if all data is
CA-controlled). Or it may be that there are none. Comments welcome.

We would also need to decide on a timeline for CAs to implement any
changes we require, and comments are welcome on that also.

Gerv



Re: Remediation Plan for WoSign and StartCom

2016-11-07 Thread Rami Kogan
Just came across the following phishing site, which is using a StartCom cert:

hXXps://serviices-intl.com/webapps/6fa9b/websrc

On 11/2/16, 6:32 PM, "dev-security-policy on behalf of Itzhak Daniel" 
 wrote:

>On Wednesday, November 2, 2016 at 5:22:30 PM UTC+2, Gervase Markham wrote:
>> Hi Daniel,
>>
>> On 02/11/16 14:11, Itzhak Daniel wrote:
>> As far as the DigiCert certs go, it is far too early to have an opinion
>> on what Mozilla is or isn't doing.
>
>I have to agree, the time span is too short (at least they didn't backdate).
>
>> I'm not sure what you mean by "ignoring Mozilla Security Community". I
>> am happy with the level of communication by Comodo about their incident.
>
>AFAIK they didn't include the TLD '.re' in their incident report [1] (the 
>certificate was probably issued on Jun 30th, 2014; Google CT first-seen 
>timestamp: 2014-07-02 14:54:54 GMT [2]). They made the same mistake before 
>the 'sb' incident, but did not and do not acknowledge it officially [3].
>
>Links,
>1. 
>https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg04274.html
>2. https://crt.sh/?id=4467456
>3. 
>https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/LQSrnPv2qOo
>___
>dev-security-policy mailing list
>dev-security-policy@lists.mozilla.org
>https://lists.mozilla.org/listinfo/dev-security-policy


