RE: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-03-19 Thread Doug Beattie via dev-security-policy
Thanks Ben.

 

What’s the purpose of this statement:

5. verify that all of the information that is included in server certificates 
remains current and correct at intervals of 825 days or less;

 

The BRs have limited data reuse to 825 days since March 2018, so I don’t think this 
adds anything.  If it does mean something more than that, can you update it to 
make it clearer?

 

 

From: Ben Wilson  
Sent: Thursday, March 18, 2021 2:53 PM
To: Doug Beattie 
Cc: mozilla-dev-security-policy 
Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

 

I've edited the proposed subsection 5.1 and have left section 5 in for now.  
See 

https://github.com/BenWilson-Mozilla/pkipolicy/commit/d37d7a3865035c958c1cb139b949107665fee232
 

 

On Tue, Mar 16, 2021 at 9:10 AM Ben Wilson <bwil...@mozilla.com> wrote:

That works, too.  Thoughts?

 

On Tue, Mar 16, 2021 at 5:21 AM Doug Beattie <doug.beat...@globalsign.com> wrote:

Hi Ben,

Regarding the redlined spec: 
https://github.com/mozilla/pkipolicy/compare/master...BenWilson-Mozilla:2.7.1?short_path=73f95f7#diff-73f95f7d2475645ef6fc93f65ddd9679d66efa9834e4ce415a2bf79a16a7cdb6

Is this a meaningful statement given that max validity is now 398 days? 
   5. verify that all of the information that is included in server 
certificates remains current and correct at intervals of 825 days or less;
I think we can remove that and then move 5.1 up to item 5.

I find the wording of requirement 5.1 unclear. 

  " 5.1. for server certificates issued on or after October 1, 2021, verify 
each dNSName or IPAddress in a SAN or commonName at an interval of 398 days or 
less;"

Can we say:
"5.1. for server certificates issued on or after October 1, 2021, each dNSName 
or IPAddress in a SAN or commonName MUST have been validated within the prior 398 days."
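
(As an illustration of the difference: the current wording reads like a recurring
re-verification obligation, while the rewording above is a point-in-time check against
the issuance date. A minimal sketch of that issuance-time check in Python, with purely
hypothetical dates:)

    from datetime import datetime, timedelta, timezone

    MAX_REUSE = timedelta(days=398)

    def validation_is_fresh(validated_at: datetime, issued_at: datetime) -> bool:
        # The dNSName/IPAddress must have been validated within the 398 days
        # preceding issuance, per the rewording proposed above.
        age = issued_at - validated_at
        return timedelta(0) <= age <= MAX_REUSE

    # Hypothetical example: certificate issued October 1, 2021.
    issued = datetime(2021, 10, 1, tzinfo=timezone.utc)
    print(validation_is_fresh(issued - timedelta(days=100), issued))  # True  -- reusable
    print(validation_is_fresh(issued - timedelta(days=400), issued))  # False -- re-validate first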



-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On Behalf Of Ben 
Wilson via dev-security-policy
Sent: Monday, March 8, 2021 6:38 PM
To: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

All,

Here is the currently proposed wording for subsection 5.1 of MRSP section
2.1:

" 5.1. for server certificates issued on or after October 1, 2021, verify each 
dNSName or IPAddress in a SAN or commonName at an interval of 398 days or less;"

Ben

On Fri, Feb 26, 2021 at 9:48 AM Ryan Sleevi <r...@sleevi.com> wrote:

>
>
> On Thu, Feb 25, 2021 at 7:55 PM Clint Wilson via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
>
>> I think it makes sense to separate out the date for domain validation 
>> expiration from the issuance of server certificates with previously 
>> validated domain names, but agree with Ben that the timeline doesn’t 
>> seem to need to be prolonged. What about something like this:
>>
>> 1. Domain name or IP address verifications performed on or after July 
>> 1,
>> 2021 may be reused for a maximum of 398 days.
>> 2. Server certificates issued on or after September 1, 2021 must have 
>> completed domain name or IP address verification within the preceding 
>> 398 days.
>>
>> This effectively stretches the “cliff” out across ~6 months (now 
>> through the end of August), which seems reasonable.
>>
>
> Yeah, that does sound reasonable.
>


RE: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-03-16 Thread Doug Beattie via dev-security-policy
Hi Ben,

Regarding the redlined spec: 
https://github.com/mozilla/pkipolicy/compare/master...BenWilson-Mozilla:2.7.1?short_path=73f95f7#diff-73f95f7d2475645ef6fc93f65ddd9679d66efa9834e4ce415a2bf79a16a7cdb6

Is this a meaningful statement given that max validity is now 398 days? 
   5. verify that all of the information that is included in server 
certificates remains current and correct at intervals of 825 days or less;
I think we can remove that and then move 5.1 up to item 5.

I find the wording of requirement 5.1 unclear. 

  " 5.1. for server certificates issued on or after October 1, 2021, verify 
each dNSName or IPAddress in a SAN or commonName at an interval of 398 days or 
less;"

Can we say:
"5.1. for server certificates issued on or after October 1, 2021, each dNSName 
or IPAddress in a SAN or commonName MUST have been validated within the prior 398 days."



-Original Message-
From: dev-security-policy  On 
Behalf Of Ben Wilson via dev-security-policy
Sent: Monday, March 8, 2021 6:38 PM
To: mozilla-dev-security-policy 
Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

All,

Here is the currently proposed wording for subsection 5.1 of MRSP section
2.1:

" 5.1. for server certificates issued on or after October 1, 2021, verify each 
dNSName or IPAddress in a SAN or commonName at an interval of 398 days or less;"

Ben

On Fri, Feb 26, 2021 at 9:48 AM Ryan Sleevi  wrote:

>
>
> On Thu, Feb 25, 2021 at 7:55 PM Clint Wilson via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
>
>> I think it makes sense to separate out the date for domain validation 
>> expiration from the issuance of server certificates with previously 
>> validated domain names, but agree with Ben that the timeline doesn’t 
>> seem to need to be prolonged. What about something like this:
>>
>> 1. Domain name or IP address verifications performed on or after July 
>> 1,
>> 2021 may be reused for a maximum of 398 days.
>> 2. Server certificates issued on or after September 1, 2021 must have 
>> completed domain name or IP address verification within the preceding 
>> 398 days.
>>
>> This effectively stretches the “cliff” out across ~6 months (now 
>> through the end of August), which seems reasonable.
>>
>
> Yeah, that does sound reasonable.
>


RE: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Doug Beattie via dev-security-policy
Ben,

I'd prefer that we tie this to a date related to when the domain validations 
are done, or perhaps 2 statements.  As it stands (and as others have 
commented), on July 1 all customers will immediately need to re-validate all 
domains that were validated between 825 and 397 days ago, which is a huge number all at 
once for web site owners and for CAs.

I'd prefer that it says "Domain validations performed from July 1, 2021 may be 
reused for a maximum of 398 days".  I understand that this basically kicks the 
can down the road for an extra year and that may not be acceptable, so maybe 
we specify 2 dates:

1)  Domain validations performed on or after July 1, 2021 may be reused for a 
maximum of 398 days.

2)  for server certificates issued on or after Feb 1, 2022, each dNSName or 
IPAddress in a SAN must have been validated within the prior 398 days

Is that a compromise you could consider?

Doug


-Original Message-
From: dev-security-policy  On 
Behalf Of Ben Wilson via dev-security-policy
Sent: Thursday, February 25, 2021 2:08 PM
To: Mozilla 
Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

All,

I continue to move this Issue #206 forward with a proposed change to section 
2.1 of the MRSP (along with an effort to modify section 3.2.2.4 or section 
4.2.1 of the CA/B Forum's Baseline Requirements).

Currently, I am still contemplating adding a subsection 5.1 to MRSP section
2.1 that would read,
" 5.1. for server certificates issued on or after July 1, 2021, verify each 
dNSName or IPAddress in a SAN or commonName at an interval of 398 days or less;"

See draft language here
https://github.com/BenWilson-Mozilla/pkipolicy/commit/69bddfd96d1d311874c35c928abdfc13dc11aba3


Ben

On Wed, Dec 2, 2020 at 3:00 PM Ben Wilson  wrote:

> All,
>
> I have started a similar, simultaneous discussion with the CA/Browser 
> Forum, in order to gain traction.
>
> 
>
> https://lists.cabforum.org/pipermail/servercert-wg/2020-December/00238
> 2.html
>
> Ben
>
> On Wed, Dec 2, 2020 at 2:49 PM Jeremy Rowley 
> 
> wrote:
>
>> Should this limit on reuse also apply to s/MIME? Right now, the 825 
>> day limit in Mozilla policy only applies to TLS certs with email 
>> verification of s/MIME being allowed for infinity time.  The first 
>> draft of the language looked like it may change this while the newer 
>> language puts back the TLS limitation. If it's not addressed in this 
>> update, adding clarification on domain verification reuse for SMIME 
>> would be a good improvement on the existing policy.
>>
>> -Original Message-
>> From: dev-security-policy 
>> 
>> On Behalf Of Ben Wilson via dev-security-policy
>> Sent: Wednesday, December 2, 2020 2:22 PM
>> To: Ryan Sleevi 
>> Cc: Doug Beattie ; Mozilla < 
>> mozilla-dev-security-pol...@lists.mozilla.org>
>> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain 
>> name verification to 398 days
>>
>> See my responses inline below.
>>
>> On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:
>>
>> >
>> >
>> > On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy < 
>> > dev-security-policy@lists.mozilla.org> wrote:
>> >
>> >> See responses inline below:
>> >>
>> >> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie 
>> >> wrote:
>> >>
>> >> > Hi Ben,
>> >> >
>> >> > For now I won’t comment on the 398 day limit or the date which 
>> >> > you
>> >> propose
>> >> > this to take effect (July 1, 2021), but on the ability of CAs to 
>> >> > re-use domain validations completed prior to 1 July for their 
>> >> > full
>> >> > 825 re-use period.  I'm assuming that the 398 day limit is only 
>> >> > for those domain validated on or after 1 July, 2021.  Maybe that 
>> >> > is your intent, but the wording is not clear (it's never been 
>> >> > all that
>> >> > clear)
>> >> >
>> >>
>> >> Yes. (I agree that the wording is currently unclear and can be 
>> >> improved, which I'll work on as discussion progresses.)  That is 
>> >> the intent - for certificates issued beginning next July--new 
>> >> validations would be valid for
>> >> 398 days, but existing, reused validations would be sunsetted and 
>> >> could be used for up to 825 days (let's say, until Oct. 1, 2023, 
>> >> which I'd advise against, given the benefits of freshness provided 
>> >> by re-performing methods in BR 3.2.2.4 and BR 3.2.2.5).
>> >>
>> >
>> > Why? I have yet to see a compelling explanation from a CA about why 
>> > "grandfathering" old validations is good, and as you note, it 
>> > undermines virtually every benefit that is had by the reduction 
>> > until
>> 2023.
>> >
>>
>> I am open to the idea of cutting off the tail earlier, at let's say, 
>> October 1, 2022, or earlier (see below).  I can work on language that 
>> does that.
>>
>>
>> >
>> > Ben, could you explain the rationale why this is better than the 
>> > simpler, clearer, and immediately beneficial for Mozilla users of 
>> > requiring new validations be 

RE: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2020-12-01 Thread Doug Beattie via dev-security-policy
Hi Ben,

For now I won’t comment on the 398 day limit or the date which you propose this 
to take effect (July 1, 2021), but on the ability of CAs to re-use domain 
validations completed prior to 1 July for their full 825 re-use period.  I'm 
assuming that the 398 day limit is only for those domain validated on or after 
1 July, 2021.  Maybe that is your intent, but the wording is not clear (it's 
never been all that clear)

Could you consider changing it to read more like this (feel free to edit as 
needed):

CAs may re-use domain validation for subjectAltName verifications of dNSNames 
and IPAddresses done prior to July 1, 2021 for up to 825 days.  CAs MUST limit 
domain re-use for subjectAltName verifications of dNSNames and IPAddresses to 
398 days for domains validated on or after July 1, 2021. 

From a CA perspective, I don't have any major concerns with shortening the 
domain re-use periods, but customers do/will.  Will there be a Mozilla blog 
that outlines the security improvements with cutting the re-use period in half 
and why July 2021 is the right time?  

Doug

-Original Message-
From: dev-security-policy  On 
Behalf Of Ben Wilson via dev-security-policy
Sent: Monday, November 30, 2020 2:27 PM
To: mozilla-dev-security-policy 
Subject: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

 The purpose of this email is to begin public discussion on a modification to 
subsection 5 in section 2.1 of the Mozilla Root Store Policy.

Issue #206  in GitHub 
discusses the need to bring the reuse period for domain validation in line with 
the certificate issuance validity cycle of 398 days (as set forth in section 
6.3.2 of the Baseline Requirements). This proposal is not to say that Mozilla 
is not also contemplating a ballot in the CA/Browser Forum that would introduce 
similar language to the Baseline Requirements. Any potential CABF endorsers of 
such a ballot should reach out to me off-list.

Currently, subsection 5 of section 2.1 of the Mozilla Root Store Policy
(MRSP) states that a CA must “verify that all of the information that is 
included in SSL certificates remains current and correct at time intervals of 
825 days or less;”

It is proposed that a subsection 5.1 be added to this subsection to require 
that, for subjectAltName verifications of dNSNames or IPAddresses performed on 
or after July 1, 2021, CAs verify the dNSName or IPAddress at intervals of 398 
days or less.
Proposed language may be found in the following commit:

https://github.com/BenWilson-Mozilla/pkipolicy/commit/b7b53eea3a0af1503f3c99632ba22efc9e86bee2
Restated here, the proposed language for subsection 5.1 of section 2.1 is:

"for subjectAltName verifications of dNSNames and IPAddresses performed on or 
after July 1, 2021, verify that each dNSName or IPAddress is current and 
correct at intervals of 398 days or less;"

I look forward to your comments, suggestions and discussions.

Thanks,

Ben


RE: Policy 2.7.1 Issues to be Considered

2020-10-06 Thread Doug Beattie via dev-security-policy
Ben,

When, approximately, do you think these proposed updates would become effective, 
and specifically this item:

   https://github.com/mozilla/pkipolicy/issues/206

Doug

-Original Message-
From: dev-security-policy  On 
Behalf Of Ben Wilson via dev-security-policy
Sent: Thursday, October 1, 2020 4:22 PM
To: mozilla-dev-security-policy 
Subject: Policy 2.7.1 Issues to be Considered

Below is a list of issues that I propose be addressed in the next version
(2.7.1) of the Mozilla Root Store Policy (MRSP). There are currently 73 issues 
related to the MRSP listed here:
https://github.com/mozilla/pkipolicy/issues. So far, I have identified 13 items 
to consider for this policy update, which are tagged as v.2.7.1 in GitHub 
(https://github.com/mozilla/pkipolicy/labels/2.7.1). I will appreciate your 
input on this list as to whether there are issues that should be added or 
removed. Then, based on the list, I will start a separate discussion thread in 
mozilla.dev.security.policy for each issue.

#139  - Audits are required 
even if no longer issuing - Clarify that audits are required until the CA 
certificate is revoked, expired, or removed. Related to Issue #153.

#147  - Require EV audits for 
certificates capable of issuing EV certificates – Clarify that EV audits are 
required for all intermediate certificates that are technically capable of 
issuing EV certificates, even when not currently issuing EV certificates.

#153  – Cradle-to-Grave 
Contiguous Audits – Specify the audits that are required from Root key 
generation ceremony until expiration or removal from Mozilla’s root store.
Related to Issue #139.

#154  - Require Management 
Assertions to list Non-compliance – Add to MRSP 2.4 “If being audited to the 
WebTrust criteria, the Management Assertion letter MUST include all known 
incidents that occurred or were still open/unresolved at any time during the 
audit period.”

#173  - Strengthen requirement 
for newly included roots to meet all past and present requirements – Add 
language to MRSP 7.1 so that it is clear that before being included CAs must 
comply and have complied with past and present Mozilla Root Store Policy and 
Baseline Requirements.

#186  - Clarify MRSP 5.3 
Requirement to Disclose Self-signed Certificates – Clarify that self-signed 
certificates with the same key pair as an existing root meets MRSP 5.3’s 
definition of an intermediate certificate that must be disclosed in the CCADB.

#187  - Require disclosure of 
incidents in Audit Reports –  To MRSP 3.1.4 “The publicly-available 
documentation relating to each audit MUST contain at least the following 
clearly-labelled information: “ add “11. all incidents (as defined in section 
2.4) that occurred or were still open/unresolved at any time during the audit 
period, or a statement that the auditor is unaware of any;”

#192  - Require information 
about auditor qualifications in the audit report – Require audit statements to 
be accompanied by documentation of the auditor’s qualifications demonstrating 
the auditor’s competence and experience.

#205  - Require CAs to publish 
accepted methods for proving key compromise – Require CAs to disclose their 
acceptable methods for proving key compromise in section
4.9.12 of their CPS.

#206  - Limit re-use of domain 
name verification to 398 days – Amend item 5 in MRSP 2.1 with “and verify 
ownership/control of each dNSName and iPAddress in the certificate's 
subjectAltName at intervals of 398 days or less;”

#207  - Require audit 
statements to provide information about which CA Locations were and were not 
audited, and the extent to which they were (or were not) audited

#211  - Align OCSP 
requirements in Mozilla's policy with section 4.9.10 of the Baseline 
Requirements
#218  - Clarify CRL requirements 
for End Entity Certificates – For CRLite, Mozilla would like to ensure that it 
has full lists of revoked certificates. If the CA uses partial CRLs, then 
require CAs to provide the URL location of their full and complete CRL in the 
CCADB.

Ben Wilson
Mozilla Root Program Manager

RE: Mandatory reasonCode analysis

2020-09-30 Thread Doug Beattie via dev-security-policy
Hi Rob,

I'm not sure you filtered this report by "thisUpdate"; maybe you did it by
nextUpdate by mistake?

The GlobalSign CRL on this report was created in 2016, thus the question.

Doug


-Original Message-
From: dev-security-policy  On
Behalf Of Rob Stradling via dev-security-policy
Sent: Wednesday, September 30, 2020 11:59 AM
To: dev-security-policy@lists.mozilla.org
Subject: Mandatory reasonCode analysis

Starting today, the BRs require a reasonCode in CRLs and OCSP responses for
revoked CA certificates.  Since crt.sh already monitors CRLs and keeps track
of reasonCodes, I thought I would conduct some analysis to determine the
level of (non)compliance with these new rules.

It's not clear to me if (1) the new BR rules should be applied only to CRLs
and OCSP responses with thisUpdate timestamps dated today or afterwards, or
if (2) every CRL and OCSP response currently being served by distribution
points and responders (regardless of the thisUpdate timestamps) is required
to comply.  (I'd be interested to hear folks' opinions on this).

This gist contains my crt.sh query, the results as .tsv, and a .zip
containing all of the referenced CRLs:
https://gist.github.com/robstradling/3088dd622df8194d84244d4dd65ffd5f
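
(As an aside, for anyone who wants to spot-check a single downloaded CRL locally rather
than via crt.sh, here is a minimal sketch using Python's cryptography package; the file
name is hypothetical and this is not the crt.sh query itself:)

    import sys
    from cryptography import x509

    # Usage: python check_crl_reasoncodes.py <crl-file>
    with open(sys.argv[1], "rb") as f:
        data = f.read()

    try:
        crl = x509.load_der_x509_crl(data)
    except ValueError:
        crl = x509.load_pem_x509_crl(data)

    # thisUpdate is the timestamp relevant to question (1) vs (2) above.
    print("thisUpdate:", crl.last_update)

    missing = 0
    for entry in crl:
        try:
            reason = entry.extensions.get_extension_for_class(x509.CRLReason).value.reason
            print("serial %x: reasonCode = %s" % (entry.serial_number, reason.name))
        except x509.ExtensionNotFound:
            missing += 1
            print("serial %x: no reasonCode" % entry.serial_number)

    print(missing, "revoked entries without a reasonCode")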


--
Rob Stradling
Senior Research & Development Scientist
Email: r...@sectigo.com
Bradford, UK
Office: +441274024707
Sectigo Limited

This message and any files associated with it may contain legally
privileged, confidential, or proprietary information. If you are not the
intended recipient, you are not permitted to use, copy, or forward it, in
whole or in part without the express consent of the sender. Please notify
the sender by reply email, disregard the foregoing messages, and delete it
immediately.




RE: Concerns with GlobalSign IP address validation

2020-08-10 Thread Doug Beattie via dev-security-policy

Hi Ian,

Thanks for pointing this out to us. We looked at all orders issued since the
new domain validation logic was rolled out in late May 2020 and verified
that no attempt was made to validate an IP address using a constructed
email address. Even if such an option was selected, the sending of that email would
fail. We'll update the page shortly to fix this UI bug and avoid customer
confusion.

Regards,

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Doug Beattie via dev-security-policy
Sent: Monday, August 10, 2020 6:30 AM
To: i...@ian.sh ; mozilla-dev-security-pol...@lists.mozilla.org
Subject: RE: Concerns with GlobalSign IP address validation

Hi Ian,

Thanks, we're looking into this.

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of i...--- via dev-security-policy
Sent: Friday, August 7, 2020 11:37 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Concerns with GlobalSign IP address validation

Hi there,

When purchasing a GlobalSign OV IP address certificate, I was presented with
several options to validate the certificate using email addresses that had
an incorrectly truncated IP address, treating it similarly to a DNS name,
which is not correct. As an example, GlobalSign would provide "admin@2.3.4"
and "admin@3.4" as options for the IPv4 address "admin@1.2.3.4" -- which are
(because of IPv4 notation) really 2.3.0.4 and 3.0.0.4, respectively, and not
even under the same CIDR (not that it would make that valid anyway).

To test this, I obtained an IP address with a zero from Google Cloud
(34.94.0.97) and then requested a certificate for 44.34.94.97 (part of
44net, which seems largely unused), which becomes 34.94.97 after truncation
and thus my server's IP.
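
(For reference, the classful-shorthand parsing described above can be reproduced with
Python's socket module: inet_aton accepts the truncated strings from this report and
zero-fills the missing octets, while the stricter ipaddress module rejects them:)

    import socket
    import ipaddress

    for s in ["2.3.4", "3.4", "34.94.97"]:
        # inet_aton accepts legacy classful shorthand: 2.3.4 -> 2.3.0.4,
        # 3.4 -> 3.0.0.4, 34.94.97 -> 34.94.0.97
        print(s, "->", socket.inet_ntoa(socket.inet_aton(s)))
        try:
            ipaddress.ip_address(s)
        except ValueError:
            print(s, "is rejected by strict parsing")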

GlobalSign returned an error message when I chose the plainly invalid
address "admin@34.94.97", which is why I'm not worried about posting this
here, but it seems worthy of a further investigation into why GlobalSign
presents these email addresses as options, if validation agents are trained
to manually accept emails from these addresses (such as being shown them in
internal systems), if they have issued any past certificates using invalid
verification methods, etc.

Thanks,
Ian Carroll


RE: Concerns with GlobalSign IP address validation

2020-08-10 Thread Doug Beattie via dev-security-policy
Hi Ian,

Thanks, we're looking into this.

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of i...--- via dev-security-policy
Sent: Friday, August 7, 2020 11:37 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Concerns with GlobalSign IP address validation

Hi there,

When purchasing a GlobalSign OV IP address certificate, I was presented with
several options to validate the certificate using email addresses that had
an incorrectly truncated IP address, treating it similarly to a DNS name,
which is not correct. As an example, GlobalSign would provide "admin@2.3.4"
and "admin@3.4" as options for the IPv4 address "admin@1.2.3.4" -- which are
(because of IPv4 notation) really 2.3.0.4 and 3.0.0.4, respectively, and not
even under the same CIDR (not that it would make that valid anyway).

To test this, I obtained an IP address with a zero from Google Cloud
(34.94.0.97) and then requested a certificate for 44.34.94.97 (part of
44net, which seems largely unused), which becomes 34.94.97 after truncation
and thus my server's IP.

GlobalSign returned an error message when I chose the plainly invalid
address "admin@34.94.97", which is why I'm not worried about posting this
here, but it seems worthy of a further investigation into why GlobalSign
presents these email addresses as options, if validation agents are trained
to manually accept emails from these addresses (such as being shown them in
internal systems), if they have issued any past certificates using invalid
verification methods, etc.

Thanks,
Ian Carroll


RE: New Blog Post on 398-Day Certificate Lifetimes

2020-07-10 Thread Doug Beattie via dev-security-policy
Ben,

For the avoidance of doubt, I assume this means Sept 1, 00:00 UTC.


-Original Message-
From: dev-security-policy  On
Behalf Of Ben Wilson via dev-security-policy
Sent: Friday, July 10, 2020 12:49 PM
To: mozilla-dev-security-policy

Subject: Re: New Blog Post on 398-Day Certificate Lifetimes

Some people have asked whether two-year certificates existing on August 31
would remain valid.  The answer is yes. Those certificates will remain valid
until they expire. The change only applies to certificates issued on or
after Sept. 1, 2020.


RE: 7.1.6.1 Reserved Certificate Policy Identifiers

2020-05-14 Thread Doug Beattie via dev-security-policy
Yes, I should have asked this on the CABF list, and you answered my question 
with the links below.  Thanks!

 

From: Ryan Sleevi  
Sent: Thursday, May 14, 2020 8:57 AM
To: Doug Beattie 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: 7.1.6.1 Reserved Certificate Policy Identifiers

 

Did you mean to ask this on the CABF list?

 

This is 

https://github.com/cabforum/documents/issues/179 which I was going to try to 
fix in 

https://github.com/sleevi/cabforum-docs/pull/12 (aka “spring” cleanup that is 
seeking endorsers)

 

The discussion thread is 

https://cabforum.org/pipermail/validation/2020-May/001469.html





7.1.6.1 Reserved Certificate Policy Identifiers

2020-05-14 Thread Doug Beattie via dev-security-policy
I have a question about section 7.1.6.1.  It says:

This section describes the content requirements for the Root CA, Subordinate
CA, and Subscriber Certificates, as they relate to the identification of
Certificate Policy.

 

For Subscriber certificates I totally understand and agree with section
7.1.6.1, and specifically:

 

If the Certificate asserts the policy identifier of 2.23.140.1.2.1, then it
MUST NOT include organizationName, ...

and

If the Certificate asserts the policy identifier of 2.23.140.1.2.2, then it
MUST also include organizationName, ...

 

This means you can have one or the other, but never both in one certificate.
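
(As an illustration, a rough sketch of how that mutual-exclusion check could be written
against a subscriber certificate with Python's cryptography package; the helper name is
made up and this is not part of any policy text:)

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, NameOID

    DV = x509.ObjectIdentifier("2.23.140.1.2.1")
    OV = x509.ObjectIdentifier("2.23.140.1.2.2")

    def policy_vs_org_problems(cert: x509.Certificate) -> list:
        # Collect the asserted certificate policy OIDs, if any.
        try:
            policies = cert.extensions.get_extension_for_oid(
                ExtensionOID.CERTIFICATE_POLICIES).value
            asserted = {p.policy_identifier for p in policies}
        except x509.ExtensionNotFound:
            asserted = set()

        has_org = bool(cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME))

        problems = []
        if DV in asserted and has_org:
            problems.append("asserts 2.23.140.1.2.1 but includes organizationName")
        if OV in asserted and not has_org:
            problems.append("asserts 2.23.140.1.2.2 but omits organizationName")
        return problems

Run the same check against a Root or Subordinate CA certificate and it immediately runs
into the problem described next.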


 

But, if a Root and a Subordinate CA certificate MUST have an organizationName, then
there is no way they could ever assert the DV policy OID (2.23.140.1.2.1) and
comply with that requirement.

 

The scope of this section should be for Subscriber Certificates only.  Can
we agree that was a bug?

 

Section 7.1.6.3 goes on to say that a CA "MAY include the CA/Browser Forum
reserved identifiers ... to indicate the Subordinate CA's compliance with
these Requirements", which further implies that CA certificates can contain
CABF Policy identifiers (there are 6 defined CABF OIDs,
https://cabforum.org/object-registry/).

 

Doug





RE: Is issuing a certificate for a previously-reported compromised private key misissuance?

2020-03-19 Thread Doug Beattie via dev-security-policy
Has anyone worked with a site/service like this that could help convey 
compromised keys between CAs?

 https://pwnedkeys.com/submit.html
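
(For reference: as I understand the service, lookups are keyed on the SHA-256 fingerprint
of the key's subjectPublicKeyInfo. A minimal sketch of computing that fingerprint from a
certificate with Python's cryptography package is below; the input file name is
hypothetical and the exact lookup URL is left to the service's own documentation:)

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def spki_sha256_hex(cert: x509.Certificate) -> str:
        # SHA-256 over the DER-encoded subjectPublicKeyInfo.
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return hashlib.sha256(spki).hexdigest()

    with open("suspect.pem", "rb") as f:  # hypothetical input file
        cert = x509.load_pem_x509_certificate(f.read())
    print(spki_sha256_hex(cert))  # value to check against the key-compromise service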



-Original Message-
From: dev-security-policy  On 
Behalf Of Matt Palmer via dev-security-policy
Sent: Thursday, March 19, 2020 7:05 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Is issuing a certificate for a previously-reported compromised 
private key misissuance?

On Thu, Mar 19, 2020 at 05:30:31AM -0500, Ryan Sleevi wrote:
> On Thu, Mar 19, 2020 at 1:02 AM Matt Palmer via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
> > 2. If there are not explicit prohibitions already in place, *should* there
> >be?  If so, should it be a BR thing, or a Policy thing?
> 
> https://github.com/cabforum/documents/issues/171 is filed to 
> explicitly track this. That said, I worry the same set of negligent 
> and irresponsible CAs will try to advocate for more CA discretion when 
> revocation, such as allowing the CA to avoid revoking when they’ve 
> mislead the community as to what they do (CP/CPS violations) or 
> demonstrated gross incompetence (such as easily detected spelling issues in 
> jurisdiction information).
> 
> I would hope no CA would be so irresponsible as to try to bring that 
> up during such a discussion.

I shall fire up the popcorn maker in preparation.

> > 3. Can a CA be deemed to have "obtained evidence" of key compromise prior
> >to the issuance of a certificate, via a previously-submitted key
> >compromise problem report for the same private key?  If so, it would
> >seem that, even if the issuance of the certificate is OK, it is a
> >failure-to-revoke incident if the cert doesn't get revoked within 24
> >hours...
> 
> Correct, that was indeed the previous conclusion around this. The CA 
> can issue, but then are obligated to revoke within 24 hours.

Excellent, thanks for that confirmation.  Incident report inbound.

- Matt



RE: About upcoming limits on trusted certificates

2020-03-16 Thread Doug Beattie via dev-security-policy
For clarity, I think we need to discuss all the knobs along with proposed 
effective dates and usage periods so we get the whole picture.  The max 
validity period of the certificate has been the one receiving the most 
discussion recently, yet that’s missing from your counter proposal.  Don’t you 
view that as a critical data item to put on the table, even if less important 
(in your opinion) than domain validation re-use?

 

Did you add Public key as a new knob, meaning that Applicants must change their 
public key according to some rule?

 

 

From: Ryan Sleevi  
Sent: Monday, March 16, 2020 10:27 AM
To: Doug Beattie 
Cc: r...@sleevi.com; Kathleen Wilson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

No, I don't think we should assume anything, since it doesn't say anything 
about lifetime :)

 

The value of reduced certificate lifetimes is only fully realized with a 
corresponding reduction in data reuse.

 

If you think about a certificate, there are three main pieces of information 
that come from a subscriber:

- The public key

- The domain name

- (Optionally) The organization information

 

In addition, there are rules about how a CA validates this information (e.g. 
the BR validation requirements)

This information is then collected, into a certificate, using a certificate 
profile (e.g. what the BRs capture in Section 7).

 

Reducing the lifetime of certificates, in isolation, helps with the agility of 
the public key and the agility of the profile, but does not necessarily help 
with the agility of the validation requirements nor the accuracy of the domain 
name or organization information. BygoneSSL is an example of the former being 
an issue, while issuing certificates for organizations that no longer exist/are 
legally recognized is an example of the latter being an issue.

 

These knobs - lifetime, domain validation, org validation - can be tweaked 
independently, but tweaking one without the others limits the value. For 
example, reducing domain validation reuse, without reducing lifetime, still 
allows for long-lived certs to be issued for otherwise-invalid domain names, 
which means you're not getting the security benefits of the validation reuse 
reduction. Introducing improved domain validation methods, for example, isn't 
helped by reducing lifetime or organization data reuse, because you can still 
reuse the 'old' validations using the less secure means. So all three are 
linked, even if all three can be independently adjusted.

 

I outlined a timetable on how to reduce the latter two (domain and organization 
validation). Reducing the latter two helps meaningfully reduce lifetimes, to 
take advantage of those reductions, but that can be independently adjusted. In 
particular, reducing lifetimes makes the most sense when folks are accustomed 
to regular validations, which is why it's important to reduce domain validation 
frequency. That effort complements reductions in lifetimes, and helps remove 
the concerns being raised.

 

On Mon, Mar 16, 2020 at 10:04 AM Doug Beattie <doug.beat...@globalsign.com> wrote:

Are we to assume that the maximum certificate validity remains at 398 days?

 

From: Ryan Sleevi <r...@sleevi.com> 
Sent: Monday, March 16, 2020 10:02 AM
To: Doug Beattie 
Cc: r...@sleevi.com; Kathleen Wilson <kwil...@mozilla.com>; 
mozilla-dev-security-pol...@lists.mozilla.org 
 
Subject: Re: About upcoming limits on trusted certificates

 

Hi Doug,

 

Perhaps it got mangled by your mail client, but I think I had that covered?

 

I've pasted it again, below.

 

Counter proposal:

April 2021: 395 day domain validation max

April 2021: 366 day organization validation max 

April 2022: 92 day domain validation max

September 2022: 31 day domain validation max

April 2023: 3 day domain validation max

April 2023: 31 day organization validation max

September 2023: 6 hour domain validation max

 

As mentioned in the prior mail (and again, perhaps it was eaten by a grueful 
mail-client)

This sets an entirely timeline that encourages automation of domain validation, 
reduces the risk of stale organization data (which many jurisdictions require 
annual review), and eventually moves to a system where request-based 
authentication is the norm, and automated systems for organization data is 
used. If there are jurisdictions that don't provide their data in a machine 
readable format, yes, they're precluded. If there are organizations that don't 
freshly authenticate their domains, yes, they're precluded.
Now, it's always possible to consider shifting from an account-authenticated 
model to a key-authenticated-model (i.e. has this key been used with this 
domain), since that's a core objective of domain revalidation, but I can't see 
wanting to get to an end state where that duration is greater than 

RE: About upcoming limits on trusted certificates

2020-03-16 Thread Doug Beattie via dev-security-policy
Are we to assume that the maximum certificate validity remains at 398 days?

 

From: Ryan Sleevi  
Sent: Monday, March 16, 2020 10:02 AM
To: Doug Beattie 
Cc: r...@sleevi.com; Kathleen Wilson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

Hi Doug,

 

Perhaps it got mangled by your mail client, but I think I had that covered?

 

I've pasted it again, below.

 

Counter proposal:

April 2021: 395 day domain validation max

April 2021: 366 day organization validation max 

April 2022: 92 day domain validation max

September 2022: 31 day domain validation max

April 2023: 3 day domain validation max

April 2023: 31 day organization validation max

September 2023: 6 hour domain validation max

 

As mentioned in the prior mail (and again, perhaps it was eaten by a grueful 
mail-client)

This sets an entirely timeline that encourages automation of domain validation, 
reduces the risk of stale organization data (which many jurisdictions require 
annual review), and eventually moves to a system where request-based 
authentication is the norm, and automated systems for organization data is 
used. If there are jurisdictions that don't provide their data in a machine 
readable format, yes, they're precluded. If there are organizations that don't 
freshly authenticate their domains, yes, they're precluded.
Now, it's always possible to consider shifting from an account-authenticated 
model to a key-authenticated-model (i.e. has this key been used with this 
domain), since that's a core objective of domain revalidation, but I can't see 
wanting to get to an end state where that duration is greater than 30 days, at 
most, of reuse, because of the practical risks and realities of key 
compromises. Indeed, if you look at the IETF efforts, such as Delegated 
Credentials or STAR, the industry evaluation of risk suggests 7 days is likely 
a more realistic upper bound for authorization of binding a key to a domain 
before requiring a fresh challenge.

 

Hopefully that helps!

 

On Mon, Mar 16, 2020 at 9:53 AM Doug Beattie <doug.beat...@globalsign.com> wrote:

Ryan,

 

In your counter proposal, could you list your proposed milestone dates and 
then for each one specify the max validity period, domain re-use period and Org 
validation period associated with those dates?  As it stands, Org validation 
requires the CA to verify that the address is the Applicant’s address, and that 
typically involves a direct exchange with a person at the organization via a 
Reliable Method of Communication.  It’s not clear how we address that if we 
move to anything below a year.





RE: About upcoming limits on trusted certificates

2020-03-16 Thread Doug Beattie via dev-security-policy
Ryan,

 

In your counter proposal, could you list your proposed milestone dates and 
then for each one specify the max validity period, domain re-use period and Org 
validation period associated with those dates?  As it stands, Org validation 
requires the CA to verify that the address is the Applicant’s address, and that 
typically involves a direct exchange with a person at the organization via a 
Reliable Method of Communication.  It’s not clear how we address that if we 
move to anything below a year.

 

 

 

From: Ryan Sleevi  
Sent: Friday, March 13, 2020 9:23 PM
To: Doug Beattie 
Cc: Kathleen Wilson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

On Fri, Mar 13, 2020 at 2:38 PM Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

When we moved to SHA2 we knew of security risks, so the timeline could be 
justified; however, I don’t see the same pressing need to move to annual domain 
revalidation, or to 1 year max validity for that matter. 

 

I can understand, and despite several years of effort, it appears that we will 
be just as unlikely to make forward progress. 

 

When we think about the issuance models, we need to keep the Enterprise 
approach in mind where domains are validated against a specific account or 
profile within an account and then issuance can happen using any valid domain 
or subdomain of those registered with the account.  Splitting the domain 
validation from issuance permits different teams to handle this and to manage 
the overall policy.  Domains can be validated at any time by anyone and not 
tied to the issuance of a specific certificate which makes issuance less prone 
to errors.  

 

This is a security risk, not a benefit. It creates significant risk that the CA 
systems, rather than strongly authenticating a request, move to a model of 
weakly authenticating a user or account. I can understand why CAs would prefer 
this, and potentially why Subscribers would too: it's convenient for them, and 
they're willing to accept the risk individually. However, we need to keep in 
mind the User approach in mind when thinking about whether these are good. For 
users, this introduces yet more risk into the system.

 

For example, if an account on a CA system is compromised, the attacker can 
issue any certificate for any of the authorized domains. Compare this to a 
model of fresh authentication for requests, in which the only certificates that 
can be issued are the ones that can be technically verified. Similarly, users 
must accept that a CA that deploys a weak authentication system, any domains 
which use that CA are now at risk if the authentication method used is weak.

 

When we put the user first, we can see that those Enterprise needs simply shift 
the risk/complexity from the Enterprise and to the User. It's understandable 
why the Enterprise might prefer that, but we must not fool ourselves into 
thinking the risk is not there. Root Stores exist to balance the risk 
(collectively) to users, and to reject such attempts to shift the cost or 
burden onto them.

 

If your driving requirement to reduce the domain validation reuse is the 
BygoneSSL, then the security analysis is flawed.  There are so many things have 
to align to exploit domain ownership change that it's impactable, imo. Has this 
ever been exploited? 

 

Yes, this was covered in BygoneSSL. If you meant to say impractical, then your 
risk analysis is flawed, but I suspect we'll disagree. This sort of concern has 
been the forefront of a number of new technologies, such as HTTP/2, Signed 
Exchanges, and the ORIGIN frame. Heck, even APNIC has seen these as _highly 
practical_ concerns: 
https://blog.apnic.net/2019/01/09/be-careful-where-you-point-to-the-dangers-of-stale-dns-records/
 . Search PowerDNS.

 

Would it make sense (if even possible) to track the level of automation and set 
a threshold for when the periods are changed?  Mozilla and Google are tracking 
HTTPS adoption and plan to hard block HTTP when it reaches a certain threshold. 
 Is there a way we can track issuance automation?  I'm guessing not, but that 
would be a good way to reduce validity based on web site administrators' embrace 
of automation tools.

 

I'm glad you appreciated the efforts of Google and Mozilla, as well as others, 
to make effort here. However, I think suggesting transparency alone is likely 
to have much impact is to ignore what actually happened. That transparency was 
accompanied by meaningful change to promote HTTPS and discourage HTTP, and 
included making the necessary, sometimes controversial, changes to prioritize 
user security over enterprise needs. For example, browsers would not launch new 
features over insecure HTTP, insecure traffic was flagged and increasingly 
alerted on, and ultimately blocked.

 

I think if that's the suggestion, then the quickest solution is to have the CA 
indicate, within the certi

RE: About upcoming limits on trusted certificates

2020-03-13 Thread Doug Beattie via dev-security-policy
Hi Kathleen,

I think a clear description of why the change is needed is a great first step; 
it will help explain the change and justify the timeline (and 
Ryan's ballot SC22 had a number of suggestions, some good and some weak, imo).  
When we moved to SHA2 we knew of security risks, so the timeline could be 
justified; however, I don’t see the same pressing need to move to annual domain 
revalidation, or to 1 year max validity for that matter.  Here are a few points 
based on the email below.

Why is it hard for the CA to do domain validation every year instead of every 2 
years?  It's not; it's the customers that push back.  There was a lot of 
discussion around this on ballot SC22.  

When we think about the issuance models, we need to keep the Enterprise 
approach in mind where domains are validated against a specific account or 
profile within an account and then issuance can happen using any valid domain 
or subdomain of those registered with the account.  Splitting the domain 
validation from issuance permits different teams to handle this and to manage 
the overall policy.  Domains can be validated at any time by anyone and not 
tied to the issuance of a specific certificate which makes issuance less prone 
to errors.  Statements like this seem to omit the notion of enterprise 
accounts: "..when it would be reasonable to require CAs to re-verify that the 
certificate Applicant still owns the domain name to be included in their TLS 
cert to replace the cert that was issued a year ago."

While ACME supports domain validation, not all enterprises like to install 
agents on their production servers and delegate that level of control down to 
the admins.  Plus some methods require the agent to update web content or 
change DNS which is often not something the enterprise wants to delegate to the 
agent.  We continue to work with these enterprises to build tools to perform 
domain validation outside of web server ACME client, but that takes time.

If your driving requirement to reduce the domain validation reuse is the 
BygoneSSL, then the security analysis is flawed.  There are so many things that have 
to align to exploit a domain ownership change that it's impactable, imo. Has this 
ever been exploited?  


As certificate issuance increases and automation is embraced, we agree that 
moving to shorter periods is a good thing.  We also know that there will be 
enterprise and industry laggards that delay delay delay and we can't be heavily 
influenced by them.

Would it make sense (if even possible) to track the level of automation and set 
a threshold for when the periods are changed?  Mozilla and Google are tracking 
HTTPS adoption and plan to hard block HTTP when it reaches a certain threshold. 
 Is there a way we can track issuance automation?  I'm guessing not, but that 
would be a good way to reduce validity based on web site administrators' embrace 
of automation tools.

If this is not possible, then I think I'd (begrudgingly) agree with some 
comments Ryan made several years ago (at SwissSign's F2F?)  that we need to set 
a longer term plan for these changes, document the reasons/security threats, 
and publish the schedule (rip the band aid off).  These incremental changes, 
one after another, by the BRs or the root programs, are painful for everyone, and it 
seems that the changes are coming whether the CABF agrees or not.

Recent history of validity period changes:
- June 2016: Reduced to 3 years
- March 2018: reduced to 825 days (27 months)
- April 2020: Ballot SC22 proposed an April 2020 effective date for 393 max 
validity and domain re-use

If (AND ONLY IF) the resulting security analysis shows the need to get to some 
minimum validity period and domain re-validation interval, then, in my personal 
capacity and not necessarily that of my employer, I toss out these dates:
April 2021: 13 month max: Highly motivates automation and gives everyone a 12 
month heads up to plan accordingly
April 2023: 9 month max: Mandates some level of automation
April 2024: 6 month max: If you’re not using tools, then life is going to be 
painful
April 2025: 4 month max (final reduction): Dead in the water if you don’t use 
automation.
Leave enterprise data re-validation at 825 days.

This gives a 5-year plan that everyone can publish and get behind.

Doug
 



-Original Message-
From: dev-security-policy  On 
Behalf Of Kathleen Wilson via dev-security-policy
Sent: Thursday, March 12, 2020 4:29 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

On 3/12/20 5:52 AM, Doug Beattie wrote:


> Changing the domain validation re-use period is a substantial change from 
> the Apple proposed max validity period change and will place an additional 
> burden on certificate Applicants to update their domain validation more than 
> twice as frequently. 


Please elaborate about why re-verifying the domain name ownership is difficult 
for the CA who is issuing the 

RE: About upcoming limits on trusted certificates

2020-03-12 Thread Doug Beattie via dev-security-policy
Kathleen,

Changing the domain validation re-use period is a substantial change from the 
Apple proposed max validity period change and will place an additional burden 
on certificate Applicants to update their domain validation more than twice as 
frequently.   This would be a sudden and large departure from the BRs.  
Certificate validity and domain validation re-use periods don’t necessarily 
need to be tied to the same value, so having certificate validity capped at 398 
days and domain re-use set at 825 days isn’t contradictory.

Can you also provide, in a blog or a publicly posted article, the reasons for 
shortening the certificate validity?  There are hundreds of comments and 
suggestions in multiple mail lists, but there is a lack of a documented formal 
security analysis of the recommended changes that we can point our customers to.

Doug

-Original Message-
From: dev-security-policy  On 
Behalf Of Kathleen Wilson via dev-security-policy
Sent: Wednesday, March 11, 2020 8:29 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

On 3/11/20 4:37 PM, Paul Walsh wrote:
> 
>> On Mar 11, 2020, at 4:11 PM, Kathleen Wilson via dev-security-policy 
>>  wrote:
>>
>> On 3/11/20 3:51 PM, Paul Walsh wrote:
>>> Can you provide some insight to why you think a shorter frequency in domain 
>>> validation would be beneficial?
> [PW] If the owner’s identity has already been validated and that information 
> is still valid, why ask them to validate again? 


By "domain validation" I specifically mean verifying that the certificate 
requestor owns/controls the domain name(s) to be included in the TLS 
certificate.


> [PW] I believe it’s a good idea to ensure they’re still in control of the 
> domain. 


So I guess we are in agreement on this.


> My comment is in relation to the cost of validating their identity.


My proposal has nothing to do with identity validation.



> [PW] Thanks for this info. If this is already part of the CA/B Forum, is it 
> your intention to potentially do something different/specific for Firefox, 
> irrespective of what happens in that forum?
> 


My proposal is that if we are going to update Mozilla's policy to require TLS 
certs to have validity period of 398 days or less, we should also update 
Mozilla's policy to say that re-use of domain validation is only valid up to 
398 days. i.e. the ownership/control of the domain name should be re-validated 
before the renewal cert is issued.

Currently Mozilla's policy and the BRs allow the CA to re-use domain validation 
results for up to 825 days. (which is inline with the 825 day certificate 
validity period currently allowed by the BRs)

Kathleen






RE: About upcoming limits on trusted certificates

2020-03-04 Thread Doug Beattie via dev-security-policy
Hi Clint,

The content of your email, the blog post and the Apple root policy all say
something a little different and may leave some room for interpretation by
the CAs.  As it stands, things are a bit confused.  Here's why:

Your mail is a little light on the details.  While you say this is an
"upcoming change" to the Root Program, you say certificates "will need to
have a lifetime of no more than 398 days".  The "will need to have" is
really weak.  If this is a hard requirement then I would say something
stronger like: "The Apple Root Program requires (as of Sept 1) CAs to issue
certificates with a validity period not exceeding a total lifetime of 398
days under roots in the Apple Root Program.  Any certificate issued under a
Root in the Apple Root Program that exceeds this will be considered a violation of the Apple
Root policy" (or something like that).  Done, everyone knows exactly what
you mean.

The article you posted does not mention the Apple Root Program or policy and is
more or less a general statement without any context.   "TLS server
certificates issued on or after September 1, 2020 00:00 GMT/UTC must not
have a validity period greater than 398 days."  If Connections (presumably
from Safari browser or Apple apps) are attempted, then "This might cause
network and app failures and prevent websites from loading".  There is
nothing indicating this is an Apple Root policy requirement or that CAs need
to take note, only that if an Apple endpoint encounters one of these
non-compliant certificates, the connection may/will fail.

Your root policy: Obviously there is nothing here about this new change, and
if this is "the" Apple root policy, I'd recommend getting that updated with
a clear statement of this requirement and what happens if a certificate is
issued with a lifetime outside of this duration.  Chrome has a policy that
it will not trust certificates that are not compliant with their CT policy,
but it's not a Root policy.  Is this how Apple views their policy, or is it
a Root policy and any non-compliance is considered a mis-issuance by Apple?
The various statements lead me back and forth between those 2
interpretations.

I think it's important that this be clearly stated, and I dislike formal
root policies being documented only in email threads.  How would a new CA
know this is a requirement without going through years of archived email on
multiple lists?

By the way, there is no reporting process outlined in the event that
something in your policy is violated.  How should violations be reported and
tracked?

Thanks!

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Clint Wilson via dev-security-policy
Sent: Tuesday, March 3, 2020 2:55 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: About upcoming limits on trusted certificates

Hello all,

I wanted to inform this community of an upcoming change to the Apple Root
Program. 
SSL/TLS certificates issued on or after September 1, 2020 will need to have
a total lifetime of no more than 398 days. This change will be put in place
in a future release of iOS, macOS, iPadOS, watchOS, and tvOS for
default-trusted TLS certificates (i.e. the Roots that come preinstalled on
the above OSes).

For additional information, please see
https://support.apple.com/en-us/HT211025.

Thank you!
-Clint


RE: Which fields containing email addresses need to be validated?

2020-02-06 Thread Doug Beattie via dev-security-policy
I don't agree that the CA MUST validate EVERY field.  CAs leverage
enterprise RAs to validate some information in S/MIME certificates, e.g., the
subscriber's name in the CN field, because the CA can't readily validate that
itself. I believe the same is true for some other fields, like the UPN, which is
the Active Directory account name, but I thought I'd start a discussion to see
what people thought.

Doug

-Original Message-
From: Kurt Roeckx  
Sent: Thursday, February 6, 2020 4:06 PM
To: Doug Beattie 
Cc: mozilla-dev-security-policy

Subject: Re: Which fields containing email addresses need to be validated?

On Thu, Feb 06, 2020 at 08:54:04PM +, Doug Beattie via
dev-security-policy wrote:
> It's not against Mozilla policy to
> issue certificates with unvalidated email addresses in any field as 
> long as the Secure Mail EKU is not included, so the intent should be 
> to validate only those that are used for Secure Mail.

Any field in the certificate should be validated. If it contains an email
address, it should be validated. If it's not validated, it should get
removed.


Kurt





Which fields containing email addresses need to be validated?

2020-02-06 Thread Doug Beattie via dev-security-policy
The Mozilla policy section 2.2 says:

*   the CA takes reasonable measures to verify that the entity
submitting the request controls the email account associated with the email
address referenced in the certificate.

 

Since the Mozilla policy only applies to certificates with the EKU of Secure
Mail (ignoring TLS in this discussion), it would seem to imply that only
email addresses that could be used for sending or receiving signed or
encrypted emails would be in scope.   It's not against Mozilla policy to
issue certificates with unvalidated email addresses in any field as long as
the Secure Mail EKU is not included, so the intent should be to validate
only those that are used for Secure Mail.

 

As far as I know, the only fields that could be used by S/MIME applications
are the CN, Email, and RFC822 SAN fields.
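
As an illustration (not a policy proposal), here is a minimal sketch of pulling
those three fields out of a certificate with Python's 'cryptography' package; the
file name is a placeholder:

# Sketch only: list the certificate fields that can carry an email address
# for S/MIME purposes (subject CN, subject emailAddress, SAN rfc822Name).
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("smime-cert.pem", "rb") as f:   # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

candidates = []
for attr in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME):
    candidates.append(("subject CN", attr.value))
for attr in cert.subject.get_attributes_for_oid(NameOID.EMAIL_ADDRESS):
    candidates.append(("subject emailAddress", attr.value))
try:
    san = cert.extensions.get_extension_for_class(
        x509.SubjectAlternativeName).value
    for rfc822 in san.get_values_for_type(x509.RFC822Name):
        candidates.append(("SAN rfc822Name", rfc822))
except x509.ExtensionNotFound:
    pass

for field, value in candidates:
    print(f"{field}: {value}")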

 

We should clarify the Mozilla policy in section 2.2 to more clearly define the
list of fields containing email addresses (the three listed above) that must be
validated, so that this is clear and concise.

 

Wayne opened this issue in December and I just replied with a comment
related to the validation requirements of SAN/Other Name/UPN:

https://github.com/mozilla/pkipolicy/issues/200

 

 





RE: DNS records and delegation

2019-10-11 Thread Doug Beattie via dev-security-policy
Ryan,

Are you recommending that:
a) we need a new domain validation method that describes this, or 
b)  those CAs that want to play with fire can go ahead and do that based on
their own individual security analysis, or
c) we need a clear policy/guideline in the BRs or root program that MUST be
followed when the CAs (maybe other entities) are updating DNS with random
values?  

I'm pretty sure I know the answer (none of the above).



-Original Message-
From: dev-security-policy  On
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Friday, October 11, 2019 2:40 PM
To: Clint Wilson 
Cc: Ryan Sleevi ; mozilla-dev-security-policy
; Jeremy Rowley

Subject: Re: DNS records and delegation

On Fri, Oct 11, 2019 at 2:10 PM Clint Wilson  wrote:

> Apologies, but this isn't entirely clear to me. I'm guessing (hoping) 
> my misunderstanding centers around a difference between the Applicant 
> fully delegating DNS to the CA vs the Applicant only configuring a 
> single CNAME record? If the Applicant has configured 
> _validation.sleevi.example 3600 IN CNAME .ca.example then 
> the CA wouldn't be able to use any other  value to complete 
> the full lookup to include the TXT record, without the Applicant 
> directly changing the CNAME value.
>
> However, if the CA is fully managing this DNS, and therefore able to 
> independently reconfigure:
> _validation.sleevi.example 3600 IN CNAME .ca.example to 
> _validation.sleevi.example 3600 IN CNAME 
> .ca.example
> that's clearly a very different story.
> Is it correct to think of these as two different scenarios? In my 
> mind, the first scenario is the one I'm most interested in.
>

It's my fault for not being clearer of the example.

You can think of  as an ID computed from the domain being
requested - e.g. "sleevi.example" - rather than the account doing the
requesting (the Applicant).

In that scenario, when Evil Hacker, the Applicant, requests a cert for
sleevi.example, the CA doesn't look at the Applicant-ID. They look to see if
the domain - e.g. sleevi.example - is one that is signed up to use their
service. If so, they modify the records, and now Evil Hacker has access.

I tried to clarify this later on, that it's possible to design around; for
example, by using .ca.example. This is closer to what AWS
does.
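
To make the lookup flow concrete, here is a minimal sketch (Python, assuming the
third-party dnspython 2.x package; all names and the expected token are
hypothetical). The point is that the TXT lookup simply follows whatever CNAME the
domain owner has published, so the binding lives in the CNAME target:

# Sketch only: follow _validation.<domain> through its CNAME and compare
# the TXT value against the token the CA expects for this request.
import dns.resolver

def delegated_txt_matches(domain: str, expected_token: str) -> bool:
    label = f"_validation.{domain}"
    # resolve() follows CNAMEs automatically, so if the domain owner has
    # pointed _validation.<domain> at <id>.ca.example, these TXT records
    # come from the CA-controlled zone.
    try:
        answers = dns.resolver.resolve(label, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("ascii", "replace")
        if txt == expected_token:
            return True
    return False

print(delegated_txt_matches("sleevi.example", "hypothetical-random-value"))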

There are /still/ risks with that approach though, in terms of greater
centralization of risk. Using the AWS example, if you wanted to get a
malicious cert for sleevi.example, you'd either need to compromise my DNS
provider, my DNS registrar, or my AWS account. Using the 
example, with the CA hosting, you'd either need to compromise my DNS
provider, my DNS registrar, or... well, the CA's systems.

Allowing the CA to do this sort of flow creates real challenges, because
they're the only ones that can issue certs. For example, the CA could refuse
to use that TXT method for anyone who doesn't point to them (e.g.
who uses AWS instead of the CA). It might seem odd to suggest that CAs might
refuse issuance, but we see it all the time. After all, the whole reason we
have so many trusted CAs is because there are a number (particularly the
European CAs) that want to refuse issuance on grounds of who is applying or
how they're applying (e.g. ETSI EN 319 411-et-al forbidding automation
entirely!). So we know CAs can try to lock folks in, and we also know that
CAs' incentives, around user/account security, are not necessarily the same
as might exist with a third-party provider like AWS.

I realize that seems like I'm suggesting a lot of ill-intent. I'm simply
trying to threat-model both the systemic weaknesses and the economic
incentives. If we allow the CAs to do this, it seems like we'd need even
stronger rules regarding Subscriber/CA authentication and identification,
and that... would likely be controversial and complex, to say the least ;)


Would there be a benefit to something like having  (or
>  as discussed further below) published in CAA as well? i.e.
> the Applicant publishes a whitelist of valid  values to 
> be used in this type of validation-delegation-schema? I haven't 
> thought this through fully, but it seems like it could help with 
> explicitly codifying the requirements, especially if the CAA record is 
> published at sleevi.example?
>

That's functionally what AWS is doing - it's using  instead of
 as the binding, and pushing that in the hop.


> This would suggest that we don't want the CA doing it, because the CA 
> is
>> not the Applicant, and the goal of 3.2.2.4 is to make sure the 
>> Applicant can demonstrate control.
>>
>> Another way of looking at this is imagining the following?
>> - Do we think it's allowed by the BRs for the domain operator to set 
>> their MX to be the CA, so the CA can auto-answer their own emails 
>> sent under 3.2.2.4.4.?
>>
> - Do we think it's allowed by the BRs for the domain operator to give 
> the
>> CA FTP/SSH/file upload access to /.well-known/pki-validation, so that 
>> the CA can place the answer file 

GlobalSign: OCSP Responder Returns invalid values for some Precertificates

2019-09-06 Thread Doug Beattie via dev-security-policy
Based on announcements by DigiCert and Let's Encrypt, GlobalSign has found
that our Precertificates without corresponding certificates also return
Unauthorized or Unknown. We're working with PrimeKey on a patch and are also
updating our own OCSP services to return the proper values.

 

Here are 2 examples:

https://crt.sh/?id=1707464536&opt=ocsp

https://crt.sh/?id=1725532369&opt=ocsp

 

Bug opened:  https://bugzilla.mozilla.org/show_bug.cgi?id=1579413
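
For anyone who wants to reproduce the check locally, here is a rough sketch of
querying the responder the same way (Python with the 'cryptography' package; not
our production tooling, file paths are placeholders, and error handling is
omitted):

# Sketch only: build an OCSP request for a (pre)certificate and print
# the responder's answer.
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID

cert = x509.load_pem_x509_certificate(open("precert.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

request = (ocsp.OCSPRequestBuilder()
           .add_certificate(cert, issuer, hashes.SHA1())
           .build())

# Use the OCSP URL from the certificate's AIA extension.
aia = cert.extensions.get_extension_for_class(
    x509.AuthorityInformationAccess).value
url = next(d.access_location.value for d in aia
           if d.access_method == AuthorityInformationAccessOID.OCSP)

http = urllib.request.Request(
    url,
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"})
response = ocsp.load_der_ocsp_response(urllib.request.urlopen(http).read())

print("responseStatus:", response.response_status)      # e.g. UNAUTHORIZED
if response.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
    print("certStatus:", response.certificate_status)   # GOOD / REVOKED / UNKNOWN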





GlobalSign: SSL Certificates with US country code and invalid State/Prov

2019-08-22 Thread Doug Beattie via dev-security-policy
Today we opened a bug disclosing misissuance of some certificates that have
invalid State/Prov values:

   https://bugzilla.mozilla.org/show_bug.cgi?id=1575880

 

On Tuesday August 20th 2019, GlobalSign was notified by a third party
through the report abuse email address that two certificates were discovered
which contained wrong State information, either in the stateOrProvinceName
field or in the jurisdictionStateOrProvinceName field.

 

The two certificates in question were:

https://crt.sh/?id=1285639832 

https://crt.sh/?id=413247173 

 

GlobalSign started and concluded the investigation within 24 hours. Within
this timeframe GlobalSign reached out to the certificate owners to tell them that
these certificates needed to be replaced, because revocation would need to happen
within 5 days per the Baseline Requirements. As of the moment of
reporting, these certificates have not yet been replaced, and the offending
certificates have not been revoked. The revocation will happen at the latest
on the 25th of August.

 

Following this report, GlobalSign initiated an additional internal review
for this problem specifically (unexpected values for US states in
the stateOrProvinceName or jurisdictionStateOrProvinceName fields). Expected
values included the full name of the States, or their official abbreviation.
We reviewed all certificates, valid on or after the 21st of August, that
weren't revoked for other unrelated reasons.

 

To accommodate our customers globally, the stateOrProvinceName and
jurisdictionStateOrProvinceName fields are free-text fields during our ordering
process. The unexpected values were not spotted or not properly corrected.
We have put additional flagging in place to highlight unexpected values in
both of these fields, and are looking at other remedial actions. None of
these certificates were previously flagged for internal audit, which is
completely randomized.
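
To illustrate the kind of flagging we mean, here is a minimal sketch (Python; only
a handful of states shown, and the real check also has to account for the
jurisdiction field and non-US addresses):

# Sketch only: flag US stateOrProvinceName values that are neither a full
# state name nor the official USPS abbreviation.
US_STATES = {
    "Alabama": "AL", "California": "CA", "New Hampshire": "NH",
    "Texas": "TX", "Virginia": "VA",   # ... remaining states omitted
}
EXPECTED = {s.lower() for s in US_STATES} | {a.lower() for a in US_STATES.values()}

def flag_state_value(country, state):
    """Return True if the value should be flagged for manual review."""
    if country != "US" or not state:
        return False
    return state.strip().lower() not in EXPECTED

print(flag_state_value("US", "NH"))             # False - fine
print(flag_state_value("US", "Nwe Hampshire"))  # True - flag for review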

 

We will update with a full incident report for this and also disclose all
other certificates found based on our research.





RE: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Doug Beattie via dev-security-policy




From: Ben Laurie 
Sent: Friday, August 16, 2019 9:33 AM
To: Doug Beattie 
Cc: Jonathan Rudenberg ; Peter Gutmann 
; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Fwd: Intent to Ship: Move Extended Validation Information out of 
the URL bar



On Fri, 16 Aug 2019 at 14:31, Doug Beattie via dev-security-policy 
mailto:dev-security-policy@lists.mozilla.org> > wrote:

DB: Yes, that's true.  I was saying that phishing sites don't use EV, not
that EV sites don't get phished

Surely this shows that EV is not needed to make phishing work, not that EV 
reduces phishing?



[DB] It should show that users are safer when visiting an EV secured site.



-- 

I am hiring! Formal methods, UX, SWE ... verified s/w and h/w. 
#VerifyAllTheThings.



https://g.co/u58vjr https://g.co/adjusu

(Google internal)





RE: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Doug Beattie via dev-security-policy
 

 

From: Jonathan Rudenberg  
Sent: Friday, August 16, 2019 9:04 AM
To: Doug Beattie ; Peter Gutmann
; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Fwd: Intent to Ship: Move Extended Validation Information out
of the URL bar

 

On Fri, Aug 16, 2019, at 07:56, Doug Beattie via dev-security-policy wrote:

Peter,

 

I'm not claiming that EV reduces phishing globally, just for those sites

that use them.  Do you have a chart that breaks down phishing attacks by SSL

certificate type? 

 

Here is some research that indicates EV sites have a reduced phishing

percentage, so customers accessing EV protected sites are safer:

   https://cabforum.org/wp-content/uploads/23.-Update-on-London-Protocol.pdf

 

Doug,

 

Can you point me to the specific research you're referring to? All I see in
this presentation that's remotely relevant is a breakdown of the certificate
types used on detected phishing sites across a couple months. If this data
is correct, it doesn't seem to be useful information, and actually proves
one of the points that is behind the removal of EV UI.

 

DB: The presentation identifies that people don't set up phishing sites
using EV certificates, and yes, this data only covers the last 11 months or
so.

 

If EV is required for a successful phishing attack, then attackers will just
get EV certificates. But all of the research that has been repeatedly
brought up in this thread shows that users don't use the EV UI when making
decisions about whether to trust a website, explaining why phishing sites
don't use EV very much.

 

DB: One of the reasons that phishers don't get EV certificates is because
the vetting process requires several interactions and corporate repositories
which end up revealing more about their identity.  This leaves a trail back
to the individual that set up the fake site which discourages the use of EV.
DV is completely anonymous and leaves very few traces.

 

Additionally, the idea that sites that use EV experience less phishing seems
deeply flawed. Banks are a huge target for phishing, and most of their
websites have EV certificates.

 

DB: Yes, that's true.  I was saying that phishing sites don't use EV, not
that EV sites don't get phished.

 

An interesting and clear recent example of this is PayPal, which is
obviously a very popular target for phishing. paypal.com technically has an
EV certificate, but due to the certificate chain used since early 2018, the
EV UI does not show up in the most popular browser (Chrome) on the most
popular desktop operating system (Windows)[1]. Given the amount of phishing
that PayPal experiences, it seems likely to me that they would have figured
out how to fix this if they thought it was worth the effort. They haven't.

 

DB: Maybe they should get an EV certificate and help train the users to look
for that on their login page to reduce the chances that their customers are
phished?

 

Jonathan

 

[1]
https://www.troyhunt.com/paypals-beautiful-demonstration-of-extended-validat
ion-fud/





RE: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Doug Beattie via dev-security-policy
Peter,

I'm not claiming that EV reduces phishing globally, just for those sites
that use them.  Do you have a chart that breaks down phishing attacks by SSL
certificate type? 

Here is some research that indicates EV sites have a reduced phishing
percentage, so customers accessing EV protected sites are safer:
   https://cabforum.org/wp-content/uploads/23.-Update-on-London-Protocol.pdf


-Original Message-
From: Peter Gutmann  
Sent: Thursday, August 15, 2019 10:03 PM
To: Doug Beattie ;
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Fwd: Intent to Ship: Move Extended Validation Information out
of the URL bar

Doug Beattie  writes:

>Do you have any empirical data to backup the claims that there is no 
>benefit from EV certificates?

Uhhh... I don't even know where to start.  We have over ten years of data
and research publications on this, and the lack of benefit was explicitly
cited by Google and Mozilla as the reason for removing the EV bling... one
example is the most obvious statistic, maintained by the Anti-Phishing
Working Group (APWG), which shows an essentially flat trend for phishing over
the period of a year in which EV certificates were phased in, indicating
that they had no effect whatsoever on phishing.  There's endless other stats
showing that the trend towards security is negative, i.e. it's getting worse
every year, here's some five-year stats from a quick google:

https://www.thesslstore.com/blog/wp-content/uploads/2019/05/Phishing-by-Year
.png

If EV certs had any effect at all on security we'd have seen a decrease in
phishing/increase in security.

There is one significant benefit from EV certificates, which I've already
pointed out, which is to the CAs selling them.  So when I say "there's no
benefit" I mean "there's no benefit to end users", which is who the
certificates are putatively helping.

Peter.




RE: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Doug Beattie via dev-security-policy
Peter,

Do you have any empirical data to backup the claims that there is no benefit
from EV certificates?  From the reports I've seen, the percentage of
phishing and malware sites that use EV is drastically lower than DV (which
are used to protect the cesspool of websites).

Doug



-Original Message-
From: dev-security-policy  On
Behalf Of Peter Gutmann via dev-security-policy
Sent: Wednesday, August 14, 2019 9:04 PM
To: mozilla-dev-security-pol...@lists.mozilla.org; Jakob Bohm

Subject: Re: Fwd: Intent to Ship: Move Extended Validation Information out
of the URL bar

Jakob Bohm via dev-security-policy 
writes:

>Problem example:
>[...]

You're explaining how it's supposed to work in theory, not in the real
world.

We have a decade of real-world data showing that it doesn't work, that
there's no benefit from EV certificates apart from the one to CA's balance
sheets.  So the browser vendors are doing the logical thing, responding to
the real-world data and no longer pretending that EV certs add any security
value, both in terms of protecting users and of keeping out the bad guys -
see the attached screen clip, in this case for EV code-signing certs for
malware, but you can buy web site EV certs just as readily.

Peter.


RE: How to use Cross Certificates to support Root rollover

2019-08-05 Thread Doug Beattie via dev-security-policy


 

Ryan,

 

Note: I changed the name of the thread because this is a great discussion about 
root roll-over and isn’t really related to the Entrust Root inclusion request.

 

In theory Cross certificates are simple, but I’ve found that in practice they 
are difficult to manage and use.

 

First, it would be a good idea to agree on the definition of a Cross 
Certificate.  The BRs define it as: A certificate that is used to establish a 
trust relationship between two Root CAs.

 

Does that align with your definition?  I ask because you used the term Cross 
signed intermediate, so it seems you may be using a different definition than 
the one in the BRs.  Are you proposing this type of approach where the Issuer 
of the SSL certificate could be one of 2 different CAs (have same keys and 
Subject Name but are signed by different roots)?

SSL – Intermediate CA 1 – Legacy Root

SSL – Cross Intermediate CA 2 – New Root

 

I thought a cross certificate chain needed to look like this

SSL – Intermediate CA 1 (signed by new root) – Cross cert  – Legacy Root

 

My discussion is focused on the BR definition, so there could be advantages to 
creating multiple intermediate signed CAs which I haven’t considered yet.  What 
definition/approach were you assuming?

 

See responses in-line below

 

From: Doug Beattie  
Sent: Monday, August 5, 2019 7:29 AM
To: Doug Beattie 
Subject: Re: FW: Entrust Root Certification Authority - G4 Inclusion Request

 

 

On Mon, Aug 5, 2019 at 7:12 AM Doug Beattie mailto:doug.beat...@globalsign.com> > wrote:

From: Ryan Sleevi mailto:r...@sleevi.com> > 
Sent: Friday, August 2, 2019 1:49 PM
To: Doug Beattie mailto:doug.beat...@globalsign.com> >
Cc: r...@sleevi.com  ; Bruce mailto:bruce.mor...@entrust.com> >; mozilla-dev-security-policy 
mailto:mozilla-dev-security-pol...@lists.mozilla.org> >
Subject: Re: Entrust Root Certification Authority - G4 Inclusion Request

On Fri, Aug 2, 2019 at 9:59 AM Doug Beattie mailto:doug.beat...@globalsign.com> > wrote:

Ryan,

GlobalSign has been thinking along these lines, but it's not clear how
browsers build their path when a cross certificate is presented to them in
the TLS handshake.

Excellent! Happy to help in any way to make that possible and easier :)

 

 

DB: I knew you would  

 

Can you explain how chrome (windows and Android)  builds a path when a cross
certificate is delivered?  What about the case when the OS (Microsoft
specifically) has cached the cross certificate, is it different?

 It's unclear the objective of the question. That is, are you trying to figure 
out what happens with both paths are valid, or how it handles edge cases, etc?

 

 

DB: We have some customers that mandate a complete SHA-256 chain, including the 
root.  We had been using our older SHA-1 Root (R1) and recently moved to our 
newer SHA-256 root (R3).  We can now deliver certificates issued with SHA-256 
at all levels, great!  In order to support some legacy applications that didn’t 
have R3 embedded, we created a cross certificate R1-R3.  You can get it here.

 

DB: The customer came back and said, hey, it still chains to R1, what’s up?  
Oh, it’s because the client has the cross certificate cached, don’t worry about 
that, some users will see the chain up to R1 and others to R3.  Hmm, not good 
they say.

 

DB: Anyway, no way around that now (unless you have some other tricks).  I went 
and looked at the links to the MS article and noticed that when the quality of 
the certificates in the chain is the same, then it prefers certificates that 
have a later NotBefore date (I presume this means issued more recently).  Our 
R1-R3 cross was issued in 2018 and the Root R3 was issued in 2009.  When the 
issuing CA is validated it has 2 paths it can follow, 

1) Root R3, issued in 2009, or 

2) the Cross certificate, issued in 2018.  

Even though the path is longer, it uses the cross certificate which chains to 
R1 (SHA-1) because of the not-before date.

 

DB: Does this mean we should have created the cross certificate with the same 
date as the  R3 root (2009)?  

How else can we have clients prefer the shorter higher security SHA-256 chain?  

Perhaps this means that we need to change the definition of a Cross 
certificate from being a Root-to-Root chain.

 

DB: Even if this specific web site didn’t configure the extra certificate (the 
R1-R3 cross certificate) into their configuration, the end users may have 
picked it up somewhere else and have it cached so their specific chain goes: 
SSL, Intermediate CA, R1-R3 Cross certificate, Root R1

 

DB: They are stuck with inconsistent user experience and levels of “security” 
for their website. 

 

 

 

 

At present (and this is changing), Chrome uses the CryptoAPI implementation, 
which is the same as IE, Edge, and other Windows applications.

You can read 

How to use Cross Certificates to support Root rollover

2019-08-05 Thread Doug Beattie via dev-security-policy
Ryan,

 

Note: I changed the name of the thread because this is a great discussion about 
root roll-over and isn’t really related to the Entrust Root inclusion request.

 

In theory Cross certificates are simple, but I’ve found that in practice they 
are difficult to manage and use.

 

First, it would be a good idea to agree on the definition of a Cross 
Certificate.  The BRs define it as: A certificate that is used to establish a 
trust relationship between two Root CAs.

 

Does that align with your definition?  I ask because you used the term Cross 
signed intermediate, so it seems you may be using a different definition than 
the one in the BRs.  Are you proposing this type of approach where the Issuer 
of the SSL certificate could be one of 2 different CAs (have same keys and 
Subject Name but are signed by different roots)?

SSL – Intermediate CA 1 – Legacy Root

SSL – Cross Intermediate CA 2 – New Root

 

I thought a cross certificate chain needed to look like this

SSL – Intermediate CA 1 (signed by new root) – Cross cert  – Legacy Root

 

My discussion is focused on the BR definition, so there could be advantages to 
creating multiple intermediate signed CAs which I haven’t considered yet.  What 
definition/approach were you assuming?

 

See responses in-line below

 

From: Doug Beattie  
Sent: Monday, August 5, 2019 7:29 AM
To: Doug Beattie 
Subject: Re: FW: Entrust Root Certification Authority - G4 Inclusion Request

 

 

On Mon, Aug 5, 2019 at 7:12 AM Doug Beattie mailto:doug.beat...@globalsign.com> > wrote:

From: Ryan Sleevi mailto:r...@sleevi.com> > 
Sent: Friday, August 2, 2019 1:49 PM
To: Doug Beattie mailto:doug.beat...@globalsign.com> >
Cc: r...@sleevi.com  ; Bruce mailto:bruce.mor...@entrust.com> >; mozilla-dev-security-policy 
mailto:mozilla-dev-security-pol...@lists.mozilla.org> >
Subject: Re: Entrust Root Certification Authority - G4 Inclusion Request

On Fri, Aug 2, 2019 at 9:59 AM Doug Beattie mailto:doug.beat...@globalsign.com> > wrote:

Ryan,

GlobalSign has been thinking along these lines, but it's not clear how
browsers build their path when a cross certificate is presented to them in
the TLS handshake.

Excellent! Happy to help in any way to make that possible and easier :)

 I knew you would  

Can you explain how chrome (windows and Android)  builds a path when a cross
certificate is delivered?  What about the case when the OS (Microsoft
specifically) has cached the cross certificate, is it different?

 It's unclear the objective of the question. That is, are you trying to figure 
out what happens with both paths are valid, or how it handles edge cases, etc?

We have some customers that mandate a complete SHA-256 chain, including the 
root.  We had been using our older SHA-1 Root (R1) and recently moved to our 
newer SHA-256 root (R3).  We can now deliver certificates issued with SHA-256 
at all levels, great!  In order to support some legacy applications that didn’t 
have R3 embedded, we created a cross certificate R1-R3.  You can get it here.

 

The customer came back and said, hey, it still chains to R1, what’s up?  Oh, 
it’s because the client has the cross certificate cached, don’t worry about 
that, some users will see the chain up to R1 and others to R3.  Hmm, not good 
they say.

 

Anyway, no way around that now (unless you have some other tricks).  I went and 
looked at the links to the MS article and noticed that when the quality of the 
certificates in the chain is the same, then it prefers certificates that have a 
later NotBefore date (I presume this means issued more recently).  Our R1-R3 
cross was issued in 2018 and the Root R3 was issued in 2009.  When the issuing 
CA is validated it has 2 paths it can follow, 

1) Root R3, issued in 2009, or 

2) the Cross certificate, issued in 2018.  

Even though the path is longer, it uses the cross certificate which chains to 
R1 (SHA-1) because of the not-before date.

 

Does this mean we should have created the cross certificate with the same date 
as the  R3 root (2009)?  

How else can we have clients prefer the shorter higher security SHA-256 chain?  

Perhaps this means that we need to change the definition of a Cross 
certificate from being a Root-to-Root chain.

 

Even if this specific web site didn’t configure the extra certificate (the 
R1-R3 cross certificate) into their configuration, the end users may have 
picked it up somewhere else and have it cached so their specific chain goes: 
SSL, Intermediate CA, R1-R3 Cross certificate, Root R1

 

They are stuck with inconsistent user experience and levels of “security” for 
their website. 

 

 

 

At present (and this is changing), Chrome uses the CryptoAPI implementation, 
which is the same as IE, Edge, and other Windows applications.

You can read a little bit about Microsoft's logic here:

- 

RE: Entrust Root Certification Authority - G4 Inclusion Request

2019-08-02 Thread Doug Beattie via dev-security-policy
Ryan,

GlobalSign has been thinking along these lines, but it's not clear how
browsers build their path when a cross certificate is presented to them in
the TLS handshake.

Can you explain how chrome (windows and Android)  builds a path when a cross
certificate is delivered?  What about the case when the OS (Microsoft
specifically) has cached the cross certificate, is it different?

With this approach, we'd require our customers to configure their web
servers to always send down the extra certificate which:
  * complicates web server administration,
  * increases TLS handshake packet sizes (or extra packet?), and
  * increases the certificate path from 3 to 4 certificates (SSL, issuing
CA, Cross certificate, Root), which increases the path validation time and
is typically seen as a competitive disadvantage

Do you view these as meaningful issues?  Do you know of any CAs that have
taken this approach?


-Original Message-
From: dev-security-policy  On
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Thursday, August 1, 2019 2:51 PM
To: Bruce 
Cc: mozilla-dev-security-policy

Subject: Re: Entrust Root Certification Authority - G4 Inclusion Request

On Fri, Jul 26, 2019 at 4:29 PM Bruce via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 26, 2019 at 1:45:06 PM UTC-4, Ryan Sleevi wrote:
> > (In a personal capacity, as normally noted but making sure to 
> > extra-note
> it
> > here)
> >
> > Hi Wayne,
> >
> > It wasn't clear to me from the inclusion request, did Entrust give a
> reason
> > for the requested addition? For example, do they plan to stop 
> > issuing
> from
> > one of the included roots and have it removed?
>
> The purpose of the inclusion request is to add a 4096-bit RSA root 
> which will be used to support larger keys as we move ahead. We are not 
> looking at this root to replace our current roots, but plan to migrate 
> to the new root as the demand for larger keys grows. We are not 
> planning remove any of our roots at this time.
>

It seems like it should be technically possible to use this root to replace
an existing root, which seems like it would align well with the goal of
ensuring larger key support going forward.

For example, if "tomorrow" (hypothetically; I know it takes time) you:
1) Cross-signed the 4K root with an existing root
2) Issued a new issuing intermediate under the 4K root
3) Issued all new certificates going forward from that new issuing
intermediate

Then it would seem like there's a path to ensure that all clients which
support your existing, legacy roots, would automatically support your 4K
root, building a path to your legacy root. Clients which installed/shipped
the 4K root would build shorter paths, and without the intermediate
signature from the legacy root to the 4K root. Once all of your existing
"Legacy" certificates expire (that is, those issued from your old legacy
issuing intermediate) - which, admittedly, would likely be 825 days from
"tomorrow" - clients could remove support for the "Legacy" certificate
without breaking any existing certificates.

Did you consider such a transition plan? That would allow clients to
minimize the number of roots a given organization has, which helps reduce
the security risk and maintenance overhead to clients, while still allowing
a smooth and seamless transition. It seems like a win for everyone, and
would be great to know more about those considerations if deciding to accept
this new root.

From the current description, it sounds like this new root may not provide
clear user benefit, since it's not clear that it's functionally
differentiated from the existing root, which seems to be wholly sufficient
for the cryptographic needs of Firefox users.


RE: Logotype extensions

2019-07-12 Thread Doug Beattie via dev-security-policy
We've beaten the stuffing out of Logotype, imho.
- CAs want to add it
- Root stores don't
- The BRs permit it (probably).
- I'll report you to the DoJ,
- I'll revoke our Roots,
- bla bla bla

My personal view is that CAs should be able to include data in extensions as
long as they document how they validate it in their CPS.  I understand and
agree that using existing Subject DN attributes is dangerous, but using
custom extensions to convey data should be fine.  If you understand how to
decode it, then you understand what it is and to what extent you can trust
it based on the CA CPS, right?

Wayne, your initial proposal was this:

Due to the risk of misleading Relying Parties and the lack of defined
validation standards for information contained in this field, as discussed
here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
Subscriber certificates.

I'm guessing you have concerns beyond just logos but picked on this one
because of the thread.

I think we should move on and: 
- work on a standardized way to represent Logos along with the associated
validation of the contents.
- apply this same logic (define standard validation rules and
well-structured formatting) to other things that we need to include (or that
we are already including like LEIs).  I'm certain that there are even more
industry uses for certain (not misleading) values in the OU, and those are
excellent candidates for including in a uniform way, just like we did for
PSD2 data.  As long as we have a well defined data structure and a
definition for how the data was validated, I don't believe that we should be
concerned with how strongly the data was validated (leave that up to the
application or person consuming the data based on the stated validation
method).
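
For reference, flagging the extension in question is trivial for anyone who wants
to survey issuance. A minimal sketch (Python with the 'cryptography' package,
assuming id-pe-logotype is OID 1.3.6.1.5.5.7.1.12 per RFC 3709; the file name is
a placeholder):

# Sketch only: report whether a certificate carries the logotype extension.
from cryptography import x509

ID_PE_LOGOTYPE = x509.ObjectIdentifier("1.3.6.1.5.5.7.1.12")  # assumed RFC 3709 OID

with open("subscriber-cert.pem", "rb") as f:   # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

print("logotype extension present:",
      any(ext.oid == ID_PE_LOGOTYPE for ext in cert.extensions))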

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Phillip Hallam-Baker via dev-security-policy
Sent: Thursday, July 11, 2019 11:53 PM
To: Wayne Thayer 
Cc: mozilla-dev-security-policy
; hous...@vigilsec.com

Subject: Re: Logotype extensions

On Thu, Jul 11, 2019 at 12:19 PM Wayne Thayer  wrote:

> On Wed, Jul 10, 2019 at 7:26 PM Phillip Hallam-Baker < 
> ph...@hallambaker.com> wrote:
>
>> Because then the Mozilla ban will be used to prevent any work on 
>> logotypes in CABForum and the lack of CABForum rules will be used as 
>> pretext for not removing the ban.
>>
>> I have been doing standards for 30 years. You know this is exactly 
>> how that game always plays out.
>>
>
> Citation please? The last two examples I can recall of a Browser 
> clarifying or overriding CAB Forum policy are:
> 1. banning organizationIdentifier - resulting in ballot SC17 [1] , 
> which properly defines the requirements for using this Subject attribute.
> 2. banning domain validation method #10 - resulting in the ACME TLS 
> ALPN challenge [2], which is nearly through the standards process.
>
> In both examples, it appears that Browser policy encouraged the 
> development of standards.
>

It is what happened when I proposed logotypes ten years ago.



> If you don't want to use the extension, that is fine. But if you 
> attempt
>> to prohibit anything, ruin it by your lawyers first and ask them how 
>> it is not an a restriction on trade.
>>
>> It is one thing for CABForum to make that requirement, quite another 
>> for Mozilla to use its considerable market power to prevent other 
>> browser providers making use of LogoTypes.
>>
>
> If this proposal applied to any certificate issued by a CA, I might 
> agree, but it doesn't. CAs are free to do whatever they want with 
> hierarchies that aren't trusted by Mozilla. It's not clear to me how a 
> CA would get a profile including a Logotype through a BR audit, but 
> that's beside the point.
>

Since Mozilla uses the same hierarchies that are used by all the other
browsers and are the only hierarchies available, I see a clear restraint of
trade issue.

It is one thing for Mozilla to decide not to use certain data in the
certificate, quite another to prohibit CAs from providing that data to other
parties.

The domain validation case is entirely different because the question there
is how data Mozilla intends to rely on is validated.


A better way to state the requirement is that CAs should only issue
 logotypes after CABForum has agreed validation criteria. But I 
 think that would be a mistake at this point because we probably 
 want to have experience of running the issue process before we 
 actually try to standardize it.


>>> I would be amenable to adding language that permits the use of the 
>>> Logotype extension after the CAB Forum has adopted rules governing its
use.
>>> I don't see that as a material change to my proposal because, either 
>>> way, we have the option to change Mozilla's position based on our 
>>> assessment of the rules established by the CAB Forum, as documented 
>>> in policy section 2.3 "Baseline Requirements Conformance".
>>>
>>> I do not believe that changing the 

RE: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-05-24 Thread Doug Beattie via dev-security-policy
Wayne recommended that we open up a Mozilla incident ticket to track the 8
GlobalSign certificates that do not contain the required NULL parameter
and thus violate the requirements of
https://tools.ietf.org/html/rfc3279#section-2.3.1. 

https://bugzilla.mozilla.org/show_bug.cgi?id=1554259

Hopefully the other CAs will also open up tickets and provide their analysis
of how this happened so we can all learn how to avoid problems like this in
the future. 

Initial analysis is that we accept the CSR and pass along the parameter (or
not) and that this specific field is not flagged during the validation
process, nor "fixed" by EJBCA when the certificate is issued.  We're
currently looking at our options for solving this.
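
For anyone who wants to check their own corpus, here is a rough sketch of one way
to test for the missing NULL (not our production tooling; Python with the
'cryptography' package, the file name is a placeholder). It searches the raw DER
rather than re-encoding the key, since re-encoding would mask the problem:

# Sketch only: check whether the rsaEncryption AlgorithmIdentifier in the
# SPKI carries the explicit NULL parameter required by RFC 3279 s2.3.1.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

# DER for OID 1.2.840.113549.1.1.1 (rsaEncryption), then a DER NULL.
RSA_OID_DER = bytes.fromhex("06092a864886f70d010101")
NULL_DER = bytes.fromhex("0500")

with open("cert.pem", "rb") as f:   # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

der = cert.public_bytes(serialization.Encoding.DER)
idx = der.find(RSA_OID_DER)   # heuristic: rsaEncryption only appears in the SPKI
if idx == -1:
    print("not an RSA subject key")
elif der[idx + len(RSA_OID_DER): idx + len(RSA_OID_DER) + 2] == NULL_DER:
    print("SPKI AlgorithmIdentifier parameters: NULL present (compliant)")
else:
    print("SPKI AlgorithmIdentifier parameters: NULL missing (flag)")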

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Friday, May 24, 2019 4:39 AM
To: Brian Smith 
Cc: Ryan Sleevi ; mozilla-dev-security-policy
; Wayne Thayer

Subject: Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash
Requirements

On Wed, May 22, 2019 at 7:43 PM Brian Smith  wrote:

> Ryan Sleevi  wrote:
>
>>
>>
>>> It would be easier to understand if this is true if the proposed 
>>> text cited the RFCs, like RFC 4055, that actually impose the 
>>> requirements that result in the given encodings.
>>>
>>
>> Could you clarify, do you just mean adding references to each of the 
>> example encodings (such as the above example, for the SPKI encoding)?
>>
>
> Exactly. That way, it is clear that the given encodings are not 
> imposing a new requirement, and it would be clear which standard is 
> being used to determine to correct encoding.
>

Thanks, did that in
https://github.com/sleevi/pkipolicy/commit/80da8acded63618a058d26c73db1e2438
a6df9ed


>
> I realize that determining the encoding from each of these cited specs 
> would require understanding more specifications, including in 
> particular how ASN.1 DER requires DEFAULT values to be encoded. I 
> would advise against calling out all of these details individually 
> less people get confused by inevitable omissions.
>

Hopefully struck the right balance. These changes are now reflected in the
PR at https://github.com/mozilla/pkipolicy/pull/183


RE: GlobalSign misissuance: 4 certificates with invalid CN

2019-05-23 Thread Doug Beattie via dev-security-policy
Hi Nick,

I updated our Mozilla ticket this this info and I wanted to also supply it
here because it answers your questions also
   https://bugzilla.mozilla.org/show_bug.cgi?id=1552586 

Here is an update to this incident:

5/20: After further analysis of the issue, it was determined that the cause
was not the V1 API in general, but that there was a missing check for CN/SAN
validation which was being skipped in a certain scenario.  Specifically,
when the "AEG" product code was being used, this check was skipped.
Typically the AEG product code is used for non-public SSL certificates, and
we found that the conditional CN/SAN check for the publicly trusted thread was
not being executed.

5/21: We rolled out updated code that now properly checks the CN and SAN
values for the AEG product code.  We also rolled back the V1 support to
permit continued use of that API.  While it's not being used for certificate
issuance, it was  being used for some other functions that impacted customer
operations for the prior few days.

We reviewed all certificates issued via this product code and found that
these were the only 4 that didn't comply.

Others have asked if we had skipped any other checks, like CAA, when
following this AEG product thread.  Over the past few days we've reviewed
the code and threads and have determined that no other required checks or
validations were skipped.  Organization and Domain validation is done via
our Enterprise model and these certificate requests all were subject to
those constraints.

We're continuing to inspect the AEG thread to double and triple check that
no other required validation steps were missed and will report back if we
find anything new to report, but at this point I believe that we can close
this incident.
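
As a sketch of the kind of conditional CN/SAN check that was skipped (not our
production code; Python with the 'cryptography' package, and the FQDN pattern is
deliberately simplified):

# Sketch only: for a publicly trusted server certificate, the CN (if
# present) should be a syntactically valid FQDN or IP address and must
# also appear in the SAN.
import ipaddress
import re
from cryptography import x509
from cryptography.x509.oid import NameOID

FQDN_RE = re.compile(
    r"^(\*\.)?([a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?\.)+[a-z]{2,63}$", re.I)

def cn_san_problems(cert):
    problems = []
    cns = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value
        san_values = set(san.get_values_for_type(x509.DNSName))
        san_values |= {str(ip) for ip in san.get_values_for_type(x509.IPAddress)}
    except x509.ExtensionNotFound:
        problems.append("SAN extension missing")
        san_values = set()
    for cn in cns:
        valid = bool(FQDN_RE.match(cn))
        if not valid:
            try:
                ipaddress.ip_address(cn)
                valid = True
            except ValueError:
                pass
        if not valid:
            problems.append(f"CN {cn!r} is not a valid FQDN or IP address")
        if cn not in san_values:
            problems.append(f"CN {cn!r} not present in SAN")
    return problems

cert = x509.load_pem_x509_certificate(open("issued-cert.pem", "rb").read())
print(cn_san_problems(cert) or "no CN/SAN problems found")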

Doug

-Original Message-
From: Nick Lamb  
Sent: Saturday, May 18, 2019 3:02 AM
To: dev-security-policy@lists.mozilla.org
Cc: Doug Beattie 
Subject: Re: GlobalSign misissuance: 4 certificates with invalid CN

On Fri, 17 May 2019 21:11:41 +
Doug Beattie via dev-security-policy
 wrote:

> Today our post issuance checker notified us that 4 certificates were 
> issued with invalid CN values this afternoon.
> 
>  
> 
> We posted our incident report here:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1552586

Thanks Doug,

I have two questions that seem relevant to this incident, because it is
reminiscent of problems we had with the sprawl of issuance systems under
Symantec

1. I have examined one of the certificates and I see it contains a bogus SAN
dnsName matching the CN. Please let us know which constraints that should be
in place weren't in place for this API, for example could the customer have
successfully obtained a certificate for a FQDN which has CAA policy saying
GlobalSign should not issue ?


2. The API is described as "deprecated" but I'd like more details to
understand what that means from a practical standpoint. A subscriber was
able (and by the sound of things continues to be able) to cause issuance
through this API - was there already a specific date after which GlobalSign
had announced (to such customers) that the API would cease availability? Is
an equivalent, but so far as you understand compliant, replacement API for
these customers already available ? How should a GlobalSign customer have
known this API (or software using it) was deprecated and when they needed to
stop using it?


"In coordination with the customer, we are assured that no more
non-compliant certificates will be issued" certainly reads to me like you
know this API could issue more non-compliant certs right now, but you're
content to let a subscriber pinky swear not to do so. I don't think that's
what Mozilla has in mind with the phrase "a pledge to the community" but
perhaps Wayne disagrees.


Nick.




GlobalSign misissuance: 4 certificates with invalid CN

2019-05-17 Thread Doug Beattie via dev-security-policy
Today our post issuance checker notified us that 4 certificates were issued
with invalid CN values this afternoon.

 

We posted our incident report here:
https://bugzilla.mozilla.org/show_bug.cgi?id=1552586

 

In summary, 4 certificates were issued from an API that had been deprecated,
but not functionally disabled.  All customers were migrated from this API
but the API was not disabled.  One of our custom on-premise applications was
misconfigured to use this old API.

 

The CN of the certificates is: "madmin's macboo.int.mlsel.com"  They were
immediately revoked.

 

Additional detail and ongoing status will be posted in the Mozilla incident.

 





RE: AT&T SSL certificates without the AIA extension

2019-04-30 Thread Doug Beattie via dev-security-policy
Hi Nick,

That's a good idea if we were going to continue with supporting customers
like this; however, we're in the final stages of terminating all customers
running on-premise SSL CAs.  Given the timing, setting up private  CT logs
wouldn't help because that would undoubtedly take longer than our
termination date in about 4 months.

Doug

-Original Message-
From: Nick Lamb  
Sent: Tuesday, April 30, 2019 3:51 AM
To: dev-security-policy@lists.mozilla.org
Cc: Doug Beattie 
Subject: Re: AT SSL certificates without the AIA extension

On Mon, 29 Apr 2019 12:41:07 +
Doug Beattie via dev-security-policy
 wrote:
> It should be noted that these certificates are not posted to CT logs 
> nor are they accessed via browsers as they are used within closed 
> networks, but we'll get more details on their exact usage shortly.

Hi Doug,

Thanks for reporting this problem, I appreciate that this subCA doesn't see
a proportionate reward to logging these certs in the existing well known
public logs and so it makes sense that they wouldn't write to them.

I'm also glad to hear that a 100% sample policy was in place with, it sounds
like, a monthly audit period; given the volumes involved (from what I can
see publicly in e.g. Censys), that seems like a good idea.

Still, in terms of your audit oversight role it could make sense, as
software is replaced/ upgraded, to switch to private CT logging as a
substitute for a human role of uploading certs for audit.

From your description it sounds as though GlobalSign reasonably trusts that
the assigned AT&T employee will provide them with an accurate set of certs,
the thing we're protecting against here is accident or mistake, not a
malevolent subCA operator which would be very hard to detect this way.
Unfortunately this employee (and perhaps one or more
deputies) were on leave. If that assessment is correct then software which
uses RFC6962 methods to write certs on issuance to a log operated by
GlobalSign would satisfy this requirement automatically without a human
action.

With the log not publicly trusted it could operate a much relaxed policy
(e.g. MMD 7 days or even not defined, not publicly accessible) but it would
avoid this dependency on a specific person at AT&T doing a manual step
periodically in order for GlobalSign to have sight of issued certificates.

With the relative popularity of RFC6962 logging, this becomes an
off-the-shelf hook that can be used to support audit roles easily without
either manual steps to export the certificates or special modifications to
the issuance software. You mentioned EJBCA specifically in this post, and so
I verified that as expected EJBCA does provide a means for CA operators to
configure a log without also then embedding SCTs in certificates (which
might not be desirable for AT&T's application)

Nick.




AT&T SSL certificates without the AIA extension

2019-04-29 Thread Doug Beattie via dev-security-policy
 

In the course of normal communications with AT&T, we came across an SSL
certificate that did not have the required AIA extension in it on Friday
April 16th. We had a conference call shortly thereafter and they verified
that one of their current EJBCA certificate profiles is missing this
extension.

They think that the certificate profile was not maintained when they
performed a recent EJBCA upgrade. They believe the upgrade was done in March
and that most of the certificates that were replaced due to the 63 bit
serial number incident have been replaced with certificates that do not
contain the AIA extension.

GlobalSign would have detected this during our 100% audit of their
March certificates; however, due to AT&T staff vacation schedules, the March
upload of issued certificates was delayed.
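
For completeness, the check itself is simple; here is a minimal sketch of how one
can flag a certificate that is missing the extension (not our audit tooling;
Python with the 'cryptography' package, and the file name is a placeholder):

# Sketch only: confirm the AIA extension is present and carries OCSP and
# caIssuers entries.
from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

with open("subscriber-cert.pem", "rb") as f:   # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

try:
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
except x509.ExtensionNotFound:
    print("FLAG: AIA extension missing entirely")
else:
    methods = {d.access_method for d in aia}
    if AuthorityInformationAccessOID.OCSP not in methods:
        print("FLAG: AIA present but no OCSP URL")
    if AuthorityInformationAccessOID.CA_ISSUERS not in methods:
        print("FLAG: AIA present but no caIssuers URL")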

We're working with them to obtain the timeline for the change, the dates
during which they misissued certificates, the list of affected certificates,
and the replacement and revocation schedule.

It should be noted that these certificates are not posted to CT logs nor are
they accessed via browsers as they are used within closed networks, but
we'll get more details on their exact usage shortly.

I've created this bug to track this issue:

https://bugzilla.mozilla.org/show_bug.cgi?id=1547691

 





RE: Organization Identifier field in the Extended Validation certificates accordinf to the EVG ver. 1.6.9

2019-04-18 Thread Doug Beattie via dev-security-policy
Hi Sandor,

You can follow the ballot status in the Server Certificate Working Group
mail archives here:
https://cabforum.org/pipermail/servercert-wg/
and specifically in this thread:
https://cabforum.org/pipermail/servercert-wg/2019-April/000723.html 

Voting will start at least a week after the final proposal is reviewed and
no comments are made to change it.

Doug


-Original Message-
From: dev-security-policy  On
Behalf Of Sándor dr. Szoke via dev-security-policy
Sent: Thursday, April 18, 2019 5:11 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Organization Identifier field in the Extended Validation
certificates accordinf to the EVG ver. 1.6.9

Thank you for the valuable information.


I try to summarize the possibilities to issue PSD2 QWAC certificates.

- If a CA issues a PSD2 QWAC certificate now, it SHALL NOT include the CABF EV
CP OID in it; instead, the certificate should contain the CABF OV CP OID value.
- If the CA issues a PSD2 QWAC certificate with the CABF OV CP OID, the issuing CA
cannot be EV enabled by the browsers, and it will never be EV enabled
because it has already issued a non-EVG-compliant certificate (is that
correct?).
- If Ballot SC17 is accepted, it will be possible to issue a PSD2 QWAC
certificate with the CABF EV CP OID in it, so the issuing CA can be EV enabled
AND EU Qualified at the same time.

As a consequence, 
- if a CA issues PSD2 certificates now, it shall set up new intermediate CAs
for the issuance of EV certificates, which shall be audited and for which
EV-enabled status shall be requested

It seems to me that the best option would be to wait for the result of the Ballot
SC17 voting and not issue PSD2 certificates now.

Do you have any information about the planned date/schedule of the voting?



RE: Organization Identifier field in the Extended Validation certificates accordinf to the EVG ver. 1.6.9

2019-04-17 Thread Doug Beattie via dev-security-policy

The ETSI requirements for QWAC are complicated and not all that clear to me, 
but is it possible to use an OV certificate and policy OIDs as the base instead of 
EV?  Since OV permits additional Subject attributes, that approach would 
not be noncompliant.

Certainly issuing a QWAC needs to have vetting done in alignment with the EVGL, 
but by virtue of including the QualifiedStatement, you've asserted that, even 
if the certificate Policy OID claims only OV (OV being a subset of EV, so it’s not 
a lie to say it’s OV validated).
- CertificatePolicy: CA can specify OV and also include this Policy OID: 
0.4.0.194112.1.4
- qualifiedStatement: qcs-QcCompliance is specified

Is that contradictory? If not, then I'm probably just missing the statement 
that a QWAC MUST be an EV certificate with EV Policy OIDs.

Doug

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Wednesday, April 17, 2019 12:52 PM
To: Sándor dr. Szőke 
Cc: mozilla-dev-security-policy 
Subject: Re: Organization Identifier field in the Extended Validation 
certificates accordinf to the EVG ver. 1.6.9

On Wed, Apr 17, 2019 at 11:20 AM Sándor dr. Szőke via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Extended Validation (EV) certificates and EU Qualified certificates 
> for website authentication (QWAC).
>
>
> European Union introduced the QWAC certificates in the eIDAS 
> Regulation in 2014.
>
> Technically the QWAC requirements are based on the CABF EVG and 
> intended to be fully upward compatible with the EV certificates, but 
> ETSI has set up some further requirements, like the mandatory usage of the QC 
> statements.
>
> ETSI TS 119 495 is a further specialization of the QWAC certificates 
> dedicated for payment services according to the EU PSD2 Directive.
> The PSD2 certificates need to contain, among others, the Organization 
> Identifier [(OrgId) – OID: 2.5.4.97] attribute in the Subject DN field, 
> which contains PSD2-specific data about the Organization.
>
> Until yesterday the usage of this field was not forbidden in EV 
> certificates, although as far as I know there has been discussion about this 
> topic due to differing interpretations of the EVG requirements.
> As far as I know there is an ongoing discussion in the CABF about the 
> inclusion of the OrgId field in the explicitly allowed fields in the 
> Subject DN of EV certificates.
>
> This morning I got an email from the CABF mailing list with the new 
> version of the BR ver. 1.6.5 and the EVG ver. 1.6.9.  The new version 
> of the BR has already been published on the CABF web site but the new 
> EVG version hasn't been published yet.
>
> I would like to ask the current status of this new EVG ver 1.6.9.
>
> It is very important for us to have correct information because our CA 
> has begun to issue PSD2 certificates to financial institutions which 
> are also intended to fulfil the EVG requirements.
> The new version of the EVG definitely states that only the listed 
> fields may be used in the Subject DN and the list doesn't contain the OrgId 
> field.
>
> We plan to fulfil both the QWAC and the EVG requirements 
> simultaneously, but after this change in the EVG requirements it 
> seems to be impossible in the case of PSD2 QWAC certificates.
> Separating the EV and the QWAC certificates wouldn't be good 
> for the Customers and it would raise several issues.
>
> Do you have any idea how to solve this issue?
>
> Will the new version of the EVG ver 1.6.9 be published soon?
>
> Isn't it possible to wait for the result of the ballot 
> regarding the inclusion of the OrgId field?
>

(Writing in a Google capacity)

At present, ETSI TS 119 495 is specified incompatibly with the requirements of 
the EV Guidelines. The latest version of that TS [1] acknowledges, in Section 
5.3, that it is fundamentally incompatible with the EV Guidelines by stating 
that the ETSI TS supersedes the requirements of the EVGs.

Unfortunately, this means that a TSP cannot issue a PSD2 certificate from a 
publicly trusted certificate and claim compliance with the EV Guidelines, and 
as a consequence, cannot claim compliance with the relevant root store 
requirements, including Mozilla's and Google's. If a TSP issues a certificate 
using the profile in TS 119 495, they must do so from a certificate hierarchy 
not trusted by user agents - and as a result, such certificates will not be 
trusted by browsers.

ETSI and the Browsers have been discussing this for over a year, and the 
browsers offered, within the CA/Browser Forum, a number of alternative 
solutions to ETSI that would allow for these two to harmoniously interoperate. 
ETSI declined to take the necessary steps to resolve the conflict while it was 
still possible. As a consequence, the CA/Browser Forum has attempted to address 
some of these issues itself - however, it still requires action by ETSI to 
harmonize their work.

The 

RE: Survey of (potentially noncompliant) Serial Number Lengths

2019-03-26 Thread Doug Beattie via dev-security-policy
Rob,

I'm sure you provided this info somewhere, but I can't figure out where the
new summary table (named serial_number_entropy_20190325) is located.  Is it
somewhere on your Google Doc, or somewhere else?

https://docs.google.com/spreadsheets/d/1K96XkOFYaCIYOdUKokwTZfPWALWmDed7znjC
Fn6lKoc/edit#gid=1093195185



-Original Message-
From: dev-security-policy  On
Behalf Of Rob Stradling via dev-security-policy
Sent: Monday, March 25, 2019 6:16 PM
To: Hector Martin 'marcan' ;
mozilla-dev-security-pol...@lists.mozilla.org
Cc: Kurt Roeckx 
Subject: Re: Survey of (potentially noncompliant) Serial Number Lengths

On 18/03/2019 21:11, Hector Martin 'marcan' wrote:
> On 19/03/2019 02.17, Rob Stradling via dev-security-policy wrote:
>> On 18/03/2019 17:05, Kurt Roeckx wrote:
>>> On Mon, Mar 18, 2019 at 03:30:37PM +, Rob Stradling via
dev-security-policy wrote:

 When a value in column E is 100%, this is pretty solid evidence of 
 noncompliance with BR 7.1.
 When the values in column E and G are both approximately 50%, this 
 suggests (but does not prove) that the CA is handling the output 
 from their CSPRNG correctly.
>>>
>>> Should F/G say >= 64, instead of > 64?
>>
>> Yes.  Fixed.  Thanks!
> 
> Perhaps it would make sense to separate out <64, ==64, >64?
> 
> 100% "64-bit" serial numbers would indicate an algorithm using 63 bits 
> of entropy and the top bit coerced to 1.

Even better than that (and many thanks to Andrew Ayer for suggesting this
idea)...

To enable folks to do more thorough statistical analysis, I've produced
another, richer summary table (named serial_number_entropy_20190325) on the
crt.sh DB where each row contains...
- the CA ID.
- a count of the total number of unique serial numbers.
- 160 counts, representing the number of times a given serial number bit is
1.  (Serial numbers of <20 octets were left-padded with 0x00 bytes).

This report covers all serial numbers in certs known to crt.sh where:
- there is an unrevoked serverAuthentication trust path to a Mozilla
built-in root.
- the notBefore date is between 2018-04-01 and 2019-02-22.

Duplicate serial numbers (i.e., precertificate/certificate pairs) are
deduplicated.
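
For anyone who wants to run the same kind of analysis on their own data, here 
is a small Python sketch (illustrative only; the serial numbers are assumed to 
be hex strings you have exported yourself) of the per-bit counting idea, 
including the tell-tale pattern of a top bit that is set in every serial:

# Count, for each of 160 bit positions, how many serial numbers have that bit
# set, after conceptually left-padding every serial to 20 octets.
def bit_counts(serials_hex):
    counts = [0] * 160
    for s in serials_hex:
        value = int(s, 16)
        for i in range(160):
            if (value >> (159 - i)) & 1:
                counts[i] += 1   # counts[0] is the most significant bit
    return counts

# A position that is 1 in ~100% of serials while its neighbours sit near 50%
# suggests a CSPRNG output whose top bit is forced to 1, i.e. "64-bit" serials
# carrying only 63 bits of entropy.
def always_set_bits(counts, total):
    return [i for i, c in enumerate(counts) if total and c == total]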

--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited


RE: Applicability of SHA-1 Policy to Timestamping CAs

2019-03-22 Thread Doug Beattie via dev-security-policy
GlobalSign concurs. 

-Original Message-
From: dev-security-policy  On
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, March 22, 2019 2:51 PM
To: mozilla-dev-security-policy

Subject: Applicability of SHA-1 Policy to Timestamping CAs

I've been asked if the section 5.1.1 restrictions on SHA-1 issuance apply to
timestamping CAs. Specifically, does Mozilla policy apply to the issuance of
a SHA-1 CA certificate asserting only the timestamping EKU and chaining to a
root in our program? Because this certificate is not in scope for our policy
as defined in section 1.1, I do not believe that this would be a violation
of the policy. And because the CA would be in control of the entire contents
of the certificate, I also do not believe that this action would create an
unacceptable risk.

I would appreciate everyone's input on this interpretation of our policy.

- Wayne


RE: Virginia Tech misissuance report for 63 bit serial numbers

2019-03-20 Thread Doug Beattie via dev-security-policy
A Mozilla incident report has been created to track this issue:

   https://bugzilla.mozilla.org/show_bug.cgi?id=1536760

 

 

Doug

 

From: Doug Beattie 
Sent: Tuesday, March 19, 2019 1:53 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Cc: Kathleen Wilson ; Wayne Thayer
; Arvid Vermote 
Subject: Virginia Tech misissuance report for 63 bit serial numbers

 

Hi Wayne,


Can you open a Mozilla ticket for one of our older customers, Virginia Tech
(VT)?

 

Thanks.

 

===

 

1. How your CA first became aware of the problem (e.g. via a problem report
submitted to your Problem Reporting Mechanism, a discussion in
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
the time and date.

We received the disclosure report [1].  Note that this is a technically
constrained CA that stopped issuing certificates in April 2018.



2. A timeline of the actions your CA took in response. A timeline is a
date-and-time-stamped sequence of all relevant events. This may include
events before the incident was reported, such as when a particular
requirement became applicable, or a document changed, or a bug was
introduced, or an audit was done.

3/19/2019: GlobalSign researched VT issuance based on [1] and found that
certificates issued prior to 1 August 2017 were impacted while certificates
issued between 8/1/2017 and 4/26/2018 have sufficient serial number entropy.
They are now obtaining certificates from other CAs so no further
non-compliant certificates will be issued.


3. Whether your CA has stopped, or has not yet stopped, issuing certificates
with the problem. A statement that you have will be considered a pledge to
the community; a statement that you have not requires an explanation.

This CA stopped issuing certificates on 4/26/2018, so the certificates in
question were all issued prior to this date.

4. A summary of the problematic certificates. For each problem: number of
certs, and the date the first and last certs with that problem were issued.

Initial reporting indicates there are 447 certificates issued between
9/30/2016 and 8/1/2017

5. The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem.

We are in the process of collecting the list of impacted certificates from
VT.


6. Explanation about how and why the mistakes were made or bugs introduced,
and how they avoided detection until now.

We will collect the information on how the mistake was made from VT in the
coming days.

7. List of steps your CA is taking to resolve the situation and ensure such
issuance will not be repeated in the future, accompanied with a timeline of
when your CA expects to accomplish these things.

This CA is no longer issuing certificates and it will be revoked as soon as
all issued certificates have expired or have been replaced.

References: [1]
https://docs.google.com/spreadsheets/d/1K96XkOFYaCIYOdUKokwTZfPWALWmDed7znjC
Fn6lKoc/edit#gid=1093195185





Virginia Tech misissuance report for 63 bit serial numbers

2019-03-19 Thread Doug Beattie via dev-security-policy
Hi Wayne,


Can you open a Mozilla ticket for one of our older customers, Virginia Tech
(VT)?

 

Thanks.

 

===

 

1. How your CA first became aware of the problem (e.g. via a problem report
submitted to your Problem Reporting Mechanism, a discussion in
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
the time and date.

We received the disclosure report [1].  Note that this is a technically
constrained CA that stopped issuing certificates in April 2018.



2. A timeline of the actions your CA took in response. A timeline is a
date-and-time-stamped sequence of all relevant events. This may include
events before the incident was reported, such as when a particular
requirement became applicable, or a document changed, or a bug was
introduced, or an audit was done.

3/19/2019: GlobalSign researched VT issuance based on [1] and found that
certificates issued prior to 1 August 2017 were impacted while certificates
issued between 8/1/2017 and 4/26/2018 have sufficient serial number entropy.
They are now obtaining certificates from other CAs so no further
non-compliant certificates will be issued.


3. Whether your CA has stopped, or has not yet stopped, issuing certificates
with the problem. A statement that you have will be considered a pledge to
the community; a statement that you have not requires an explanation.

This CA stopped issuing certificates on 4/26/2018, so the certificates in
question were all issued prior to this date.

4. A summary of the problematic certificates. For each problem: number of
certs, and the date the first and last certs with that problem were issued.

Initial reporting indicates there are 447 certificates issued between
9/30/2016 and 8/1/2017

5. The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem.



We are in the process of collecting the list of impacted certificates from
VT.


6. Explanation about how and why the mistakes were made or bugs introduced,
and how they avoided detection until now.

We will collect the information on how the mistake was made from VT in the
coming days.

7. List of steps your CA is taking to resolve the situation and ensure such
issuance will not be repeated in the future, accompanied with a timeline of
when your CA expects to accomplish these things.



This CA is no longer issuing certificates and it will be revoked as soon as
all issued certificates have expired or have been replaced.



References: [1]
https://docs.google.com/spreadsheets/d/1K96XkOFYaCIYOdUKokwTZfPWALWmDed7znjC
Fn6lKoc/edit#gid=1093195185







Pre-Incident Report - AT&T GlobalSign customer CA Serial Number Entropy

2019-03-13 Thread Doug Beattie via dev-security-policy
When the serial number issue was first disclosed we reviewed all GlobalSign
certificates issued from our systems and found no issues wrt serial number
length.  While all GlobalSign systems are compliant, one of our customers
running an on-premise CA that chains to a GlobalSign root, AT&T, uses EJBCA
and has been using the default configuration.  They have been notified to
immediately stop issuance, update their configurations, replace and then
revoke all affected certificates.

Their Intermediate CA is: https://crt.sh/?caid=10154

Under that CA they have 3 CAs, and here is the estimated number of
non-compliant active certificates:
https://crt.sh/?caid=10155 (fewer than 200 active certificates)
https://crt.sh/?caid=12658 (14 active 10-day certificates)
https://crt.sh/?caid=10157  (4 active certificates) 


 
1. How your CA first became aware of the problem (e.g. via a problem report
submitted to your Problem Reporting Mechanism, a discussion in
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
the time and date.
 
When performing self-compliance check on our Trusted Root customers based on
emails to mdsp list with similar issues.
 
2. A timeline of the actions your CA took in response. A timeline is a
date-and-time-stamped sequence of all relevant events. This may include
events before the incident was reported, such as when a particular
requirement became applicable, or a document changed, or a bug was
introduced, or an audit was done.
 
3/1/2019: GlobalSign self-assessment on certificates issued from our data
center.  All certificates are compliant as we had set sufficient serial
number lengths prior to the CABF requirement to move to 64 bits of entropy.

3/13/2019: GlobalSign initiated and completed assessment of SSL certificates
issued by our 3 remaining customers that have CAs chaining to GlobalSign
roots.  We observed that one of these customers, AT&T, uses EJBCA with the
default serial number settings.


3. Whether your CA has stopped, or has not yet stopped, issuing certificates
with the problem. A statement that you have will be considered a pledge to
the community; a statement that you have not requires an explanation.
 
We have informed AT&T to stop issuance and will confirm that this is the
case tomorrow morning.

4. A summary of the problematic certificates. For each problem: number of
certs, and the date the first and last certs with that problem were issued.

Initial reporting indicates there are fewer than 200 active certificates.
The links above can be used to identify the detailed list of certificates
and we will compile a complete list based on input from AT&T.
 
5. The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem.

We will compile a report shortly, but currently the scope is limited to the
3 CAs listed above.  Every active certificate under these CAs has a serial
number that contains fewer than 64 bits of entropy.
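
As a purely illustrative aid (not how the assessment was actually performed), a 
certificate whose serial cannot carry 64 bits of CSPRNG output can be flagged 
with a few lines of Python using the cryptography package:

from cryptography import x509

# Heuristic only: an EJBCA-style default serial (8 octets, positive) has at most
# 63 usable bits, so a bit length below 64 strongly suggests the serial cannot
# contain 64 bits of CSPRNG output.
def serial_likely_too_short(pem_bytes):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.serial_number.bit_length() < 64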
 
6. Explanation about how and why the mistakes were made or bugs introduced,
and how they avoided detection until now.

We will collect this information from AT&T in the coming days.
 
7. List of steps your CA is taking to resolve the situation and ensure such
issuance will not be repeated in the future, accompanied with a timeline of
when your CA expects to accomplish these things.
 
We are working with AT&T to correct this problem.  Our plan to revoke these
CAs and to terminate all Trusted Root SSL CAs is on track for August.





usareally.com and OFAC lists

2019-01-11 Thread Doug Beattie via dev-security-policy
A few of us have been discussing the usareally.com "issue" recently.  In
case you didn't know, the US Treasury put out a notice that US companies
must not do business with USA Really:

https://home.treasury.gov/news/press-releases/sm577

 

Let's Encrypt mapped that release to certificates they had issued to the
domain and revoked them:

https://www.mcclatchydc.com/news/policy/technology/cyber-security/article223
832790.html 

 

They came to the GlobalSign Russia organization, then to WoTrus:

https://crt.sh/?q=usareally.com

US CAs should take notice and put the proper controls in place.

 

This site never appeared on Google Safe Browsing as it's not a malware "bad
site", and it's safe to visit.  You can even issue them a certificate or do
business with them if you're not a US company.  It's likely that there are
governmental notices like this in other regions which would be useful to
share and factor into CAs' High Risk checks.

 

Does this group have any recommendations for how/where such "claims" or
announcements could be posted? Is this list off-limits for such
communication?

 

 





RE: P-521 Certificates

2019-01-10 Thread Doug Beattie via dev-security-policy
Jason - where did you see this requirement?

-Original Message-
From: dev-security-policy  On
Behalf Of Jason via dev-security-policy
Sent: Thursday, January 10, 2019 9:38 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: P-521 Certificates

I would say that the problem here would be that a child certificate can't
use a higher cryptographic strength than the issuer; this is against good
practices and, AFAIK, against the WebTrust audit criteria.
Jason


RE: SSL private key for *.alipcsec.com embedded in PC client executables

2018-12-12 Thread Doug Beattie via dev-security-policy
As a follow-up, The certificate was revoked about 2 hours ago:

   https://crt.sh/?id=300288180=ocsp



-Original Message-
From: Doug Beattie 
Sent: Tuesday, December 11, 2018 8:09 AM
To: 'dev-security-policy@lists.mozilla.org'

Cc: 'Xiaoyin Liu' ; Mark Steward

Subject: RE: SSL private key for *.alipcsec.com embedded in PC client
executables

Thank you for this report.  We've verified disclosure of the private key for
this certificate and have notified the customer that their certificate will
be revoked.  Due to the large customer impact, we've provided them 24 hours
to get new client executables prepared and ready for download by their
customers.  We'll post a message when the certificate has been revoked.

https://crt.sh/?id=300288180 


Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Xiaoyin Liu via dev-security-policy
Sent: Tuesday, December 11, 2018 6:52 AM
To: Mark Steward 
Cc: dev-security-policy@lists.mozilla.org
Subject: Re: SSL private key for *.alipcsec.com embedded in PC client
executables

Thank you for your helpful reply, Mark! Finally I found the key in memory
too.



I sent another report with the private key to Alibaba. Hopefully they will
take actions. If Alibaba doesn't reply me tomorrow, I will report to
GlobalSign.



Best,
Xiaoyin




From: Mark Steward 
Sent: Tuesday, December 11, 2018 3:24:21 PM
To: xiaoyi...@outlook.com
Cc: dev-security-policy@lists.mozilla.org
Subject: Re: SSL private key for *.alipcsec.com embedded in PC client
executables

This time it's just hanging around in memory, no need to do anything about
the anti-debug.

$ openssl x509 -noout -modulus -in 300288180.crt|md5sum
f423a009387fb7a306673b517ed4f163  -
$ openssl rsa -noout -modulus -in alibaba-localhost.key.pem|md5sum
f423a009387fb7a306673b517ed4f163  -

You can verify that I've signed lorem ipsum with the following:

$ wget https://crt.sh/?d=300288180 -O 300288180.crt
$ wget https://rack.ms/b/UsNQv74sfH40/msg.txt{,.sig-sha256.b64}
$ openssl dgst -sha256 -verify <(openssl x509 -in 300288180.crt -pubkey -noout) -signature <(base64 -d msg.txt.sig-sha256.b64) msg.txt

As the domain name suggests, this is part of the AlibabaProtect/"Alibaba PC
Safe Service" that comes bundled with the Youku client.


Mark
On Tue, Dec 11, 2018 at 5:37 AM Xiaoyin Liu via dev-security-policy
 wrote:
>
> Hello,
>
> I think I found a SSL certificate misuse issue, but I am not sure if this
is indeed a misuse, so I want to ask about it on this list.
>
> Here is the issue: After I installed Youku Windows client
(https://pd.youku.com/pc, installer:
https://pcclient.download.youku.com/youkuclient/youkuclient_setup_7.6.7.1122
0.exe), it starts a local HTTPS server, listening on localhost:6691. Output
of "openssl s_client -connect 127.0.0.1:6691" indicates that this local
server uses a valid SSL certificate, issued to "Alibaba (China) Technology
Co., Ltd." CN=*.alipcsec.com, and issued by GlobalSign. It's a publicly
trusted OV cert, and is valid until Jan 13, 2019. Later, I found that
local.alipcsec.com resolves to 127.0.0.1, and
https://local.alipcsec.com:6691/ is used for inter-process communication.
>
> It's clear that the private key for *.alipcsec.com is embedded in the
executable, but all the executables that may embed the private key are
packed by VMProtect, and the process has anti-debugging protection. I tried
plenty of methods to extract the private key, but didn't succeed. I reported
this to Alibaba SRC anyway. They replied that they ignore this issue unless
I can successfully extract the key.
>
> So is this a certificate misuse issue, even if the private key is
obfuscated? If so, do I have to extract the private key first before the CA
can revoke the cert?
>
> Thank you!
>
> Best,
> Xiaoyin Liu
>
> Here is the certificate:
> -BEGIN CERTIFICATE-
> MIIHTjCCBjagAwIBAgIMCpI/GtuuSFspBu4EMA0GCSqGSIb3DQEBCwUAMGYxCzAJ
> BgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMTwwOgYDVQQDEzNH
> bG9iYWxTaWduIE9yZ2FuaXphdGlvbiBWYWxpZGF0aW9uIENBIC0gU0hBMjU2IC0g
> RzIwHhcNMTgwMTEyMDgxMTA1WhcNMTkwMTEzMDgxMTA1WjB7MQswCQYDVQQGEwJD
> TjERMA8GA1UECBMIWmhlSmlhbmcxETAPBgNVBAcTCEhhbmdaaG91MS0wKwYDVQQK
> EyRBbGliYWJhIChDaGluYSkgVGVjaG5vbG9neSBDby4sIEx0ZC4xFzAVBgNVBAMM
> DiouYWxpcGNzZWMuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
> 9PJcPzpUNRJeA8+YF8cRZEn75q+fSsWWkm6JfIlOKorYXwYJB80de4+Bia3AgzfO
> wqwWfPGrRYh5OY4ujjsKF5XkWG22SLlzi5xB9zAeVKHYTo2U6aKrKnht9XyYvnZX
> ocIuaSxkqq4rQ9UwiEYB6lvy8RY1orYu33HtrGD5W3w9SWf2AwB0rCNp0BeSRaGB
> JEEXzgVECbL+deJZgZflae1gQ9q4PftDHuGXLNe8PLYq2D4+oKbYvbYtI9WKIMuh
> 1dL70QBbcW0y4jFr2/337H8/KhBaCb3ZBZQI4LUnYL8RVeAVJFpX/PuiHMh9uNTm
> oW1if7XQswJCWx3td5tWiwIDAQABo4ID5TCCA+EwDgYDVR0PAQH/BAQDAgWgMIGg
> BggrBgEFBQcBAQSBkzCBkDBNBggrBgEFBQcwAoZBaHR0cDovL3NlY3VyZS5nbG9i
> YWxzaWduLmNvbS9jYWNlcnQvZ3Nvcmdhbml6YXRpb252YWxzaGEyZzJyMS5jcnQw
> PwYIKwYBBQUHMAGGM2h0dHA6Ly9vY3NwMi5nbG9iYWxzaWduLmNvbS9nc29yZ2Fu
> 

RE: Maximal validity of the test TLS certificate issued by a private PKI system

2018-12-11 Thread Doug Beattie via dev-security-policy

Option 1 is the intended interpretation.  We specified 30 days because the
tokens used for domain validation (Random Number) need to have a useful life
of 30 days.  The 30-day usage period needed to be put into the definition of
the Test Certificate, or into Method 3.2.2.4.9, and we selected the validity
period as the means to convey this requirement.

Note that Method 9 has identified security issues related to shared IP
addresses, so it is currently not permitted to be used (according to Google),
even though it remains in the BRs.  It should be updated or removed.  Method
10 has similar issues, which are being mitigated with the ALPN approach, but
no such work has been done on Method 9.

Doug
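
To make the two readings discussed below concrete, here they are as boolean 
predicates in a small illustrative Python sketch (validity in days, whether 
the Test Certificate CABF OID is present, and whether the chain reaches a 
root subject to the BRs):

# Reading 1: the 30-day cap applies in all cases; (i)/(ii) select the extra condition.
def reading_1(validity_days, has_test_oid, chains_to_br_root):
    return validity_days <= 30 and (has_test_oid or not chains_to_br_root)

# Reading 2: AND binds tighter than OR, so the 30-day cap travels only with (i).
def reading_2(validity_days, has_test_oid, chains_to_br_root):
    return (validity_days <= 30 and has_test_oid) or not chains_to_br_root

# The readings differ exactly for long-lived certificates from a hierarchy with
# no path to a BR-subject root:
# reading_1(365, False, False) -> False;  reading_2(365, False, False) -> True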


-Original Message-
From: dev-security-policy  On
Behalf Of Sándor dr. Szoke via dev-security-policy
Sent: Tuesday, December 11, 2018 1:24 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Maximal validity of the test TLS certificate issued by a private
PKI system


It is not absolutely clear to us how to handle test certificates which were
issued by a CA where there is no certificate chain to a root certificate
subject to the Baseline Requirements (for example, an independent test CA
hierarchy).

The BR wording is as follows:

Test Certificate: A Certificate with a maximum validity period of 30 days
and which: (i) includes a critical extension with the specified Test
Certificate CABF OID (2.23.140.2.1), or (ii) is issued under a CA where
there are no certificate paths/chains to a root certificate subject to these
Requirements.


I can interpret the definition in two different ways:

1. by treating the listing as one parenthesized group, so

Test Certificate: 
A Certificate
{with a maximum validity period of 30 days} AND
which: 
{
(i) includes a critical extension with the specified Test Certificate CABF
OID (2.23.140.2.1), OR
(ii) is issued under a CA where there are no certificate paths/chains to a
root certificate subject to these Requirements.
}

So it means that any test certificate shall have a validity of at most 30 days,
AND we have two possibilities:
(i) if the test certificate is issued under a CA where there is a certificate
chain to a root certificate subject to the BR, then the test certificate
shall include the critical extension with the specified Test Certificate
CABF OID (2.23.140.2.1)
(ii) if the test certificate is issued under a CA where there are no
certificate chains to a root certificate subject to the BR, then there is no
further requirement.

Question:

if this interpretation is correct, then why do we have a requirement to
limit the validity period of the test certificate to 30 days, if the issuer
CA is out of the scope of the BR?



2. by thinking as an engineer, where the AND operation binds more tightly than
the OR, the grouping looks like this:

Test Certificate: 
A Certificate
{
with a maximum validity period of 30 days AND
which: 
(i) includes a critical extension with the specified Test Certificate CABF
OID (2.23.140.2.1), } OR {
(ii) is issued under a CA where there are no certificate paths/chains to a
root certificate subject to these Requirements.
}

So it means that
(i) if the test certificate is issued under a CA where there is a certificate
chain to a root certificate subject to the BR, then the test certificate
shall include the critical extension with the specified Test Certificate
CABF OID (2.23.140.2.1) AND the validity period of the test certificate is
maximum 30 days
(ii) if the test certificate is issued under a CA where there are no
certificate chains to a root certificate subject to the BR, then there is no
further requirement.

In this interpretation the wording seems to be strange, but the requirement
seems to be more realistic and clear.

Which interpretation is correct?

Is it allowed to issue test TLS certificates in an independent test system
with more than 30 days validity?




RE: SSL private key for *.alipcsec.com embedded in PC client executables

2018-12-11 Thread Doug Beattie via dev-security-policy
Thank you for this report.  We've verified disclosure of the private key for
this certificate and have notified the customer that their certificate will
be revoked.  Due to the large customer impact, we've provided them 24 hours
to get new client executables prepared and ready for download by their
customers.  We'll post a message when the certificate has been revoked.

https://crt.sh/?id=300288180 


Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Xiaoyin Liu via dev-security-policy
Sent: Tuesday, December 11, 2018 6:52 AM
To: Mark Steward 
Cc: dev-security-policy@lists.mozilla.org
Subject: Re: SSL private key for *.alipcsec.com embedded in PC client
executables

Thank you for your helpful reply, Mark! Finally I found the key in memory
too.



I sent another report with the private key to Alibaba. Hopefully they will
take actions. If Alibaba doesn't reply me tomorrow, I will report to
GlobalSign.



Best,
Xiaoyin




From: Mark Steward 
Sent: Tuesday, December 11, 2018 3:24:21 PM
To: xiaoyi...@outlook.com
Cc: dev-security-policy@lists.mozilla.org
Subject: Re: SSL private key for *.alipcsec.com embedded in PC client
executables

This time it's just hanging around in memory, no need to do anything about
the anti-debug.

$ openssl x509 -noout -modulus -in 300288180.crt|md5sum
f423a009387fb7a306673b517ed4f163  -
$ openssl rsa -noout -modulus -in alibaba-localhost.key.pem|md5sum
f423a009387fb7a306673b517ed4f163  -

You can verify that I've signed lorem ipsum with the following:

$ wget https://crt.sh/?d=300288180 -O 300288180.crt
$ wget https://rack.ms/b/UsNQv74sfH40/msg.txt{,.sig-sha256.b64}
$ openssl dgst -sha256 -verify <(openssl x509 -in 300288180.crt -pubkey -noout) -signature <(base64 -d msg.txt.sig-sha256.b64) msg.txt

As the domain name suggests, this is part of the AlibabaProtect/"Alibaba PC
Safe Service" that comes bundled with the Youku client.


Mark
On Tue, Dec 11, 2018 at 5:37 AM Xiaoyin Liu via dev-security-policy
 wrote:
>
> Hello,
>
> I think I found a SSL certificate misuse issue, but I am not sure if this
is indeed a misuse, so I want to ask about it on this list.
>
> Here is the issue: After I installed Youku Windows client
(https://pd.youku.com/pc, installer:
https://pcclient.download.youku.com/youkuclient/youkuclient_setup_7.6.7.1122
0.exe), it starts a local HTTPS server, listening on localhost:6691. Output
of "openssl s_client -connect 127.0.0.1:6691" indicates that this local
server uses a valid SSL certificate, issued to "Alibaba (China) Technology
Co., Ltd." CN=*.alipcsec.com, and issued by GlobalSign. It's a publicly
trusted OV cert, and is valid until Jan 13, 2019. Later, I found that
local.alipcsec.com resolves to 127.0.0.1, and
https://local.alipcsec.com:6691/ is used for inter-process communication.
>
> It's clear that the private key for *.alipcsec.com is embedded in the
executable, but all the executables that may embed the private key are
packed by VMProtect, and the process has anti-debugging protection. I tried
plenty of methods to extract the private key, but didn't succeed. I reported
this to Alibaba SRC anyway. They replied that they ignore this issue unless
I can successfully extract the key.
>
> So is this a certificate misuse issue, even if the private key is
obfuscated? If so, do I have to extract the private key first before the CA
can revoke the cert?
>
> Thank you!
>
> Best,
> Xiaoyin Liu
>
> Here is the certificate:
> -BEGIN CERTIFICATE-
> MIIHTjCCBjagAwIBAgIMCpI/GtuuSFspBu4EMA0GCSqGSIb3DQEBCwUAMGYxCzAJ
> BgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMTwwOgYDVQQDEzNH
> bG9iYWxTaWduIE9yZ2FuaXphdGlvbiBWYWxpZGF0aW9uIENBIC0gU0hBMjU2IC0g
> RzIwHhcNMTgwMTEyMDgxMTA1WhcNMTkwMTEzMDgxMTA1WjB7MQswCQYDVQQGEwJD
> TjERMA8GA1UECBMIWmhlSmlhbmcxETAPBgNVBAcTCEhhbmdaaG91MS0wKwYDVQQK
> EyRBbGliYWJhIChDaGluYSkgVGVjaG5vbG9neSBDby4sIEx0ZC4xFzAVBgNVBAMM
> DiouYWxpcGNzZWMuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
> 9PJcPzpUNRJeA8+YF8cRZEn75q+fSsWWkm6JfIlOKorYXwYJB80de4+Bia3AgzfO
> wqwWfPGrRYh5OY4ujjsKF5XkWG22SLlzi5xB9zAeVKHYTo2U6aKrKnht9XyYvnZX
> ocIuaSxkqq4rQ9UwiEYB6lvy8RY1orYu33HtrGD5W3w9SWf2AwB0rCNp0BeSRaGB
> JEEXzgVECbL+deJZgZflae1gQ9q4PftDHuGXLNe8PLYq2D4+oKbYvbYtI9WKIMuh
> 1dL70QBbcW0y4jFr2/337H8/KhBaCb3ZBZQI4LUnYL8RVeAVJFpX/PuiHMh9uNTm
> oW1if7XQswJCWx3td5tWiwIDAQABo4ID5TCCA+EwDgYDVR0PAQH/BAQDAgWgMIGg
> BggrBgEFBQcBAQSBkzCBkDBNBggrBgEFBQcwAoZBaHR0cDovL3NlY3VyZS5nbG9i
> YWxzaWduLmNvbS9jYWNlcnQvZ3Nvcmdhbml6YXRpb252YWxzaGEyZzJyMS5jcnQw
> PwYIKwYBBQUHMAGGM2h0dHA6Ly9vY3NwMi5nbG9iYWxzaWduLmNvbS9nc29yZ2Fu
> aXphdGlvbnZhbHNoYTJnMjBWBgNVHSAETzBNMEEGCSsGAQQBoDIBFDA0MDIGCCsG
> AQUFBwIBFiZodHRwczovL3d3dy5nbG9iYWxzaWduLmNvbS9yZXBvc2l0b3J5LzAI
> BgZngQwBAgIwCQYDVR0TBAIwADBJBgNVHR8EQjBAMD6gPKA6hjhodHRwOi8vY3Js
> Lmdsb2JhbHNpZ24uY29tL2dzL2dzb3JnYW5pemF0aW9udmFsc2hhMmcyLmNybDAn
> BgNVHREEIDAegg4qLmFsaXBjc2VjLmNvbYIMYWxpcGNzZWMuY29tMB0GA1UdJQQW
> MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQUoIFBQJomlUEiLibD+luC
> 

RE: Increasing number of Errors found in crt.sh

2018-10-01 Thread Doug Beattie via dev-security-policy
Thanks Wayne.

 

Rob, Adriano: I had no idea that crt.sh included logs that accept test roots 
or roots that aren't in some/all root programs.  I assumed these were all 
production-level roots that needed to comply with the BRs.  Thanks for that 
tidbit!

 

Alex: I’ll keep an eye on https://misissued.com  and use that as a better, more 
filtered report once it returns to life.

 

Doug

 

 

From: Wayne Thayer  
Sent: Monday, October 1, 2018 2:58 PM
To: Doug Beattie 
Cc: mozilla-dev-security-policy 
Subject: Re: Increasing number of Errors found in crt.sh

 

Doug,

 

Responding to your original question, I look at crt.sh and other data sources 
for certificate errors when reviewing inclusion requests or doing other sorts 
of investigations. I am not currently reviewing the crt.sh report for 
misissuance on a regular basis, but maybe I should.

 

I went through the current list and identified the following problems affecting 
certificates trusted by Mozilla:

* KIR S.A.: Multiple issues - 
https://bugzilla.mozilla.org/show_bug.cgi?id=1495497

* Government of Spain FNMT: OU exceeds 64 characters - 
https://bugzilla.mozilla.org/show_bug.cgi?id=1495507

* Assecco DS (Certum): Unallowed key usage for EC public key - 
https://bugzilla.mozilla.org/show_bug.cgi?id=1495518

* Certinomis: issued & revoked a precertificate containing a SAN of 'www', 
didn't report it - https://bugzilla.mozilla.org/show_bug.cgi?id=1495524

 

- Wayne

 

On Mon, Oct 1, 2018 at 8:51 AM Rob Stradling via dev-security-policy 
mailto:dev-security-policy@lists.mozilla.org> > wrote:

Hi Iñigo.

I suspect it's because my script that produces the 1 week summary data 
[1] isn't using a consistent view of the underlying linting results 
throughout its processing.  Hopefully this [2] will fix it.

100% errors from that Comodo issuing CA is because it's issuing SHA-1 
certs that chain to a no-longer-publicly-trusted root.


[1] 
https://github.com/crtsh/certwatch_db/blob/master/lint_update_1week_stats.sql

[2] 
https://github.com/crtsh/certwatch_db/commit/8ce0c96c9c50bfb51db33c6f44c9c1d1a9f5a96c

On 01/10/2018 15:35, Inigo Barreira wrote:
> And checking this site, how can Comodo have more certs with errors (15030) 
> than certs issued (15020).
> 
> Regards
> 
> From: dev-security-policy  <mailto:dev-security-policy-boun...@lists.mozilla.org> > on behalf of Adriano 
> Santoni via dev-security-policy  <mailto:dev-security-policy@lists.mozilla.org> >
> Sent: Monday, October 01, 2018 10:09 PM
> To: Rob Stradling; Doug Beattie
> Cc: mozilla-dev-security-policy
> Subject: Re: Increasing number of Errors found in crt.sh
> 
> I also agree.
> 
> As I said before, that's a non-trusted certificate. It was issued by a
> test CA that does /not/ chain to a public root.
> 
> 
> Il 01/10/2018 16:04, Rob Stradling ha scritto:
>> On 01/10/2018 15:02, Doug Beattie via dev-security-policy wrote:
>>> Hi Adriano,
>>>
>>> First, I didn't mean to call you out specifically, but you happened
>>> to be
>>> first alphabetically, sorry.  I find this link very helpful to list
>>> all CAs
>>> with errors or warnings: https://crt.sh/?cablint=1+week
>>>
>>> Second, How do you define a "test CA"?  I thought that any CA that
>>> chains to
>>> a public root was by definition not a test CA,
>>
>> I agree with that.
>>
>>> and since the issued cert was
>>> in CT logs, I assumed that your root was publicly trusted. Maybe I'm
>>> mistaken on one of these points
>>
>> Actually, some non-publicly-trusted roots are accepted by some of the
>> logs that crt.sh monitors.
>>
>>> Doug
>>>
>>> -Original Message-
>>> From: dev-security-policy
>>> >> <mailto:dev-security-policy-boun...@lists.mozilla.org> > On
>>> Behalf Of Adriano Santoni via dev-security-policy
>>> Sent: Monday, October 1, 2018 9:49 AM
>>> To: dev-security-policy@lists.mozilla.org 
>>> <mailto:dev-security-policy@lists.mozilla.org> 
>>> Subject: Re: Increasing number of Errors found in crt.sh
>>>
>>> Thank you Rob!
>>>
>>> If I am not mistaken, it seems to me that we have just 1 certificate
>>> in that
>>> list, and it's a non-trusted certificate (it was issued by a test CA).
>>>
>>>
>>> Il 01/10/2018 15:43, Rob Stradling via dev-security-policy ha scritto:
>>>> On 01/10/2018 14:38, Adriano Santoni via dev-security-policy wrote:
>>>>> Is it possible to filter the list https://crt.sh/?cablint=issues
>>>>> based on the issuing CA ?
>>>>
>>>> Yes.

RE: Increasing number of Errors found in crt.sh

2018-10-01 Thread Doug Beattie via dev-security-policy
Hi Adriano,

First, I didn't mean to call you out specifically, but you happened to be
first alphabetically, sorry.  I find this link very helpful to list all CAs
with errors or warnings: https://crt.sh/?cablint=1+week 

Second, how do you define a "test CA"?  I thought that any CA that chains to
a public root was by definition not a test CA, and since the issued cert was
in CT logs, I assumed that your root was publicly trusted.  Maybe I'm
mistaken on one of these points.

Doug

-Original Message-
From: dev-security-policy  On
Behalf Of Adriano Santoni via dev-security-policy
Sent: Monday, October 1, 2018 9:49 AM
To: dev-security-policy@lists.mozilla.org
Subject: Re: Increasing number of Errors found in crt.sh

Thank you Rob!

If I am not mistaken, it seems to me that we have just 1 certificate in that
list, and it's a non-trusted certificate (it was issued by a test CA).


Il 01/10/2018 15:43, Rob Stradling via dev-security-policy ha scritto:
> On 01/10/2018 14:38, Adriano Santoni via dev-security-policy wrote:
>> Is it possible to filter the list https://crt.sh/?cablint=issues 
>> based on the issuing CA ?
>
> Yes.
>
> First, visit this page:
> https://crt.sh/?cablint=1+week
>
> Next, click on the link in the "Issuer CN, OU or O" column that 
> corresponds to the issuing CA you're interested in.
>
>> Il 01/10/2018 15:26, Doug Beattie via dev-security-policy ha scritto:
>>> Hi Wayne and all,
>>>
>>>
>>> I've been noticing an increasing number of CA errors,
>>> https://crt.sh/?cablint=issues  Is anyone monitoring this list and 
>>> asking
>>> for misissuance reports for those that are not compliant? There are 15
>>> different errors and around 300 individual errors (excluding the SHA-1
>>> "false" errors).  Some CAs are issuing certs to CNs of localhost, are
>>> including RFC822 SANs, not including OCSP links and many more.
>>>
>>> -  Actalis,
>>>
>>> -  Digicert,
>>>
>>> -  Microsoft,
>>>
>>> -
>>>
>>>
>>> There are also some warning checks that should actually be errors like
>>> underscores in CNs or SANs.
>>>
>>>
>>> Doug
>




Increasing number of Errors found in crt.sh

2018-10-01 Thread Doug Beattie via dev-security-policy
Hi Wayne and all,

 

I've been noticing an increasing number of CA errors:
https://crt.sh/?cablint=issues
Is anyone monitoring this list and asking for misissuance reports for those
that are not compliant?  There are 15 distinct error types and around 300
individual errors (excluding the SHA-1 "false" errors).  Some CAs are issuing
certs with CNs of localhost, some are including RFC822 SANs, some are omitting
OCSP links, and more.

-  Actalis,

-  Digicert,

-  Microsoft,

-   

 

There are also some warning checks that should actually be errors, such as
underscores in CNs or SANs.

 

Doug





RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-15 Thread Doug Beattie via dev-security-policy
Wayne,

This is going to require 19 randomly generated Base64 characters (each Base64 
character carries 6 bits, and 112 / 6 ≈ 18.7, so 19 after rounding up), and 
that does not include removing commonly confused characters, which will drive 
the length up a bit more; but if this is what the Mozilla risk assessment came 
up with, then we'll all have to comply.  I hope there is a sufficiently long 
time for CAs to change their processes and APIs and to roll out updated 
training and documentation to their customers (for this unplanned change).
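
For what it's worth, a minimal sketch (Python, illustrative only) of generating 
such a password from a CSPRNG:

import base64
import secrets

# 14 random bytes = 112 bits of CSPRNG entropy; Base64-encoding them yields 20
# characters including one '=' pad, i.e. 19 characters once padding is stripped.
def generate_p12_password(bits=112):
    raw = secrets.token_bytes((bits + 7) // 8)
    return base64.b64encode(raw).decode("ascii").rstrip("=")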

Did you consider any changes based on Jakob’s comments?  If the PKCS#12 is 
distributed via secure channels, how strong does the password need to be?

Doug



From: Wayne Thayer [mailto:wtha...@mozilla.com]
Sent: Monday, May 14, 2018 4:54 PM
To: Doug Beattie <doug.beat...@globalsign.com>; mozilla-dev-security-policy 
<mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key 
generation to policy)

On Mon, May 14, 2018 at 11:50 AM Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org<mailto:dev-security-policy@lists.mozilla.org>>
 wrote:

I hope some other CAs weigh in on this: Robin, Bruce, Jeremy, Daymion, Dean???
- We can’t permit user generated passwords (at least that is Tim's proposal, 
Wayne may not agree yet but he will when he reads this email)
- We can’t distribute both the password and PKCS#12 over the same channel, even 
if it's a secure channel like HTTPS

We have 2 choices for where the password is generated: CA or User
>
Or the user could generate the key :-)
>
1) If we require CAs to generate the passwords and they can’t distribute the 
necessary information to the end user via the portal over TLS (because of the 
dual channel requirement), then that is a relatively large impact on us, and 
probably anyone else that supports PKCS#12 file formats.  If the channel is 
secure, do you need to use different channels?


2) Trying to compute the entropy of a user generated password is nearly 
impossible.  According to NIST Special Publication 800-63, a good 20 character 
password will have just 48 bits of entropy, and characters after that only add 
1 bit of entropy each.  Users stink at generating entropy (right Tim?)

NIST Special Publication 800-63 of June 2004 (revision 2) suggested the 
following scheme to roughly estimate the entropy of human-generated passwords 
(Subsequent updates of this publication gave up trying to compute entropy for 
user generated passwords, and when they talk about entropy they talk about 20 
bits max):
•   The entropy of the first character is four bits;
•   The entropy of the next seven characters are two bits per character;
•   The ninth through the twentieth character has 1.5 bits of entropy per 
character;
•   Characters 21 and above have one bit of entropy per character.
•   A "bonus" of six bits is added if both upper case letters and 
non-alphabetic characters are used.
•   A "bonus" of six bits is added for passwords of length 1 through 19 
characters following an extensive dictionary check to ensure the password is 
not contained within a large dictionary. Passwords of 20 characters or more do 
not receive this bonus because it is assumed they are pass-phrases consisting 
of multiple dictionary words.

https://pages.nist.gov/800-63-3/

Some CAs are probably asking the user for a password during the request, so 
there is no need to distribute it later.  But if the Applicant provides the 
password over HTTPS, and later the CA provides the PKCS#12 via a download link 
that is also retrieved over HTTPS, is that a single channel that both were 
distributed over?

I still object to not being able to use HTTPS for collection and/or 
distribution of the Password and the PKCS#12.  I also believe 112 bits of 
entropy is way too much for a user generated password (assuming we want to 
continue supporting that option).
Perhaps the following language is a workable solution to the first objection?

PKCS#12 files must employ an encryption key and algorithm that is sufficiently 
strong to protect the key pair for its useful life based on current guidelines 
published by a recognized standards body. PKCS#12 files MUST be encrypted and 
signed; or, MUST have a password that exhibits at least 112 bits of entropy, 
and the password MUST be transmitted via a secure channel.

I really don't see a benefit to user generation of these passwords - either 
they are weak and memorable, or sufficiently complicated that there's little 
value in being able to choose them.

Doug



RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-14 Thread Doug Beattie via dev-security-policy

I hope some other CAs weigh in on this: Robin, Bruce, Jeremy, Daymion, Dean???
- We can’t permit user generated passwords (at least that is Tim's proposal, 
Wayne may not agree yet but he will when he reads this email)
- We can’t distribute both the password and PKCS#12 over the same channel, even 
if it's a secure channel like HTTPS

We have 2 choices for where the password is generated: CA or User

1) If we require CAs to generate the passwords and they can’t distribute the 
necessary information to the end user via the portal over TLS (because of the 
dual channel requirement), then that is a relatively large impact on us, and 
probably anyone else that supports PKCS#12 file formats.  If the channel is 
secure, do you need to use different channels? 


2) Trying to compute the entropy of a user generated password is nearly 
impossible.  According to NIST Special Publication 800-63, a good 20 character 
password will have just 48 bits of entropy, and characters after that only add 
1 bit of entropy each.  Users stink at generating entropy (right Tim?)

NIST Special Publication 800-63 of June 2004 (revision 2) suggested the 
following scheme to roughly estimate the entropy of human-generated passwords 
(Subsequent updates of this publication gave up trying to compute entropy for 
user generated passwords, and when they talk about entropy they talk about 20 
bits max):
•   The entropy of the first character is four bits;
•   The entropy of the next seven characters are two bits per character;
•   The ninth through the twentieth character has 1.5 bits of entropy per 
character;
•   Characters 21 and above have one bit of entropy per character.
•   A "bonus" of six bits is added if both upper case letters and 
non-alphabetic characters are used.
•   A "bonus" of six bits is added for passwords of length 1 through 19 
characters following an extensive dictionary check to ensure the password is 
not contained within a large dictionary. Passwords of 20 characters or more do 
not receive this bonus because it is assumed they are pass-phrases consisting 
of multiple dictionary words.

https://pages.nist.gov/800-63-3/ 
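
For reference, the 2004 heuristic above is simple enough to sketch in a few 
lines of Python (illustrative only; the dictionary check is passed in as a 
flag because implementing one is beside the point):

def estimate_entropy_bits(password, passes_dictionary_check=False):
    bits = 0.0
    for i in range(1, len(password) + 1):
        if i == 1:
            bits += 4        # first character: 4 bits
        elif i <= 8:
            bits += 2        # characters 2 through 8: 2 bits each
        elif i <= 20:
            bits += 1.5      # characters 9 through 20: 1.5 bits each
        else:
            bits += 1        # characters 21 and above: 1 bit each
    if any(c.isupper() for c in password) and any(not c.isalpha() for c in password):
        bits += 6            # composition bonus
    if passes_dictionary_check and len(password) <= 19:
        bits += 6            # dictionary bonus, lengths 1 through 19 only
    return bits

# A 20-character password scores 36 bits before bonuses under this heuristic,
# nowhere near 112 bits.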

Some CAs are probably asking the user for a password during the request, so 
there is no need to distribute it later.  But if the Applicant provides the 
password over HTTPS, and later the CA provides the PKCS#12 via a download link 
that is also retrieved over HTTPS, is that a single channel that both were 
distributed over?

I still object to not being able to use HTTPS for collection and/or 
distribution of the Password and the PKCS#12.  I also believe 112 bits of 
entropy is way too much for a user generated password (assuming we want to 
continue supporting that option).

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Tim
> Hollebeek via dev-security-policy
> Sent: Monday, May 14, 2018 12:52 PM
> To: Ryan Hurst ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
> generation to policy)
> 
> For the record, I posted someone else's strength testing algorithm, and
> pointed out that it was bad.  I personally don't think building strength 
> testing
> algorithms is hopeless, and I think good ones are very useful.  I tend to 
> agree
> with the current NIST recommendation, which is to primarily only consider
> length, along with things like history, dictionary words, and reuse.
> 
> But in this case, the public is at risk if the key is compromised, so I don't 
> trust a
> password chosen by an end user, no matter what strength function it may or
> may not pass.
> 
> Some form of random password of sufficient length, with the randomness
> coming from a CSPRNG, encoded into a more user friendly form, is the right
> answer here.
> 
> -Tim
> 
> > -Original Message-
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> > bounces+Ryan
> > Hurst via dev-security-policy
> > Sent: Friday, May 4, 2018 5:19 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on
> > CA key generation to policy)
> >
> >
> > > True, but CAs can put technical constraints on that to limit the
> > > acceptable
> > passwords to a certain strength. (hopefully with a better
> > strength-testing algorithm than the example Tim gave earlier)
> >
> > Tim is the best of us -- this is hard to do well :)
> >
> > ___
> > dev-security-policy mailing list
> > dev-security-policy@lists.mozilla.org
> > https://clicktime.symantec.com/a/1/B4EQCI-
> >
> M91W3VFdrYnu8NKa6AWUA0Oca9gCvph6YNAo=?d=1AFyDzj7qs0LPt1qH7YZK
> > X7VDlKTG3u4_pF-smh1LdxQUjK6Fx2ySSFy5RdxazxX-
> >
> 

RE: FW: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-10 Thread Doug Beattie via dev-security-policy
Hi Wayne,

I’m OK with this as long as this permits the password (fully or partially 
generated by the CA) and PKCS#12 file to be picked up by a user over HTTPS (a 
single channel).

Doug


From: Wayne Thayer [mailto:wtha...@mozilla.com]
Sent: Wednesday, May 9, 2018 11:43 PM
To: Doug Beattie 
Cc: mozilla-dev-security-policy 
Subject: Re: FW: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA 
key generation to policy)


I think we have settled on the following resolution to this issue:

Add the following to section 5.2 (Forbidden and Required Practices):

CAs MUST NOT generate the key pairs for end-entity certificates that have an 
EKU extension containing the KeyPurposeIds id-kp-serverAuth or
anyExtendedKeyUsage.

PKCS#12 files must employ an encryption key and algorithm that is sufficiently 
strong to protect the key pair for its useful life based on current guidelines 
published by a recognized standards body. PKCS#12 files MUST be encrypted and 
signed; or, MUST have a password that exhibits at least 112 bits of entropy, 
and the password MUST be transferred using a different channel than the PKCS#12 
file.

Unless there is further discussion, I will include this language in the final 
version of the policy.

- Wayne
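
Purely as an illustration of that language (not part of the proposed policy 
text), a rough sketch using Python's cryptography package, assuming the caller 
already holds the end-entity key and certificate objects:

import base64
import secrets
from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
from cryptography.hazmat.primitives.serialization.pkcs12 import (
    serialize_key_and_certificates,
)

# Wrap an existing key and certificate in a PKCS#12 protected by a
# CSPRNG-derived password carrying 112 bits of entropy.
def build_p12(key, cert, friendly_name=b"example"):
    password = base64.b64encode(secrets.token_bytes(14)).rstrip(b"=")
    p12_bytes = serialize_key_and_certificates(
        name=friendly_name,
        key=key,
        cert=cert,
        cas=None,
        encryption_algorithm=BestAvailableEncryption(password),
    )
    return p12_bytes, password  # deliver these two via different channels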


FW: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-09 Thread Doug Beattie via dev-security-policy


>From: Wayne Thayer [mailto:wtha...@mozilla.com] 
>Sent: Monday, May 7, 2018 8:43 PM
>To: Doug Beattie <doug.beat...@globalsign.com>
>Cc: Ryan Hurst <ryan.hu...@gmail.com>; mozilla-dev-security-policy 
>pol...@lists.mozilla.org>
>Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key 
>generation to policy)
>
>Doug,
>
>On Mon, May 7, 2018 at 11:24 AM Doug Beattie via dev-security-policy 
><mailto:dev-security->pol...@lists.mozilla.org> wrote: 
>> -Original Message-
>> From: dev-security-policy [mailto:mailto:dev-security-policy-
>> bounces+doug.beattie=mailto:globalsign@lists.mozilla.org] On Behalf Of 
>> Ryan
>> Hurst via dev-security-policy
>> Sent: Friday, May 4, 2018 4:35 PM
>> To: mailto:mozilla-dev-security-pol...@lists.mozilla.org
>> Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
>> generation to policy)
>> 
>> On Friday, May 4, 2018 at 1:00:03 PM UTC-7, Doug Beattie wrote:
>> > First comments on this: "MUST be encrypted and signed; or, MUST have a
>> password that..."
>> > - Isn't the password the key used for encryption?  I'm not sure if the "or"
>> makes sense since in both cases the password is the key for encryption
>> 
>> There are modes of PKCS#12 that do not use passwords.
>If you're stating that we should include the use of PKCS#12 that don't use 
>passwords and that are 
>encrypted, then we need to define the parameters of the key used for that 
>purpose,
>
>Would it be enough to say that "PKCS#12 files must employ an encryption key 
>and algorithm that is 
>sufficiently strong..." (add "key and")?
Sure, that works for me.

>> > - In general, I don't think PKCS#12 files are signed, so I'd leave that 
>> > out, a
>> signature isn't necessary.  I could be wrong...
>> 
>> They may be, see: http://unmitigatedrisk.com/?p=543
>The requirement seems to imply it must be signed, and I don't think we want 
>that, do we?  I think 
>should remove "or signed" and that will permit them to be signed, but not 
>require it.
>
> That's not how I read it. The proposed language provides the option of 
>'encrypted and signed' or 
>protected with a password'. Since your use case is 'protected with a 
>password', there is no requirement 
>for the file to be signed.
OK

>>
>> >
>> > I'd still like to see a modification on the requirement: "password MUST be
>> transferred using a different channel than the PKCS#12 file".  A user should 
>> be
>> able to download the P12 and password via HTTP.  Can we add an exception
>> for that?
>> 
>> Why do you want to allow the use of HTTP?
>Sorry, I meant HTTPS.  
 


RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Doug Beattie via dev-security-policy
Hey Wayne,

This should be a really easy thing, but it's not.

First comments on this: "MUST be encrypted and signed; or, MUST have a password 
that..."
- Isn't the password the key used for encryption?  I'm not sure if the "or" 
makes sense since in both cases the password is the key for encryption
- In general, I don't think PKCS#12 files are signed, so I'd leave that out, a 
signature isn't necessary.  I could be wrong...

I'd still like to see a modification on the requirement: "password MUST be 
transferred using a different channel than the PKCS#12 file".  A user should be 
able to download the P12 and password via HTTP.  Can we add an exception for 
that?

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Friday, May 4, 2018 2:58 PM
> To: mozilla-dev-security-policy 
> 
> Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
> generation to policy)
> 
> The optimist in me thinks we might be getting close to resolving this issue 
> (the
> last one remaining for the 2.6 policy update). Here is another proposal that
> attempts to account for most of the input we've received:
> 
> Add the following to section 5.2 (Forbidden and Required Practices):
> 
> CAs MUST NOT generate the key pairs for end-entity certificates that have
> > an EKU extension containing the KeyPurposeIds id-kp-serverAuth or
> > anyExtendedKeyUsage.
> >
> > PKCS#12 files must employ an encryption algorithm that is sufficiently
> > strong to protect the key pair for its useful life based on current
> > guidelines published by a recognized standards body. PKCS#12 files
> > MUST be encrypted and signed; or, MUST have a password that exhibits
> > at least 112 bits of entropy, and the password MUST be transferred
> > using a different channel than the PKCS#12 file.
> >
> 
> This isn't perfect. I would appreciate your comments if you have significant
> concerns with this proposed policy.
> 
> - Wayne


RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-30 Thread Doug Beattie via dev-security-policy
We should allow someone to obtain/view the P12 password and to download the P12 
over an authenticated web site (managed portal), and that seems to be precluded 
by the definition below.

Doug


From: Tim Hollebeek [mailto:tim.holleb...@digicert.com] 
Sent: Monday, April 30, 2018 3:05 PM
To: Wayne Thayer <wtha...@mozilla.com>
Cc: Doug Beattie <doug.beat...@globalsign.com>; Buschart, Rufus 
<rufus.busch...@siemens.com>; mozilla-dev-security-policy 
<mozilla-dev-security-pol...@lists.mozilla.org>; Wichmann, Markus Peter 
<markus.wichm...@siemens.com>; Enrico Entschew <enr...@entschew.com>; Grotz, 
Florian <florian.gr...@siemens.com>; Heusler, Juergen 
<juergen.heus...@siemens.com>
Subject: RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

OOB passwords are generally tough to integrate into automation, and if the 
channel really is “secure” then they might not be buying you anything, 
depending where the “secure” channel starts and ends and how it is 
authenticated.

That might not be a GOOD reason to allow it, but it is the one reason that 
comes to mind.  Taking the other side, I’d argue that it’s unlikely that the 
“secure” channel stretches unbroken from the site of key generation to the key 
loading/usage site.  And it’s possible that “secure” is being used incorrectly, 
and the channel is encrypted but not authenticated.  In that case, having a 
strong password does help for at least a portion of the transmission.

-Tim

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Monday, April 30, 2018 2:25 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Doug Beattie <doug.beat...@globalsign.com>; Buschart, Rufus 
<rufus.busch...@siemens.com>; mozilla-dev-security-policy 
<mozilla-dev-security-pol...@lists.mozilla.org>; Wichmann, Markus Peter 
<markus.wichm...@siemens.com>; Enrico Entschew <enr...@entschew.com>; Grotz, 
Florian <florian.gr...@siemens.com>; Heusler, Juergen 
<juergen.heus...@siemens.com>
Subject: Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

The current policy seems inconsistent on the trust placed in passwords to 
protect PKCS#12 files. On one hand, it forbids transmission via insecure 
electronic channels regardless of password protection. But it goes on to permit 
transmission of PKCS#12 files on a storage device as long as a "sufficiently 
strong" password is delivered via a different means. If we trust PKCS#12 
encryption with a strong password (it's not clear that we should [1]), then the 
policy could be:

PKCS#12 files SHALL have a password containing at least 64 bits of output from 
a CSPRNG, and the password SHALL be transferred using a different channel than 
the PKCS#12 file.

This eliminates the need for separate rules pertaining to physical storage 
devices.

Is there a good reason to allow transmission of PKCS#12 files with weak/no 
passwords over "secure" channels?

[1] http://unmitigatedrisk.com/?p=543

On Mon, Apr 30, 2018 at 10:46 AM, Tim Hollebeek 
<mailto:tim.holleb...@digicert.com> wrote:
Once again, CSPRNGs are not overkill.  They are widely available in virtually 
every
programming language in existence these days.  I have never understood why
there is so much pushback against something that often appears near the top of 
many top ten lists about basic principles for secure coding.

Also, while I'm responding, and since it got copied into your proposal, 32 bits 
is 
still way too small.

"irrecoverable physical damage" ?  You want to go beyond tamper evident,
and even tamper responsive, and require self-destruction on tamper??  
I personally think we probably want to get out of the area of writing 
requirements about physical distribution.  They're VERY hard to get right.
That is copied from the current policy, and while it's confusing I believe it 
just means 'tamper evident'.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:mailto:dev-security-policy-
> bounces+tim.hollebeek=mailto:digicert@lists.mozilla.org] On Behalf Of Doug
> Beattie via dev-security-policy
> Sent: Monday, April 30, 2018 1:06 PM
> To: Buschart, Rufus <mailto:rufus.busch...@siemens.com>; mozilla-dev-security-
> policy <mailto:mozilla-dev-security-pol...@lists.mozilla.org>
> Cc: Wichmann, Markus Peter <mailto:markus.wichm...@siemens.com>; Enrico
> Entschew <mailto:enr...@entschew.com>; Grotz, Florian
> <mailto:florian.gr...@siemens.com>; Heusler, Juergen
> <mailto:juergen.heus...@siemens.com>; Wayne Thayer 
> <mailto:wtha...@mozilla.com>
> Subject: RE: Policy 2.6 Proposal: Add prohibition on CA key generation to 
> policy
> 
> 
> I agree we need to tighten up Wayne's initial proposal a little.
> 
> -
> Initial proposal (Wayne):
> 
> CAs MUST NOT distribute or transfer certifi

RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-30 Thread Doug Beattie via dev-security-policy

I agree we need to tighten up Wayne's initial proposal a little.

-
Initial proposal (Wayne):

CAs MUST NOT distribute or transfer certificates in PKCS#12 form through 
insecure electronic channels. The PKCS#12 file must have a sufficiently secure 
password, and the password must not be transferred together with the file. If a 
PKCS#12 file is distributed via a physical data storage device, then the 
storage must be packaged in a way that the opening of the package causes 
irrecoverable physical damage. (e.g. a security seal)

-
Proposal #1 (Rufus):

CAs SHOULD NOT distribute or transfer certificates in PKCS#12 form through 
insecure electronic channels. If the CA chooses to do so, the PKCS#12 file 
SHALL have a  password containing at least 32 bit of output from a CSPRNG, and 
the password SHALL be transferred using a different channel as the PKCS#12 file.


Proposal #2 (Doug)

If the PKCS#12 is distributed through an insecure electronic channel, then the 
PKCS#12 file SHALL have a password containing at least 32 bits of entropy and 
the PKCS#12 file and the password SHALL be transferred through different 
channels. 

If the PKCS#12 is distributed through a secure electronic channel, then...  

If a PKCS#12 file is distributed via a non-secure physical data storage device, 
then
a) the storage must be packaged in a way that the opening of the package causes 
irrecoverable physical damage (e.g. a security seal), or 
b) the PKCS#12 must have a password with at least 32 bits of entropy and the 
password must be sent via a separate channel.


Comments:

1) The discussions to date have not addressed how the use of secure channels 
affects the required quality of the password, nor the use of multiple channels.  
What is the intent?  We should specify that so it's clear.

2) I think the use of a CSPRNG is overkill for this application.  Can we leave 
this at a required entropy level instead? (See the quick calculation after 
these comments.)

3) The tamper requirement would only seem applicable if the P12 wasn't 
protected well (via strong P12 password on USB storage or via "good" PIN on a 
suitably secure crypto token).
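
A quick back-of-the-envelope calculation for comment 2 (the alphabet sizes are 
only illustrative; any uniformly random, CSPRNG-backed generator behaves the 
same way):

    # Sketch: entropy of a random password is length * log2(alphabet size),
    # assuming characters are chosen independently and uniformly at random.
    import math

    def entropy_bits(length: int, alphabet_size: int) -> float:
        return length * math.log2(alphabet_size)

    print(entropy_bits(7, 36))    # ~36 bits: 7 chars of [a-z0-9] clears 32
    print(entropy_bits(19, 64))   # ~114 bits: 19 base64-style chars clears 112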



> -Original Message-
> 
> I would like to suggest to rephrase the central sentence a little bit:
> 
> Original:
> 
> CAs MUST NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. The PKCS#12 file must have a  sufficiently 
> secure
> password, and the password must not be transferred  together with the file.
> 
> Proposal:
> 
> CAs SHOULD NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. If the CA chooses to do so, the PKCS#12 file
> SHALL have a  password containing at least 32 bit of output from a CSPRNG,
> and the password SHALL be transferred using a different channel as the
> PKCS#12 file.
> 
> My proposal would allow a CA to centrally generate a P12 file, send it to the
> Subject by unencrypted email and send the P12 pin as a SMS or Threema
> message. This is an important use case if you want to have email encryption on
> a mobile device that is not managed by a mobile device management system.
> Additionally I made the wording a little bit more rfc2119-ish and made clear,
> what defines a 'sufficiently secure password' as the original wording lets a 
> lot
> of room for 'interpretation'.
> 
> What do you think?
> 
> /Rufus
> 
> 
> Siemens AG
> Information Technology
> Human Resources
> PKI / Trustcenter
> GS IT HR 7 4
> Hugo-Junkers-Str. 9
> 90411 Nuernberg, Germany
> Tel.: +49 1522 2894134
> mailto:rufus.busch...@siemens.com
> www.twitter.com/siemens
> 
> www.siemens.com/ingenuityforlife
> 
> Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann
> Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive
> Officer; Roland Busch, Lisa Davis, Klaus Helmrich, Janina Kugel, Cedrik Neike,
> Michael Sen, Ralf P. Thomas; Registered offices: Berlin and Munich, Germany;
> Commercial registries: Berlin Charlottenburg, HRB 12300, Munich, HRB 6684;
> WEEE-Reg.-No. DE 23691322
> 
> > -Ursprüngliche Nachricht-
> > Von: dev-security-policy
> > [mailto:dev-security-policy-bounces+rufus.buschart=siemens.com@lists.m
> > ozilla.org] Im Auftrag von Wayne Thayer via dev-security-policy
> > Gesendet: Freitag, 27. April 2018 19:30
> > An: Enrico Entschew
> > Cc: mozilla-dev-security-policy
> > Betreff: Re: Policy 2.6 Proposal: Add prohibition on CA key generation
> > to policy
> >
> > On Fri, Apr 27, 2018 at 6:40 AM, Enrico Entschew via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > > I suggest to make the requirement „* The PKCS#12 file must have a
> > > sufficiently secure password, and the password must be transferred
> > > via a separate channel than the PKCS#12 file.” binding for both
> > > transfer methods and not be limited to physical data storage.
> > > Otherwise I agree with this proposal.
> > >
> > > Enrico
> > >
> > > That seems like a good and reasonable change, resulting in the
> > 

RE: Policy 2.6 Proposal: Require separate intermediates for different usages (e.g. server auth, S/MIME)

2018-04-17 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Wayne Thayer via dev-security-policy
> Sent: Tuesday, April 17, 2018 2:24 PM
> To: mozilla-dev-security-policy  pol...@lists.mozilla.org>
> Subject: Policy 2.6 Proposal: Require separate intermediates for different
> usages (e.g. server auth, S/MIME)
> 
> This proposal is to require intermediate certificates to be dedicated to
> specific purposes by EKU. Beginning at some future date, all newly created
> intermediate certificates containing either the id-kp-serverAuth or id-kp-
> emailProtection EKUs would be required to contain only a single EKU.

We'll need to support a list of EKUs if this becomes a requirement.  Server 
Auth certificates should be able to support lots of different EKUs, for example 
(a rough sketch of encoding such a set follows the list): 
id-kp-serverAuth
id-kp-clientAuth
id-kp-ipsecEndSystem
id-kp-ipsecTunnel
id-kp-ipsecUser
KDC Authentication
Smart Card Logon
iPSec IKE 
IKE Intermediate
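
For concreteness, a rough sketch of encoding such a set of KeyPurposeIds, 
assuming the Python cryptography library; the numeric OIDs for the Microsoft 
and IPsec purposes are written out as I understand them and should be 
double-checked before use:

    # Sketch: an ExtendedKeyUsage extension carrying several KeyPurposeIds,
    # as a dedicated server-auth intermediate might need.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    eku = x509.ExtendedKeyUsage([
        ExtendedKeyUsageOID.SERVER_AUTH,                  # id-kp-serverAuth
        ExtendedKeyUsageOID.CLIENT_AUTH,                  # id-kp-clientAuth
        x509.ObjectIdentifier("1.3.6.1.5.5.7.3.5"),       # id-kp-ipsecEndSystem
        x509.ObjectIdentifier("1.3.6.1.5.5.7.3.6"),       # id-kp-ipsecTunnel
        x509.ObjectIdentifier("1.3.6.1.5.5.7.3.7"),       # id-kp-ipsecUser
        x509.ObjectIdentifier("1.3.6.1.5.2.3.5"),         # pkinit KDC authentication
        x509.ObjectIdentifier("1.3.6.1.4.1.311.20.2.2"),  # Microsoft Smart Card Logon
        x509.ObjectIdentifier("1.3.6.1.5.5.8.2.2"),       # IKE Intermediate
    ])
    # The extension would then be added to the intermediate, e.g.
    # builder.add_extension(eku, critical=False)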

> Arguments for this requirement are that it reduces risk of an incident in 
> which
> one type of certificate affecting another type, and it could allow some
> policies to be restricted to specific types of certificates.
> 
> It was pointed out that Microsoft already requires dedicated intermediates
> [1].

I agree with using dedicated intermediates, but I'd prefer that they not be 
required to be EKU-constrained.

> I would appreciate everyone's input on this topic.
> 
> I suspect that it will be tempting to extend this discussion into intermediate
> rollover policies, but I would remind everyone of the prior inconclusive
> discussion on that topic [2].
> 
> This is: https://github.com/mozilla/pkipolicy/issues/26
> 
> [1] https://aka.ms/rootcert
> [2]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/3NdNMiM-
> TQ8/hgVsCofcAgAJ
> ---
> 
> This is a proposed update to Mozilla's root store policy for version 2.6.
> Please keep discussion in this group rather than on GitHub. Silence is 
> consent.
> 
> Policy 2.5 (current version):
> https://github.com/mozilla/pkipolicy/blob/2.5/rootstore/policy.md


RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-10 Thread Doug Beattie via dev-security-policy
Wayne: I agree with your latest proposal.

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Monday, April 9, 2018 7:10 PM
> To: mozilla-dev-security-policy 
> 
> Subject: Re: Policy 2.6 Proposal: Add prohibition on CA key generation to 
> policy
> 
> Getting back to the earlier question about email certificates, I am now of the
> opinion that we should limit the scope of this policy update to TLS 
> certificates.
> The current language for email certificates isn't clear and any attempt to 
> fix it
> requires us to answer the bigger question of "under what circumstances is CA
> key generation acceptable?"
> 
> My updated proposal is to add the following paragraphs to section 5.3
> “Forbidden and Required Practices”:
> 
> CAs MUST not generate the key pairs for end-entity certificates, except for
> > email certificates with the Extended Key Usage extension present and
> > set to id-kp-emailProtection.
> >
> 
> >
> CAs MUST not distribute or transfer certificates in PKCS#12 form through
> > insecure electronic channels. If a PKCS#12 file is distributed via a
> > physical data storage device, then:
> > * The storage must be packaged in a way that the opening of the
> > package causes irrecoverable physical damage. (e.g. a security seal)
> > * The PKCS#12 file must have a sufficiently secure password, and the
> > password must not be transferred together with the storage.
> 
> 
> Once again, I would appreciate your comments on this proposal.
> 
> - Wayne
> 
> 
> On Mon, Apr 9, 2018 at 3:54 PM, Wayne Thayer 
> wrote:
> 
> > On Thu, Apr 5, 2018 at 12:29 PM, Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> On 05/04/2018 18:55, Wayne Thayer wrote:
> >>
> >>> On Thu, Apr 5, 2018 at 3:15 AM, Dimitris Zacharopoulos
> >>>  >>> >
> >>> wrote:
> >>>
> >>> My proposal is "CAs MUST NOT distribute or transfer private keys and
>  associated certificates in PKCS#12 form through insecure physical
>  or electronic channels " and remove the rest.
> 
>  +1 - I support this proposal.
> 
> >>>
> >> But that removes the explicit exception for methods such as the
> >> following *example* protocol (securing such a protocol is the job and
> >> expertise of the affected CAs).
> >>
> >> This is a valid point, so perhaps we should stick with the original
> > language regarding distribution of PKCS12 files on physical storage devices.
> >
> > 1. Set the notBefore data in the new certificate several days or weeks
> >>   into the future.
> >>
> >> 2. Securely store the PKCS#12 or other private key format on a USB
> >>   stick, USB token or smartcard.
> >>
> >> 3. Place that device in a physically sealed envelope or package.
> >>
> >> 4. Send it through regular postal mail (an insecure physical channel).
> >>
> >> 5. Upon receiving the envelope/package, the subscriber must verify that
> >>   the seal is unbroken and acknowledge that, through a secure electronic
> >>   channel.  The procedure may/should include additional steps to verify
> >>   that the sealed envelope/package is the same one sent.
> >>
> >> 6. If this is not done before the certificate's notBefore date, the
> >>   certificate is preemptively revoked due to private key compromise and
> >>   issuance is retried with a new key.
> >>
> >>
> >> Enjoy
> >>
> >> Jakob
> >>
> >>
> >


RE: Mozilla’s Plan for Symantec Roots

2018-03-02 Thread Doug Beattie via dev-security-policy
Hi Wayne,

Is the Firefox 60 update in May the same as the combination of the April and 
October Chrome updates, in that all Symantec certificates will be untrusted on 
this date (5 months before Chrome)?

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Friday, March 2, 2018 1:12 PM
> Cc: mozilla-dev-security-policy 
> 
> Subject: Re: Mozilla’s Plan for Symantec Roots
> 
> Update:
> 
> Mozilla is moving forward with our implementation of the consensus plan for
> Symantec roots [1]. With the exception of whitelisted subordinate CAs using
> the keys listed on the wiki [2], Symantec certificates are now blocked by
> default on Nightly builds of Firefox. The preference
> "security.pki.distrust_ca_policy" can be used to override these changes. A
> custom error message is also being implemented [3]. These changes are part of
> Firefox 60, which is scheduled to be released in May [4].
> 
> There are still a lot of websites using Symantec certificates, but the number 
> are
> declining rapidly. Lists of affected sites and regularly updated metrics are
> available via bug 1434300 [5].
> 
> - Wayne
> 
> [1]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/FLHRT79e3XE/
> 90qkf8jsAQAJ
> [2] https://wiki.mozilla.org/CA/Additional_Trust_Changes#Symantec
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1441223
> [4] https://wiki.mozilla.org/RapidRelease/Calendar
> [5] https://bugzilla.mozilla.org/show_bug.cgi?id=1434300


RE: GlobalSign certificate with far-future notBefore

2018-01-24 Thread Doug Beattie via dev-security-policy
Can we consider this case closed with the action that the VWG will propose a 
ballot that addresses pre- and post-dating certificates?

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Tim
> Hollebeek via dev-security-policy
> Sent: Wednesday, January 24, 2018 11:49 AM
> To: Rob Stradling ; Jonathan Rudenberg
> ; mozilla-dev-security-policy  pol...@lists.mozilla.org>
> Subject: RE: GlobalSign certificate with far-future notBefore
> 
> 
> > > This incident makes me think that two changes should be made:
> > >
> > > 1) The Root Store Policy should explicitly ban forward and
> > > back-dating
> the
> > notBefore date.
> >
> > I think it would be reasonable and sensible to permit back-dating
> > insofar
> as it is
> > deemed necessary to accommodate client-side clock-skew.
> 
> Indeed.  This was discussed at a previous Face to Face meeting, and it was
> generally agreed that a requirement that the notBefore date be within +-1
> week of issuance would not be unreasonable.
> 
> The most common practice is backdating by a few days for the reason Rob
> mentioned.
> 
> -Tim



RE: GlobalSign certificate with far-future notBefore

2018-01-24 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: Gervase Markham [mailto:g...@mozilla.org]
> Sent: Wednesday, January 24, 2018 7:00 AM
> To: Doug Beattie ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: GlobalSign certificate with far-future notBefore
> 
> Hi Doug,
> 
> Thanks for the quick response.
> 
> On 24/01/18 11:52, Doug Beattie wrote:
> > In the case below, the customer ordered a 39 month certificate and set
> > the notBefore date for 2 months into the future.
> 
> Momentary 2017/2018 confusion in my brain had me thinking that this was
> further into the future than it actually was. But yet still, it is the other 
> side of a
> reduction in certificate lifetime deadline.
> 
> > We permit customers to set a notBefore date into the future, possibly
> > for the reason listed below, but there could be other reasons.
> 
> So if a customer came to you today and renewed their certificate for
> www.example.com with validity from 24th Jan 2017 to 24th Apr 2020
> (perfectly fine), and then requested a second 39-month certificate valid from
> 24th Apr 2020 to 24th July 2023, would you issue this second one?

No, we would not issue that certificate.  In no case would we issue a 
certificate that has a notAfter more than 39 months from today, which is 
currently 24 Apr 2021.


> Gerv


RE: GlobalSign certificate with far-future notBefore

2018-01-24 Thread Doug Beattie via dev-security-policy
 I'll try to respond to the few questions on the topic in this one email.

In the case below, the customer ordered a 39 month certificate and set the 
notBefore date for 2 months into the future.  The notAfter is within the 
allowed 39 month validity as measured from time of issuance.  Posting the 
precertificate to CT helps document the actual issuance date as "proof".

We permit customers to set a notBefore date into the future, possibly for the 
reason listed below, but there could be other reasons.  We will never permit 
the notAfter date ever exceed 39 months from the issuance date (and soon this 
will be 825 days).
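
For reference, a minimal sketch of the kind of check involved, assuming 
python-dateutil for calendar-month arithmetic (the dates are placeholders):

    # Sketch: reject any notAfter more than the allowed lifetime past the
    # actual issuance time (39 months here; 825 days under the newer limit).
    from datetime import datetime, timedelta
    from dateutil.relativedelta import relativedelta

    def not_after_within_limit(issued_at: datetime, not_after: datetime) -> bool:
        limit_39_months = issued_at + relativedelta(months=39)
        limit_825_days = issued_at + timedelta(days=825)
        return not_after <= limit_39_months   # or limit_825_days once in force

    # Placeholder dates roughly matching the certificate discussed below:
    print(not_after_within_limit(datetime(2018, 1, 23), datetime(2021, 4, 23)))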

As Jonathan pointed out, "the certificate issued was valid for 1129 days (more 
than three years)" but the expiration date is less than 39 months from the date 
of the SCT (by a few seconds).
- Date posted to CT logs: 2018-01-23 09:32:50
- NotAfter:  2021-04-23  09:32:47 

Not renewing a month earlier isn't a valid use case since the notAfter never 
violates the BR max validity as measured from issuance time to expiration time.

We don't allow customers to set the notBefore date into the past.

And regarding the Mozilla checks for 
https://bugzilla.mozilla.org/show_bug.cgi?id=908125, perhaps the "notBefore" 
date used in the check should be the earlier of the certificate NotBefore or 
the date the included SCT was created.   
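
To make that concrete, a rough sketch of taking the earlier of notBefore and 
the embedded SCT timestamps, assuming a Python cryptography build with SCT 
support and a certificate carrying the precertificate SCT extension:

    # Sketch: the "effective start" of a certificate for lifetime checks could
    # be the earlier of notBefore and any embedded SCT timestamp.
    from cryptography import x509

    def effective_start(cert: x509.Certificate):
        start = cert.not_valid_before
        try:
            scts = cert.extensions.get_extension_for_class(
                x509.PrecertificateSignedCertificateTimestamps).value
            start = min([start] + [sct.timestamp for sct in scts])
        except x509.ExtensionNotFound:
            pass  # no embedded SCTs; fall back to notBefore alone
        return start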

I don't know how Chrome would handle this certificate, but if it marks it as 
invalid, it would be good to know so we can relay this to customers that have 
set the notBefore date after March 1st.

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Gervase
> Markham via dev-security-policy
> Sent: Wednesday, January 24, 2018 5:05 AM
> To: David E. Ross ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: GlobalSign certificate with far-future notBefore
> 
> On 24/01/18 04:57, David E. Ross wrote:
> > I am not sure about prohibiting forward-dating the notBefore date.  I
> > can picture a situation where an existing site certificate is going to
> > expire.  The site's administration decides to obtain a new certificate
> > from a different certification authority.  Because of various
> > administrative processes, the switch to the new site certificate
> > cannot be accomplished quickly (e.g., moving the server); so they
> > establish a notBefore date that is a month in the future.
> 
> Why would that be _necessary_? What would go wrong if the cert was cut
> with a notBefore of the current date, apart from the fact that they'd need to
> renew it a month earlier?
> 
> Gerv


RE: TLS-SNI-01 and compliance with BRs

2018-01-19 Thread Doug Beattie via dev-security-policy
Matthew,

That’s a good summary.  It seems we have 2 methods that can be used only if the 
CAs using the methods have the design and risk mitigation factors reviewed and 
approved.  It’s basically the old “any other method”, except before you can use 
it, the root programs must review the design/implementation and can 
approve/reject them on a case-by-case basis.  Is that where we are with these 
methods – not approved unless disclosed and reviewed?

Given this discussion, there must be no other CAs using method 9 or 10, else 
they would have come forward by now with disclosures and demonstrated their 
compliance.  Maybe we need to post this on the CABF public list?

Based on this, do we need a ballot to remove them from the BRs, or put in a 
statement in them to the effect that they can be used only if approved by 
Google on this list?  I’m not picking on Ryan, but he’s the only root program 
representative that has expressed strong views on what is permitted and what is 
not (else you have your CA revoked or root pulled from the program).

Doug

From: Matthew Hardeman [mailto:mharde...@gmail.com]
Sent: Friday, January 19, 2018 1:45 PM
To: Doug Beattie 
Cc: r...@sleevi.com; Alex Gaynor ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: TLS-SNI-01 and compliance with BRs

One opinion I'd like to add to the discussion...

In as far as that at this point, it looks like it's time for guidance from the 
root programs officially on whether or not and under what circumstances 
TLS-SNI-01 and/or any other mechanism based on method #10 are allowable moving 
forward

I'd like to point out that both Let's Encrypt recognized an issue and 
voluntarily disclosed and took measures in the direction of securing the WebPKI 
above and beyond any demands made of them.

Additionally, GlobalSign was obviously diligent in their responsibility to 
monitor this mailing list and others and actively discern whether any ongoing 
discussion may pertain to their operations.  As evidenced by their preemptive 
disclosure and shut down of their method #10 validation mechanism, they've 
shown strong adherence to the best practices espoused by this community -- 
actively monitoring the broad discussions and concerns and actively considering 
the impact of the issues surfaced in terms of their own CA operations.

Ultimately, if it should arise that other CAs who rely on mechanisms 
implementing or claiming to implement method #10 have similar risk and 
vulnerabilities, those CAs should be called to task for not having timely 
disclosed and remediated.  Further, perhaps those CAs should suffer the burden 
of mandatory revalidation under a different mechanism, as the vulnerability 
category has now been acknowledged in the community for some time and the 
recent press has been significant.

In contrast, I think any remediation plan should reward Let's Encrypt and 
GlobalSign for their diligence and compliance to best practice.


RE: TLS-SNI-01 and compliance with BRs

2018-01-19 Thread Doug Beattie via dev-security-policy

I think we’ve gotten off track.  While the general discussion is good and we 
need to update the validation methods to provide more precise details, I want 
to get back to the point at hand, which is whether the ACME TLS-SNI-01 method 
is compliant with method 10.  If method 10 permitted validating the random 
number at the same IP address as the SAN being validated, then it would have 
said that.  How does validating the “Random Value within a Certificate on the 
IP address of the Authorization Domain Name” comply with validating the “Random 
Value within a Certificate on the Authorization Domain Name”?  The TLS-SNI 
method specifically directs the CA to check for the random number at a location 
other than the ADN.


Many CAs haven’t complied with the Mozilla requirement to list the methods they 
use (including Google, by the way), so it’s hard to tell which CAs are using 
method 10.  Of the CA CPSs I checked, only Symantec has method 10 listed, and 
with the DigiCert acquisition, it’s not clear whether that CPS is still active.  
We should find out on January 31st who else uses it.

In the meantime, we should ban anyone from using TLS-SNI as a non-compliant 
implementation, even outside shared hosting environments.  There could well be 
other implementations that comply with method 10, so I’m not suggesting we 
remove that from the BRs yet (those that don’t allow SNI when validating the 
presence of the random number within the certificate of a TLS handshake are 
better).

Regarding the comment on the ACME protocol: “The ACME specification is useful 
in it's the first attempt I'm aware of that attempts to fully, normatively 
specify how to validate assurances in an open and interoperable way.”  Yes, 
open review of the protocol was good.  As you are likely aware, the 
specification points out [1] vulnerabilities with the use of ACME by hosting 
providers: “The use of hosting providers is a particular risk for ACME 
validation.”  It appears that a detailed analysis of these risks wasn’t 
performed or considered prior to using ACME.  If the analysis was done, the 
risk mitigation wasn’t documented in the spec.


Lastly, are any of the Platinum Let’s Encrypt sponsors (Mozilla, Akamai, Cisco, 
EFF, OVH and Chrome) using TLS-SNI-01?  I only call them out because as large 
financial supports, they may be more incentivized to use it than others.

Personally, I think the use of TLS-SNI-01 should be banned immediately, 
globally (not just by Let’s Encrypt), but without knowing which CAs use it, 
it’s difficult to enforce.

[1] https://tools.ietf.org/html/draft-ietf-acme-acme-09#section-10.2


From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Thursday, January 18, 2018 7:25 PM
To: Doug Beattie <doug.beat...@globalsign.com>
Cc: Alex Gaynor <agay...@mozilla.com>; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: TLS-SNI-01 and compliance with BRs

I think others have already responded, but I do want to highlight one other 
problem with the reasoning being offered here: SNI is not mandatory in TLS. 
It's an extension (RFC 6066) that is optional.

More concretely, Methods .6, .8, .9, and .10 are all effectively demonstrations 
over the IP address pointed to by a domain - rather than the domain itself. I 
mention .6 in there because there is, for example, no requirement to use a 
"Host" header - you could use HTTP/1.0 (as some CAs, I'm told, do).

Similarly, one can read that .10 doesn't actually require the TLS handshake to 
complete, nor for a ServerKeyExchange to be in any way related to the 
Certificate. It is, for example, sufficient merely to send a Client Hello and 
Server Hello+Certificate and terminate the connection.

This is the challenge of defining validation methods in the abstract, rather 
than with concrete specifications. The ACME specification is useful in it's the 
first attempt I'm aware of that attempts to fully, normatively specify how to 
validate assurances in an open and interoperable way. The historic ambiguities 
derived from the BRs, working in abstract, technology-neutral ways, necessarily 
leads to these sorts of contrived scenarios. For example, .7 doesn't 
demonstrate control over an ADN - in as much as it allows control over a 
subdomain of an ADN to be treated as control over the ADN itself (if it has a 
leading prefix). .9 doesn't require the domain name appear within the Test 
Certificate - similar to the point being raised here about the domain name not 
appearing within the TLS handshake for .10.

On Thu, Jan 18, 2018 at 4:46 PM, Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org<mailto:dev-security-policy@lists.mozilla.org>>
 wrote:
The point is, you don’t really connect to the Certificate on the Authorization 
Domain Name, you connect to a certificate on the same IP address as the ADN, 
but you actually intentionally ask for a different server name, which has no 
relationship to the ADN (except they h

TLS-SNI-01 and compliance with BRs

2018-01-18 Thread Doug Beattie via dev-security-policy
Now that I'm more familiar with domain validation methods 9 and 10 and have 
heard a few side discussions about the topic, I (and others) wonder whether the 
ACME TLS-SNI-01 method is compliant with BR method 10.

The BRs say:
3.2.2.4.10. TLS Using a Random Number
Confirming the Applicant's control over the FQDN by confirming the presence of 
a Random Value within a Certificate on the Authorization Domain Name which is 
accessible by the CA via TLS over an Authorized Port.

But it's my understanding that the CA validates the presence of the random 
number on "random.acme.invalid" and not on the ADN specifically.  Is the 
validation done by confirming the presence of a random number within the 
certificate on the ADN, or at some other location?  I'm probably misreading the 
ACME spec, but it sure seems like the validation is not being done on the ADN.
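
To make the mechanics concrete, a rough sketch of the lookup as I understand 
it, using Python's standard ssl module: the CA connects to the IP that the ADN 
resolves to, but asks (via SNI) for a name under .acme.invalid, then inspects 
whatever certificate the server presents for that name (the IP and token below 
are placeholders):

    # Sketch: connect to the ADN's address but request a different SNI name,
    # then capture the certificate the server chooses to present for it.
    import socket, ssl

    def fetch_cert_for_sni(ip: str, sni_name: str) -> bytes:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # the presented cert is self-signed
        ctx.verify_mode = ssl.CERT_NONE  # validation is of the token, not trust
        with socket.create_connection((ip, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni_name) as tls:
                return tls.getpeercert(binary_form=True)  # DER of presented cert

    # e.g. fetch_cert_for_sni("192.0.2.10", "tokenA.tokenB.acme.invalid")
    # The CA would then parse the DER and look for the .acme.invalid name in
    # the SANs -- note that the ADN itself never appears in this exchange.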

Doug



RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-16 Thread Doug Beattie via dev-security-policy
Ryan,

Here is some more information to continue the discussion.

-  We will continue to post all certificates to CT logs so issuance can 
be monitored.

-  We will reduce validity period of OneClick certificates to 6 months.

-  We will work with the hosting providers (on a case-by-case basis) to 
implement processes and procedures which prevent the uploading and use of test 
certificates on user-controlled shared IP addresses (similar to how LE worked 
with their larger customers to block acme.invalid from being used)

More below.

From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Monday, January 15, 2018 4:56 PM
To: Doug Beattie 
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org; Gervase 
Markham ; Wayne Thayer 
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment
As suggested, we encourage you to work on devising technical mitigations or 
alternative methods of validating such certificates that can meet the use case. 
We don't think that, as described, the OneClick method meets the necessary 
level of assurance, nor do the necessary level of mitigating factors exist, to 
consider such certificates trustworthy.

Ryan – I’m at a loss.  The security threat is that a user can request a 
certificate for a domain they don’t own from hosting companies that permit SNI 
mappings to domains the user doesn’t own or control.  This permits them to pass 
validation for a domain they don’t control that is on the same IP address as 
their legitimate site (or at least to which they have this level of SNI 
control).  We will verify that our OneClick customers will request certificates 
for domains the hosting company is actively managing for their users and not 
permit malicious actions (much like LE verifying that their hosting companies 
do not permit “acme.invalid” domains to be used).  This eliminates the problems 
of SNI being used as an avenue for domain validation for malicious actors.  Is 
this not sufficient for some reason?

Surely you agree that within non-shared hosting environments OneClick is not 
vulnerable and can be used.

No, it's not sufficient.

The failure mode unfortunately necessarily includes a failure by GlobalSign 
process and/or personnel, and in that failure mode, there are further no 
mitigating factors.

- If GlobalSign adds a vulnerable entity to their whitelist
  - The certificates issued will be valid for 1-3 years, leaving only the 
(broken) revocation system as recourse
We can and will reduce max validity to 6 months as a standard configuration 
option within our system.

  - There is no step organizations can take to pre-emptively mitigate the risk 
of GlobalSign adding to the whitelist (compared to, say, blocking .invalid)
Actually, there is, and I apologize for not making this clearer before.  We 
have site operators that manage the issuance of certificates for their users.  
End users have no access to the issuance process, no ability to upload test 
certificates to their sites, and no other involvement in the issuance process, 
as this is automated by the site operators.  Given that this approach is 
verified with the provider, we would propose whitelisting the account.

  - There is no ability for site operators to detect such situations. A 
consideration, not listed within the full set when discussing Let's Encrypt and 
the ACME TLS-SNI method, is that we have at least public commitment by Let's 
Encrypt and demonstrated evidence of sustained/long-term compliance with 
publicly disclosing all issued certificates ( as noted in 
https://letsencrypt.org/certificates/ ). While I realize you've offered to do 
so, I can find no evidence of GlobalSign doing so by default, and so this 
further adds to the risk calculus of a commitment to do something not yet 
practice and thus not yet consistently, reliably delivered on.
We currently include SCTs in all certificates we issue with the possible 
exception of some Enterprise customers that prefer to keep their OV 
certificates private (at least for now).  This has been configured since 
mid-November for all GlobalSign SSL products.

There is not, in our view, reason to accept this significantly greater 
(holistically considered) risk.

We're open to understanding whether GlobalSign has additional proposals how to 
mitigate this risk, given the set of concerns expressed - technical measures 
and policy measures. These may provide a path to allowing such issuance in the 
future, but we don't think that, given the holistic risk assessment, it's 
appropriate to allow it to immediately resume. We are keen to find solutions 
that work, as we understand that these can enable powerful new use cases, but 
we want to balance that with the risk posed.

I would encourage GlobalSign to consult Sections 3.2.2.4 and 3.2.2.5 to see if 
there are any other alternative methods to validating that might represent an 

RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Doug Beattie via dev-security-policy


From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Monday, January 15, 2018 4:14 PM
To: Doug Beattie <doug.beat...@globalsign.com>
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org; Gervase 
Markham <g...@mozilla.org>; Wayne Thayer <wtha...@mozilla.com>
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment



On Mon, Jan 15, 2018 at 3:36 PM, Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org<mailto:dev-security-policy@lists.mozilla.org>>
 wrote:
Ryan,

I’m not sure where we go from here.

As suggested, we encourage you to work on devising technical mitigations or 
alternative methods of validating such certificates that can meet the use case. 
We don't think that, as described, the OneClick method meets the necessary 
level of assurance, nor do the necessary level of mitigating factors exist, to 
consider such certificates trustworthy.

Ryan – I’m at a loss.  The security threat is that a user can request a 
certificate for a domain they don’t own from hosting companies that permit SNI 
mappings to domains the user doesn’t own or control.  This permits them to pass 
validation for a domain they don’t control that is on the same IP address as 
their legitimate site (or at least to which they have this level of SNI 
control).  We will verify that our OneClick customers will request certificates 
for domains the hosting company is actively managing for their users and not 
permit malicious actions (much like LE verifying that their hosting companies 
do not permit “acme.invalid” domains to be used).  This eliminates the problems 
of SNI being used as an avenue for domain validation for malicious actors.  Is 
this not sufficient for some reason?

Surely you agree that within non-shared hosting environments OneClick is not 
vulnerable and can be used.

We have customers that need certificates and they have demonstrated they can 
comply with not permitting the creation and use of certificates for domains 
other than those that the hosting company is hosting for that customer.  All 
certificates will continue to be posted to CT logs.

While understanding and sensitive to this, what you're asking is that, on the 
basis of an abstract need, that known-insecure methods be used, with the 
assurance that the CA has taken steps (which are fundamentally 
non-interoperable) to mitigate, and for which an improper decision by a CA has 
no further mitigating factors. This does not provide a sufficient level of 
assurance to permit its continued use.

As far as the wildcard question, when someone asks for a wildcard cert for a 
domain like *.us.example.com, we validate on that minus 
the * (so, us.example.com in this case).

I'm afraid you're still missing the point of FQDN versus Authorization Domain 
Name. This further does not instill confidence that it's fully been described.
We’re using us.example.com as the ADN for validation in this example. We always 
use the FQDN minus “*.” for the ADN.

We’d like to move forward with issuing certificates with controls in place.

I'm sorry, but at present, we do not feel it is in the appropriate interests of 
users, sites, or the ecosystem to permit this.


If there are any other controls you need us to implement to resume issuance, 
let us know.  For example, if we limit validity to 1 year (possibly up to 15 
months) and if we put a firm end date for OneClick for July 1, 2018, would that 
suffice?

As stated, we believe 90 days is an appropriate and necessary upper-bound for 
such certificates.





RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Doug Beattie via dev-security-policy
> -Original Message-
> From: Nick Lamb [mailto:n...@tlrmx.org]
> Sent: Monday, January 15, 2018 2:39 PM
> 
> > -  Total number of active OneClick customers: < 10
> 
> What constitutes a OneClick customer in this sense?

These are web hosting companies that receive certificates for their users. We 
used to focus this on cPanel and similar control panels, but have largely moved 
away from them.  These are customers that want an automated method to issue 
certificates and where HTTP and DNS methods are not suitable, or where they 
haven't wanted to re-work their APIs to use them.  We believe all of these 
customers can be migrated over to HTTP or DNS methods (there are basically no 
other automated options if both 9 and 10 have security vulnerabilities).
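
For comparison, the HTTP-based alternative (the BR "agreed-upon change to 
website" method) reduces, on the CA side, to fetching a token from a well-known 
path on the FQDN being validated.  A minimal sketch, with placeholder names and 
none of the redirect/retry handling a real implementation needs:

    # Sketch: check that a CA-supplied random value is being served from the
    # /.well-known/pki-validation/ path of the FQDN under validation.
    import urllib.request

    def http_token_present(fqdn: str, filename: str, expected_token: str) -> bool:
        url = f"http://{fqdn}/.well-known/pki-validation/{filename}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return expected_token in resp.read().decode("utf-8", "replace")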

Each customer has an account with us so we know where the requests are coming 
from.

> The focus of concern for tls-sni-01 was service providers who present an
> HTTPS endpoint for many independent entities, most commonly a bulk web
> host or a CDN. These function as essentially a "Confused Deputy" in the
> discovered attack on tls-sni-01. For those providers there would undoubtedly
> be a temptation to pretend all is well (to keep things
> working) even if in fact they aren't able to defeat this attack or some 
> trivial
> mutation of it, and that's coloured Let's Encrypt's response, because there's
> just no way to realistically police whitelisting of thousands or tens of
> thousands of such service providers.

> From the volumes versus numbers of customers, it seems as though OneClick
> must be targeting the same type of service providers, is that right?
 
Yes.

> The small number of such customers suggests that, unlike Let's Encrypt, it
> could be possible for GlobalSign to diligently affirm that each of the 
> customers
> has technical countermeasures in place to protect their clients from each
> other.

 Yes, for those customers that want to continue with this method, we would 
confirm they meet the criteria.

> In my opinion such an approach ought to be adequate to continue using
> OneClick in the short term, say for 12-18 months with the understanding that
> this validation method will either be replaced by something less problematic 
> or
> the OneClick service will go away in that time.

We can do with an even shorter period - 6 months should be sufficient.

Thanks for the support!

> But of course I do not speak for Google, Mozilla or any major trust store.



RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Doug Beattie via dev-security-policy
Ryan,

I’m not sure where we go from here.  We have customers that need certificates 
and they have demonstrated they can comply with not permitting the creation and 
use of certificates for domains other than those that the hosting company is 
hosting for that customer.  All certificates will continue to be posted to CT 
logs.

As far as the wildcard question, when someone asks for a wildcard cert for a 
domain like *.us.example.com, we validate on that minus the * (so, 
us.example.com in this case).

We’d like to move forward with issuing certificates with controls in place.  If 
there are any other controls you need us to implement to resume issuance, let 
us know.  For example, if we limit validity to 1 year (possibly up to 15 
months) and if we put a firm end date for OneClick for July 1, 2018, would that 
suffice?

Doug


From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Monday, January 15, 2018 2:31 PM
To: Doug Beattie 
Cc: r...@sleevi.com; Wayne Thayer ; Gervase Markham 
; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment



On Mon, Jan 15, 2018 at 1:18 PM, Doug Beattie wrote:


From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Friday, January 12, 2018 5:53 PM
To: Doug Beattie
Cc: Wayne Thayer; Gervase Markham; r...@sleevi.com; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment

(Wearing a Google Hat)

Doug,

Thanks for sharing additional details. On the basis of what you've shared so 
far, we do not believe this results in an appropriate level of security for the 
ecosystem, and request that you do not re-enable issuance at this time. This 
applies for any CA using methods similar to what you're using.

Broadly speaking, 
https://groups.google.com/d/msg/mozilla.dev.security.policy/RHsIInIjJA0/HACyY9tMAAAJ
 has shared the some of the principles we've used in this consideration. If 
there is additional details that GlobalSign can share, related to those 
principles, this would be invaluable.
Ryan,

I had a hard time digesting that email because it compared so many different 
items, many of which aren’t directly applicable to the OneClick vs. method 10 
that I want to focus on.  The key points I took away from your email are:

“weak” manual method comparison with methods 9 and 10 (not applicable to the 
methods 9-10 comparison since we’re not comparing them to manual methods).

Short-validity certificates represent more risk to the ecosystem (expiration) 
and less risk (certs issued under the exploit will expire within 90 days – 
badness lasts for only 90 days).
I’ll address this point below, but given that LE will allow renewals of 
possibly bad validations and attackers generally operate for only short periods 
before moving on, I don’t see short-lived certificates providing a meaningful 
reduction in risk in this context.

Ease of which an alternate method exists and can be used (discussion of manual 
vs. automated methods): Not applicable to the methods 9-10 comparison since 
they are both automated and have the same characteristics.

Risk is applicable to shared service providers and an accepted risk mitigation 
is to block SNI negotiations that contain “.invalid”.  We also propose working 
with our customers on an account by account basis to assure they comply with 
the guidelines for use of method 9 until such time it’s re-affirmed, improved 
or deprecated from the BRs.

Perhaps I missed some other key points from that email.

I think these points may not have been fully appreciated. I don't see evidence 
from this mail, or from the ecosystem, that the OneClick method poses both the 
same risk and the same level of review as ACME's TLS-SNI, and I think we may 
fundamentally disagree about the risk profile of certificates with long 
validity periods, and both the detrimental effect they have on reasoning about 
ecosystem security AND the ways in which they mitigate the need to 'quickly 
re-enable this'


This assessment is based on a number of factors, but includes:
- The validity period of certificates issued via this method means that there 
is an unacceptably large window for certificates improperly issued to be used.
Risk should not be based so heavily on the validity period, which seems to be 
one of your consistent points.  The number of certificates issued along with 
the probability of a failure should both be used in the ecosystem risk 
computation.

We must disagree then. Risk is profoundly dependent on 

RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Doug Beattie via dev-security-policy


From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Friday, January 12, 2018 5:53 PM
To: Doug Beattie 
Cc: Wayne Thayer ; Gervase Markham ; 
r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment

(Wearing a Google Hat)

Doug,

Thanks for sharing additional details. On the basis of what you've shared so 
far, we do not believe this results in an appropriate level of security for the 
ecosystem, and request that you do not re-enable issuance at this time. This 
applies for any CA using methods similar to what you're using.

Broadly speaking, 
https://groups.google.com/d/msg/mozilla.dev.security.policy/RHsIInIjJA0/HACyY9tMAAAJ
 has shared the some of the principles we've used in this consideration. If 
there is additional details that GlobalSign can share, related to those 
principles, this would be invaluable.
Ryan,

I had a hard time digesting that email because it compared so many different 
items, many of which aren’t directly applicable to the OneClick vs. method 10 
that I want to focus on.  The key points I took away from your email are:

“weak” manual method comparison with methods 9 and 10 (not applicable to the 
methods 9-10 comparison since we’re not comparing them to manual methods).

Short-validity certificates represent more risk to the ecosystem (expiration) 
and less risk (certs issued under the exploit will expire within 90 days – 
badness lasts for only 90 days).
I’ll address this point below, but given that LE will allow renewals of 
possibly bad validations and attackers generally operate for only short periods 
before moving on, I don’t see short-lived certificates providing a meaningful 
reduction in risk in this context.

Ease of which an alternate method exists and can be used (discussion of manual 
vs. automated methods): Not applicable to the methods 9-10 comparison since 
they are both automated and have the same characteristics.

Risk is applicable to shared service providers and an accepted risk mitigation 
is to block SNI negotiations that contain “.invalid”.  We also propose working 
with our customers on an account by account basis to assure they comply with 
the guidelines for use of method 9 until such time it’s re-affirmed, improved 
or deprecated from the BRs.

Perhaps I missed some other key points from that email.


This assessment is based on a number of factors, but includes:
- The validity period of certificates issued via this method means that there 
is an unacceptably large window for certificates improperly issued to be used.
Risk should not be based so heavily on the validity period, which seems to be 
one of your consistent points.  The number of certificates issued along with 
the probability of a failure should both be used in the ecosystem risk 
computation.  Given that LE issues orders of magnitude more certificates to 
unique endpoints, I think the risk to the ecosystem at large from the 
GlobalSign issuance is lower than that from LE (when it comes to the topic of 
validity periods).

Risk = impact x probability:  With the number of LE endpoints (or anyone using 
Method 10 in high volumes), the probability of a successful attack is vastly 
higher due to the sheer number of servers, and the impact for both methods is 
the same (a certificate issued to a successful attacker).
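
(Purely to illustrate that computation with hypothetical numbers, not measured 
ones:)

    # Sketch: with equal per-certificate impact, relative risk scales with the
    # number of exposed endpoints times the per-endpoint attack probability.
    def relative_risk(endpoints: int, p_success: float, impact: float = 1.0) -> float:
        return endpoints * p_success * impact

    print(relative_risk(1_000_000, 1e-5))  # hypothetical high-volume issuer
    print(relative_risk(10_000, 1e-5))     # hypothetical low-volume issuer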

- Based on the available information of expiration times and the potential 
difficulty in renewing certificates using this method, the ecosystem risk of 
disallowing this method is much less.
How did you come to the conclusion that validity periods and renewal challenges 
substantially increase the risk of method 9?
1) While a GlobalSign certificate would be valid for a longer period than an LE 
certificate (typically 1 year, but up to 3), typical attacks are done, 
detected, and resolved within days or weeks.  I don’t believe that the validity 
period of certificates significantly increases the risk when exploited in the 
way described (the target site would typically notice it was compromised, and 
the certificate would be reported and revoked within days or weeks).  A more 
important factor is the number of certificates that may be issued, not their 
validity period.
2) While LE’s validity period is shorter, they re-use the validation for 
subsequent issuance, so the time between validation and expiration is longer 
than 90 days (I believe the domain validations can be cached for 60 days).  
This equates to 5 months vs. generally 12 months for GlobalSign.  And since LE 
will permit domain renewal of possibly bad authentications, the 5 months could 
average out to be substantially higher.
3) While the renewal process is currently not optimal, it’s been working for 5 
years without significant pushback from our customers.  I fail to see how this 
factors into risk in a meaningful way.  I may have missed your point.


- The 

RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-12 Thread Doug Beattie via dev-security-policy
Wayne,

We haven’t really investigated wildcard issuance yet, but we can.

Given the discussion so far, we’re planning to proceed with a whitelisting 
approach tomorrow, and we plan to end the use of Method 9 (schedule TBD), which 
follows Let’s Encrypt’s handling of Method 10.  If there are any additional 
security concerns that we need to be made aware of, please let me know and we 
can adjust the plan accordingly.

Doug


From: Wayne Thayer [mailto:wtha...@mozilla.com]
Sent: Friday, January 12, 2018 3:43 PM
To: Doug Beattie 
Cc: Gervase Markham ; r...@sleevi.com; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment

On Fri, Jan 12, 2018 at 11:21 AM, Doug Beattie wrote:

Normally a web hosting provider should not let you set SNI for a domain someone 
else is using, especially on that IP address.  I think this is where method 9 
deviates from method 10.

I agree, it seems somewhat less likely that a hosting provider would allow 
someone to create a site for abc.example.com if one 
already exists on the same server. Are you aware of any hosting providers that 
do allow this? Also, did you consider wildcard DNS records in your analysis of 
the vulnerability? (see below)

For method 10, you set up SNI on your server and add the acme.invalid string 
associated with your request/cert.  Since nobody owns that invalid domain, the 
provider probably doesn’t care that you set up that SNI name and are using a 
certificate for that fqdn on their shared IP address.

It's also possible that the only thing the hosting provider checks is if there 
is already an SNI entry for that FQDN, in which case sites that aren't 
configured for TLS would be vulnerable.

For method 10 we look explicitly for the FQDN in the cert and there is no 
special SNI reconfiguration required (the site is there before, during and 
after the validation and issuance).

Are you confusing method 9 and method 10 in this sentence and the one below?
Yes, Method 9.

  Do hosting providers allow you to set SNI for domains you don’t own on 
shared IP addresses?

I think that is exactly what has been found to be true.

  That sounds bad, but I defer to the experts here.  Method 9 does not require 
that.


Also, the ACME client actively supports the process of allowing this random 
acme.invalid value to be tied to the real FQDN and looks for requests based on 
that SNI name.  All of the OneClick plugins (which, btw, support similar 
client-side features like key generation, cert installation and Apache 
configuration) require that the FQDN being validated match the value in the 
certificate and the SNI server name.  Validation will fail when the SNI does 
not match what is expected.  The vast majority of OneClick endpoints are not 
vulnerable (yes, bad guys can modify the plugins and subvert the security we 
built in).  Yes, there is a vulnerability, but I think it’s a smaller scale 
than what’s in TLS-SNI-01.

Do you perform wildcard certificate validation with this method? If so, could 
someone create a site for evil.example.com on the same 
server as www.example.com and then get a cert for 
*.example.com by relying on a wildcard DNS record in the 
example.com zone (i.e. DNS responds to a query for 
evil.example.com with the IP for 
www.example.com)?
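
For what it's worth, a wildcard record is easy to probe for; a quick sketch (assuming the 
third-party dnspython package, and using the hypothetical names from the question above) 
would be:

    # Sketch: compare the A records for a made-up label and the known site; a
    # wildcard record in the zone will answer for both.
    import dns.resolver  # third-party "dnspython" package (assumed available)

    def a_records(name):
        try:
            return {r.address for r in dns.resolver.resolve(name, "A")}
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return set()

    target = a_records("www.example.com")
    probe = a_records("evil.example.com")   # label the attacker made up

    if probe and probe & target:
        print("A wildcard (or explicit) record maps the probe label onto the same IPs.")
    else:
        print("No overlap; the probe label does not resolve to the target's IPs.")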





RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-12 Thread Doug Beattie via dev-security-policy
Wayne and Gerv,

I’ll try to answer both of your questions here.

From: Wayne Thayer [mailto:wtha...@mozilla.com]
Sent: Friday, January 12, 2018 11:03 AM
To: Doug Beattie 
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment

Doug,

I have some questions:

c.   The hosting company must allow you to manually create and upload a 
CSR for a site you don’t own
Did you mean to say 'certificate' here instead of 'CSR'?
Yes, I meant to say certificate.


d.   The user must be able to trick the hosting provider to enable SNI for 
this domain and link it to the certificate they uploaded
Is 'trick' the right term here? Isn't this just a default configuration for 
vulnerable hosting providers?

From Gerv: Doug: what do you see as the exact differences between your setup 
and the TLS-SNI-01 configuration? It seems to me that both are vulnerable in 
the same circumstances (i.e., hosting provider has many users hosted on the 
same IP address, and users have the ability to upload certificates for 
arbitrary names without proving domain control).

Normally a web hosting provider should not let you set SNI for a domain someone 
else is using, especially on that IP address.  I think this is where method 9 
deviates from method 10.

For method 10, you set up SNI on your server and add the acme.invalid string 
associated with your request/cert.  Since nobody owns that invalid domain, the 
provider probably doesn’t care that you set up that SNI name and are using a 
certificate for that fqdn on their shared IP address.

For method 10 we look explicitly for the FQDN in the cert and there is no 
special SNI reconfiguration required (the site is there before, during and 
after the validation and issuance).  Do hosting providers allow you to set SNI 
for domains you don’t own on shared IP addresses?  That sounds bad, but I 
defer to the experts here.  Method 9 does not require that.

Also, the ACME client actively supports the process of allowing this random 
acme.invalid value to be tied to the real FQDN and looks for requests based on 
that SNI name.  All of the OneClick plugins (which, btw, support similar 
client-side features like key generation, cert installation and Apache 
configuration) require that the FQDN being validated match the value in the 
certificate and the SNI server name.  Validation will fail when the SNI does 
not match what is expected.  The vast majority of OneClick endpoints are not 
vulnerable (yes, bad guys can modify the plugins and subvert the security we 
built in).  Yes, there is a vulnerability, but I think it’s a smaller scale 
than what’s in TLS-SNI-01.


While the vulnerabilities and risks are different between ACME TLS-SNI-01 and 
OneClick,

Can you explain this statement? My impression is that the same vulnerability 
affects both methods.
Listed above.


we’d like to propose a risk mitigation approach similar to Let’s Encrypt with 
the use of a whitelist.  We’ll verify that certain providers have secure 
practices in place to prevent users from requesting certificates outside of 
their permitted domains and then whitelist them.
Let's Encrypt  has stated that this is a short- to medium-term mitigation. Is 
your plan to continue to use this method indefinitely? Or are you ultimately 
planning to fix or deprecate the method?

If we’re required to deprecate this because of similar security concerns, we 
can do that.

If this is acceptable, we’d like to resume issuance today if possible.
If my understanding is correct that the 3.2.2.4.9 vulnerability is essentially the same 
as the 3.2.2.4.10 vulnerability, then this seems reasonable to me, at least in 
the short term.

Thanks Wayne.
Wayne


RE: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-12 Thread Doug Beattie via dev-security-policy
Ryan,

I’d like to follow up on our investigation and provide the community with some 
more information about how we use Method 9.

We use a process that we refer to as OneClick to automate the domain validation 
and issuance of certificates by issuing a test certificate to an FQDN and then 
verifying that the certificate is present on that FQDN.  This is different from 
ACME method TLS-SNI-01, regardless of what some GlobalSign tweets may have 
mentioned.   Where dedicated IP addresses are used, we believe this method is 
safe and secure. So, I’ll focus this discussion on when there are shared IP 
addresses and SNI is used. This is how the OneClick validation works:

1)  Client requests a test certificate for a domain (only one FQDN)

2)  We issue a test certificate valid for 7 days

3)  Client places the test certificate on their server

4)  We connect to the server (DNS look-up and then use SNI to ask for the 
certificate)

5)  If the certificate is returned, the validation passes and we issue a 
production certificate which is downloaded and installed.  The issued 
certificate can have validity up to 39 months (soon 825 days)
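
As a rough illustration of the check in steps 4 and 5 (my own sketch using Python's 
standard ssl module, not our actual implementation; the expected fingerprint would come 
from the test certificate issued in step 2):

    # Sketch of the CA-side check: connect to the host with SNI set to the FQDN
    # being validated and compare the presented certificate with the test
    # certificate issued in step 2 (matched here by SHA-256 fingerprint).
    import hashlib
    import socket
    import ssl

    def presented_cert_fingerprint(fqdn, port=443):
        ctx = ssl.create_default_context()
        # The test certificate is not publicly trusted, so skip chain verification;
        # we only care about which certificate is served for this SNI name.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((fqdn, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=fqdn) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def validation_passes(fqdn, expected_test_cert_sha256):
        return presented_cert_fingerprint(fqdn) == expected_test_cert_sha256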

For shared IP address environments, it may be possible to receive a certificate 
for a domain you don’t actually control, but a number of things need to happen 
in order for this to be successful.  What can go wrong?

1)  A user could request a test certificate for a domain they don’t own 
within a shared IP address environment.  In order for this to be successful:

a.   User must know which other sites are hosted on the same IP address 
(the attack is limited to the set of customers on that shared IP address)

b.   For this case, I’m assuming that sites don’t have TLS enabled (if they 
did when we went to validate them, we’d receive their certificate – more on 
this below in case 2)

c.   The hosting company must allow you to manually create and upload a 
CSR for a site you don’t own

d.   The user must be able to trick the hosting provider to enable SNI for 
this domain and link it to the certificate they uploaded

2)  In the event that the target site does have TLS enabled and the 
attacker wants to override the account settings to provide this test 
certificate, they would need the provider to allow multiple accounts to claim 
the SNI traffic for that site. This scenario seems unlikely (and if they did, 
it would be generally insecure)

Our typical hosting customers have integrated certificate provisioning into 
their account/service set-up so a certificate can be provisioned quickly and 
easily.  Normally, there is no user involvement in key generation and the 
backend systems take care of this provisioning and would not allow test 
certificates to be uploaded other than for the purpose they are intended (to 
secure a specific site).  In this case, we don’t believe that there is a 
security issue since the system would be creating and validating 
domains/certificates as expected.

If users are able to initiate the domain validation process and if they are 
permitted to upload certificates for sites they don’t control, then there is a 
possibility that they could get a certificate for that domain.  We can’t 
control what every provider does, so this risk remains.

While the vulnerabilities and risks are different between ACME TLS-SNI-01 and 
OneClick, we’d like to propose a risk mitigation approach similar to Let’s 
Encrypt with the use of a whitelist.  We’ll verify that certain providers have 
secure practices in place to prevent users from requesting certificates outside 
of their permitted domains and then whitelist them.

If this is acceptable, we’d like to resume issuance today if possible.

Doug


From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Thursday, January 11, 2018 5:19 PM
To: Doug Beattie <doug.beat...@globalsign.com>
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Possible Issue with Domain Validation Method 9 in a shared hosting 
environment



On Thu, Jan 11, 2018 at 4:50 PM, Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

Based on reported issues with TLS-SNI-01, we started investigation of our 
systems late yesterday regarding the use of "Test Certificate" validation, BR 
section  3.2.2.4.9.

We found that this method may be vulnerable to some of the same underlying 
issues as ACME TLS-SNI-01, so we disabled it at 10:51 AM EST today, January 
11th.

While TLS-SNI-01 uses a host name like 773c7d.13445a.acme.invalid, GlobalSign 
uses the actual host name, 
www.example.com, which limits 
abuse, but we believe that the process might be vulnerable in some cases.

We're continuing to research this and will let you know what we find.

Doug

(Wearing a Chrome Hat, again)

Doug,

Thanks for the update. That seems consistent with C

Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-11 Thread Doug Beattie via dev-security-policy

Based on reported issues with TLS-SNI-01, we started investigation of our 
systems late yesterday regarding the use of "Test Certificate" validation, BR 
section  3.2.2.4.9.

We found that this method may be vulnerable to some of the same underlying 
issues as ACME TLS-SNI-01, so we disabled it at 10:51 AM EST today, January 
11th.

While TLS-SNI-01 uses a host name like 773c7d.13445a.acme.invalid, GlobalSign 
uses the actual host name, www.example.com which limits 
abuse, but we believe that the process might be vulnerable in some cases.

We're continuing to research this and will let you know what we find.

Doug


Doug Beattie
Vice President of Product Management
GlobalSign
Two International Drive | Suite 150 | Portsmouth, NH 03801
Email: doug.beat...@globalsign.com
www.globalsign.com



RE: Changes to CA Program - Q1 2018

2018-01-10 Thread Doug Beattie via dev-security-policy
Thanks Kathleen.  I only asked because you are trying to reduce the manpower 
for processing applications, and if a CA was already in the program there might 
not be a need to do as much.  But on the other hand, this forces us all to 
comply with that pesky set of questions in "CA/Forbidden or Problematic 
Practices" that we've ignored, and forces a formal review of the CPS.

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Kathleen Wilson via dev-security-policy
> Sent: Wednesday, January 10, 2018 1:45 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Changes to CA Program - Q1 2018
> 
> > Is the same process used for existing CAs that need to add a new root and
> new CAs applying for the first time?
> 
> Yes.
> 
>  From
> https://wiki.mozilla.org/CA/Application_Process#Process_Overview
> ""
> The same process is used to request:
> - Root certificate inclusion for all CAs, even if the CA already has root
> certificates included in Mozilla's root store
> - Turning on additional trust bits for an already-included root certificate
> - Enabling EV treatment for an already-included root certificate
> - Including a renewed version of an already-included root certificate ""
> 
> Kathleen


RE: Changes to CA Program - Q1 2018

2018-01-10 Thread Doug Beattie via dev-security-policy
Hi Kathleen,

Is the same process used for existing CAs that need to add a new root and new 
CAs applying for the first time?  

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Kathleen
> Wilson via dev-security-policy
> Sent: Tuesday, January 9, 2018 7:24 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Changes to CA Program - Q1 2018
> 
> All,
> 
> I would like to thank Aaron Wu for all of his help on our CA Program, and am
> sorry to say that his last day at Mozilla will be January 12. I have 
> appreciated all
> of Aaron’s work, and it has been a pleasure to work with him.
> 
> I will be re-assigning all of the root inclusion/update Bugzilla Bugs back to 
> me,
> and I will take back responsibility for the high-level verification of the CA-
> provided data for root inclusion/update requests.
> I will also take back responsibility for verifying CA annual updates, and we 
> will
> continue to work to improve that process and automation via the CCADB.
> 
> Wayne Thayer, Gerv Markham, and Ryan Sleevi have already taken
> responsibility for the CA Incident bugs
> (https://wiki.mozilla.org/CA/Incident_Dashboard). Thankfully, many of you
> members of the CA Community are helping with this effort.
> 
> Wayne and Devon O’Brien will take responsibility for ensuring that thorough
> reviews of CA root inclusion/update requests happen (see below), and Wayne
> will be responsible for the discussion phase of CA root inclusion/update
> requests. We greatly appreciate all of the input that you all provide during 
> the
> discussions of these requests, and are especially grateful for the thorough
> reviews that have been performed and documented, with special thanks to
> Ryan Sleevi, Andrew Whalley, and Devon O’Brien.
> 
> I think this is a good time for us to make some changes to Mozilla’s Root
> Inclusion Process to improve the effectiveness of the public discussion phase 
> by
> performing the detailed CP/CPS review prior to the public discussion. The 
> goal of
> this change is to focus the discussion period on gathering community input and
> to allow the process to continue when no objections are raised.
> 
> As such, I propose that we make the following changes to
> https://wiki.mozilla.org/CA/Application_Process#Process_Overview
> 
> ~~ PROPOSED CHANGES ~~
> 
> Step 1: A representative of the CA submits the request via Bugzilla and 
> provides
> the information a listed in https://wiki.mozilla.org/CA/Information_Checklist.
> 
> * Immediate change: None
> 
> * Future change: CAs will directly input their Information Checklist data 
> into the
> CCADB.
> All root inclusion/update requests will begin with a Bugzilla Bug, as we do 
> today.
> However, we will create a process by which CAs will be responsible for 
> entering
> and updating their own data in the CCADB for their request.
> 
> Step 2: A representative of Mozilla verifies the information provided by the 
> CA.
> 
> * Immediate change: None
> This will continue to be a high-level review to make sure that all of the 
> required
> data has been provided, per the Information Checklist, and that the required
> tests have been performed.
> 
> * Future change: Improvements/automation in CCADB for verifying this data.
> 
> Step 3: A representative of Mozilla adds the request to the queue for public
> discussion.
> 
> * Immediate change: Replace this step as follows.
> NEW Step 3: A representative of Mozilla or of the CA Community (as agreed by a
> Mozilla representative) thoroughly reviews the CA’s documents, and adds a
> Comment in the Bugzilla Bug about their findings.
> If the CA has everything in order, then the Comment will be that the request
> may proceed, and the request will be added to the queue for public discussion.
> Otherwise the Comment will list actions that the CA must complete. This may
> include, but is not limited to fixing certificate content, updating process,
> updating the CP/CPS, and obtaining new audit statements. The list of actions 
> will
> be categorized into one of the following 3 groups:
>--- 1: Must be completed before this request may proceed.
>--- 2: Must be completed before this request may be approved, but the 
> request
> may continue through the public discussion step in parallel with the CA
> completing their action items.
>--- 3: Must be completed before the CA’s next annual audit, but the request
> may continue through the rest of the approval/inclusion process.
> 
> Step 4: Anyone interested in the CA's application participates in discussions 
> of CA
> requests currently in discussion in the mozilla.dev.security.policy forum.
> 
> * Immediate Change: Delete this step from the wiki page, because it is a 
> general
> statement that does not belong here.
> 
> Step 5: When the application reaches the head of the queue, a representative 
> of
> Mozilla starts the public discussion 

RE: ComSign Root Renewal Request

2017-12-19 Thread Doug Beattie via dev-security-policy
Hi Wayne,

I noticed your comment on IDN validation. Is there a requirement that CAs 
establish an effective safeguard against homograph spoofing?

The reason I ask is that Let's Encrypt's CPS  says this: "Regarding 
Internationalized Domain Names, ISRG will have no objection so long as the 
domain is resolvable via DNS. It is the CA’s position that homoglyph spoofing 
should be dealt with by registrars, and Web browsers should have sensible 
policies for when to display the punycode versions of names."
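
As an aside on safeguards: a basic mixed-script check is straightforward to automate. A 
minimal sketch (using only Python's standard unicodedata module and crudely inferring the 
script from each character's Unicode name) looks like this:

    # Minimal sketch: flag a hostname label that mixes alphabetic characters
    # from more than one script (script inferred crudely from the Unicode name).
    import unicodedata

    def scripts_in_label(label):
        scripts = set()
        for ch in label:
            if ch.isalpha():
                name = unicodedata.name(ch, "")
                if name:
                    # "LATIN SMALL LETTER A" -> "LATIN", "CYRILLIC SMALL LETTER A" -> "CYRILLIC"
                    scripts.add(name.split()[0])
        return scripts

    def is_mixed_script(label):
        return len(scripts_in_label(label)) > 1

    print(is_mixed_script("example"))   # False: all LATIN
    print(is_mixed_script("exаmple"))   # True: the third character is a CYRILLIC 'а'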

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Wayne Thayer via dev-security-policy
> Sent: Tuesday, December 5, 2017 1:44 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: ComSign Root Renewal Request
> 
> > We can restart the discussion and please review their updated documents
> and comment in this discussion if you have further questions or concerns
> about this request.
> >
> After reviewing Comsign's updated CPS and related documents, I have the
> following comments:
> 
> == Good ==
> - CPS follows RFC 3647 and includes a table of revisions
> - CAA requirements are met
> - Audit reports cover a full year
> - Contact information for problem reporting is clearly stated in section 4.9.3
> - Aside from what I’ve listed below, all of the issues reported earlier by 
> Ryan
> Sleevi appear to have been addressed.
> 
> == Meh ==
> - Fingerprints in the audit reports are SHA-1; should be SHA-256
> - The CPS is located at https://www.comsign.co.il/CPS under the heading
> “CPS – in accordance with the Electronic Signature Law of Israel” but earlier
> discussions indicate that SSL certificates aren’t covered by this law, in 
> which
> case it’s not clear what the difference is between this CPS and the one listed
> under “CPS – for - Certificates that are not under the Electronic Signature
> Law of Israel” on the same page.
> - None of the subordinate CAs contain an EKU extension. [1]
> - Section 3.1.3 states that “Comsign will not issue an Electronic Certificate
> bearing a nickname of the Subscriber or one that does not state the name of
> the Subscriber” but section 7.1.2.3(iv) shows a DV certificate profile that
> doesn’t name the Subscriber. If the term ‘Electronic Certificate’ is intended
> to only apply to non-SSL certificates, then the definition should be 
> clarified.
> - The domain validation methods specified in CPS section 3.2.2.4 are nearly
> cut-and-paste from the BRs, so this section provides little information that
> can be used to evaluate Comsign’s domain validation practices. [2]
> - None of the four intermediates shown in the root hierarchy diagram [3] are
> disclosed in CCADB at this time (this isn’t required until the root is 
> included).
> There are (at least) 3 different “ComSign Organizational CA” subordinate CA
> certificates with the same public key that should be disclosed.
> 
> == Bad ==
> - The Hebrew version of the CPS at https://www.comsign.co.il/repository/ is
> version 3.1 while the English version on the same page is 4.0, so I assume
> that these are different documents. I see nothing in the English version
> stating that it takes precedence over the Hebrew version.
> - Section 1 of the CPS doesn’t clearly state that Comsign adheres to the
> **latest** version of the BRs, nor that the BRs take precedence over the CPS
> (BR 2.2).
> - The Creative Commons license is not listed in the CPS (Mozilla policy 3.3).
> - Audit reports don’t list any intermediates covered by the audit (Mozilla
> policy 3.1.4).
> - 3.2.2.4 states “All authentication  and  verification  procedures  in  this 
>  sub-
> section shall be  performed  either  directly by Comsign's personnel (RAs) or
> by Comsign's authorized representatives.”. There is no definition of who can
> be an ‘authorized representative’, but in this context it sounds like a
> Delegated Third Party, and CAs are not permitted to delegate domain
> validation (BR 1.3.2).
> - CPS 3.2.2.4 states: “For  issuing certificates to organizations requesting 
> SSL
> certificates,Comsign performs domain name owners verification to detect
> cases of homographic spoofing of IDNs. Comsign employs an automated or
> manual process that searches various ‘whois’ services to find the owner of a
> particular domain. A search failure result is flagged and the RA rejects the
> Certificate Request. Additionally, the RA rejects any domain name that
> visually appears to be made up of multiple scripts within one hostname
> label.” How does a WHOIS check or a human review effectively detect mixed
> scripts in a label? I don’t believe this is an effective safeguard against
> homograph spoofing.
> - The audit reports supplied cover the period from 2015-04-27 to present.
> This doesn’t appear to satisfy the requirement for an unbroken sequence of
> audit periods back to the issuance of the first certificate on 2014-10-26 
> (refer

Forbidden Practices: Subscriber key generation

2017-11-14 Thread Doug Beattie via dev-security-policy
Hi Gerv and Kathleen,

We're working on the Mozilla CA self-assessment checklist and referenced 
requirements you have placed on CAs.  On your page of Forbidden or Problematic 
Practices [1], you state that CAs must not generate private keys for signer 
certificates.
CAs must never generate the key pairs for signer or SSL certificates. CAs may 
only generate the key pairs for SMIME encryption certificates.

The Code signing standard [2], section 10.2.4 permits CAs to generate private 
keys for code signing certificates.  Specifically:
If the CA or any Delegated Third Party is generating the Private Key on behalf 
of the Subscriber where the Private Keys will be transported to the Subscriber 
outside of the Signing Service's secure infrastructure, then the entity 
generating the Private Key MUST either transport the Private Key in hardware 
with an activation method that is equivalent to 128 bits of encryption or 
encrypt the Private Key with at least 128 bits of encryption strength. Allowed 
methods include using a 128-bit AES key to wrap the private key or storing the 
key in a PKCS 12 file encrypted with a randomly generated password of more than 
16 characters containing uppercase letters, lowercase letters, numbers, and 
symbols for transport.
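
For reference, a minimal sketch of the second allowed transport option (a PKCS#12 file 
protected by a randomly generated password of more than 16 characters drawn from all four 
character classes) could look like the following; this assumes the Python cryptography 
library and is an illustration, not our production tooling:

    # Sketch: wrap a freshly generated subscriber key in a PKCS#12 file protected
    # by a random >16-character password containing upper/lower case letters,
    # digits, and symbols, per the quoted transport requirement.
    import secrets
    import string

    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.hazmat.primitives.serialization import (
        BestAvailableEncryption,
        pkcs12,
    )

    def random_password(length=20):
        classes = [string.ascii_uppercase, string.ascii_lowercase,
                   string.digits, string.punctuation]
        alphabet = "".join(classes)
        while True:
            pwd = "".join(secrets.choice(alphabet) for _ in range(length))
            if all(any(c in cls for c in pwd) for cls in classes):
                return pwd

    password = random_password()
    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    # In practice the issued code signing certificate (and chain) would be included;
    # they are omitted here to keep the sketch short.
    p12_bytes = pkcs12.serialize_key_and_certificates(
        name=b"subscriber key",
        key=key,
        cert=None,
        cas=None,
        encryption_algorithm=BestAvailableEncryption(password.encode()),
    )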


The question is, if we issue Code Signing certificates via P12 files in 
compliance with the Code Signing standard, are we out of compliance with the 
Mozilla policy?  How do you recommend we respond to this checklist question?

And the same for S/MIME and SSL certificates.  If CAs generate and then 
securely distribute the keys to the subscribers using similar methods, is that 
permitted provided we implement similar security, or does that practice need to 
immediately stop?  Your guidance in this area would be appreciated.

Side question: Is there a deadline when you expect to receive self-assessments 
from all CAs?  We've found that complying with the checklist means a major 
update to our CPS (among other things...), and I suspect most other CAs will 
also need a major update.

Doug

[1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
[2] 
https://casecurity.org/wp-content/uploads/2016/09/Minimum-requirements-for-the-Issuance-and-Management-of-code-signing.pdf


Doug Beattie
Product Management
GMO GlobalSign, Inc.
Portsmouth, NH USA



RE: Proposed policy change: require private pre-notification of 3rd party subCAs

2017-10-24 Thread Doug Beattie via dev-security-policy
Gerv,

I assume this applies equally to cross signing, but not to "Vanity" CAs that 
are set up and run by the CA on behalf of a customer.  Is that accurate?

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Tuesday, October 24, 2017 11:28 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Proposed policy change: require private pre-notification of 3rd party
> subCAs
> 
> One of the ways in which the number of organizations trusted to issue for
> the WebPKI is extended is by an existing CA bestowing the power of issuance
> upon a third party in the form of control of a non-technically-constrained
> subCA. Examples of such are the Google and Apple subCAs under GeoTrust,
> but there are others.
> 
> Adding new organizations to the list of those trusted is a big deal, and
> currently Mozilla has little pre-insight into and not much control over this
> process. CAs may choose to do this for whoever they like, the CA then bears
> primary responsibility for managing that customer, and as long as they are
> able to file clean audits, things proceed as normal.
> 
> Mozilla is considering a policy change whereby we require private pre-
> notification of such delegations (or renewals of such delegations).
> We would not undertake to necessarily do anything with such notifications,
> but lack of action should not be considered permissive in an estoppel sense.
> We would reserve the right to object either pre- or post-issuance of the
> intermediate. (Once the intermediate is issued, of course, the CA has seven
> days to put it in CCADB, and then the relationship would probably become
> known unless the fields in the cert were misleading.)
> 
> This may not be where we finally want to get to in terms of regulating such
> delegations of trust, but it is a small step which brings a bit more
> transparency while acknowledging the limited capacity of our team for
> additional tasks.
> 
> Comments are welcome.
> 
> Gerv


Issuing and using SHA-1 OCSP signing certificates

2017-10-03 Thread Doug Beattie via dev-security-policy

Hello Gerv,

The BRs are clear on the use of SHA-1, but I have a question about the Mozilla 
policy and how it relates to the use of SHA-1 OCSP signing certificates.

In December 2016 the Mozilla policy 2.3 was published and it didn't address the 
use of SHA-1 on OCSP signing certificates (for anyone that needs it, this page 
has links to the Mozilla CA policies: https://wiki.mozilla.org/CA:CertPolicy )

In February 2017, Mozilla Policy 2.4 was published which added stipulations for 
use of SHA-1 and that has been subsequently updated a few times this year to 
evolve to this:

5.1.1 SHA-1
CAs MAY sign SHA-1 hashes over end-entity certificates which chain up to roots 
in Mozilla's program only if all the following are true:
1. The end-entity certificate:
   - is not within the scope of the Baseline Requirements;
   - contains an EKU extension which does not contain either of the 
     id-kp-serverAuth or anyExtendedKeyUsage key purposes;
   - has at least 64 bits of entropy from a CSPRNG in the serial number.
2. The issuing certificate:
   - contains an EKU extension which does not contain either of the 
     id-kp-serverAuth or anyExtendedKeyUsage key purposes;
   - has a pathlen:0 constraint.
Point 2 does not apply if the certificate is an OCSP signing certificate 
manually issued directly from a root.
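
As a side note, the end-entity conditions above can be checked mechanically; a rough 
sketch (assuming the Python cryptography library and a PEM-encoded certificate on disk; 
BR scope is a policy question and is not checked here) is:

    # Sketch: check an end-entity certificate against the policy 5.1.1 conditions
    # that can be read from the certificate itself (EKU and serial number size).
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    FORBIDDEN_EKUS = {
        ExtendedKeyUsageOID.SERVER_AUTH,
        ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE,
    }

    def meets_end_entity_conditions(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return False  # the policy wording requires an EKU extension
        if any(oid in FORBIDDEN_EKUS for oid in eku):
            return False
        # Crude stand-in for "64 bits of entropy from a CSPRNG": the serial number
        # must at least be longer than 64 bits; true entropy cannot be measured here.
        return cert.serial_number.bit_length() > 64
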
In late 2016 we pre-generated a number of OCSP signing certificates for use in 
signing OCSP messages under our SSL CAs, but since we didn't know this same 
rule would be applied to non-BR certificates, we didn't pre-generate any OCSP 
signing certificates for those CAs.

The specific issue is that these client certificate CAs don't have the EKU 
extension even though we have no intent of issuing SSL certificates (they are 
WT audited and verified to not issue any SSL certificates per the BRs).

Is it permissible to continue issuing SHA-1 OCSP signing certificates for these 
existing legacy non-SSL CAs so we may continue providing revocation services 
using algorithms they support until all certificates under the CAs expire?  
This would be no later than the end of 2020.






RE: SHA-1 Usage in OCSP Responder

2017-08-29 Thread Doug Beattie via dev-security-policy
Hi Harshal,

Yes, we took the option of pre-generating some OCSP signing certificates in 
2016 for use in 2017 and 2018 vs. creating long validity OCSP signing 
certificates or moving to SHA-256.  Since the not-before dates are in 2017, when 
this would have been prohibited, we posted them to CT logs in 2016 so there 
was no confusion about when they were created.

Regarding your statement that they don’t appear to be revoked: OCSP signing 
certificates can’t be revoked, thus they will never show up as revoked.

While browsers don't trust SHA-1, there are some legacy applications that do, 
and they probably don’t support SHA-256 OCSP signed certificates.  When the 
validation rate of these SHA-1 SSL certificates falls acceptably low, we'll 
revoke the SHA-1 CA and turn off all of the related OCSP services, but until 
then we have a few OCSP signing certificates we can use to provide revocation 
services.

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Harshal Sheth via dev-security-policy
> Sent: Monday, August 28, 2017 5:52 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: SHA-1 Usage in OCSP Responder
> 
> Hello,
> 
> The following certificates are using the SHA-1 signature algorithm. They will
> all be valid for approximately three months in 2018, as none have been
> revoked thus far.
> 
> https://crt.sh/?id=62407589&opt=cablint
> https://crt.sh/?id=62416636&opt=cablint
> https://crt.sh/?id=62423790&opt=cablint
> https://crt.sh/?id=62423799&opt=cablint
> https://crt.sh/?id=62423818&opt=cablint
> https://crt.sh/?id=62423833&opt=cablint
> https://crt.sh/?id=62423686&opt=cablint
> https://crt.sh/?id=62423690&opt=cablint
> 
> Based on the information contained within the subject, they appear to be
> involved in OCSP responder signing. The BR states "CAs MUST NOT issue
> OCSP responder certificates using SHA‐1 (inferred)." by 2017-01-01. I am not
> sure if this applies, as all of these certificates were entered to CT logs on
> 2016-12-12.


RE: Responding to a misissuance

2017-08-18 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: Gervase Markham [mailto:g...@mozilla.org]
> Sent: Friday, August 18, 2017 9:42 AM
> To: Doug Beattie ; richmoor...@gmail.com;
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Responding to a misissuance
> 
> On 18/08/17 13:03, Doug Beattie wrote:
> > And if there is any guidance on processing misissuance reports for
> > Name constrained sub-CA vs. not name constrained, that would be
> > helpful also.
> 
> What parts of a response do you think might be different for name-
> constrained sub-CAs?

Technically constrained CAs need to follow the BRs, but the "damage" they can 
do is limited to the set of domains they are constrained to, so I had assumed a 
different process might result.  But, given your pointed question, I can’t 
actually come up with what would be different.

> Gerv


RE: Responding to a misissuance

2017-08-18 Thread Doug Beattie via dev-security-policy
And if there is any guidance on processing misissuance reports for Name 
constrained sub-CA vs. not name constrained, that would be helpful also.

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> richmoore44--- via dev-security-policy
> Sent: Friday, August 18, 2017 7:51 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Responding to a misissuance
> 
> Perhaps some explicit statements about sub-CAs would be helpful - detailing
> where responsibility lies and how a CA is required to deal with a sub-CA who
> is found to have misissued.


RE: DigiCert-Symantec Announcement

2017-08-03 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Jeremy Rowley via dev-security-policy
> Sent: Wednesday, August 2, 2017 10:54 PM
> To: Peter Kurrasch ; mozilla-dev-security-policy
> 
> Subject: RE: DigiCert-Symantec Announcement
> * Will there be other players in Symantec's SubCA plan or is DigiCert the only
> one?
> 
> 
> 
> [DC] Only DigiCert.

Jeremy - It's my understanding that as of December 1st every certificate issued 
by Symantec or a Managed CA must have the domains validated by the Managed CA 
(in this case only DigiCert). Is it feasible for DigiCert to revalidate every 
domain in use by Symantec Enterprise customers between now and then, and to keep 
up with all reissues/renewals and new Retail and Partner orders?  It seems like 
a huge challenge, especially given that you are not able to use Symantec 
employees or systems for this.  Maybe my assumptions are not accurate.



RE: Validation of Domains for secure email certificates

2017-07-20 Thread Doug Beattie via dev-security-policy
Hi Gerv,

OK, I see your point.  We'll come up with what we think are reasonable methods 
and document that in the CPS.  That should work better than Gerv's vacation 
thoughts!

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Thursday, July 20, 2017 10:58 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Validation of Domains for secure email certificates
> 
> Hi Doug,
> 
> On 20/07/17 13:04, Doug Beattie wrote:
> > Since there is no BR equivalent for issuance of S/MIME certificates (yet),
> this is all CAs have to go on.  I was curious if you agree that all of these
> methods meet the above requirement:
> 
> As you might imagine, this question puts me in a difficult position. If I say
> that a certain method does meet the requirement, I am making Mozilla policy
> up on the fly (and while on holiday ;-). If I say it does not, I would perhaps
> panic a load of CAs into having to update their issuance systems for fear of
> being dinged for misissuance.
> 
> It is unfortunate that there is no BR equivalent for email. However, I'm not
> convinced that the best way forward is for Mozilla to attempt to write one by
> degrees in response to questioning from CAs :-) I think the best thing for you
> to do is to look at your issuance processes and ask yourself whether you
> would be willing to stand up in a court of law and assert that they were
> "reasonable measures". When thinking about that, you could perhaps ask
> yourself whether you were doing any things which had been specifically
> outlawed or deprecated in an SSL context by the recent improvements in
> domain validation on that side of the house.
> 
> Gerv


Validation of Domains for secure email certificates

2017-07-20 Thread Doug Beattie via dev-security-policy
Gerv,



Mozilla Policy 2.5 states this:



For a certificate capable of being used for digitally signing or encrypting 
email messages, the CA takes reasonable measures to verify that the entity 
submitting the request controls the email account associated with the email 
address referenced in the certificate or has been authorized by the email 
account holder to act on the account holder's behalf.



Since there is no BR equivalent for issuance of S/MIME certificates (yet), this 
is all CAs have to go on.  I was curious if you agree that all of these methods 
meet the above requirement:



1.   On a per request basis (noting that some of these are overkill for 
issuance of a single certificate):

a.   3.2.2.4.1 Validating the Applicant as a Domain Contact

b.  3.2.2.4.2 Email, Fax, SMS, or Postal Mail to Domain Contact

c.   3.2.2.4.3 Phone Contact with Domain Contact

d.  3.2.2.4.4 Email to Constructed Address

e.  3.2.2.4.5 Domain Authorization Document

f.   3.2.2.4.6 Agreed-Upon Change to Website

g.   3.2.2.4.7 DNS Change

2.   On a per Domain basis.  One approval is sufficient to approve issuance 
for certificates in this domain space, since these represent administrator 
actions, provided subsequent requests are all performed via an authenticated 
channel to the CA. This approval would 
last until this customer notified the CA otherwise:

a.   3.2.2.4.1 Validating the Applicant as a Domain Contact

b.  3.2.2.4.2 Email, Fax, SMS, or Postal Mail to Domain Contact

c.   3.2.2.4.3 Phone Contact with Domain Contact

d.  3.2.2.4.4 Email to Constructed Address

e.  3.2.2.4.5 Domain Authorization Document

f.   3.2.2.4.6 Agreed-Upon Change to Website

g.   3.2.2.4.7 DNS Change

3.   Assuming issuance to a service provider (email hosting entity like 
Microsoft, Yahoo or Google) that hosts email for many domains, the CA verifies that 
the email domain's DNS MX record points to the hosting company, which indicates 
the company has delegated email control to the hosting company (see the sketch after this list).

4.   A DNS TXT record for the domain indicating approval to issue email 
certificates, or perhaps a CAA record with a new tag like issuesmime which 
permits the CA to issue certificates to this domain.  Details in CA CPS.

5.   A DNS TXT record for the domain indicating approval to issue email 
certificates, or perhaps a CAA record with a new tag like issuesmime which 
permits the email hosting company to issue certificates to this domain.  Details in CA CPS.
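
To illustrate item 3 (and a TXT-record variant of items 4 and 5), checks along these 
lines could be automated; this is only a sketch assuming the third-party dnspython 
package, and the record label and example provider suffix are my own placeholders:

    # Sketch: two automatable checks for S/MIME domain authorization.
    #  - MX delegation: does the domain's MX point at the hosting provider?
    #  - TXT approval: has the domain published an (illustrative) approval token?
    import dns.resolver  # third-party "dnspython" package (assumed available)

    def mx_points_to(domain, provider_suffix):
        answers = dns.resolver.resolve(domain, "MX")
        return any(str(r.exchange).rstrip(".").endswith(provider_suffix)
                   for r in answers)

    def has_smime_approval_txt(domain, expected_value):
        # "_smime-authorization" is a hypothetical label, not a standardized one.
        try:
            answers = dns.resolver.resolve(f"_smime-authorization.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(expected_value.encode() in b"".join(r.strings) for r in answers)

    # Example usage (illustrative provider suffix only):
    # mx_points_to("example.com", "mail.protection.outlook.com")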



Are there any other methods that you had in mind when writing this requirement? 
 Since issuance needs to be WT audited, there should be some level of 
"agreement" on acceptable validation methods.



Doug




RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-07-06 Thread Doug Beattie via dev-security-policy
Gerv,

Moving to a new CA within 6 months is certainly reasonable, but having enterprise 
customers also replace all certificates so the CA can be revoked within 6 
months might be a bit short, especially since several of those months are over 
the holidays.  Would you consider an approach where the CAs MUST not issue new 
certificates after 15 November (4 months) and the CAs SHALL be revoked no later 
than 15 April (9 months)?

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Thursday, June 22, 2017 8:50 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Root Store Policy 2.5: Call For Review and Phase-In Periods
> 
> On 21/06/17 16:58, Doug Beattie wrote:
> >> It's worth noting that if we had discovered this situation for SSL -
> >> that an unconstrained intermediate or uncontrolled power of issuance
> >> had been given to a company with no audit - we would be requiring the
> >> intermediate be revoked today, and probably taking further action as well.
> >
> > Agree
> 
> After consultation, I have decided to implement this requirement with a
> phase-in period of six months, for already-existing intermediates. So before
> 15th January 2018 (add a bit because of Christmas) these customers, and any
> others like them at any other CA, need to have audits (over at least 30 days 
> of
> operations), move to a name-constrained intermediate, or move to a
> managed service which does domain ownership validation on each domain
> added to the system. I expect these two intermediates to be revoked on or
> before 15th January 2018.
> 
> I realise this is not what you were hoping for, but it's not reasonable to 
> leave
> unconstrained intermediates in the hands of those not qualified to hold them
> for a further 2 years. I am allowing six months because, despite the weakness
> of the previous controls, you were in compliance with them and so it's not
> reasonable to ask for a super-quick move.
> 
> https://github.com/mozilla/pkipolicy/commit/44ae763f24d6509bb2311d339
> 50108ec5ec87082
> 
> (ignore the erroneously-added logfile).
> 
> > Are there any other CAs or mail vendors that have tested name constrained
> issuing CAs? If using name constrained CAs don’t work with some or all of the
> mail applications, it seems like we might as well recommend a change to the
> requirement.
> 
> I am open to hearing further evidence on this point.
> 
> Gerv


RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Wednesday, June 21, 2017 4:16 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

> In your view, having an EKU limiting the intermediate to just SSL or to just
> email makes it a technically constrained CA, and therefore not subject to
> audit under any root program?

The BRs clearly specify that SSL CAs without name constraints are required to follow 
the BRs and must be audited.

> I ask because Microsoft's policy at http://aka.ms/auditreqs says:
> 
> "Microsoft requires that every CA submit evidence of a Qualifying Audit on
> an annual basis for the CA and any non-limited root within its PKI chain."
> 
> In your view, are these two intermediates, which are constrained only by
> having the email and client auth EKUs, "limited" or "non-limited"?
>

Yes, I'd call these Secure mail CAs limited.

> >>> The other customer complies the prior words in the Mozilla policy
> >> regarding "Business Controls".
> 
> By implication, and reading your previous emails, are you saying that the 
> first
> customer does not comply with those words?

The first customer does comply with "Business Controls", in our view.  We have 
contracts that specify what they are allowed to do.

> > That is correct.  Enforcement is via contractual/business controls which is
> compliant with the current policy, as vague and weak as that is (and you've
> previously acknowledged).  Moving from this level of control to being
> audited or having name constraints will take more time that just a couple of
> months.
> 
> Leaving aside the requirements of other root programs, I agree this
> arrangement with the second customer is compliant with our current policy.
> For the new policy, they have 3 options: a) get an audit, b) use a name-
> constrained intermediate, or c) move to a hosted service which limits them
> to an approved set of domains.

Agree, there are options for both of these customers, and we're comfortable we 
can make this happen within 12 months, with another 12 months to keep the CA 
live for cert management, and then revoking the CA.

> Consistent with the principles outlined for Symantec regarding business
> continuity, the fact that GlobalSign does not have the capability to provide 
> c)
> should not be a factor in us determining how long we should allow this
> particular situation to continue.
> 
> It's worth noting that if we had discovered this situation for SSL - that an
> unconstrained intermediate or uncontrolled power of issuance had been
> given to a company with no audit - we would be requiring the intermediate
> be revoked today, and probably taking further action as well.

Agree

> > Two  further points:
> > 1) It’s not clear of email applications work with name constrained CAs.
> Some have reported email applications do not work, however, I have not
> tested this case.
> 
> That sounds like something you might want to investigate as a matter of
> urgency :-)

Are there any other CAs or mail vendors that have tested name constrained 
issuing CAs? If name constrained CAs don’t work with some or all of the 
mail applications, it seems like we might as well recommend a change to the 
requirement.






RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: Gervase Markham [mailto:g...@mozilla.org]
> Sent: Tuesday, June 20, 2017 9:12 PM
> To: Doug Beattie ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: Root Store Policy 2.5: Call For Review and Phase-In Periods
> > We have 2 customers that can issue Secure Email certificates that are
> > not technically constrained with name Constraints (the EKU is
> > constrained to Secure Email and ClientAuth).> One customer operates
> > the CA within their environment and has been doing so for several
> > years. Even though we've been encouraging them to move back to a Name
> > Constrained CA or a hosted service,
> 
> To be clear: this customer has the ability to issue email certificates for any
> email address on the planet, and they control their own intermediate in
> their own infrastructure?

Yes, but see qualifications below.

> Do they have audits of any sort?

There had not been any audit requirements for EKU technically constrained CAs, 
so no, there are no audits.

> What are their objections to moving to a hosted service?

They are integrated with a Microsoft CA, and moving to a different certificate 
delivery mechanism will require new integration work.  It will just take some 
time.

> > The other customer complies the prior words in the Mozilla policy
> regarding "Business Controls".  We have an agreement with them where we
> issue them Secure Email certificates from our Infrastructure for domains
> they host and are contractually bound to using those certificates only for the
> matching mail account.  Due to the number of different domains managed
> and fact they obtain certificates on behalf of the users, it's difficult to
> enforce validation of the email address.  We have plans to add features to
> this issuance platform that will resolve this, but not in the near term.
> 
> So even though this issuance is from your infrastructure, there are no
> restrictions on the domains they can request issuance from?

That is correct.  Enforcement is via contractual/business controls, which is 
compliant with the current policy, as vague and weak as that is (as you've 
previously acknowledged).  Moving from this level of control to being audited 
or having name constraints will take more time than just a couple of months.  

Two further points:
1) It’s not clear whether email applications work with name constrained CAs.  Some 
have reported that email applications do not work; however, I have not tested this 
case. 
2) It’s unlikely that a secure email cert which is not compliant with the NC 
extension would be identified by email applications as non-compliant.  Again, 
this is something I haven't tested either. Maybe some others have first-hand 
knowledge of how email applications work (or not) with NC CAs?

Both of the customers are large US-based companies with contractual obligations 
to only issue secure email certificates to domains which they own and control, 
so I hope we can come to an agreement on the phased plan.

> Gerv


RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-20 Thread Doug Beattie via dev-security-policy
Hi Gerv,

I'd like to recommend a phase-in of the requirement for technically constrained 
CAs that issue Secure Email certificates.

We have 2 customers that can issue Secure Email certificates that are not 
technically constrained with name Constraints (the EKU is constrained to Secure 
Email and ClientAuth).

We'd like to propose:
- All new CAs shall comply with Policy 2.5 on its effective date
- All existing CAs can continue to operate in issuance mode for one year
- All existing CAs may continue to operate in maintenance mode to support 
revocation services for up to 1 additional year (allow all 1-year certificates 
to expire), then the CA must be revoked.

One customer operates the CA within their environment and has been doing so for 
several years.  Even though we've been encouraging them to move back to a Name 
Constrained CA or a hosted service, we've been unable to set firm plans in 
place without a Root program deadline we can reference.  Due to the nature of 
the company and their acquisitions, they need to keep supporting new domains, so 
name constraints are difficult to keep up with.

The other customer complies with the prior words in the Mozilla policy regarding 
"Business Controls".  We have an agreement with them where we issue them Secure 
Email certificates from our Infrastructure for domains they host and are 
contractually bound to using those certificates only for the matching mail 
account.  Due to the number of different domains managed and the fact that they obtain 
certificates on behalf of the users, it's difficult to enforce validation of 
the email address.  We have plans to add features to this issuance platform 
that will resolve this, but not in the near term.

Doug


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Thursday, June 8, 2017 11:43 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Root Store Policy 2.5: Call For Review and Phase-In Periods
> 
> Hi everyone,
> 
> I've made the last change I currently intend to make for version 2.5 of
> Mozilla's Root Store Policy. The last task before shipping it is to assess
> whether any of the changes require a phase-in period, i.e. for some reason,
> they can't be applicable immediately.
> 
> CAs and others are requested to comment, with rationale, as to why
> particular changes will need a phase-in period and what period they are
> proposing as appropriate. This is also an opportunity for interested parties 
> to
> do a general final review.
> 
> I hope to ship the policy immediately after the CAB Forum meeting in Berlin,
> which is happening from the 20th to the 22nd of June.
> 
> You can see the differences between version 2.4.1 and version 2.5 here in
> diff format (click "Files Changed" and then "Load Diff"):
> https://github.com/mozilla/pkipolicy/compare/2.4.1...master
> 
> or here in a more rich format:
> https://github.com/mozilla/pkipolicy/compare/2.4.1...master?short_path=b
> 7447c8
> (click "Files Changed" and scroll down).
> 
> The CCADB Policy changes are trivial and can be ignored.
> 
> Here is my summary of what's changed that's significant (with section
> numbers in brackets), although you should not rely on it to be complete:
> 
> 
> 1) Certificates with anyEKU have been added to the scope. (1.1)
> 
> 2) CAs are required to "follow industry best practice for securing their
> networks, for example by conforming to the CAB Forum Network Security
> Guidelines or a successor document". (2.1)
> 
> 3) Accounts which perform "Registration Authority or Delegated Third Party
> functions" are now also required to have multi-factor auth. (2.1)
> 
> 4) CAs are required to follow, but not required to contribute to,
> mozilla.dev.security.policy. (2.1)
> 
> 5) CAs are required to use only the 10 Blessed Methods for domain
> validation. (2.2) This requirement has already had a deadline set for it in 
> the
> most recent CA Communication; that deadline is 21st July 2017.
> 
> 6) WebTrust BR audits must now use version 2.2 or later of the audit criteria.
> (3.1.1)
> 
> 7) The ETSI audit criteria requirements have been updated to be accurate.
> (3.1.2.2). ETSI TS 102 042 and TS 101 456 audits will only be accepted for
> audit periods ending in July 2017 or earlier.
> 
> 8) There are further requirements on the information that needs to be
> contained in publicly-available audit information. (3.1.3)
> 
> 9) Mozilla now requires that auditors be qualified in the scheme they are
> using, unless agreed in writing beforehand. (3.2)
> 
> 10) When CAs do their BR-required yearly update of their CPs and CPSes,
> they MUST indicate this by incrementing the version number and adding a
> dated changelog entry. (3.3)
> 
> 11) The Mozilla CCADB Policy has been merged into the Root Store Policy,
> but the requirements have not changed. (4.1/4.2)
> 
> 12) CAs are required at all times 

RE: Policy 2.5 Proposal: Clarify requirement for multi-factor auth

2017-06-01 Thread Doug Beattie via dev-security-policy

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Thursday, June 1, 2017 8:46 AM
To: Gervase Markham 
Cc: Doug Beattie ; mozilla-dev-security-policy 

Subject: Re: Policy 2.5 Proposal: Clarify requirement for multi-factor auth

> > "enforce multi-factor authentication for all accounts capable of 
> > directly causing certificate issuance"
> >
> > to
> >
> > "enforce multi-factor authentication for all accounts capable of 
> > causing certificate issuance or performing validation functions"

> > Does anyone have suggestions as to how we can word this provision to
> > make this distinction?

> Do you think it's a valid reading to suggest that the e-mail confirmation 
> link is, in fact, performing a validation function?

> That is, I can appreciate the tortured reading that results in this - and I 
> can appreciate the desire 
> for greater clarity - but I'm not sure it's worth expending significant 
> effort on. In the worst case, a 
> CA who reads it like Doug suggests will result in a more secure system 
> (vis-a-vis the discussion in 
> the CA/Browser Forum regarding email scanning devices that 'click' on links).

Yea, I didn’t really think that 2-factor auth needed to apply to this, but I 
don’t see how it applies to any of the automated domain validation processes 
either.  When a user requests the validation of a domain, we'll provide them a 
Random Number via email, or one that they need to incorporate into a DNS record, 
a Test Certificate, or a web site change.  Once the email is received or the 
random value is in place, the CA checks for this (maybe upon being asked by the 
partner or applicant).  I don’t see any place in these processes where 2-factor 
auth is applicable.  Even in a managed account where an authenticated Applicant 
says "I want to add this domain to my account" and we provide a Random Number 
for them to use to demonstrate control, I don’t see a need for 2-factor auth 
for that "account".

I understand the increased importance placed on domain validation, but I'm not 
clear how we map this requirement to domain validation at all, except perhaps 
when it's done manually via WHOIS by an RA (and RAs already need 2-factor auth).

If this is the case, then in what cases do you see 2-factor auth being a 
requirement where it was not before?



RE: Policy 2.5 Proposal: Clarify requirement for multi-factor auth

2017-06-01 Thread Doug Beattie via dev-security-policy

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Gervase
> Markham via dev-security-policy
> Sent: Wednesday, May 31, 2017 7:24 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Policy 2.5 Proposal: Clarify requirement for multi-factor auth
> >
> > "enforce multi-factor authentication for all accounts capable of
> > directly causing certificate issuance"
> >
> > to
> >
> > "enforce multi-factor authentication for all accounts capable of
> > causing certificate issuance or performing validation functions"

Can you give some examples of validation functions that need to be protected by 
multi-factor authentication?  There are some that I don't think can be done 
using multi-factor authentication, such as domain validation via email (the 
link to confirm the domain can't be protected by multi-factor auth).


> Implemented as specced.
> 
> Gerv
> 


RE: Email sub-CAs

2017-05-18 Thread Doug Beattie via dev-security-policy
Hi Gerv,

I'm still looking for audit guidance on subordinate CAs that have an EKU of Server 
Auth and/or Secure Email along with name constraints.  Do these need to be 
audited?

I'm looking at this:  
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md

Section 1.1, item #2 implies yes, that these CAs are in scope of this policy 
and thus must be audited - correct me if I'm wrong in assuming that being in 
scope of the policy means they need to be audited.

Sections 5.3.1 and 5.3.2 imply that no audit is needed.

Prior versions of the policy (at least 1.3 and before) did not require audits 
for technically constrained CAs like the ones referenced above.  Further, it 
used to be OK if the "Name Constraints" applied to Secure Mail CAs were imposed 
via contractual methods rather than as a technical NC in the CA certificate.  We 
have one remaining customer with a CA like this and we're not sure how the new 
policy requirements apply to this existing customer.  Your guidance is appreciated.

Doug
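
(Illustration: a rough sketch, assuming a recent version of the third-party 
Python "cryptography" package, of how one might inspect a subordinate CA 
certificate for the EKU and Name Constraints extensions discussed above.  It 
only reports what the extensions contain, it is not a policy-compliance test, 
and the file name is hypothetical.)

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def summarize_sub_ca(pem_bytes: bytes) -> dict:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        summary = {"server_auth": False, "email_protection": False, "name_constrained": False}
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
            summary["server_auth"] = ExtendedKeyUsageOID.SERVER_AUTH in eku
            summary["email_protection"] = ExtendedKeyUsageOID.EMAIL_PROTECTION in eku
        except x509.ExtensionNotFound:
            pass  # no EKU extension: usage is not technically constrained
        try:
            nc = cert.extensions.get_extension_for_class(x509.NameConstraints).value
            summary["name_constrained"] = bool(nc.permitted_subtrees or nc.excluded_subtrees)
        except x509.ExtensionNotFound:
            pass  # "business controls" arrangements leave this extension absent
        return summary

    # with open("sub-ca.pem", "rb") as f:
    #     print(summarize_sub_ca(f.read()))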


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Doug
> Beattie via dev-security-policy
> Sent: Monday, May 8, 2017 12:47 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: RE: Email sub-CAs
> 
> Hi Gerv,
> 
> I wanted to get the latest Mozilla thoughts on the audit requirements for
> TCSCs based on the discussion we started last month.  I understand the BR
> requirement if the CA can issue server auth certificates, this was discussed
> here:
> 
> https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/ZMUjQ6
> xHrDA/ySofsF_PAgAJ
> 
> For TCSCs that can issue secure email certs, what are the requirements in the
> new policy, 2.4?  I think they were excluded from the audit requirement before,
> but in the latest Mozilla policy these CAs need to have a WebTrust for CAs audit
> even if they are Name Constrained.
> 
> So here my questions:
> 
> Was this an intentional change?
> 
> Is the same true for TCSCs that can issue server auth certificates (do even NC
> CAs need a WebTrust for CAs audit)?
> 
> Are previously issued TCSCs exempt, if not, when would the audit period for
> them start?
> 
> Do these CAs need to be publicly disclosed?
> 
> Related tickets:
>https://github.com/mozilla/pkipolicy/issues/36
> 
>https://github.com/mozilla/pkipolicy/issues/69
> 
> 
> 
> 
> 
> 
> 
> 
> 
> > -Original Message-
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> > douglas.beattie--- via dev-security-policy
> > Sent: Thursday, April 13, 2017 12:33 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: Email sub-CAs
> >
> > On Thursday, April 13, 2017 at 10:49:17 AM UTC-4, Gervase Markham
> wrote:
> > > On 13/04/17 14:23, Doug Beattie wrote:
> > > > In 3.2 the term Technically Constrained is not defined to be any
> > > > different than the BRs (or perhaps even less restrictive).
> > >
> > > You mean 2.3, right?
> >
> > Yes, 2.3.
> >
> > > I would say Inclusion section, bullet 9 gives the definition of
> > > technically constrained. For email certs, because of the bug
> > > described in issue #69, it basically just says that it has to have
> > > the id-kp-emailProtection EKU. It should say more, but it doesn't.
> > > That's problematic, because just having an EKU isn't really a
> > > technical constraint in the "TCSC" sense.
> > >
> > > > In 3.2
> > > > this is all I can find regarding CAs that are capable of signing
> > > > secure email certificates, section 9: "If the certificate includes
> > > > the id-kp-emailProtection extended key usage, then all end-entity
> > > > certificates MUST only include e-mail addresses or mailboxes that
> > > > the issuing CA has confirmed (via technical and/or business
> > > > controls) that the subordinate CA is authorized to use."
> > > >
> > > > There is no statement back to scope or corresponding audits.  Were
> > > > secure email capable CAs supposed to be disclosed and audited to
> > > > Mozilla under 2.3?
> > >
> > > If they did not include id-kp-serverAuth, I would not have faulted a
> > > CA for not disclosing them if they met the exclusion criteria for
> > > email certs as written.
> >
> > OK.
> >
> > > > and how it applies to Secure email, I don't see how TCSCs with
> > > > secure email EKU fall within the sco

RE: [FORGED] Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Doug Beattie via dev-security-policy
Thanks Rob and Ryan for pointing that out.  Will web servers need to send 
down a group of cross certs and then let the client use the ones it needs in 
order to chain up to a root in its local trust store, since the web server 
might not know which roots the client has?

From: Alex Gaynor [mailto:agay...@mozilla.com]
Sent: Tuesday, May 16, 2017 10:31 AM
To: Rob Stradling <rob.stradl...@comodo.com>
Cc: Doug Beattie <doug.beat...@globalsign.com>; r...@sleevi.com; Peter Gutmann 
<pgut...@cs.auckland.ac.nz>; Gervase Markham <g...@mozilla.org>; Nick Lamb 
<tialara...@gmail.com>; MozPol <mozilla-dev-security-pol...@lists.mozilla.org>; 
Cory Benfield <c...@lukasa.co.uk>
Subject: Re: [FORGED] Re: Configuring Graduated Trust for Non-Browser 
Consumption

While the internet is moderately good at handling a single cross-sign (modulo 
the challenges we had with 1024-bit root deprecation due to a bug in OpenSSL's 
path building -- now fixed), as we extend the chains, it seems evident to me 
that server operators are unlikely to configure their servers to serve a chain 
which works on all clients -- the likely result is clients will need AIA 
chasing. Most (all?) non-browsers do not implement AIA chasing. This isn't an 
objection, just a flag and a potential action item on the "non-browser" side of 
this.
Alex
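
(Illustration: a minimal sketch of the AIA chasing described above, assuming a 
recent version of the Python "cryptography" package.  It reads the CA Issuers 
URL from the leaf's Authority Information Access extension and fetches the 
missing intermediate, assuming the response is a single DER-encoded certificate, 
which is the common case.)

    from typing import Optional
    import urllib.request
    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID

    def fetch_missing_intermediate(leaf: x509.Certificate) -> Optional[x509.Certificate]:
        try:
            aia = leaf.extensions.get_extension_for_class(
                x509.AuthorityInformationAccess).value
        except x509.ExtensionNotFound:
            return None  # no AIA extension: nothing to chase
        for desc in aia:
            if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
                with urllib.request.urlopen(desc.access_location.value, timeout=10) as resp:
                    return x509.load_der_x509_certificate(resp.read())
        return None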

On Tue, May 16, 2017 at 10:27 AM, Rob Stradling 
<rob.stradl...@comodo.com> wrote:
On 16/05/17 14:45, Doug Beattie via dev-security-policy wrote:
Ryan,

If you look at the wide range of user agents accessing 
google.com today you'd see many legacy applications and 
older versions of browsers and custom browsers built from variants of the 
commercial browsers.  By the time all/most users upgraded to new browsers, it 
would be time to change the roots out again and this will impact the ability 
for web site operators to enable TLS for all visitors.

Before we can implement a short Root usage policy we'd need to convince all 
browsers to follow a process for rapid updates of root stores.

Hi Doug.

Imagine a root cert A, valid for a short duration; and a root cert B, valid for 
a long duration.

Under Ryan's proposal, Mozilla would put A (but not B) in NSS, whereas other 
less agile root stores would contain B.

A doesn't have to be in every root store, because B can cross-certify A.  
(Let's call the cross-certificate A').

A widely compatible cert chain would therefore look like this:
B -> A' -> Intermediate -> Leaf

If you're already cross-certifying from an older root C, then an even more 
widely compatible cert chain would look like this:
C -> B' -> A' -> Intermediate -> Leaf
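
For a server operator, the practical upshot is a chain file concatenated in 
leaf-first order, so that every class of client can find a path up to a root it 
trusts.  A small sketch (the file names are hypothetical):

    from pathlib import Path

    CHAIN_ORDER = [
        "leaf.pem",          # end-entity certificate
        "intermediate.pem",  # issuing CA
        "a_prime.pem",       # A', i.e. A cross-certified by B
        "b_prime.pem",       # B', i.e. B cross-certified by the older root C
    ]

    def build_chain_file(cert_dir: str, out_path: str) -> None:
        pem = "".join(Path(cert_dir, name).read_text() for name in CHAIN_ORDER)
        Path(out_path).write_text(pem)

    # build_chain_file("certs", "fullchain.pem")  # point the TLS server at fullchain.pem

Clients that already trust A can stop at A'; clients that only trust B (or C) 
keep walking up the extra cross-certificates.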


--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online



RE: [FORGED] Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Doug Beattie via dev-security-policy
Ryan,

If you look at the wide range of user agents accessing google.com today you'd 
see many legacy applications and older versions of browsers and custom browsers 
built from variants of the commercial browsers.  By the time all/most users 
upgraded to new browsers, it would be time to change the roots out again and 
this will impact the ability for web site operators to enable TLS for all 
visitors.  

Before we can implement a short Root usage policy we'd need to convince all 
browsers to follow a process for rapid updates of root stores.  GlobalSign 
visitors use Nokia, NetFront, SeaMonkey, Amazon Silk, Blackberry and others, 
and I assume ecommerce sites have even more legacy user agents (at percentages 
they cannot ignore).  We'd need to be sure that these vendors change how they 
manage their root stores before we move to short-use Roots (maybe some of them 
rely on the underlying operating system already, so not all of these are an 
issue).  

Mobile devices will perhaps be the most challenging as their OS support 
lifetime is relatively short but users hang onto them for longer.  For example, 
Android 4.1 and 4.2 account for about 7% of the Android market share:
  https://developer.android.com/about/dashboards/index.html 
Android browser has about 6% market share:
  https://www.netmarketshare.com/browser-market-share.aspx?qprid=1=1 
but Android 4.1 and 4.2 are no longer supported:

https://www.extremetech.com/mobile/197346-google-throws-nearly-a-billion-android-users-under-the-bus-refuses-to-patch-os-vulnerability
 

Sure, 6% of 7% is only around 0.4%, so in itself not a huge driver, but add up the 
other unsupported Android versions and those of all other mobile devices and 
this will become more meaningful.
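
(Back-of-the-envelope version of that arithmetic; the inputs are just the rough 
market-share figures quoted above, not authoritative data.)

    android_browser_share = 0.06   # Android browser share of all browsers
    unsupported_os_share = 0.07    # Android 4.1/4.2 share of Android devices
    print(f"{android_browser_share * unsupported_os_share:.2%}")  # ~0.42%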

Under your proposal, how would you see mobile device manufacturers as well as 
OS and browser vendors supporting the requirement to keep updating root stores 
even after the end of support?

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Ryan
> Sleevi via dev-security-policy
> Sent: Tuesday, May 16, 2017 7:48 AM
> To: Peter Gutmann 
> Cc: Nick Lamb ; MozPol <mozilla-dev-security-pol...@lists.mozilla.org>; Alex
> Gaynor ; Cory Benfield ; Ryan Sleevi ; Gervase Markham
> 
> Subject: Re: [FORGED] Re: Configuring Graduated Trust for Non-Browser
> Consumption
> 
> On Tue, May 16, 2017 at 7:19 AM, Peter Gutmann
> 
> wrote:
> 
> > Ryan Sleevi  writes:
> >
> > >Mozilla updates every six to eight weeks. And that works. That's all
> > >that matters for this discussion.
> >
> > Do all the world's CAs know this?
> 
> 
> Does that matter, if all participants in Mozilla's Root Program _could_ know
> this?
> 
> I can't help but feel you're raising concerns that aren't relevant. Perhaps I
> didn't explain sufficiently why even if a client takes a single copy of the
> Mozilla root store and *never updates after that*, things could still work for
> 20+ years for those clients, and with reduced risk for Mozilla users. I feel 
> like
> if that point had been clearer, perhaps you would understand why it could fly.
> 
> Perhaps you're confused and think the roots themselves have 5 year validity
> (e.g. notBefore to notAfter). That's also not what I said - I said bound the 
> time
> for inclusion of that root in Mozilla products. They're very different things,
> you see, and the latter doesn't prescribe the validity period of the root -
> precisely so it can support such broken 'legacy' cases without requiring too
> much of the world to adopt modern security practices.
> 
> That said, Mozilla's mission to ensure the Internet is a global public 
> resource
> that is safe would, among other things, entitle them to push this particular
> vision, since it would help make users safe. However, I merely proposed a
> smaller step in that.
> 
> Perhaps you could re-read the proposal with a fresh perspective, as I hope it
> might become clearer how it could address many of these issues. As it relates
> to the topic at hand, by limiting the lifetime of the roots themselves, it
> reduces the risk/need to impose additional constraints - there are fewer legacy
> roots, they're bounded in validity period, and things move onward towards
> distrust much easier. That does seem a net-positive for the ecosystem.


RE: April CA Communication: Results

2017-05-15 Thread Doug Beattie via dev-security-policy

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Kurt
> Roeckx via dev-security-policy
> Sent: Monday, May 15, 2017 9:41 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: April CA Communication: Results
> 
> On 2017-05-15 15:38, Kurt Roeckx wrote:
> > On 2017-05-15 15:26, Gervase Markham wrote:
> >> On 15/05/17 14:19, Doug Beattie wrote:
> >>> https://support.globalsign.com/customer/portal/articles/1216323
> >>
> >> Thanks, Doug. There's no date on that doc - are you able to say when
> >> it was written?
> >
> > It says: Last Updated: Aug 26, 2013 11:24AM EDT
> 
> And the http reply itself says:
> Last-Modified: Thu, 11 Dec 2014 14:02:12 GMT

Yes, it is certainly a bit dated.  Outlook 2013 and 2016 are not listed along 
with more recent versions of iMail and Thunderbird.

> 
> Kurt
> 
> 


RE: April CA Communication: Results

2017-05-15 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Monday, May 15, 2017 9:16 AM
> To: Jakob Bohm ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: April CA Communication: Results
> 
> On 15/05/17 14:07, Jakob Bohm wrote:
> > 1. Microsoft's e-mail clients were very late to accept stronger
> >   signature algorithms for e-mails (including e-mails sent by users of
> >   non-problematic e-mail clients).  I believe Globalsign's page about
> >   SHA256-transition for customers provides a nice overview.
> 
> Link? Any docs about research people have done into the prevalence of
> SHA-1 for S/MIME, and which clients don't support SHA-256, would be very
> useful.
>
https://support.globalsign.com/customer/portal/articles/1216323

> Gerv


RE: Policy 2.5 Proposal: Indicate direction of travel with respect to permitted domain validation methods

2017-05-09 Thread Doug Beattie via dev-security-policy
Gerv,

I'm not clear on what you mean by CAs must use only the 10 Blessed Methods by 
21st July 2017.  

I'm assuming this is the latest official draft:

https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md

Specifically, does this mean all new domain validations must conform to the 10 
methods, or that all new issuance must be based on domains validated with only 
these 10 methods?

Doug


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Tuesday, May 9, 2017 12:58 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Policy 2.5 Proposal: Indicate direction of travel with respect to
> permitted domain validation methods
> 
> On 01/05/17 10:13, Gervase Markham wrote:
> > This would involve replacing section 2.2.3 of the policy with:
> 
> 
> 
> Incorporated as drafted. CAs should take note (from this change and from the
> CA Communication) that Mozilla's policy is moving in the direction of 
> requiring
> the 10 Blessed Methods alone, and that the deadline is 21st July 2017.
> 
> Gerv

