Re: Misissued/Suspicious Symantec Certificates

2017-02-24 Thread Peter Bowen via dev-security-policy
"auditing standards that underlie the accepted audit schemes found in
Section 8.1"

This is obviously an error in the BRs.  That language points to
Section 8.1, but there is no list of schemes in 8.1.

8.4 does have a list of schemes:
1. WebTrust for Certification Authorities v2.0;
2. A national scheme that audits conformance to ETSI TS 102 042/ ETSI
EN 319 411-1;
3. A scheme that audits conformance to ISO 21188:2006; or
4. If a Government CA is required by its Certificate Policy to use a
different internal audit scheme, it MAY use such scheme provided that
the audit either (a) encompasses all requirements of one of the above
schemes or (b) consists of comparable criteria that are available for
public review.

1. is slightly problematic as no scheme exists by that name, but "Trust
Service Principles and Criteria for Certification Authorities Version
2.0" does exist, which is what I assume is meant.

If we assume that audit scheme, my understanding is that the "auditing
standards that underlie" the scheme is one of the following (which one
depends on the date of the audit and the licensure of the auditor):
(1) AT sec. 101 from SSAE No. 10/11/12 (AICPA)
(2) AT-C sec. 205 from SSAE No. 18 (AICPA)
(3) Section 5025 (CPA Canada)
(4) CSAE 3000 (CPA Canada)
(5) ISAE 3000 (IFAC)

There should be no lack of auditing standards that underlie the Trust
Service Principles and Criteria for Certification Authorities Version
2.0 audit scheme found in section 8.4.

Thanks,
Peter

On Thu, Feb 23, 2017 at 1:19 AM, Ryan Sleevi via dev-security-policy wrote:
> I'm sorry, I'm still a little confused about how to understand your
> response.
>
> I can't tell if you're discussing in the abstract - as in, you don't know
> how a Delegated Third Party would ever meet that definition, due to the
> absence of "auditing standards that underlie the accepted audit schemes
> found in Section 8.1" therefore you don't think what Symantec has been
> doing since 2010 is permitted by the Baseline Requirements at all, and they
> should have stopped five years ago. That implies you read through the links
> provided by Symantec so far of the four RAs that they assert were operating
> as Delegated Third Parties (which is the only way this could have been
> acceptable to begin with), but that you disagree that they're evidence of
> compliance with the restrictions on the Delegated Third Parties. Is this
> what you meant?
>
> Or if you mean something concrete - that is, that you literally are
> interested and curious, without any subtext. In that case, it implies you
> may not have checked the links in the message you were replying to yet, and
> this was more of an aside, rather than a direct question. If this was the
> case, do you think it's reasonably clear the question I'd asked of Steve?
>
> Or am I completely off the mark? I just want to make sure that the question
> I asked is clear and unambiguous, as well as making sure I'm not
> misunderstanding anything.
>
> On Wed, Feb 22, 2017 at 9:21 PM, Jeremy Rowley wrote:
>
>> I am aware of the requirements but am interested in seeing how an RA that
>> doesn't have their own issuing cert structures the audit report. It
>> probably looks the same, but I've never seen one (unless that is the case
>> with the previously provided audit report).
>>
>> On Feb 22, 2017, at 8:48 PM, Ryan Sleevi  wrote:
>>
>>
>>
>> On Wed, Feb 22, 2017 at 8:36 PM, Jeremy Rowley wrote:
>>
>>> Webtrust doesn't have audit criteria for RAs so the audit request may
>>> produce interesting results. Or are you asking for the audit statement
>>> covering the root that the RA used to issue from? That should all be public
>>> in the Mozilla database at this point.
>>
>>
>> Hi Jeremy,
>>
>> I believe the previous questions already addressed this, but perhaps I've
>> misunderstood your concern.
>>
>> "Webtrust doesn't have audit criteria for RAs so the audit request may
>> produce interesting results."
>>
>> Quoting the Baseline Requirements, v.1.4.2 [1] , Section 8.4
>> "If the CA is not using one of the above procedures and the Delegated
>> Third Party is not an Enterprise RA, then the
>> CA SHALL obtain an audit report, issued under the auditing standards that
>> underlie the accepted audit schemes
>> found in Section 8.1, that provides an opinion whether the Delegated Third
>> Party’s performance complies with
>> either the Delegated Third Party’s practice statement or the CA’s
>> Certificate Policy and/or Certification Practice
>> Statement. If the opinion is that the Delegated Third Party does not
>> comply, then the CA SHALL not allow the
>> Delegated Third Party to continue performing delegated functions. "
>>
>> Note that Symantec has already provided this data for the four RA partners
>> involved for the 2015/2016 (varies) period, at [2]. Specifically, see the
>> response to Question 5 at [3].
>>
>> "Or are 

Re: Let's Encrypt appears to issue a certificate for a domain that doesn't exist

2017-02-22 Thread Peter Bowen via dev-security-policy
On Wed, Feb 22, 2017 at 7:35 PM, Richard Wang via dev-security-policy wrote:
> As I understand, the BR 4.2.1 required this:
>
> “The CA SHALL develop, maintain, and implement documented procedures that 
> identify and require additional verification activity for High Risk 
> Certificate Requests prior to the Certificate’s approval, as reasonably 
> necessary to ensure that such requests are properly verified under these 
> Requirements.”
>
> Please clarify this request, thanks.

Richard,

That sentence does not say that domain names including "apple",
"google", or any other string are High Risk Certificate Requests
(HRCR).   I could define HRCR as being those that contain domain names
that contain mixed script characters as defined in UTS #39 section
5.1.  "apple-id-2.com" is not mixed script so it is not a HRCR based
on this definition.
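
As an illustration only (my own sketch, not anything Let's Encrypt or the
BRs require), a rough per-label mixed-script check along the lines of the
definition above could look like this in Python.  It approximates a
character's script by the first word of its Unicode name; a real
implementation would use the Unicode Script property and the UTS #39 tables:

import unicodedata

def rough_scripts(label):
    # Approximate the set of scripts in one label; hyphen and digits are
    # treated as script-neutral (roughly how Common/Inherited characters
    # behave in UTS #39).
    scripts = set()
    for ch in label:
        if ch in "-0123456789":
            continue
        name = unicodedata.name(ch, "UNKNOWN")
        scripts.add(name.split()[0])   # e.g. "LATIN", "CYRILLIC", "GREEK"
    return scripts

def is_mixed_script(domain):
    # Decode any xn-- (ACE) labels to their Unicode form, then flag the
    # domain if any single label mixes more than one script.
    for label in domain.rstrip(".").split("."):
        if label.startswith("xn--"):
            label = label[4:].encode("ascii").decode("punycode")
        if len(rough_scripts(label)) > 1:
            return True
    return False

print(is_mixed_script("apple-id-2.com"))   # False - all Latin
print(is_mixed_script("xn--a-mmb.net"))    # True  - Latin "a" plus Greek mu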

Thanks,
Peter


Re: Google Trust Services roots

2017-02-22 Thread Peter Bowen via dev-security-policy
Ryan,

Both Gerv and I posted follow up questions almost two weeks ago.  I
know you have been busy with CT days.  When do you expect to have
answers available?

Thanks,
Peter

On Fri, Feb 10, 2017 at 2:01 AM, Gervase Markham via dev-security-policy wrote:
> Hi Ryan,
>
> On 09/02/17 19:55, Ryan Hurst wrote:
>> - The EV OID associated with this permission is associated with GlobalSign 
>> and not Google and,
>
> Which EV OID are you referring to, precisely?
>
>> - GlobalSign is an active member in good standing with the respective root 
>> programs and,
>> - Google will not be issuing EV SSL certificates,
>> - Google will operate these roots under their own CP/CPS’s and associated 
>> OIDs,
>> - Google issuing a certificate with the GlobalSign OIDs would qualify as 
>> mis-issuance.
>>
>> That it would be acceptable for us not to undergo an EV SSL audit,
>> and that GlobalSign could keep the EV right for the associated subordinate
>> CA for the remaining validity period to facilitate the transition
>> (assuming continued compliance).
>
> Just to be clear: GlobalSign continues to operate at least one subCA
> under a root which Google has purchased, and that root is EV-enabled,
> and the sub-CA continues to do EV issuance (and is audited as such) but
> the root is no longer EV audited, and nor is the rest of the hierarchy?
>
>> When looking at this issue it is important to keep in mind Google has
>> operated a WebTrust audited subordinate CA under Symantec for quite a
>> long time. As part of this they have maintained audited facilities,
>> and procedures appropriate for offline key management, CRL/OCSP
>> generation, and other related activities. Based on this, and the
>> timing of both our audit, and key transfer all parties concluded it
>> would be sufficient to have the auditors provide an opinion letter
>> about the transfer of the keys and have those keys covered by the
>> subsequent annual audit.
>
> Can you tell us what the planned start/end dates for the audit period of
> that annual audit are/will be?
>
> Are the Google roots and/or the GlobalSign-acquired roots currently
> issuing EE certificates? Were they issuing certificates between 11th
> August 2016 and 8th December 2016?
>
> Gerv


Re: Let's Encrypt appears to issue a certificate for a domain that doesn't exist

2017-02-22 Thread Peter Bowen via dev-security-policy
Rather than what you suggest, I think the following could be high risk (see the short sketch after the list):

свiтова-пошта.info.
xn--i--7kcbgb7fdinng1f.info.

гooms17139.link.
xn--ooms17139-uzh.link.

мцяsц.lol.
xn--s-wtb4ab7b.lol.

сaентология.net.
xn--a-ftbfnnlhbvn2m.net.

aμ.net.
xn--a-mmb.net.

μc.net.
xn--c-lmb.net.

ωe.net.
xn--e-cnb.net.

аgentur.net.
xn--gentur-2nf.net.

ωomega.net.
xn--omega-gee.net.

phantфm.net.
xn--phantm-7rf.net.

रोले盧स.net.
xn--t2bes3ds6749n.net.
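
The A-label (xn--) forms above pair with the Unicode forms via the punycode
encoding; here is a tiny Python sketch of my own reproducing a couple of the
pairs (assuming the Latin "i" and Greek mu exactly as shown in the list):

# Reproduce a couple of the Unicode/A-label pairs above with the punycode
# codec; the "xn--" prefix marks an ACE label per IDNA.
for u_label, tld in [("свiтова-пошта", "info"), ("aμ", "net")]:
    ace = "xn--" + u_label.encode("punycode").decode("ascii")
    print(f"{u_label}.{tld}. -> {ace}.{tld}.")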



On Wed, Feb 22, 2017 at 7:55 PM, Richard Wang  wrote:
> I don't agree with this.
> If "apple", "google", or "Microsoft" is not a high-risk domain, then I don't 
> know which domains are high risk; maybe only "github".
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: Peter Bowen [mailto:pzbo...@gmail.com]
> Sent: Thursday, February 23, 2017 11:53 AM
> To: Richard Wang 
> Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org; Tony
> Zhaocheng Tan ; Gervase Markham 
> Subject: Re: Let's Encrypt appears to issue a certificate for a domain that
> doesn't exist
>
> On Wed, Feb 22, 2017 at 7:35 PM, Richard Wang via dev-security-policy
>  wrote:
>> As I understand, the BR 4.2.1 required this:
>>
>> “The CA SHALL develop, maintain, and implement documented procedures that
>> identify and require additional verification activity for High Risk
>> Certificate Requests prior to the Certificate’s approval, as reasonably
>> necessary to ensure that such requests are properly verified under these
>> Requirements.”
>>
>> Please clarify this request, thanks.
>
> Richard,
>
> That sentence does not say that domain names including "apple", "google", or
> any other string are High Risk Certificate Requests
> (HRCR).   I could define HRCR as being those that contain domain names
> that contain mixed script characters as defined in UTS #39 section 5.1.
> "apple-id-2.com" is not mixed script so it is not a HRCR based on this
> definition.
>
> Thanks,
> Peter


Re: Google Trust Services roots

2017-02-09 Thread Peter Bowen via dev-security-policy
Ryan,

Thank you for the quick reply.  My comments and questions are inline.

On Thu, Feb 9, 2017 at 11:55 AM, Ryan Hurst via dev-security-policy wrote:
> Peter,
>
> Thank you very much for your, as always, thorough review.
>
> Let me start by saying I agree there is an opportunity for improving the 
> policies around how key transfers such your recent transfer and Google's are 
> handled.
>
> It is my hope we can, through our respective recent experiences performing 
> such transfers, help Mozilla revise their policy to provide better guidance 
> for such cases in the future.

Where I see opportunities below, I'm marking them with "Policy Suggestion".

> As for your specific questions, my responses follow:
>
> pzb: First, according to the GTS website, there is no audit using the 
> WebTrust Principles and Criteria for Certification Authorities – Extended 
> Validation SSL.  However the two roots in the Mozilla CA  program currently 
> are EV enabled and at least one subordinate CA under them is issuing EV 
> certificates.
>
> rmh: Prior to our final stage of the acquisition we contacted both Mozilla 
> and Microsoft about this particular situation.
>
> At this time, we do not have any interest in the issuance of EV SSL 
> certificates, however GlobalSign does. Based on our conversations with 
> representatives from both organizations we were told that since:
> - The EV OID associated with this permission is associated with GlobalSign 
> and not Google and,
> - GlobalSign is an active member in good standing with the respective root 
> programs and,
> - Google will not be issuing EV SSL certificates,
> - Google will operate these roots under their own CP/CPS’s and associated 
> OIDs,
> - Google issuing a certificate with the GlobalSign OIDs would qualify as 
> mis-issuance.

Mozilla recognizes 2.23.140.1.1 as being a valid OID for EV
certificates for all EV-enabled roots
(https://bugzilla.mozilla.org/show_bug.cgi?id=1243923).

1) Do you consider it mis-issuance for Google to issue a certificate
containing the 2.23.140.1.1 OID?

Policy Suggestion A) When transferring a root that is EV enabled, it
should be clearly stated whether the recipient of the root is also
receiving the EV policy OID(s).

> That it would be acceptable for us not to undergo an EV SSL audit, and that 
> GlobalSign could keep the EV right for the associated subordinate CA for the 
> remaining validity period to facilitate the transition (assuming continued 
> compliance).
>
> As a former manager of a root program, this seems an appropriate position to 
> take. And as one who has been involved in several such root transfers I think 
> differences in intended use are common enough that they should be explicitly 
> handled by policy.
>
> pzb:  Second, according to the GTS CPS v1.3, "Between 11 August 2016 and 8 
> December 2016, Google Inc. operated these Roots according to Google Inc.’s 
> Certification Practice Statement."  The basic WebTrust for CA and WebTrust BR 
> audit reports for the period ending September 30, 2016 explicitly state they 
> are for "subordinate CA under external Root CA" and do not list the roots in 
> the GTS CPS at all.
>
> rmh: I believe this will be answered by my responses to your third and fourth 
> observations.

It was not.

2) Will Google be publishing an audit report for a period starting 11
August 2016 that covers the transferred GS roots?  If so, can you
estimate the end of period date?

> pzb: Third, the Google CPS says Google took control of these roots on August 
> 11, 2016.  The Mozilla CA policy explicitly says that a bug report must be 
> filed to request to be included in the Mozilla CA program.  It was not until 
> December 22, 2016 that Google requested inclusion as a CA in Mozilla's CA 
> program (https://bugzilla.mozilla.org/show_bug.cgi?id=1325532).  This does 
> not appear to align with Mozilla requirements for public disclosure.
>
> rmh: As has been mentioned, timing for a transaction like this is very 
> complicated. The process of identifying candidates that could meet our needs 
> took many months with several false starts with different organizations. That 
> said, prior to beginning this process we proactively reached out to both 
> Microsoft and Mozilla root programs to let them know we were beginning the 
> process. Once it looked like we would be able to come to an agreement with 
> GlobalSign we again reached out and notified both programs of our intent to 
> secure these specific keys. Then once the transaction was signed we again 
> notified the root programs that the deal was done.
>
> As you know the process to ensure a secure, audited and well structured key 
> migration is also non-trivial. Once this migration was performed we again 
> notified both root programs.
>
> Our intention was to notify all parties, including the public, shortly after 
> the transfer but it took some time for our auditors, for reasons unrelated to 
> our audit, 

Re: Google Trust Services roots

2017-02-09 Thread Peter Bowen via dev-security-policy
On Thu, Feb 9, 2017 at 9:56 PM, Richard Wang via dev-security-policy wrote:
> I can't see this sentence
>  " I highlight this because we (the community) see the occasional remark like 
> this; most commonly, it's directed at organizations in particular countries, 
> on the basis that we shouldn't trust "them" because they're in one of "those 
> countries". However, the Mozilla policy is structured to provide objective 
> criteria and assessments of that."
> has any relationship with this topic, please advise, thanks.

I think the point is that issues raised about CAs need to be grounded
in fact.  "Universal Trust Services wrote Y in their CPS but did not
do Y, as demonstrated by Z" is something that can be evaluated
factually.  "UTS wrote Y in their CPS but might not be doing Y",
without any evidence, is not something that can be evaluated factually.

I agree with Ryan; we tend to see the second type of issue come up
more often with CAs from certain countries.  This sort of non-data
driven issue is not appropriate to raise.  Instead show what should
have happened and what did not.

Thanks,
Peter


Re: Public disclosure of root ownership transfers (was: Re: Google Trust Services roots)

2017-02-09 Thread Peter Bowen via dev-security-policy
On Thu, Feb 9, 2017 at 7:41 AM, Gervase Markham via dev-security-policy wrote:
> On 09/02/17 14:32, Gijs Kruitbosch wrote:
>> Would Mozilla's root program consider changing this requirement so that
>> it *does* require public disclosure, or are there convincing reasons not
>> to? At first glance, it seems like 'guiding' CAs towards additional
>> transparency in the CA market/industry/... might be helpful to people
>> outside Mozilla's root program itself.
>
> This would require CAs and companies to disclose major product plans
> publicly well in advance of the time they would normally disclose them.
> I won't dig out the dates myself, or check the emails, but if you look
> for the following dates from publicly-available information:
>
> A) The date Google took control of the GlobalSign roots
> B) The date Google publicly announced GTS
>
> you will see there's quite a big delta. If you assume Google told
> Mozilla about event A) before it happened, then you can see the problem.

Google says they took control on 11 August 2016.

On 19 October 2016, Google publicly stated "Update on the Google PKI:
new roots were generated and web trust audits were performed, the
report on this is forthcoming,"
(https://cabforum.org/2016/10/19/2016-10-19-20-f2f-meeting-39-minutes/#Google)

Google didn't file with Mozilla until 22 December 2016, and I suspect
that was only because I happened to run across their staged website:
https://twitter.com/pzb/status/812103974220222465

I appreciate the business realities of pre-disclosure, but that is not
the case here.  There is no excuse for having taken control of
existing roots and not disclosing it once they had disclosed that they
intended to become a root CA.

Thanks,
Peter


Re: Public disclosure of root ownership transfers (was: Re: Google Trust Services roots)

2017-02-13 Thread Peter Bowen via dev-security-policy
On Mon, Feb 13, 2017 at 4:14 AM, Gervase Markham via dev-security-policy wrote:
> On 10/02/17 12:40, Inigo Barreira wrote:
>> I see many "should" in this link. Basically those indicating "should notify
>> Mozilla" and "should follow the physical relocation section".
>
> It may be that this document does need redoing in formal policy
> language. In the mean time, anyone uncertain about its meaning should
> ask Kathleen.

Gerv,

In addition to updating it to follow formal policy language, I would
suggest adding it directly to the policy.  As it stands today there
are 79 pages in the wiki starting with "CA:".  It simply isn't
possible to know which ones are effectively part of the policy and
which are other random things.  I realize building and maintaining
long policies is time consuming, but it is important to be clear.  CAs
are routinely called out for unclear or incomplete CPs and CPSes, so I
think it is fair to ask Browsers to have clear and complete trust
store policies.

Thanks,
Peter


Re: (Possible) DigiCert EV Violation

2017-02-27 Thread Peter Bowen via dev-security-policy
On Mon, Feb 27, 2017 at 1:41 PM, Ryan Sleevi via dev-security-policy wrote:
> The EV Guidelines require certificates issued for .onion include the 
> cabf-TorServiceDescriptor extension, defined in the EV Guidelines, as part of 
> these certificates. This is required by Section 11.7.1 (1) of the EV 
> Guidelines, reading: "For a Certificate issued to a Domain Name with .onion 
> in the right-most label of the Domain Name, the CA SHALL confirm that, as of 
> the date the Certificate was issued, the Applicant’s control over the .onion 
> Domain Name in accordance with Appendix F. "

I don't see anything requiring this extension to be included in
certificates. (hat tip to Andrew Ayer for noticing the lack of
requirement)

> The intent was to prevent collisions in .onion names due to the use of a 
> truncated SHA-1 hash collision with distinct keys, as that would allow two 
> parties to respond on the hidden service address using the same key.
>
> Last week, a SHA-1 collision was announced.
>
> In examining the .onion precertificates DigiCert has logged, available at 
> https://crt.sh/?q=facebookcorewwwi.onion , I could not find a single one 
> bearing this extension, which suggests these are all misissued certificates 
> and violations of the EV Guidelines.
>
> During a past discussion of precertificates, at 
> https://groups.google.com/d/msg/mozilla.dev.security.policy/siHOXppxE9k/0PLPVcktBAAJ
>  ,  Mozilla did not discuss whether or not it considered precertificates 
> misissuance, although one module peer (hi! it's me!) suggested they were.
>
> This interpretation seems consistent with the discussions during the WoSign 
> issues, as some of those certificates examined were logged precertificates.
>
> Have I missed something in examining these certificates? Am I correct that 
> they appear to be violations?


Re: Notice of Intent to Deprecate and Remove: Trust in Symantec-issued Certificates

2017-03-23 Thread Peter Bowen via dev-security-policy
On Thu, Mar 23, 2017 at 12:54 PM, Jakob Bohm via dev-security-policy wrote:
>
> The above message (and one by Symantec) were posted to the
> mozilla.dev.security.policy newsgroup prior to becoming aware of
> Google's decision to move the discussion to its own private mailing
> list and procedures.  I would encourage everyone concerned to keep the
> public Mozilla newsgroup copied on all messages in this discussion,
> which seems to have extremely wide repercussions.

Jakob,

Maybe I missed it, but I don't think that Mozilla is involved in this
proposal.  The blink-dev mailing list has an open membership policy
and public anonymously accessible archives.  Obviously anyone can copy
m.d.s.p, as it doesn't have posting restrictions, but it seems
reasonable that Chrom(ium|e)-only discussions would be on a chromium
mailing list.

Thanks,
Peter


Re: Symantec: Next Steps

2017-03-24 Thread Peter Bowen via dev-security-policy
On Fri, Mar 24, 2017 at 9:06 AM, Ryan Sleevi via dev-security-policy wrote:
> (Wearing an individual hat)
>
> On Fri, Mar 24, 2017 at 10:35 AM, Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>>
>> One common scenario that a new wording should allow is a "fully
>> outsourced CA", where all the technical activities, including CA
>> private key storage, CRL/OCSP distribution, ensuring policy compliance
>> and domain/IP validation are outsourced to a single entity which is
>> fully audited as a CA operator, while the entity nominally responsible
>> for the CA acts more like an RA or reseller.
>>
>
> Can you highlight why you believe this is a common scenario? During that
> same conversation, only one party was identified that meets such a
> definition, and CAs otherwise did not highlight any of their customers or
> awareness of others.

To be fair, we didn't discuss this scenario.

The scenario raised was that CompanyX outsources _all_ CA activities
to CompanyY except for approving CPS changes, writing the management
assertion, and marketing the certificates.

What I believe Jakob is describing is one step less, where CompanyY
does some of the validation steps.

Thanks,
Peter


Re: Next CA Communication

2017-03-17 Thread Peter Bowen via dev-security-policy
On Fri, Mar 17, 2017 at 8:30 AM, Gervase Markham via dev-security-policy wrote:
> The URL for the draft of the next CA Communication is here:
> https://mozilla-mozillacaprogram.cs54.force.com/Communications/CACommunicationSurveySample?CACommunicationId=a050S00G3K2
>
> Note that this is a _draft_ - the form parts will not work, and no CA
> should attempt to use this URL or the form to send in any responses.
>
> Please provide feedback in this group on whether the questions and
> actions are clear, whether they are appropriate, and whether anything
> else should or could be added.
>
> Some of these items are effectively new policy (such as the requirement
> to rev CP/CPS version numbers at least yearly); if they survive
> unscathed, we will update the policy doc to include them.

"+ Friendly name and SHA1 or SHA256 fingerprint of each root
certificate and intermediate certificate covered by the audit scope "

I think you unintentionally have this backwards.  Certificates in
scope for audits are those _issued_ by the CA being audited.  So if
ExampleCA issues a CA certificate naming ContosoCA as the subject,
then that certificate is in scope for ExampleCA but not for
ContosoCA.

I would also avoid the term "Friendly name" unless you define it, as
that is the name of a Microsoft trust list attribute which does not
necessarily match anything in the certificate; for example, one entry
in the Microsoft list is for a CA with a distinguished name of
"CN=Class 1 Primary CA,O=Certplus,C=FR" and a friendly name of "WoSign
1999".

I would replace this with the following (a sketch of computing the first item appears after the list):

+ Distinguished name and SHA-256 hash of the SubjectPublicKeyInfo of
each certificate issuer covered by the audit scope
+ Clear indication of which in-scope certificate issuers are Root CAs
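
To illustrate the first item, here is a rough Python sketch (my own, using
the third-party "cryptography" package; "issuer.pem" is just a placeholder
file name) that prints the subject distinguished name and the SHA-256 hash
of the SubjectPublicKeyInfo for a CA certificate:

import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Load a root or intermediate CA certificate (placeholder file name).
with open("issuer.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The SubjectPublicKeyInfo is hashed over its DER encoding.
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
print("DN:          ", cert.subject.rfc4514_string())
print("SPKI SHA-256:", hashlib.sha256(spki).hexdigest())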

Thanks,
Peter


Re: Next CA Communication

2017-03-20 Thread Peter Bowen via dev-security-policy
On Mon, Mar 20, 2017 at 4:52 PM Rob Stradling <rob.stradl...@comodo.com> wrote:

> On 20/03/17 17:07, Peter Bowen via dev-security-policy wrote:
> 
> >> B) Your attention is drawn to the cablint and x509lint tools, which you
> >> may wish to incorporate into your certificate issuance pipeline to get
> >> early warning of circumstances when you are issuing certificates which
> >> do not meet the Baseline Requirements (cablint) or X509 standards
> >> (x509lint).
> >>
> >> https://github.com/kroeckx/x509lint
> >> XXX What's the URL for cablint?
> >
> > https://cabforum.org/pipermail/public/2017-March/010144.html
>
> Hi Peter.  I presume you meant...
>
> https://github.com/awslabs/certlint



Yes. Not sure how that archive URL got there.

>
> --
> Rob Stradling
> Senior Research & Development Scientist
> COMODO - Creating Trust Online
>


Re: Next CA Communication

2017-03-20 Thread Peter Bowen via dev-security-policy
On Mon, Mar 20, 2017 at 10:43 AM, Jeremy Rowley via dev-security-policy wrote:
> A) Does your CA have an RA program, whereby non-Affiliates of your company
> perform aspects of certificate validation on your behalf under contract? If
> so, please tell us about the program, including:
>
> * How many companies are involved
> * Which of those companies do their own domain ownership validation
> * What measures you have in place to ensure this work is done to an
> appropriate standard
> [JR] This should be limited to SSL certs IMO. With client certs, you're going
> to get a lot more RAs that likely function under the standard or legal
> framework defining the certificate type.

What if the question were scoped to "RAs that can do independent
validation of domain control" or some such?  E.g., a classic "Enterprise
RA" setup, where the CA's in-house RA confirms control of a public
suffix and then allows the Enterprise to internally confirm
certificate requests under the validated domain, should not be counted
here.


Re: Researcher Says API Flaw Exposed Symantec Certificates, Including Private Keys

2017-03-31 Thread Peter Bowen via dev-security-policy

> On Mar 31, 2017, at 6:01 PM, Daniel Baxter via dev-security-policy wrote:
> 
> On Saturday, April 1, 2017 at 6:27:27 AM UTC+11, Jakob Bohm wrote:
>> Oh, come on, if that's her job title, that's her job title, and at any
>> CA, that is actually an important job that /someone/ should have.
> 
> I meant the content of her reply, not her job title.

Hi there,

I’m Peter.  I am a Principal Security Engineer at Amazon and Vice President of 
Amazon Trust Services (the certification authority).  I’ve been participating 
in this group for several years, mostly in an individual capacity. One of the 
parts of the Mozilla CA program I value most is its open and transparent 
communication.  To that end, I appreciate the direct and clear email from Tarah 
to the group.

As the mozilla.dev.security.policy name indicates, this is a netnews (i.e. 
Usenet) group which is gatewayed to an email list and Google Group.  It is part 
of the Mozilla Forums, which are primarily a place for technical discussions.  
I would hope that anyone looking for a formal statement from any organization 
whose employees participate in this group would reach out to the appropriate PR 
team.

I'm glad to see posts that help keep a high signal-to-noise ratio in this 
forum, as long as they fall within the etiquette rules 
(https://www.mozilla.org/en-US/about/forums/etiquette/).

Thanks,
Peter


Re: Symantec Issues List

2017-03-31 Thread Peter Bowen via dev-security-policy
On Fri, Mar 31, 2017 at 4:38 PM, Ryan Sleevi via dev-security-policy wrote:
> On Fri, Mar 31, 2017 at 2:39 PM, Gervase Markham wrote:
>
>> As we continue to consider how best to react to the most recent incident
>> involving Symantec, and given that there is a question of whether it is
>> part of a pattern of behaviour, it seemed best to produce an issues list
>> as we did with WoSign. This means Symantec has proper opportunity to
>> respond to issues raised and those responses can be documented in one
>> place and the clearest overall picture can be seen by the community.
>
> (Wearing a Google hat)

(Wearing my normal personal non-work hat)

> In March of last year, Symantec provided us a list of five sub-CAs which
> they termed GeoRoots: Apple, Google, Unicredit, Aetna, NTT Docomo - and
> requested they be excluded from this requirement. We asked Symantec to
> provide current audit statements for each of these CAs.
>
> Symantec indicated that the audit information for these sub-CAs would be
> added to the CCADB. This was on 3/29.
>
> We then followed-up with Symantec, again, because as of 6/28, there were
> several outstanding issues with Symantec's disclosures:
>
> - Apple IST CA 3 was not covered by the general set of Apple audits
> - No audit information for Aetna was provided, and its CPS was dated in 2011
> - No audit information for Unicredit was provided
> - NTT Docomo (DKHS and DKHS CA2) were disclosed as being part of Symantec's
> audit
>
> Upon follow-up, Symantec provided Aetna's WebTrust for BRs audit. On it,
> there were 15 qualifications, some of which would have spanned the totality
> of operation.
>
> Regarding Unicredit, Google requested that Symantec place us in direct
> contact with Unicredit. We had several calls with Unicredit's management
> team regarding the issues, attempting to find a path to see if they would
> be able to complete a Baseline Requirements audit.
>
> I want to share these details so that a fuller picture of the GeoRoot
> issues can be noted. Particularly concerning is the seriousness of the
> Aetna issues and the failure to remedy them, and the failure to identify
> the NTT Docomo (DKHS) roots as part of Symantec's infrastructure.
(some portions of the quoted text omitted)

Ryan,

I haven't reviewed the audit reports myself, but I'll assume all you
wrote is true.  However, I think it is important to consider it in the
appropriate context.

The GeoRoot program was very similar to that offered by many CAs a few
years ago.  CyberTrust (then Verizon, now DigiCert) has the OmniRoot
program, Entrust has a root signing program[1], and GlobalSign has the
Trusted Root program[2], to name just a few examples.

In almost every case the transition to requiring complete unqualified
audits of the subordinates by a licensed practitioner was a rocky one.
See DigiCert's thread
(https://groups.google.com/d/msg/mozilla.dev.security.policy/tHUcqnWPt3o/U2U__7-UBQAJ)
about the OmniRoot program or look at the audits available for some of
the Entrust subordinates.

I'm not suggesting that the GeoRoot subordinate issues should not be
considered, but it seems the GeoRoot program was not notably
exceptional a few years ago.

Thanks,
Peter

[1] 
https://web-beta.archive.org/web/20140818191044/http://www.entrust.net/about/third-party-sub-ca.htm
[2] https://www.globalsign.com/en/certificate-authority-root-signing/
and 
https://web-beta.archive.org/web/20101008151742/http://globalsign.com/certificate-authority-root-signing/


Re: Email sub-CAs

2017-04-15 Thread Peter Bowen via dev-security-policy
On Thu, Apr 13, 2017 at 9:33 AM, douglas.beattie--- via dev-security-policy wrote:
> On Thursday, April 13, 2017 at 10:49:17 AM UTC-4, Gervase Markham wrote:
>> On 13/04/17 14:23, Doug Beattie wrote:
>> > There is no statement back to scope or corresponding audits.  Were
>> > secure email capable CAs supposed to be disclosed and audited to
>> > Mozilla under 2.3?
>>
>> If they did not include id-kp-serverAuth, I would not have faulted a CA
>> for not disclosing them if they met the exclusion criteria for email
>> certs as written.
>
> OK.
>
>> > and how it applies to Secure email, I don't see how TCSCs with secure
>> > email EKU fall within the scope of the Mozilla Policy 2.3.  Can you
>> > help clarify?
>>
>> I think this is basically issue #69.
>> https://github.com/mozilla/pkipolicy/issues/69
>
> OK, I look forward to a conclusion on that.  I hope that name constraining a 
> secure email CA (either technically in the CA certificate or via business 
> controls) is sufficient to avoid WebTrust Audits.  If Public disclosure helps 
> get us there then that would be acceptable.

Should the Mozilla policy change to require disclosure of all CA
certificates issued by an unconstrained CA (but not necessarily
require audits, CP/CPS, etc)? This would help identify unintentional
gaps in policy.

Thanks,
Peter


Re: Google Trust Services roots

2017-03-09 Thread Peter Bowen via dev-security-policy
On Wed, Mar 8, 2017 at 10:14 PM, Richard Wang  wrote:
> Why we setup one EV OID for all roots is that we use the same policy for all 
> EV SSL certificate no matter it is issued by which root. The policy OID is 
> unique ID
>
> If Google use the GlobalSign EV OID, and GlobalSign also use this EV OID, 
> this means two companies use the same policy?
>
> It is better to do a test: Google issue a EV SSL certificate from this 
> acquired root using the GlobalSign EV OID, then check every browser's UI 
> display info, to check if that info will confuse the browser users.

Richard,

I'll make this easier:

Go to https://good.sca1a.amazontrust.com/ and
https://good.sca0a.amazontrust.com/  in Safari and Microsoft IE/Edge.
Tell me which CA issued the certificates for those sites.  (Note that
we don't send SCTs on those sites right now, so they aren't treated as
EV in Chrome, and we are still pending for EV in Mozilla)
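
(For anyone who prefers not to click through browsers, the issuer DN can
also be pulled programmatically; here is a small sketch of my own, again
using the "cryptography" package.  It only shows the issuer name, not how
each browser labels the EV identity, which is the point of the exercise
above.)

import ssl
from cryptography import x509

# Fetch each site's leaf certificate over TLS and print its issuer DN.
for host in ("good.sca1a.amazontrust.com", "good.sca0a.amazontrust.com"):
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    print(host, "->", cert.issuer.rfc4514_string())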

Thanks,
Peter


Re: Google Trust Services roots

2017-03-09 Thread Peter Bowen via dev-security-policy
That is the Starfield Services EV policy identifier, not the Starfield
EV policy identifier.  We clearly call out in section 1.1 of our
CPS that Starfield Services Root Certificate Authority - G2 is covered
under the CPS.

On Thu, Mar 9, 2017 at 10:29 PM, Richard Wang  wrote:
> Good demo, thanks.
>
> I checked that you are using the Starfield EV OID in the Starfield-named root and in 
> the Amazon-named root, which means you are using the transferred root's EV OID. But I 
> checked your CPS and it doesn't state this point; please advise, thanks.
>
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: Peter Bowen [mailto:pzbo...@gmail.com]
> Sent: Friday, March 10, 2017 2:16 PM
> To: Richard Wang 
> Cc: Ryan Sleevi ; Gervase Markham ; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Google Trust Services roots
>
> On Wed, Mar 8, 2017 at 10:14 PM, Richard Wang  wrote:
>> Why we setup one EV OID for all roots is that we use the same policy
>> for all EV SSL certificate no matter it is issued by which root. The
>> policy OID is unique ID
>>
>> If Google use the GlobalSign EV OID, and GlobalSign also use this EV OID, 
>> this means two companies use the same policy?
>>
>> It is better to do a test: Google issue a EV SSL certificate from this 
>> acquired root using the GlobalSign EV OID, then check every browser's UI 
>> display info, to check if that info will confuse the browser users.
>
> Richard,
>
> I'll make this easier:
>
> Go to https://good.sca1a.amazontrust.com/ and 
> https://good.sca0a.amazontrust.com/  in Safari and Microsoft IE/Edge.
> Tell me which CA issued the certificates for those sites.  (Note that we 
> don't send SCTs on those sites right now, so they aren't treated as EV in 
> Chrome, and we are still pending for EV in Mozilla)
>
> Thanks,
> Peter


Re: Symantec: Next Steps

2017-03-08 Thread Peter Bowen via dev-security-policy
On Wed, Mar 8, 2017 at 6:50 AM, Ryan Sleevi  wrote:
>
> On Wed, Mar 8, 2017 at 9:23 AM, Peter Bowen wrote:
>
>> > Does this make it clearer the point I was trying to make, which is that
>> > they're functionally equivalent - due to the fact that both DTPs and
>> > sub-CAs
>> > have the issue of multi-party audit scopes?
>>
>> I agree that you suggest an approach that is probably functionally
>> equivalent, but what you describe is not how WebTrust audits work.
>
>
> Peter, does my recent clarification help align this? I think we are in
> violent agreement with respect to sub-CAs that you don't get to "pick and
> choose" the principles and criteria, but for the specific case of DTPs and
> their capabilities, was trying to describe how it could fit within the 'site
> visit' examination, due to the inability to rely on / use third-party audits
> as evidence for the basis of opinion forming.

By eliminating the DTP option, you will massively raise costs for CAs
that rely upon local translators and information gatherers.  I think a
much better proposal would be to require that the CA perform the RA
activity contemplated by BR 3.2.2.4 and 3.2.2.5 and restrict DTPs to
Subject Identity Information validation.

Thanks,
Peter


Re: Google Trust Services roots

2017-03-08 Thread Peter Bowen via dev-security-policy
Richard,

I'm afraid a few things are confused here.

First, a single CA Operator may have multiple roots in the browser
trust list.  Each root may list one or more certificate policies that
map to the EV policy.  Multiple roots that follow the same policy may
use the same policy IDs and different roots from the same operator may
use different policies.

For example, I see the following in the Microsoft trust list:

CN=CA 沃通根证书,O=WoSign CA Limited,C=CN
CN=Class 1 Primary CA,O=Certplus,C=FR
CN=Certification Authority of WoSign,O=WoSign CA Limited,C=CN
CN=CA WoSign ECC Root,O=WoSign CA Limited,C=CN
CN=Certification Authority of WoSign G2,O=WoSign CA Limited,C=CN
each of these has one EV mapped policy: 1.3.6.1.4.1.36305.2

CN=AffirmTrust Commercial,O=AffirmTrust,C=US has policy
1.3.6.1.4.1.34697.2.1 mapped to EV
CN=AffirmTrust Networking,O=AffirmTrust,C=US has policy
1.3.6.1.4.1.34697.2.2 mapped to EV
CN=AffirmTrust Premium,O=AffirmTrust,C=US has policy
1.3.6.1.4.1.34697.2.3 mapped to EV
CN=AffirmTrust Premium ECC,O=AffirmTrust,C=US has policy
1.3.6.1.4.1.34697.2.4 mapped to EV
All of these are from the same company but each has their own policy identifier.

The information in "Identified by <CA name>" in Microsoft's browsers
comes from the "Friendly Name" field in the trust list. For example
the friendly name of CN=Class 1 Primary CA,O=Certplus,C=FR is "WoSign
1999".

For something like the AffirmTrust example, they could easily sell one
root along with the exclusive right to use that root's EV OID without
impacting their other OIDs.
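
As a rough relying-party-side illustration of the mapping (my own sketch,
using the "cryptography" package; the OID set is copied from the AffirmTrust
entries above plus the CA/Browser Forum EV OID, and "leaf.pem" is a
placeholder file name), the check is simply to read the certificatePolicies
extension and intersect it with the OIDs the trust store maps to EV:

from cryptography import x509

# Policy OIDs that a trust store might map to EV (example values only).
EV_POLICY_OIDS = {
    "2.23.140.1.1",            # CA/Browser Forum EV
    "1.3.6.1.4.1.34697.2.1",   # AffirmTrust Commercial
    "1.3.6.1.4.1.34697.2.2",   # AffirmTrust Networking
    "1.3.6.1.4.1.34697.2.3",   # AffirmTrust Premium
    "1.3.6.1.4.1.34697.2.4",   # AffirmTrust Premium ECC
}

def asserted_ev_oids(cert):
    # Return the certificate's policy OIDs that are mapped to EV.  A real
    # client also requires the chain to terminate at the specific root the
    # mapping belongs to; that part is omitted here.
    try:
        ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
    except x509.ExtensionNotFound:
        return set()
    return {p.policy_identifier.dotted_string for p in ext.value} & EV_POLICY_OIDS

with open("leaf.pem", "rb") as f:   # placeholder file name
    cert = x509.load_pem_x509_certificate(f.read())
print(asserted_ev_oids(cert) or "no EV-mapped policy asserted")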

Does that make sense?

Thanks,
Peter

On Wed, Mar 8, 2017 at 8:44 PM, Richard Wang via dev-security-policy wrote:
> I don’t think so, please check this page: 
> https://cabforum.org/object-registry/ that listed most CA’s EV OID, and all 
> browsers ask for the CA’s own EV OID when applying inclusion and EV enabled. 
> So, as I understand that the browser display EV green bar and display the 
> “Identified by CA name” is based on this CA’s EV OID.
>
>
>
> I don’t think Symantec have the reason to use GlobalSign EV OID in its EV SSL 
> certificate, why Symantec don’t use his own EV OID? If Symantec issued a EV 
> SSL using GlobalSign's EV OID, I think IE browser will display this EV SSL is 
> identified by GlobalSign, not by Symantec.


Re: Google Trust Services roots

2017-03-10 Thread Peter Bowen via dev-security-policy
On Thu, Mar 9, 2017 at 11:02 PM, Jakob Bohm via dev-security-policy wrote:
>
> Of all these, Starfield seems to be the only case where a single CA
> name now refers to two different current CA operators (GoDaddy and
> Amazon).  All the others are cases of complete takeover.  None are
> cases where the name in the certificate is a still operating CA
> operator, but the root is actually operated by a different entity
> entirely.

There are a number of examples, but many of them are older and have
been removed from trust stores (usually due to key size):

Certplus - operated by both Docusign and Wosign
Starfield - Go Daddy and Amazon
TC TrustCenter - Symantec and Deutscher Sparkassen Verlag GmbH
(S-TRUST, DSV-Gruppe)
USERTRUST UTN-USERFirst - Symantec and Comodo
ValiCert - Go Daddy, SECOM, and RSA

Thanks,
Peter


Re: DigiCert BR violation

2017-03-13 Thread Peter Bowen via dev-security-policy
On Mon, Mar 13, 2017 at 6:08 PM, Nick Lamb via dev-security-policy wrote:
> On Monday, 13 March 2017 21:31:46 UTC, Ryan Sleevi  wrote:
>> Are you saying that there are one or more clients that require DigiCert to 
>> support Teletext strings?
>
> Can we stop saying Teletext? The X500 series standards are talking about 
> Teletex. One letter shorter.
>
> Teletext was invented by the BBC, to deliver pages of text and block graphics 
> in the blanking interval on analogue television transmissions. It brought joy 
> to millions of people (especially nerds) around the world for several decades 
> prior to analogue television going off the air.
>
> Teletex is an ITU standard, intended to supersede Fax but largely forgotten 
> because it turns out Internet email is what people actually wanted. Its text 
> encoding infested the X.500 series standards and thereby made dozens of 
> people miserable.

I thought teletex was there to make people who use reverse solidus
('\'), circumflex ('^'), grave accent ('`'), curly brackets ('{' and
'}') and tilde ('~') sad.


Re: Symantec: Next Steps

2017-03-08 Thread Peter Bowen via dev-security-policy
On Wed, Mar 8, 2017 at 5:08 AM, Ryan Sleevi <r...@sleevi.com> wrote:
>
>
> On Wed, Mar 8, 2017 at 12:57 AM, Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>>
>> If the DTP is only performing the functions that Jakob lists, then
>> they only need an auditor's opinion covering those functions. In fact
>> there is no way for an auditor to audit functions that don't exist.
>> For example, consider the WebTrust for CA criterion called "Subordinate
>> CA Certificate Life Cycle Management".  If the only CA in scope for
>> the audit does not issue Subordinate CA Certificates, then that
>> criterion is not applicable.  Depending on the auditor, it might be
>> that the CA needs to write in some policy (public or private) "the CA
>> does not issue Subordinate CA Certificates."
>>
>> Many auditors vary how much they charge for their work based on the
>> expected effort required to complete the work.  I believe Jakob's point
>> is that an audit where all the criteria are just "we do not do X" is
>> very quick -- for example a DTP that does not have a HSM and does not
>> digitally sign things is going to be a much cheaper audit than one
>> that does have a HSM and signs things under multi-person control.
>
>
> So I agree with this - namely, that a DTP audit does not include the
> Principles and/or Criteria relevant for the operational aspects they don't
> control, because the auditor neither forms an opinion about the third-party
> operation. I think a good example, to continue with yours, if the issuing CA
> handles the HSM, and is already audited as such, then the auditor will not
> opine on another auditor's work.
>
> So the scope of a DTP audit will be limited to the functions provided by the
> DTP.
>
> But the same is true for an externally operated sub-CA, for which the
> majority of services are provided for by the "issuing" CA, and the DTP
> performs the validation functions for this sub-CA.

Ah, but it is not true.  I had a very enlightening discussion with
representatives from the WebTrust Task Force at the CA/Browser Forum
meeting in Redmond.  CAs must be evaluated on all the WebTrust
criteria that are not marked optional in order to get a WebTrust seal
and the same auditor must do the whole audit.  So if sub-CA Foo
contracts with Bar to host the HSM for the sub-CA and handle the
issuing functions (and probably the revocation functions), and Bar
is also a CA, then Bar gets audited twice: once by Bar's auditor at
Bar's cost and again by Foo's auditor at Foo's cost.

Note that WebTrust for CA criterion 6 says:

"The Certification Authority maintains effective controls to provide
reasonable assurance that Subscriber
information was properly authenticated (for the registration
activities performed by ABC-CA)."

Given this criteria, the auditor does not have to inspect each RA themselves.

Also note that the only optional criteria are:

2.1 Certificate Policy Management (if applicable)
2.3 CP and CPS Consistency (if applicable)
4.8 CA Key Escrow (if applicable)
5.1 CA-Provided Subscriber Key Generation Services (if supported)
5.2 CA-Provided Subscriber Key Storage and Recovery Services (if supported)
5.3 Integrated Circuit Card (ICC) Life Cycle Management (if supported)
6.2 Certificate Renewal (if supported)
6.7 Certificate Suspension (if supported)

The CA Key Lifecycle controls, including storage and usage, are not
optional, so each sub-CA must be audited on them.

> This is why I'm suggesting, from an audit scope, they're functionally
> equivalent approach, except one avoids the whole complexity of identifying
> where or not a DTP is something different-than a sub-CA, since the _intent_
> is true in both, which is that 100% of the capabilities related to issuance
> are appropriately audited - either by the DTP/sub-CA or by the issuing
> CA/managed CA provided
>
> Does this make it clearer the point I was trying to make, which is that
> they're functionally equivalent - due to the fact that both DTPs and sub-CAs
> have the issue of multi-party audit scopes?

I agree that you suggest an approach that is probably functionally
equivalent, but what you describe is not how WebTrust audits work.

Thanks,
Peter


Re: Google Trust Services roots

2017-03-06 Thread Peter Bowen via dev-security-policy
Ryan,

I appreciate you finally sending responses.  I hope you appreciate
that they are clearly not adequate, in my opinion.  Please see the
comments inline.

On Mon, Mar 6, 2017 at 6:02 PM, Ryan Hurst  wrote:
> First, let me apologize for the delay in my response, I have had a draft of
> this letter in my inbox for a while and have just been unable to get back to
> it and finish it due to scheduling conflicts. I promise to address all other
> questions in a more prompt manner.
>
>
>> pzb: Mozilla recognizes 2.23.140.1.1 as being a valid OID for
>> EV certificates for all EV-enabled roots
>> (https://bugzilla.mozilla.org/show_bug.cgi?id=1243923).
>
>
>> 1) Do you consider it mis-issuance for Google to issue a certificate
>> containing the 2.23.140.1.1 OID?
>
>> Policy Suggestion A) When transferring a root that is EV enabled, it
>> should be clearly stated whether the
>> recipient of the root is also receiving the EV policy OID(s).
>
>
> rmh: Yes. We believe that until we have:
>
> - The associated policies, procedures, and other associated work completed,
>
> - Have successfully completed an EV audit,
>
> - And have been approved by one or more of the various root programs as an
> EV issuer.
>
>
> That it would be an example of mis-issuance for us to issue such a
> certificate.

Given the EV-enabled status, this seems like a reasonable path forward.

>> pzb:  Second, according to the GTS CPS v1.3, "Between 11 August 2016 and 8
>> December 2016, Google Inc. operated these Roots according to Google Inc.’s
>> Certification Practice Statement."  The basic WebTrust for CA and WebTrust
>> BR audit reports for the period ending September 30, 2016 explicitly state
>> they are for "subordinate CA under external Root CA" and do not list the
>> roots in the GTS CPS at all.
>
>> rmh: I believe this will be answered by my responses to your third and
>> fourth observations.
>
>
>> It was not.
>
> rmh: I just attached two opinion letters from our auditors, I had previously
> provided these to the root programs directly but it took some time to get
> permission to release them publicly. One letter is covering the key
> generation ceremony of the new roots, and another covering the transfer of
> the keys to our facilities. In this second report you will find the
> following statement:
>
>
> ```
> In our opinion, as of November 17, 2016, Google Trust Services LLC
> Management’s Assertion, as referred to above, is fairly stated, in all
> material respects, based on Certification Practices Statement Management
> Criterion 2.2, Asset Classification and Management Criterion 3.2, and Key
> Storage, Backup and Recovery Criterion 4.2 of the WebTrust Principles and
> Criteria for Certification Authorities v2.0.
> ```
>
> Based on our conversations with the various root program operators prior to
> our acquisition, it has been our plan and understanding that we can utilize
> these opinion letters to augment the WebTrust audit with the material
> details relating to these activities. It is our hope that this also
> addresses your specific concern here.
>
>> 2) Will Google be publishing an audit report for a period starting 11
>> August 2016 that covers the transferred GS roots?  If so, can you
>> estimate the end of period date?
>
> rmh: It is our belief, based on our conversations with the various root
> store operators, as well as our own auditors that the transfer itself is
> covered by the opinion letters. With that said our audit period is October
> 1st to the end of September. The associated report will be released between
> October and November, depending on our auditors schedules.

This does not resolve the concern.  The BRs require "an unbroken
sequence of audit periods".  Given that GlobalSign clearly cannot make
any assertion about the roots after 11 August 2016, you would have a
gap from 11 August 2016 to 30 September 2016 in your sequence of audit
periods if your next report runs 1 October 2016 to 30 September 2017.

>> pzb: I think that this is the key issue.  In my reading, "root
>> certificates" are not members of the program.  Rather organizations
>> (legal entities) are members and each member has some number of root
>> certificates.
>
>> Google was not a member of the program and had not applied to be a
>> member of the program at the time they received the roots already in
>> the program.  This seems problematic to me.
>
>> Policy Suggestion B) Require that any organization wishing to become a
>> member of the program submit a bug with links to content demonstrating
>> compliance with the Mozilla policy.  Require that this be public prior
>> to taking control of any root in the program.
>
>> Policy Suggestion C) Recognize that root transfers are distinct from
>> the acquisition of a program member.  Acquisition of a program member
>> (meaning purchase of the company) is a distinctly different activity
>> from moving only a private key, as the prior business controls no
>> longer apply in the 

Re: Google Trust Services roots

2017-03-06 Thread Peter Bowen via dev-security-policy
One more question, in addition to the ones in my prior response:

On Mon, Mar 6, 2017 at 6:02 PM, Ryan Hurst  wrote:
> rmh: I just attached two opinion letters from our auditors, I had previously
> provided these to the root programs directly but it took some time to get
> permission to release them publicly. One letter is covering the key
> generation ceremony of the new roots, and another covering the transfer of
> the keys to our facilities. In this second report you will find the
> following statement:
>
> ```
> In our opinion, as of November 17, 2016, Google Trust Services LLC
> Management’s Assertion, as referred to above, is fairly stated, in all
> material respects, based on Certification Practices Statement Management
> Criterion 2.2, Asset Classification and Management Criterion 3.2, and Key
> Storage, Backup and Recovery Criterion 4.2 of the WebTrust Principles and
> Criteria for Certification Authorities v2.0.
> ```

According to the opinion letter:

"followed the CA key generation and security requirements in its:
o Google Internet Authority G2 CPS v1.4" (hyperlink omitted)

According to that CPS, "Key Pairs for the Google Internet Authority
are generated and installed in accordance with the contract between
Google and GeoTrust, Inc., the Root CA."

Are you asserting that the authority for the key generation process
for the new Google roots is "the contract between Google and GeoTrust,
Inc."?

Thanks,
Peter


Re: Maximum validity of pre-BR certificates

2017-03-04 Thread Peter Bowen via dev-security-policy
On Sat, Mar 4, 2017 at 12:22 PM, Daniel Cater via dev-security-policy
 wrote:
> On Saturday, 4 March 2017 20:14:09 UTC, Jeremy Rowley  wrote:
>> 1.0 is not the definitive version any more.  As of 2015‐04‐01, Section
>> 6.3.2 prohibits validity periods longer than 39 months.
>>
>
> Thanks for the prompt reply Jeremy. I realise this. My question relates to 
> what the situation was (be it a guideline, policy, or just common practice) 
> prior to version 1.0.
>
> The cablint message mentions 120 months and I was wondering where that number 
> came from.

Common practice.


Re: Symantec: Next Steps

2017-03-07 Thread Peter Bowen via dev-security-policy
On Tue, Mar 7, 2017 at 9:27 PM, Ryan Sleevi via dev-security-policy
 wrote:
> On Tue, Mar 7, 2017 at 11:23 PM, Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> For example, an RA whose sole involvement is to receive a daily list of
>> company name/idno/address/authorized signatory for pending
>> applications, go down to the state hall of records and report back
>> which ones match/do not match official company records (to support EV
>> certification for that state) would only need auditing of that activity
>> and the security of the system used to exchange that list and report
>> with the CAs central validation team.
>
>
> Please provide a citation to the Baseline Requirements or Mozilla policy to
> support this statement. I would suggest Section 8.4 provides
> counter-evidence to this claim, and as such, because the argument rests on
> this claim, needs to be addressed before we might make further progress.

Section 8.4 says: " If the CA is not using one of the above procedures
and the Delegated Third Party is not an Enterprise RA, then the
CA SHALL obtain an audit report, issued under the auditing standards
that underlie the accepted audit schemes
found in Section 8.1, that provides an opinion whether the Delegated
Third Party’s performance complies with
either the Delegated Third Party’s practice statement or the CA’s
Certificate Policy and/or Certification Practice
Statement."

If the DTP is only performing the functions that Jakob lists, then
they only need an auditor's opinion covering those functions. In fact
there is no way for an auditor to audit functions that don't exist.
For example, consider the WebTrust for CA criterion called "Subordinate
CA Certificate Life Cycle Management".  If the only CA in scope for
the audit does not issue Subordinate CA Certificates, then that
criterion is not applicable.  Depending on the auditor, it might be
that the CA needs to write in some policy (public or private) "the CA
does not issue Subordinate CA Certificates."

Many auditors vary how much they charge for their work based on the
expected effort required to complete the work.  I believe Jakob's point
is that an audit where all the criteria are just "we do not do X" is
very quick -- for example, a DTP that does not have an HSM and does not
digitally sign things is going to be a much cheaper audit than one
that does have an HSM and signs things under multi-person control.


Re: Symantec Issues List

2017-04-02 Thread Peter Bowen via dev-security-policy
On Sun, Apr 2, 2017 at 9:36 PM, Ryan Sleevi <r...@sleevi.com> wrote:
>
> On Sun, Apr 2, 2017 at 11:14 PM Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>>
>> On Fri, Mar 31, 2017 at 11:39 AM, Gervase Markham via
>> dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>> > As we continue to consider how best to react to the most recent incident
>> > involving Symantec, and given that there is a question of whether it is
>> > part of a pattern of behaviour, it seemed best to produce an issues list
>> > as we did with WoSign. This means Symantec has proper opportunity to
>> > respond to issues raised and those responses can be documented in one
>> > place and the clearest overall picture can be seen by the community.
>> >
>> > So I have prepared:
>> > https://wiki.mozilla.org/CA:Symantec_Issues
>> >
>> > I will now be dropping Symantec an email asking them to begin the
>> > process of providing whatever comment, factual correction or input they
>> > feel appropriate.
>> >
>> > If anyone in this group feels they have an issue which it is appropriate
>> > to add to the list, please send me email with the details.
>>
>> Gerv,
>>
>> I'm afraid that Issue V: RA Program Audit Issues (2013 or earlier -
>> January 2017) has confused RAs with subordinate CAs.
>>
>> According to
>> https://bug1334377.bmoattachments.org/attachment.cgi?id=8843448,
>> Symantec has indicated that they have (had) four unconstrained third
>> party RAs: CrossCert, Certisign, Certisur, and Certsuperior.  These
>> appear to fall into what the BRs call "Delegated Third Parties".  No
>> audit report seems to mention any issue with these RAs.
>>
>> Separately Symantec owned CAs have issued CA-certificates to several
>> CAs that are not operated by Symantec.  These appear to include at
>> least Apple, Google, the US Government, Aetna, and Unicredit.  The
>> audit reports linked from Issue V appear to have qualifications
>> regarding these CA-certificates.
>>
>> There are notable differences between third party owned CAs and third
>> party operated RAs and the difference should be clearly noted.
>>
>> Thanks,
>> Peter
>
> Both
> https://www.symantec.com/content/en/us/about/media/repository/GeoTrust-WTBR-2015.pdf
> (Finding number 3) and
> https://www.symantec.com/content/en/us/about/media/repository/symantec-webtrust-audit-report.pdf
> (Finding number 1) call out Delegated Third Parties as lacking audits. This
> is called out separately from the matters related to sub-CAs, as
> "Furthermore".
>
> Given that at least some of the sub-CAs possessed and provided audits to
> Symantec, it does not seem to support your summary, but perhaps your point
> was misunderstood?

I think there are two parts:

1) There should be two different issues in the issues list -- one for
management of Subordinate CAs and one for management of unconstrained
RAs (i.e. Delegated Third Parties)

2) It is not clear that the audit reports for the GeoTrust brand roots
are calling out RAs as qualifications.  My read is that they were
considering the subordinate CAs as DTPs, not the RAs.  However I can
see the other interpretation as well.

Thanks,
Peter


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Peter Bowen via dev-security-policy
On Mon, Apr 3, 2017 at 1:45 PM, Jakob Bohm via dev-security-policy
 wrote:
> On 03/04/2017 21:48, Ryan Sleevi wrote:
>>
>> On Mon, Apr 3, 2017 at 3:36 PM, Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>>
>>>
>>> The assumptions are:
>>>
>>> 0. All relevant root programs set similar/identical policies or they
>>>   get incorporated into the CAB/F BRs on a future date.
>>>
>>
>> This is not correct at present.
>>
>
> It is a simple application of the rule that policies should not fall
> apart if others in the industry do the same.  It's related to the
> "Golden Rule".
>
>>
>>> 1. When the SubCA must be disclosed to all root programs upon the
>>>   *earlier* of issuance + grace period OR outside facility SubCA
>>>   receiving the certificate (no grace period).
>>>
>>
>> This is not correct as proposed.
>>
>
> It is intended to prevent SubCAs issued to "new" parties from
> meaningfully issuing trusted certificates before root programs have had
> a chance to check the contents of the disclosure (CP/CPS, point in time
> audit, whatever each root program requires).

I don't see this as part of the proposed requirement.  The requirement
is simply disclosure, not approval.

>>> 2. The SubCA must not issue any certificate (other than not-yet-used
>>>   SubCAs, OCSP certs and other such CA operation certs generated in the
>>>   same ceremony) until Disclosure to all root programs has been
>>>   completed.
>>>
>>
>> This is correct.
>>
>>
>>> 3. Disclosing to an operational and not-on-holiday root program team
>>>   (such as the CCADB most of the time) indirectly makes the SubCA
>>>   certificate available to the SubCA operator, *technically* (not
>>>   legally) allowing that SubCA to (improperly) start issuing before
>>>   rule #2 is satisfied.
>>>
>>
>> And given that this disclosure (in the CCADB) satisfies #2, why is this an
>> issue?
>
>
> It is merely a step in the detailed logic argument that Ryan Sleevi
> requested.
>
> Note that no Browser or other client will trust certificates from the
> new SubCA until the new SubCA or its clients can send the browser the
> signed SubCA cert.  This technical point is also crucial for after-
> the-fact cross certificates.

This is a more interesting case.  Going back to the start:

"The CA with a certificate included in Mozilla's CA Certificate
Program MUST disclose this information before any such subordinate CA
is allowed to issue certificates."

This implies that the subordinate CA is not already issuing
certificates.  If a CA signs a certificate naming an existing CA as
the subject, then what?

If the Mozilla program member certifies a CA that is not a terminal CA
(e.g. not pathlen:0) and that CA then issues to another CA, how does
that certificate get into the CCADB?

>>> 5. SubCA Disclosure and processing of said disclosure should be done
>>>   nearly simultaneously to minimize the problem mentioned in 3.
>>>
>>
>> I believe you're suggesting simultaneously across all root programs, is
>> that correct? But that's not a requirement (and perhaps based on the
>> incorrect and incomplete understanding of point 1)
>
>
> Yes, across all root programs, that is the key point, see #0.
>
> Also, it is argued as a logical consequence of #3, #2, #0, i.e.
> assume another root program enacts similar rules.  Once the SubCA cert
> is disclosed on the CCADB for Mozilla and Chrome, the SubCA operator
> can download the SubCA cert from the CCADB and use it to make users of
> that other root program trust issued certificates before that other
> root program received the disclosure.

I see zero problem with the SubCA receiving the certificate
immediately from the issuing CA, even prior to disclosure in the
CCADB.  The proposed requirement is that the SubCA not issue prior to
confirming the disclosure has been made.

> By symmetry, if Mozilla has to shut down the CCADB for maintenance for
> 2 days, another root program might receive and publish the disclosure
> first, causing the same problem for users of Mozilla and Chrome
> products.

I'm not sure where you see the "problem for users" here.  This is no
different than what happens today for many CAs.


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Peter Bowen via dev-security-policy
On Mon, Apr 3, 2017 at 12:36 PM, Jakob Bohm via dev-security-policy
 wrote:
> On 03/04/2017 19:24, Ryan Sleevi wrote:
>>
>> On Mon, Apr 3, 2017 at 12:58 PM, Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>>
>>>
>>> taking a holiday and not being able to process a disclosure of a new
>>> SubCA.
>>>
>>
>> Considering that the CCADB does not require any of these parties to
>> process
>> a disclosure, can you again explain why the proposed wording would not be
>> sufficient?
>>
>> I think you may be operating on incomplete/incorrect assumptions about
>> disclosure, and it would be useful to understand what you believe happens,
>> since that appears to have factored in to your suggestion. Given that the
>> proposal allows the CA to fully self-report (if they have access) or to
>> defer until they do have access, that does seem entirely appropriate and
>> relevant to allow for one week.
>>
>
> The assumptions are:
>
> 0. All relevant root programs set similar/identical policies or they
>   get incorporated into the CAB/F BRs on a future date.

This discussion is only about Mozilla's program.

> 1. When the SubCA must be disclosed to all root programs upon the
>   *earlier* of issuance + grace period OR outside facility SubCA
>   receiving the certificate (no grace period).

Disclosure means uploading the certificate to the CCADB along with other data (e.g. which CPS applies).

> 2. The SubCA must not issue any certificate (other than not-yet-used
>   SubCAs, OCSP certs and other such CA operation certs generated in the
>   same ceremony) until Disclosure to all root programs has been
>   completed.

This is a good callout.  It isn't clear how to handle issuance of
certificates prior to disclosure.

> 3. Disclosing to an operational and not-on-holiday root program team
>   (such as the CCADB most of the time) indirectly makes the SubCA
>   certificate available to the SubCA operator, *technically* (not
>   legally) allowing that SubCA to (improperly) start issuing before
>   rule #2 is satisfied.

I don't follow here.  The requirement is simply that the certificate
be uploaded prior to the CA issuing any certificates.  It doesn't
matter if the program team does anything with it.  It also has no
impact on whether the subordinate CA issues or does not issue -- the
subordinate CA controls the private key that can be used to create
signatures, not the root program team.

Thanks,
Peter


Re: Symantec Issues List

2017-04-02 Thread Peter Bowen via dev-security-policy
On Fri, Mar 31, 2017 at 11:39 AM, Gervase Markham via
dev-security-policy  wrote:
> As we continue to consider how best to react to the most recent incident
> involving Symantec, and given that there is a question of whether it is
> part of a pattern of behaviour, it seemed best to produce an issues list
> as we did with WoSign. This means Symantec has proper opportunity to
> respond to issues raised and those responses can be documented in one
> place and the clearest overall picture can be seen by the community.
>
> So I have prepared:
> https://wiki.mozilla.org/CA:Symantec_Issues
>
> I will now be dropping Symantec an email asking them to begin the
> process of providing whatever comment, factual correction or input they
> feel appropriate.
>
> If anyone in this group feels they have an issue which it is appropriate
> to add to the list, please send me email with the details.

Gerv,

I believe Issue L is incorrectly dated.  As can be seen on crt.sh,
there are two CAs operated by the US federal Government which have
been repeatedly issued certificates by various CAs trusted by Mozilla:

https://crt.sh/?caid=1324 (Federal Bridge CA)
https://crt.sh/?caid=1410 (Federal Bridge CA 2013)

These two CAs have cross-certified each other and have been issued
several certificates by VeriSign/Symantec and Digital Signature
Trust/IdenTrust. The earliest date for VeriSign is 2011-02-03 and the
earliest date for DST is 2011-01-14.

I also think that Issue L should probably be combined with the GeoRoot
items.  Functionally they are the same issue: management and oversight
of external subordinate CAs.

Thanks,
Peter


Re: Criticism of Google Re: Google Trust Services roots

2017-03-31 Thread Peter Bowen via dev-security-policy
On Fri, Mar 31, 2017 at 8:18 AM, Gervase Markham via
dev-security-policy  wrote:
> On 30/03/17 15:01, Peter Kurrasch wrote:
>> By "not new", are you referring to Google being the second(?)
>> instance where a company has purchased an individual root cert from
>> another company? It's fair enough to say that Google isn't the first
>> but I'm not aware of any commentary or airing of opposing viewpoints
>> as to the suitability of this practice going forward.
>
> As noted, I have no interest in banning this practice because I think
> the ecosystem effects would be negative.
>
>> Has Mozilla received any notification that other companies intend to
>> acquire individual roots from another CA?
>
> Not to my knowledge, but they may have been communicating with Kathleen.
>
>> Also, does Mozilla have any policies (requirements?) regarding
>> individual root acquisition?
>
> https://wiki.mozilla.org/CA:RootTransferPolicy
> and
> https://github.com/mozilla/pkipolicy/issues/57
>
>> For example, how frequently should roots
>> be allowed to change hands? What would Mozilla's response be if
>> GalaxyTrust (an operator not in the program)
>> were to say that they are acquiring the HARICA root?
>
> From the above URL: "In addition, if the receiving company is new to the
> Mozilla root program, there must also be a public discussion regarding
> their admittance to the root program."
>
> Without completing the necessary steps, GalaxyTrust would not be admitted to
> the root program.

I've modified the quoted text a little to try to make this example
clearer, as I think the prior example conflated multiple things and
used language that did not help clarify the situation.

Is the revised example accurate?

Thanks,
Peter


Re: Final Decision by Google on Symantec

2017-07-31 Thread Peter Bowen via dev-security-policy
On Mon, Jul 31, 2017 at 7:17 AM, Jakob Bohm via dev-security-policy
 wrote:
> On 31/07/2017 16:06, Gervase Markham wrote:
>>
>> On 31/07/17 15:00, Jakob Bohm wrote:
>>>
>>> - Due to current Mozilla implementation bugs,
>>
>>
>> Reference, please?
>>
>
> I am referring to the fact that EV-trust is currently assigned to roots,
> not to SubCAs, at least as far as visible root store descriptions go.
>
> Since I know of no standard way for a SubCA certificate to state if it
> is intended for EV certs or not, that would cause EV-trust to percolate
> into SubCAs that were never intended for this purpose by the root CA.

This is common to every EV implementation I know about, not just
Mozilla.  Therefore I would not call this a bug.

Thanks,
Peter


Re: Final Decision by Google on Symantec

2017-07-29 Thread Peter Bowen via dev-security-policy
On Thu, Jul 27, 2017 at 11:14 PM, Gervase Markham via
dev-security-policy  wrote:
> Google have made a final decision on the various dates they plan to
> implement as part of the consensus plan in the Symantec matter. The
> message from blink-dev is included below.
>
[...]
>
> We now have two choices. We can accept the Google date for ourselves, or
> we can decide to implement something earlier. Implementing something
> earlier would involve us leading on compatibility risk, and so would
> need to get wider sign-off from within Mozilla, but nevertheless I would
> like to get the opinions of the m.d.s.p community.
>
> I would like to make a decision on this matter on or before July 31st,
> as Symantec have asked for dates to be nailed down by then in order for
> them to be on track with their Managed CA implementation timetable. If
> no alternative decision is taken and communicated here and to Symantec,
> the default will be that we will accept Google's final proposal as a
> consensus date.

Gerv,

I think there are three more things that Mozilla needs to decide.

First, when the server authentication trust bits will be removed from
the existing roots.  This is of notable importance for non-Firefox
users of NSS.  Based on the Chrome email, it looks like they will
remove trust bits in their git repo around August 23, 2018.  When will
NSS remove the trust bits?

Second, how the dates apply to email protection certificates, if at
all.  Chrome only deals with server authentication certificates, so
their decision does not cover other types of certificates.  Will the
email protection trust bits be turned off at some point?

Third, what the requirements are for Symantec to submit new roots,
including any limit to how many may be submitted.
https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport
shows that there are currently 20 Symantec roots included.  Would it
be reasonable for them to submit replacements on a 1:1 basis -- that
is 20 new roots?

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Bowen via dev-security-policy
On Wed, Aug 2, 2017 at 8:10 PM, Peter Gutmann via dev-security-policy
 wrote:
> Jeremy Rowley via dev-security-policy  
> writes:
>
>>Today, DigiCert and Symantec announced that DigiCert is acquiring the
>>Symantec CA assets, including the infrastructure, personnel, roots, and
>>platforms.
>
> I realise this is a bit off-topic for the list but someone has to bring up the
> elephant in the room: How does this affect the Google vs. Symantec situation?
> Is it pure coincidence that Symantec now re-emerges as DigiCert, presumably
> avoiding the sanctions since now things will chain up to DigiCert roots?

Peter,

On topic for this list is Mozilla policy.  Gerv's email was clear that
the sale to DigiCert will not impact the plan, saying: "any change of
control of some or all of Symantec's roots
would not be grounds for a renegotiation of these dates."

So the sanctions are still intact.

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Bowen via dev-security-policy
On Wed, Aug 2, 2017 at 2:12 PM, Jeremy Rowley via dev-security-policy
 wrote:
> Today, DigiCert and Symantec announced that DigiCert is acquiring the
> Symantec CA assets, including the infrastructure, personnel, roots, and
> platforms.  At the same time, DigiCert signed a Sub CA agreement wherein we
> will validate and issue all Symantec certs as of Dec 1, 2017.  We are
> committed to meeting the Mozilla and Google plans in transitioning away from
> the Symantec infrastructure. The deal is expected to close near the end of
> the year, after which we will be solely responsible for operation of the CA.
> From there, we will migrate customers and systems as necessary to
> consolidate platforms and operations while continuing to run all issuance
> and validation through DigiCert.  We will post updates and plans to the
> community as things change and progress.
>
> Thanks a ton for any thoughts you offer.

Jeremy,

A while ago I put together a list of all the certificates that are or
were included in trust stores that were known to be owned by Symantec
or companies that Symantec acquired.  The list is in Google Sheets at
https://docs.google.com/spreadsheets/d/1piCTtgMz1Uf3SHXoNEFYZKAjKGPJdRDGFuGehdzcvo8/edit?usp=sharing

Can you confirm that DigiCert will be "solely responsible for
operation" of all of these CAs once the deal closes?

Thanks,
Peter


Re: SRVNames in name constraints

2017-08-15 Thread Peter Bowen via dev-security-policy
On Tue, Aug 15, 2017 at 8:01 AM, Jeremy Rowley
 wrote:
> I realize use of underscore characters was been debated and explained at the
> CAB Forum, but I think it's pretty evident (based on the certs issued and
> responses to Ballot 202) that not all CAs believe certs for SRVNames are
> prohibited. I realize the rationale against underscores is that 5280
> requires a valid host name for DNS and X.509 does not necessarily permit
> underscores, but it's not explicitly stated. Ballot 202 went a long way
> towards clarification on when underscores are permitted, but that failed,
> creating all new confusion on the issue.  Any CA not paying careful
> attention to the discussion and looking at only the results, would probably
> believe SRVNames are permitted as long as the entry is in SAN:dNSName
> instead of otherName.

Jeremy,

I was assuming the definition of "SRVname" meant an otherName type
entry.  Obviously a dNSName of _xmpp.example.com would have name
constraints applied, so I don't think that there is an issue there.

Thanks,
Peter


Re: SRVNames in name constraints

2017-08-15 Thread Peter Bowen via dev-security-policy
On Tue, Aug 15, 2017 at 4:20 AM, Gervase Markham via
dev-security-policy  wrote:
> On 06/07/17 16:56, Ryan Sleevi wrote:
>> Relevant to this group, id-kp-serverAuth (and perhaps id-kp-clientAuth)
>
> So what do we do? There are loads of "name-constrained" certs out there
> with id-kp-serverAuth but no constraints on SRVName. Does that mean they
> can issue for any SRVName they like? Is that a problem once we start
> allowing it?
>
> I've filed:
> https://github.com/mozilla/pkipolicy/issues/96
> on this issue in general.

Right now no CA is allowed to issue for SRVName.  Part of the
CA/Browser Forum ballot I had drafted a while ago had language that
said something like "If a CA certificate contains at least one DNSName
entry in NameConstraints and does not have any SRVName entries in
NameConstraints, then the CA MUST NOT issue any certificates
containing SRVname names."

However this is a morass, as it is defining what a CA can do based on
something outside the CA's scope.  I'm not sure how to deal with this,
to be honest.

Thanks,
Peter


Re: Certificates with improperly normalized IDNs

2017-08-10 Thread Peter Bowen via dev-security-policy
On Thu, Aug 10, 2017 at 2:31 PM, Jakob Bohm via dev-security-policy
 wrote:
> On 10/08/2017 22:22, Jonathan Rudenberg wrote:
>>
>> RFC 5280 section 7.2 and the associated IDNA RFC requires that
>> Internationalized Domain Names are normalized before encoding to punycode.
>>
>> Let’s Encrypt appears to have issued at least three certificates that have
>> at least one dnsName without the proper Unicode normalization applied.
>>
>> https://crt.sh/?id=187634027&opt=cablint
>> https://crt.sh/?id=187628042&opt=cablint
>> https://crt.sh/?id=173493962&opt=cablint
>>
>> It’s also worth noting that RFC 3491 (referenced by RFC 5280 via RFC 3490)
>> requires normalization form KC, but RFC 5891 which replaces RFC 3491
>> requires normalization form C. I believe that the BRs and/or RFC 5280 should
>> be updated to reference RFC 5890 and by extension RFC 5891 instead.
>>
>> Jonathan
>>
>
> All 3 dnsName values exist in the DNS and point to the same server (IP
> address). Whois says that the two second level names are both registered
> to OOO "JilfondService" .
>
> This raises the question if CAs should be responsible for misissued
> domain names, or if they should be allowed to issue certificates to
> actually existing DNS names.
>
> I don't know if the bad punycode encodings are in the 2nd level names (a
> registrar/registry responsibility, both were from 2012 or before) or in
> the 3rd level names (locally created at an unknown date).
>
> An online utility based on the older RFC349x round trips all of these.
> So if the issue is only compatibility with a newer RFC not referenced from
> the current BRs, these would probably be OK under the current BRs and
> certLint needs to accept them.
>
> Note: The DNS names are:
>
> xn--80aqafgnbi.xn--b1addckdrqixje4a.xn--p1ai
> xn--80aqafgnbi.xn--f1awi.xn--p1ai
> xn-blcihca2aqinbjzlgp0hrd8c.xn--f1awi.xn--p1ai

These are not the names causing issues.

"xn--109-3veba6djs1bfxlfmx6c9g.xn--b1addckdrqixje4a.xn--p1ai" from
https://crt.sh/?id=187634027=cablint
"xn--109-3veba6djs1bfxlfmx6c9g.xn--f1awi.xn--p1ai" from
https://crt.sh/?id=187628042=cablint
"xn--109-3veba6djs1bfxlfmx6c9g.xn--f1awi.xn--p1ai" from
https://crt.sh/?id=173493962=cablint (same name as the prior cert)

It is the xn--109-3veba6djs1bfxlfmx6c9g label that is incorrect in all
three.  In all three the bad label is not in the registered domain or
any public suffix.

Directly decoded, this string is:

"\u0608\u061c\u0628\u0031\u0608\u0611\u0618\u061e\u0608\u0621\u0612\u0614\u0030\u061b\u0039\u061a\u0618\u061c"

However the string when normalized to NFC is:

"\u0608\u061c\u0628\u0031\u0608\u0618\u0611\u061e\u0608\u0621\u0612\u0614\u0030\u061b\u0039\u0618\u061a\u061c"

If you look carefully, you will see two different pairs of codepoints
that are swapped in the normalized string.
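
For anyone who wants to reproduce this, a minimal sketch of the check (assuming Python 3 and its built-in punycode codec; this is illustrative only, not the code any CA or linter actually runs):

```python
import unicodedata

# The A-label from the three certificates above (used here only to
# illustrate the check).
label = "xn--109-3veba6djs1bfxlfmx6c9g"

# Strip the ACE prefix and decode the rest with the raw punycode codec
# (avoiding the 'idna' codec, which would apply IDNA 2003 mapping).
u_label = label[len("xn--"):].encode("ascii").decode("punycode")

# RFC 5891 requires the U-label to already be in Normalization Form C.
nfc = unicodedata.normalize("NFC", u_label)
if u_label == nfc:
    print("label is in NFC")
else:
    print("label is NOT in NFC")
    print("decoded:   ", " ".join(f"{ord(c):04x}" for c in u_label))
    print("normalized:", " ".join(f"{ord(c):04x}" for c in nfc))
```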

Thanks,
Peter


Re: Certificates with reserved IP addresses

2017-08-12 Thread Peter Bowen via dev-security-policy
Congratulations on finding something not caught by certlint.  It turns
out that cablint does zero checks for reserved IPs.  Something else
for my TODO list.
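
For reference, a rough sketch of the kind of check that could be added (assuming Python 3; a real implementation should be driven by the IANA special-purpose address registries rather than the ipaddress module's convenience flags):

```python
import ipaddress

def looks_reserved(value: str) -> bool:
    """Very rough approximation of 'IANA reserved' for BR 7.1.4.2.1 purposes."""
    ip = ipaddress.ip_address(value)
    return (ip.is_private or ip.is_reserved or ip.is_loopback
            or ip.is_link_local or ip.is_multicast or ip.is_unspecified)

# Illustrative SAN iPAddress values, not taken from the certificates above.
for addr in ["10.0.0.1", "192.168.1.5", "127.0.0.1", "8.8.8.8"]:
    print(addr, looks_reserved(addr))
```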

On Sat, Aug 12, 2017 at 6:52 PM, Jonathan Rudenberg via
dev-security-policy  wrote:
> Baseline Requirements section 7.1.4.2.1 prohibits ipAddress SANs from 
> containing IANA reserved IP addresses and any certificates containing them 
> should have been revoked by 2016-10-01.
>
> There are seven unexpired unrevoked certificates that are known to CT and 
> trusted by NSS containing reserved IP addresses.
>
> The full list can be found at: https://misissued.com/batch/7/
>
> DigiCert
> TI Trust Technologies Global CA (5)
> Cybertrust Japan Public CA G2 (1)
>
> PROCERT
> PSCProcert (1)
>
> It’s also worth noting that three of the "TI Trust Technologies” certificates 
> contain dnsNames with internal names, which are prohibited under the same BR 
> section.
>
> Jonathan


Re: Certificates with improperly normalized IDNs

2017-08-11 Thread Peter Bowen via dev-security-policy
On Thu, Aug 10, 2017 at 1:22 PM, Jonathan Rudenberg via
dev-security-policy  wrote:
> RFC 5280 section 7.2 and the associated IDNA RFC requires that 
> Internationalized Domain Names are normalized before encoding to punycode.
>
> Let’s Encrypt appears to have issued at least three certificates that have at 
> least one dnsName without the proper Unicode normalization applied.
>
> It’s also worth noting that RFC 3491 (referenced by RFC 5280 via RFC 3490) 
> requires normalization form KC, but RFC 5891 which replaces RFC 3491 requires 
> normalization form C. I believe that the BRs and/or RFC 5280 should be 
> updated to reference RFC 5890 and by extension RFC 5891 instead.

I did some reading on Unicode normalization today, and it strongly
appears that any string that has been normalized to normalization form
KC is by definition also in normalization form C.  Normalization is
idempotent, so doing toNFKC(toNFKC(x)) will result in the same string
as just doing toNFKC(x), and toNFC(toNFC(x)) is the same as toNFC(x).
Additionally, toNFKC() is a compatibility decomposition followed by a
canonical composition, so its output is already in NFC.

This means that checking that a string matches the result of
toNFC(string) is a valid check regardless of whether using the 349* or
589* RFCs.  It does mean that Certlint will not catch strings that are
in NFC but not in NFKC.
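
A quick way to convince yourself of the idempotence and containment claims above (a throwaway Python check, not part of Certlint):

```python
import unicodedata

def nfc(s):
    return unicodedata.normalize("NFC", s)

def nfkc(s):
    return unicodedata.normalize("NFKC", s)

# A few strings containing combining marks and compatibility characters.
samples = ["e\u0301tude", "\ufb01le", "\u212bngstrom"]

for s in samples:
    assert nfc(nfc(s)) == nfc(s)      # NFC is idempotent
    assert nfkc(nfkc(s)) == nfkc(s)   # NFKC is idempotent
    assert nfc(nfkc(s)) == nfkc(s)    # NFKC output is already in NFC
print("all checks passed")
```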

Thanks,
Peter

P.S. I've yet to find a registered domain name not in NFC, and that
includes checking every name in the zone files for all ICANN gTLDs
and a few ccTLDs.


Re: 2017.08.10 Let's Encrypt Unicode Normalization Compliance Incident

2017-08-13 Thread Peter Bowen via dev-security-policy
On Sun, Aug 13, 2017 at 5:59 PM, Matt Palmer via dev-security-policy
 wrote:
> On Fri, Aug 11, 2017 at 06:32:11PM +0200, Kurt Roeckx via dev-security-policy 
> wrote:
>> On Fri, Aug 11, 2017 at 11:48:50AM -0400, Ryan Sleevi via 
>> dev-security-policy wrote:
>> >
>> > Could you expand on what you mean by "cablint breaks" or "won't complete in
>> > a timely fashion"? That doesn't match my understanding of what it is or how
>> > it's written, so perhaps I'm misunderstanding what you're proposing?
>>
>> My understand is that it used to be very slow for crt.sh, but
>> that something was done to speed it up. I don't know if that change
>> was something crt.sh specific. I think it was changed to not
>> always restart, but have a process that checks multiple
>> certificates.
>
> I suspect you're referring to the problem of certlint calling out to an
> external program to do ASN.1 validation, which was fixed in
> https://github.com/awslabs/certlint/pull/38.  I believe the feedback from
> Rob was that it did, indeed, do Very Good Things to certlint performance.

I just benchmarked the current cablint code, using 2000 certs from CT
as a sample.  On a single thread of a Intel(R) Xeon(R) CPU E5-2670 v2
@ 2.50GHz, it processes 394.5 certificates per second.  This is 2.53ms
per certificate or 1.4 million certificates per hour.

Thank you Matt for that patch!  This was a _massive_ improvement over
the old design.

Thanks,
Peter


Re: WoSign new system passed Cure 53 system security audit

2017-07-13 Thread Peter Bowen via dev-security-policy
Richard,

I can only guess what Ryan is talking about as the report wasn't sent
to this group, but it is possible that the system described could not
meet the Baseline Requirements, as the BRs do require certain system
designs.  For example, two requirements are:

"Require that each individual in a Trusted Role use a unique
credential created by or assigned to that person in order to
authenticate to Certificate Systems" and "Enforce multi-factor
authentication for administrator access to Issuing Systems and
Certificate Management Systems"

If the system does not do these things, then it "cannot meet the BRs,
you would have to change that system to meet the BR" (quoting Ryan).

Please keep in mind that these are only guesses; there are numerous
other things that could be in the report that could lead to the same
conclusion.

Thanks,
Peter

On Thu, Jul 13, 2017 at 5:04 PM, Richard Wang via dev-security-policy
 wrote:
> Hi Ryan,
>
> Thanks for your detail info.
>
> But I still CAN NOT understand why you say and confirm that the new system 
> cannot and does not comply with BR before we start to use it.
>
> We will do the BR audit soon.
>
> Best Regards,
>
> Richard
>
> On 14 Jul 2017, at 00:50, Ryan Sleevi 
> > wrote:
>
> You will fail #4. Because your system, as designed, cannot and does not 
> comply with the Baseline Requirements.
>
> As such, you will then
> (4.1) Update new system, developing new code and new integrations
> (4.2) Engage the auditor to come back on side
> (4.3) Hope you get it right this time
> (4.4) Generate a new root
> (4.5) Do the PITRA audit and hopefully pass
> (4.6) Hope that the security audit from #1 still applies to #4.1 [but because 
> the changes needed are large, it's hard to imagine]
> (5) Apply for the new root inclusion
>
> The system you had security audited in #1 cannot pass #4. That's why working 
> with an auditor to do a readiness assessment in conjunction with or before 
> the security assessment can help ensure you can meet the BRs, and then ensure 
> you can meet them securely.
>
> On Thu, Jul 13, 2017 at 11:04 AM, Richard Wang 
> > wrote:
> Hi Ryan,
>
> I really don't understand where the new system can't meet the BR, we don't 
> use the new system to issue one certificate, how it violate the BR?
>
> Our step is:
> (1) develop a new secure system in the new infrastructure, then do the new 
> system security audit, pass the security audit;
> (2) engage a WebTrust auditor onsite to generate the new root in the new 
> system;
> (3) use the new audited system to issue certificate;
> (4) do the PITRA audit and WebTrust audit;
> (5) apply the new root inclusion.
>  While we start to apply the new root application, we will follow the 
> requirements here: https://bugzilla.mozilla.org/show_bug.cgi?id=1311824
> to demonstrate we meet the 6 requirements.
>
> We will discard the old system and facilities, so the right order should be 
> have-new-system first, then audit the new system, then apply the new root 
> inclusion. We can not use the old system to do the BR audit.
>
> Please advise, thanks.
>
>
> Best Regards,
>
> Richard
>
> On 13 Jul 2017, at 21:53, Ryan Sleevi 
> > wrote:
>
> Richard,
>
> That's great, but the system that passed the full security audit cannot meet 
> the BRs, you would have to change that system to meet the BRs, and then that 
> new system would no longer be what was audited.
>
> I would encourage you to address the items in the order that Mozilla posed 
> them - such as first systematically identifying and addressing the flaws 
> you've found, and then working with a qualified auditor to demonstrate both 
> remediation and that the resulting system is BR compliant. And then perform 
> the security audit. This helps ensure your end result is most aligned with 
> the desired state - and provides the public the necessary assurances that 
> WoSign, and their management, understand what's required of a publicly 
> trusted CA.
>
> On Wed, Jul 12, 2017 at 10:24 PM, Richard Wang 
> > wrote:
> Hi Ryan,
>
> We got confirmation from Cure 53 that new system passed the full security 
> audit. Please contact Cure 53 directly to verify this, thanks.
>
> We don't start the BR audit now.
>
> Best Regards,
>
> Richard
>
> On 12 Jul 2017, at 22:09, Ryan Sleevi 
> > wrote:
>
>
>
> On Tue, Jul 11, 2017 at 8:18 PM, Richard Wang 
> > wrote:
> Hi all,
>
> Your reported BR issues is from StartCom, not WoSign, we don't use the new 
> system to issue any certificate now since the new root is not generated.
> PLEASE DO NOT mix it, thanks.
>
> Best Regards,
>
> Richard
>
> No, the BR non-compliance is demonstrated from the report provided to 
> browsers - that is, the full report 

Re: [EXT] Symantec Update on SubCA Proposal

2017-07-21 Thread Peter Bowen via dev-security-policy
Steve,

I think this level of public detail is very helpful when it comes to
understanding the proposal.

On Thu, Jul 20, 2017 at 8:00 AM, Steve Medin via dev-security-policy
 wrote:
> 1)  December 1, 2017 is the earliest credible date that any RFP 
> respondent can provide the Managed CA solution proposed by Google, assuming a 
> start date of August 1, 2017. Only one RFP respondent initially proposed a 
> schedule targeting August 8, 2017 (assuming a start date of June 12, 2017). 
> We did not deem this proposal to be credible, however, based on the lack of 
> specificity around our RFP evaluation criteria, as compared to all other RFP 
> responses which provided detailed responses to all aspects of the RFP, and we 
> have received no subsequent information from this bidder to increase our 
> confidence.

You note that this assumes a start date of June 12.   A later email
from Rick Andrews says "Our proposed dates assume we are able to
finalize negotiation of contracts with the selected Managed CA
partner(s), [...] by no later than July 31, 2017."

Presumably the June 12 date is long gone.  However if one assumes the
delta of 57 days from start to delivery stands, this would put
delivery at September 26, 2017.  This is two months sooner than the
December 1 date.  This seems like a pretty big difference.  Given you
are asking to delay the timeline based on other RFP respondents being
unable to hit earlier dates, it seems prudent to ask whether you
attempted to investigate the proposal from the bidder who proposed
August 8.

Given that one of the requirements stated by Google is that the SubCA
operator had to have roots that have been in the Google trust store
for several years, it seems unusual that any eligible respondent would
not be "credible" out of the gate.

Did you ask them to provide more information and details to help
determine if it was a "credible" offer?

> 2)  We are using several selection criteria for evaluating RFP responses, 
> including the depth of plan to address key technical integration and 
> operational requirements, the timeframe to execute, the ability to handle the 
> scope, volume, language, and customer support requirements both for ongoing 
> issuance and for one-time replacement of certificates issued prior to June 1, 
> 2016, compliance program and posture, and the ability to meet uptime, 
> interface performance, and other SLAs. Certain RFP respondents have 
> distinguished themselves based on the quality and depth of their integration 
> planning assumptions, requirements and activities, which have directly 
> influenced the dates we have proposed for the SubCA proposal.
>
> 3)  The RFP was first released on May 26, 2017. The first round of bidder 
> responses was first received on June 12, 2017.

In the 
https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUAKwjihhBs/ovLalSBRBQAJ
message, it was implied that Symantec was aware of the SubCA plan and
dates since at least May 12.  Given the plan to sign an agreement by
July 31, the August 8 date seems rather impossible. Did Symantec push
back on the August 8 date at that point?

In the original email that started this subthread, you said, "Some of
the prospective Managed CAs have proposed supporting only a portion of
our volume (some by customer segment, others by geographic focus), so
we are also evaluating options that involve working with multiple
Managed CAs."

Have you considered a staggered date system for different classes of
certificates?  For example, I would assume that certificates that
don't contain subject identity information would require less
migration and integration work than EV certificates.  Given that it is
common practice to have a different SubCA for different certificate
types, could you hit an earlier date for non-EV certificates and then
later have the EV SubCA ready?

Thanks,
Peter


SRVNames in name constraints

2017-07-03 Thread Peter Bowen via dev-security-policy
In reviewing the Mozilla CA policy, I noticed one bug that is probably
my fault.  It says:

"name constraints which do not allow Subject Alternative Names (SANs)
of any of the following types: dNSName, iPAddress, SRVName,
rfc822Name"

SRVName is not yet allowed by the CA/Browser Forum Baseline
Requirements (BRs), so I highly doubt any CA has issued a
cross-certificate containing constraints on SRVName-type names.  Until
the Forum allows such issuance, I think this requirement should be
changed to remove SRVName from the list.  If the Forum does allow such
in the future, adding this back can be revisited at such time.

Thanks,
Peter


Re: SRVNames in name constraints

2017-07-03 Thread Peter Bowen via dev-security-policy
We still need to get the policy changed, even with the ballot.  As
written right now, all name-constrained CA certificates are no longer
considered technically constrained.

On Mon, Jul 3, 2017 at 9:42 AM, Jeremy Rowley
<jeremy.row...@digicert.com> wrote:
> Isn't this ballot ready to go?  If we start the review period now, it'll be
> passed by the time the Mozilla policy is updated.
>
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla
> .org] On Behalf Of Peter Bowen via dev-security-policy
> Sent: Monday, July 3, 2017 10:30 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: SRVNames in name constraints
>
> In reviewing the Mozilla CA policy, I noticed one bug that is probably my
> fault.  It says:
>
> "name constraints which do not allow Subject Alternative Names (SANs) of any
> of the following types: dNSName, iPAddress, SRVName, rfc822Name"
>
> SRVName is not yet allowed by the CA/Browser Forum Baseline Requirements
> (BRs), so I highly doubt any CA has issued a cross-certificate containing
> constraints on SRVName-type names.  Until the Forum allows such issuance, I
> think this requirement should be changed to remove SRVName from the list.
> If the Forum does allow such in the future, adding this back can be
> revisited at such time.
>
> Thanks,
> Peter


Re: SRVNames in name constraints

2017-07-05 Thread Peter Bowen via dev-security-policy

> On Jul 5, 2017, at 4:23 AM, Gervase Markham via dev-security-policy 
>  wrote:
> 
> On 03/07/17 17:44, Peter Bowen wrote:
>> We still need to get the policy changed, even with the ballot.  As
>> written right now, all name constrained certificates are no longer
>> considered constrained.
> 
> I'm not sure what you mean... What's the issue you are raising here?

Right now (Policy v2.5) says:

Intermediate certificates which have at least one valid, unrevoked chain up to 
such a CA certificate and which are not technically constrained to prevent 
issuance of working server or email certificates. Such technical constraints 
could consist of either:

- an Extended Key Usage (EKU) extension which does not contain any of these
  KeyPurposeIds: anyExtendedKeyUsage, id-kp-serverAuth, id-kp-emailProtection; or:
- name constraints which do not allow Subject Alternative Names (SANs) of any of
  the following types: dNSName, iPAddress, SRVName, rfc822Name

The second bullet says “any”.  As the rule for name constraints is that if they 
are not present for a type, then any name is allowed, you have to include name 
constraints for all four types.  The issue comes down to the definition of 
“working server” certificates.  Mozilla does not use either rfc822names or 
SRVName for name validation for server authentication, but you could have a 
valid server certificate that has only these names.  Is NSS/Firefox code 
considered a “technical constraint”?  If not, then all technically constrained 
CA certificates need to have constraints on SRVName and rfc822Name type General 
Names in addition to what they have now.
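
For concreteness, a rough sketch of how one might check which of the four name types a CA certificate actually constrains (assuming Python with the cryptography package; the SRVName otherName OID 1.3.6.1.5.5.7.8.7 is id-on-dnsSRV from RFC 4985, and the file path in the usage comment is hypothetical):

```python
from cryptography import x509

# id-on-dnsSRV from RFC 4985; SRVName entries appear as otherName with this OID.
SRVNAME_OID = x509.ObjectIdentifier("1.3.6.1.5.5.7.8.7")

ALL_TYPES = {"dNSName", "iPAddress", "rfc822Name", "SRVName"}

def constrained_name_types(cert: x509.Certificate) -> set:
    """Return the subset of the four name types that have at least one
    permitted or excluded subtree entry in the nameConstraints extension."""
    try:
        nc = cert.extensions.get_extension_for_class(x509.NameConstraints).value
    except x509.ExtensionNotFound:
        return set()
    found = set()
    for name in (nc.permitted_subtrees or []) + (nc.excluded_subtrees or []):
        if isinstance(name, x509.DNSName):
            found.add("dNSName")
        elif isinstance(name, x509.IPAddress):
            found.add("iPAddress")
        elif isinstance(name, x509.RFC822Name):
            found.add("rfc822Name")
        elif isinstance(name, x509.OtherName) and name.type_id == SRVNAME_OID:
            found.add("SRVName")
    return found

# Example usage against a PEM file on disk (path is hypothetical):
# with open("subca.pem", "rb") as f:
#     cert = x509.load_pem_x509_certificate(f.read())
# print("unconstrained name types:", ALL_TYPES - constrained_name_types(cert))
```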

Thanks,
Peter


Re: Certificates with metadata-only subject fields

2017-08-09 Thread Peter Bowen via dev-security-policy
The point of certlint was to help identify issues.  While I appreciate
it getting broad usage, I don't think pushing for revocation of every
certificate that trips any of the Error level checks is productive.
This reminds me of people trawling a database of known
vulnerabilities then reporting them to the vendors and asking for a
reward, which happens all too often in bug bounty programs.

I think it would be much more valuable to have a "score card" by CA
Operator that shows absolute defects and defect rate.

Thanks,
Peter

On Wed, Aug 9, 2017 at 2:21 PM, Jeremy Rowley via dev-security-policy
 wrote:
> And this is exactly why we need separate tiers of revocation. Here, there is 
> zero risk to the end user.  I do think it should be fixed and remediated, but 
> revoking all these certs within 24 hours seems unnecessarily harsh.  I think 
> there was a post about this a while ago, but I haven't been able to find it.  
> If someone remembers where it was, I'd appreciate it.
>
> -Original Message-
> From: dev-security-policy 
> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org]
>  On Behalf Of Jonathan Rudenberg via dev-security-policy
> Sent: Wednesday, August 9, 2017 10:08 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Certificates with metadata-only subject fields
>
> Baseline Requirements section 7.1.4.2.2(j) says:
>
>> All other optional attributes, when present within the subject field, MUST 
>> contain information that has been verified by the CA. Optional attributes 
>> MUST NOT contain metadata such as ‘.’, ‘‐‘, and ‘ ‘ (i.e. space) characters, 
>> and/or any other indication that the value is absent, incomplete, or not 
>> applicable.
>
> There are 522 unexpired unrevoked certificates known to CT issued after 
> 2015-11-01 that are trusted by NSS for server authentication and have at 
> least one subject field that only contains ASCII punctuation characters.
>
> The full list can be found here: https://misissued.com/batch/5/
>
> Since there are so many, I have included a list of the CCADB owner, 
> intermediate commonName, and count of certificates for the 311 certificates 
> in this batch that were issued in the last 365 days so that the relevant CAs 
> can add the appropriate technical controls and policy to comply with this 
> requirement in the future. Please let me know if there is any additional 
> information that would be useful.
>
> Jonathan
>
> —
>
> DigiCert (131)
> Cybertrust Japan Public CA G3 (64)
> DigiCert SHA2 Extended Validation Server CA (36)
> DigiCert SHA2 High Assurance Server CA (12)
> TERENA SSL CA 3 (7)
> DigiCert SHA2 Secure Server CA (6)
> Cybertrust Japan EV CA G2 (6)
>
> GlobalSign (62)
> GlobalSign Organization Validation CA - SHA256 - G2 (46)
> GlobalSign Extended Validation CA - SHA256 - G2 (8)
> GlobalSign Extended Validation CA - SHA256 - G3 (8)
>
> Symantec / VeriSign (35)
> Symantec Class 3 Secure Server CA - G4 (32)
> Symantec Class 3 EV SSL CA - G3 (2)
> Wells Fargo Certificate Authority WS1 (1)
>
> Symantec / GeoTrust (34)
> GeoTrust SSL CA - G3 (25)
> GeoTrust SHA256 SSL CA (5)
> RapidSSL SHA256 CA (2)
> GeoTrust Extended Validation SHA256 SSL CA (2)
>
> Comodo (19)
> COMODO RSA Organization Validation Secure Server CA (11)
> COMODO RSA Extended Validation Secure Server CA (8)
>
> Symantec / Thawte (17)
> thawte SSL CA - G2 (12)
> thawte SHA256 SSL CA (3)
> thawte EV SSL CA - G3 (2)
>
> T-Systems International GmbH (Deutsche Telekom) (6)
> Zertifizierungsstelle FH Duesseldorf - G02 (3)
> TeleSec ServerPass Class 2 CA (2)
> Helmholtz-Zentrum fuer Infektionsforschung (1)
>
> QuoVadis (3)
> QuoVadis EV SSL ICA G1 (2)
> QuoVadis Global SSL ICA G2 (1)
>
> SECOM Trust Systems Co. Ltd. (2)
> NII Open Domain CA - G4 (2)
>
> SwissSign AG (1)
> SwissSign Server Gold CA 2014 - G22 (1)
>
> Entrust (1)
> Entrust Certification Authority - L1K (1) 


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Peter Bowen via dev-security-policy
On Mon, Aug 7, 2017 at 12:53 AM, Franck Leroy via dev-security-policy
 wrote:
> Hello
>
> I checked only one but I think they are all the same.
>
> The integer value of the serial number is 20 octets, but when encoded into 
> DER a starting 00 may be necessary to mark the integer as a positive value :
>
>    0 1606: SEQUENCE {
>    4 1070:   SEQUENCE {
>    8    3:     [0] {
>   10    1:       INTEGER 2
>           :       }
>   13   21:     INTEGER
>           :       00 A5 45 35 99 1C E2 8B 6D D9 BC 1E 94 48 CC 86
>           :       7C 6B 59 9E B3
>
> So the serialNumber (integer) value is 20 octets long but length can be more 
> depending on the encoding representation.
>
> Here is ASCII (a common representation when stored into a database): 
> "A54535991CE28B6DD9BC1E9448CC867C6B599EB3" it is 40 octets long, VARCHAR(40) 
> is needed.

The text from 5280 says:

" CAs MUST force the serialNumber to be a non-negative integer, that
   is, the sign bit in the DER encoding of the INTEGER value MUST be
   zero.  This can be done by adding a leading (leftmost) `00'H octet if
   necessary.  This removes a potential ambiguity in mapping between a
   string of octets and an integer value.

   As noted in Section 4.1.2.2, serial numbers can be expected to
   contain long integers.  Certificate users MUST be able to handle
   serialNumber values up to 20 octets in length.  Conforming CAs MUST
   NOT use serialNumber values longer than 20 octets."

This makes it somewhat unclear whether the `00'H octet is to be included in
the 20 octet limit or not. While I can see how one might view it
differently, I think the correct interpretation is to include the
leading `00'H octet in the count.  This is because
CertificateSerialNumber is defined as being an INTEGER, which means
"octet" is not applicable.  If it was defined as OCTET STRING, similar
to how KeyIdentifier is defined, then octet could be seen as applying
to the unencoded value.  However, given this is an INTEGER, the only
way to get octets is to encode and this requires the leading bit to be
zero for non-negative values.
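
To make the counting concrete, a small illustration (a toy Python calculation, not tied to any particular library):

```python
import os

def der_integer_content_length(n: int) -> int:
    """Content octets in the DER encoding of a non-negative INTEGER value."""
    if n < 0:
        raise ValueError("serial numbers must be non-negative")
    # An extra octet is needed whenever the leading octet would otherwise
    # have its high (sign) bit set.
    return n.bit_length() // 8 + 1

# A 160-bit (20-octet) random value with the top bit set encodes to 21
# content octets because of the required leading 00.
serial = int.from_bytes(os.urandom(20), "big") | (1 << 159)
print(der_integer_content_length(serial))        # 21

# Dropping to 159 bits (or clearing the top bit) keeps it at 20 octets.
print(der_integer_content_length(serial >> 1))   # 20
```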

That being said, I think that it is reasonable to add "DER encoding of
Serial must be 20 octets or less including any leading 00 octets" to
the list of ambiguities that CAs must fix by date X, rather than
something that requires revocation.

Thanks,
Peter


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Peter Bowen via dev-security-policy
(inserted missed word; off to get coffee now)

On Mon, Aug 7, 2017 at 7:54 AM, Peter Bowen  wrote:
> On Mon, Aug 7, 2017 at 12:53 AM, Franck Leroy via dev-security-policy
>  wrote:
>> Hello
>>
>> I checked only one but I think they are all the same.
>>
>> The integer value of the serial number is 20 octets, but when encoded into 
>> DER a starting 00 may be necessary to mark the integer as a positive value :
>>
>>    0 1606: SEQUENCE {
>>    4 1070:   SEQUENCE {
>>    8    3:     [0] {
>>   10    1:       INTEGER 2
>>           :       }
>>   13   21:     INTEGER
>>           :       00 A5 45 35 99 1C E2 8B 6D D9 BC 1E 94 48 CC 86
>>           :       7C 6B 59 9E B3
>>
>> So the serialNumber (integer) value is 20 octets long but length can be more 
>> depending on the encoding representation.
>>
>> Here is ASCII (a common representation when stored into a database): 
>> "A54535991CE28B6DD9BC1E9448CC867C6B599EB3" it is 40 octets long, VARCHAR(40) 
>> is needed.
>
> The text from 5280 says:
>
> " CAs MUST force the serialNumber to be a non-negative integer, that
>is, the sign bit in the DER encoding of the INTEGER value MUST be
>zero.  This can be done by adding a leading (leftmost) `00'H octet if
>necessary.  This removes a potential ambiguity in mapping between a
>string of octets and an integer value.
>
>As noted in Section 4.1.2.2, serial numbers can be expected to
>contain long integers.  Certificate users MUST be able to handle
>serialNumber values up to 20 octets in length.  Conforming CAs MUST
>NOT use serialNumber values longer than 20 octets."
>
> This makes it somewhat unclear whether the `00'H octet is to be included in
> the 20 octet limit or not. While I can see how one might view it
> differently, I think the correct interpretation is to include the
> leading `00'H octet in the count.  This is because
> CertificateSerialNumber is defined as being an INTEGER, which means
> "octet" is not applicable.  If it was defined as OCTET STRING, similar
> to how KeyIdentifier is defined, then octet could be seen as applying
> to the unencoded value.  However, given this is an INTEGER, the only
> way to get octets is to encode and this requires the leading bit to be
> zero for non-negative values.
>
> That being said, I think that it is reasonable to add "DER encoding of
> Serial must be 20 octets or less including any leading 00 octets" to
> the list of ambiguities that CAs must fix by date X, rather than
> something that requires revocation.
>
> Thanks,
> Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: BR compliance of legacy certs at root inclusion time

2017-08-20 Thread Peter Bowen via dev-security-policy
On Fri, Aug 18, 2017 at 8:47 AM, Ryan Sleevi via dev-security-policy
 wrote:
> On Fri, Aug 18, 2017 at 11:02 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Sometimes, CAs apply for inclusion with new, clean roots. Other times,
>> CAs apply to include roots which already have a history of issuance. The
>> previous certs issued by that CA aren't always all BR-compliant. Which
>> is in one sense understandable, because up to this point the CA has not
>> been bound by the BRs. Heck, the CA may never even have heard of the BRs
>> until they come to apply - although this seems less likely than it would
>> once have been.
>>
>> What should our policy be regarding BR compliance for certificates
>> issued by a root requesting inclusion, which were issued before the date
>> of their request? Do we:
>>
>> A) Require all certs be BR-compliant going forward, but grandfather in
>>the old ones; or
>> B) Require that any non-BR-compliant old certs be revoked; or
>> C) Require that any seriously (TBD) non-BR-compliant old certs be
>>revoked; or
>> D) something else?
>>
>
> D) Require that the CA create a new root certificate to be included within
> Mozilla products, and which all future BR-compliant certificates will be
> issued from this new root. In the event this CA has an existing root
> included within one or more software products, this CA may cross-certify
> their new root with their old root, thus ensuring their newly-issued
> certificates (which are BR compliant) work with such legacy software.
>
> This ensures that all included CAs operate from a 'clean slate' with no
> baggage or risk. It also ensures that the slate always starts from "BR
> compliant" and continues forward.
>
> However, some (new) CAs may rightfully point out that existing, 'legacy'
> CAs have not had this standard applied to them, and have operated in a
> manner that is not BR compliant in the past.
>
> To reduce and/or eliminate the risk from existing CAs, particularly those
> with long and storied histories of misissuance, which similar present
> unknowns to the community (roots that may have been included for >5 years,
> thus prior to the BR effective date), require the same of existing roots
> who cannot demonstrate that they have had BR audits from the moment of
> their inclusion. That is, require 'legacy' CAs to create and stand up new
> roots, which will be certified by their existing roots, and transition all
> new certificate issuance to these new 'roots' (which will appear to be
> cross-signed/intermediates, at first). Within 39 months, Mozilla will be
> able to remove all 'legacy' roots for purposes of website authentication,
> adding these 'clean' roots in their stead, without any disruption to the
> community. Note that this is separable from D, and represents an effort to
> holistically clean up and reduce risk.
>
> The transition period at present cannot be less than 39 months (the maximum
> validity of a newly issued certificate), plus whatever time is afforded to
> CAs to transition (presumably, on the order of 6 months should be
> sufficient). In the future, it would also be worth considering reducing the
> maximum validity of certificates, such that such rollovers can be completed
> in a more timely fashion, thus keeping the ecosystem in a constant 'clean'
> state.

From the perspective of being "clean" from a given NSS version, this
makes sense.  However, the reality in most situations is that there is
demand to support applications and devices with trust stores that have
not been updated for a while.  This could be something like Firefox ESR
or a device with an older trust store.  Assuming the same certificate
chain needs to work in both scenarios, the TLS server may need to send
a chain with multiple root-to-root cross-certificates.

To get a feel for how long a non-looping path might be, I recently
pulled trust stores for dozens of versions of Windows, Netscape,
Mozilla, and Java.  I then used unexpired cross-certificates from CT
to group these trust anchors into unique clusters or disconnected
graphs.  The results are available as gists.

https://gist.github.com/pzb/cd10fbfffd7cb25bb57c38c3865f18f2 is just
the roots in each unique disconnected graph.  Having the entries there
does not imply that they have all cross-signed each other, only that
there is a path from each pair of roots to a common node.  For
example, Root A and Root B might each have a subordinate CA, and both
of those subordinates might have cross-certified the same third
subordinate.

https://gist.github.com/pzb/ffab25cbe7d32c616792a5dec3711315 is the
same data with all the unexpired subordinate cross-certificates
included.

Note that the clustering does not take into account anything besides
expiration; for example it is possible that two paths to a common node
have conflicting constraints.
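
For what it's worth, the grouping step is just connected components over
cross-certificate edges.  A rough Python sketch of that idea, assuming
issuer and subject identifiers have already been extracted from the
unexpired cross-certificates (the input format here is made up and is
not the actual tooling behind the gists):

def cluster_roots(trust_anchors, cross_cert_edges):
    # Union-find over every node that appears in a cross-certificate.
    parent = {anchor: anchor for anchor in trust_anchors}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Two CAs land in the same cluster whenever a cross-certificate
    # links them, directly or through intermediate nodes.
    for issuer, subject in cross_cert_edges:
        union(issuer, subject)

    clusters = {}
    for anchor in trust_anchors:
        clusters.setdefault(find(anchor), set()).add(anchor)
    return list(clusters.values())

# Root A and Root B each cross-certify the same third CA, so they end up
# in one cluster even though neither has signed the other.
edges = [("Root A", "Sub X"), ("Root B", "Sub X")]
print(cluster_roots({"Root A", "Root B", "Root C"}, edges))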

Considering we already see paths like:

OU=Class 3 Public 

Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Bowen via dev-security-policy

> On May 16, 2017, at 7:42 AM, Jakob Bohm via dev-security-policy 
>  wrote:
> 
> On 13/05/2017 00:48, Ryan Sleevi wrote:
>> 
>> And in the original message, what was requested was
>> "If Mozilla is interested in doing a substantial public service, this
>> situation could be improved by having Mozilla and MDSP define a static
>> configuration format that expresses the graduated trust rules as data, not
>> code."
>> 
>> Mozilla does express such graduated trust rules as data, not code, when
>> possible. This is available with in the certdata.txt module data as
>> expressive records using the NSS vendor attributes.
>> 
>> Not all such requirements can be expressed as data, not code, but when
>> possible, Mozilla does. That consuming applications do not make use of that
>> information is something that consuming applications should deal with.
>> 
> 
> I suggest you read and understand the OP in this thread, which is
> *entirely* about using the Mozilla Root Store outside Mozilla code.
> 
> Yet you keep posting noise about using the Mozilla store with Mozilla
> code such as NSS, with Mozilla internal database formats, etc. etc.
> 
> Just above you commented "Not all such requirements can be expressed as
> code", which is completely backwards thinking when the request is for
> putting all additional conditions in an open database in a *stable*
> data format that can be easily and fully consumed by non-Mozilla code.

Jakob,

What I think Ryan has been trying to express is his view that this request 
cannot be satisfied: a *stable* data format is unable to express future 
graduated trust rules.

To see why Ryan likely has this view, consider the authroot.stl file used by 
Microsoft Windows.  The structure is essentially a certificate plus a set of 
properties.  The properties are name value pairs.  The challenge in using this 
file is that the list of properties keeps extending.  New property names are 
added on a fairly routine basis.  For example, the last update added 
NOT_BEFORE_FILETIME and NOT_BEFORE_ENHKEY_USAGE.  This is great — we now know 
that certain roots have one or both of these properties, both of which 
represent some sort of restriction.  However, we have zero clue what they mean 
or how to process them.

Now consider certdata.txt, the Mozilla trust store format.  It is similarly 
extensible, after all it is just a serialization of a PKCS#11 token.  PKCS#11 
has objects which each have attributes.  Mozilla certdata.txt could take the 
exact same path as authroot.stl and just add attributes for each new rule.  
Imagine a new attribute on CKO_NSS_TRUST class objects called 
CKA_NAME_CONSTRAINTS.  It would contain a DER-encoded NameConstraints value.  If this 
were suddenly added, what would existing libraries do?  Probably just ignore 
it, because they don’t query for CKA_NAME_CONSTRAINTS.  Taking this to an 
extreme, certain objects could even have attributes like 
CKA_CONSTRAINT_METHOD whose value is the name of a function.
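
Sketched in certdata.txt syntax, such a hypothetical object might look like
the following (abridged; CKA_NAME_CONSTRAINTS does not exist today, and the
octal payload is simply a DER NameConstraints permitting the dNSName
example.com):

CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_TOKEN CK_BBOOL CK_TRUE
CKA_LABEL UTF8 "Example Root CA"
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_TRUSTED_DELEGATOR
# Hypothetical new attribute carrying a DER-encoded NameConstraints value
CKA_NAME_CONSTRAINTS MULTILINE_OCTAL
\060\021\240\017\060\015\202\013\145\170\141\155\160\154\145\056\143\157\155
END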

While this would be stable and would express all the rules, it isn’t clear 
that this is valuable to anyone, because you need matching code to query the 
attributes and do something with the values.  It also can lead to a false sense 
of security because using a new certdata.txt with an old library will not 
actually implement the trust changes.

Does this help explain the problem?

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Bowen via dev-security-policy
On Tue, May 16, 2017 at 10:04 AM, Jakob Bohm via dev-security-policy
 wrote:
> On 16/05/2017 18:10, Peter Bowen wrote:
>>
>> On Tue, May 16, 2017 at 9:00 AM, Jakob Bohm via dev-security-policy
>>  wrote:
>>>
>>> Your post above is the first response actually saying what is wrong
>>> with the Microsoft format and the first post saying all the
>>> restrictions are actually in the certdata.txt file, and not just in the
>>> binary file used by the NSS library.
>>
>>
>> What "binary file" are you referring to?  NSS is distributed as source
>> and I'm unaware of any binary file used by the NSS library for trust
>> decisions.
>>
>
> Source code for Mozilla products presumably includes some binary files
> (such as PNG files), so why not a binary database file that becomes
> that data that end users can view (and partially edit) in the Mozilla
> product dialogs.  Existence of a file named "generate_certdata.py",
> which is not easily grokked also confused me into thinking that
> certdata.txt was some kind of extracted subset.
>
> Anyway, having now looked closer at the file contents (which does look
> like computer output), I have been unable to find a line that actually
> expresses any of the already established "gradual trusts".
>
> Could you please point out where in certdata.txt the following are
> expressed, as I couldn't find it in a quick scan:
>
> 1. The date restrictions on WoSign-issued certificates.
>
> 2. The EV trust bit for some CAs.

These are not included in certdata.txt for the reasons described
earlier -- they are application-only things, not Mozilla platform
things.  I know it is non-obvious, but there are two parts of
processing certificates in many applications:

1) The certificate is passed to the platform library (along with some
other data, like name to validate) and a result is returned.
2) Then the application makes further decisions.

This is not only true for Chrome but also Firefox.  EV information is
decided by the application.  See
https://dxr.mozilla.org/mozilla-central/source/security/certverifier/ExtendedValidation.cpp
for information about deciding on EV.  See
https://dxr.mozilla.org/mozilla-central/source/security/certverifier/NSSCertDBTrustDomain.cpp#898
for additional checks (outside NSS) added by Firefox.

This could be moved into NSS, but there hasn't been demand to do so at
this point.  It could also be added as unused attributes in
certdata.txt (which is the master), but no one has volunteered to
extend it to support the additional info and add the necessary
tests to ensure that it doesn't go stale.

My experience is that Mozilla is very open to taking patches and will
help contributors get things into acceptable form, so I'm sure they
would be happy to take patches if there is demand for such.  It is
fairly important that someone who is going to use the attributes put
together the patch, otherwise it may prove to be useless.  For
example, I could easily create a patch that adds a CKA_TRUST_FILTER
attribute that is designed to be fed into a case statement to indicate
the filter to be applied.  Based on the code, it looks like I would
probably need a "cnnic" case, a "wosign" case, and a "globalsignr2" case.
This meets my needs, but it might not meet your needs.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Bowen via dev-security-policy
On Tue, May 16, 2017 at 10:52 AM, Jakob Bohm via dev-security-policy
 wrote:
> On 16/05/2017 19:36, Peter Bowen wrote:
>>
>> My experience is that Mozilla is very open to taking patches and will
>> help contributors get things into acceptable form, so I'm sure they
>> would be happy to take patches if there is demand for such.  It is
>> fairly important that someone who is going to use the attributes put
>> together the patch, otherwise it may prove to be useless.  For
>> example, I could easily create a patch that adds a CKA_TRUST_FILTER
>> attribute that is designed to be fed into a case statement to indicate
>> the filter to be applied.  Based on the code, it looks like I would
>> probably need a "cnnic" case, a "wosign" case, and a "globalsignr2" case.
>> This meets my needs, but it might not meet your needs.
>>
>
> Ok, can you point me to any "graduated trust" actually present in
> certdata.txt ?

See the CKA_TRUST_SERVER_AUTH, CKA_TRUST_EMAIL_PROTECTION,
CKA_TRUST_CODE_SIGNING, and CKA_TRUST_STEP_UP_APPROVED attributes in
CKO_NSS_TRUST class objects.  They all represent non-binary trust of
roots, similar to that contained in the OpenSSL X509_AUX structure
mentioned much earlier in the thread.
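
For reference, a typical CKO_NSS_TRUST entry in certdata.txt carries those
attributes along the following lines (abridged, with a made-up label; the
hash, issuer, and serial attributes of the object are omitted):

CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_LABEL UTF8 "Example Root CA"
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_TRUSTED_DELEGATOR
CKA_TRUST_EMAIL_PROTECTION CK_TRUST CKT_NSS_TRUSTED_DELEGATOR
CKA_TRUST_CODE_SIGNING CK_TRUST CKT_NSS_MUST_VERIFY_TRUST
CKA_TRUST_STEP_UP_APPROVED CK_BBOOL CK_FALSE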

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Bowen via dev-security-policy
On Tue, May 16, 2017 at 9:00 AM, Jakob Bohm via dev-security-policy
 wrote:
> Your post above is the first response actually saying what is wrong
> with the Microsoft format and the first post saying all the
> restrictions are actually in the certdata.txt file, and not just in the
> binary file used by the NSS library.

What "binary file" are you referring to?  NSS is distributed as source
and I'm unaware of any binary file used by the NSS library for trust
decisions.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-13 Thread Peter Bowen via dev-security-policy

> On May 12, 2017, at 3:48 PM, Ryan Sleevi via dev-security-policy 
>  wrote:
> 
> On Fri, May 12, 2017 at 6:02 PM, Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>> 
>> This SubThread (going back to Kurt Roeckx's post at 08:06 UTC) is about
>> suggesting a good format for sharing this info across libraries though.
>> Discussing that on a list dedicated to a single library (such as NSS or
>> OpenSSL) would be pointless.
>> 
> 
> And in the original message, what was requested was
> "If Mozilla is interested in doing a substantial public service, this
> situation could be improved by having Mozilla and MDSP define a static
> configuration format that expresses the graduated trust rules as data, not
> code."
> 
> Mozilla does express such graduated trust rules as data, not code, when
> possible. This is available with in the certdata.txt module data as
> expressive records using the NSS vendor attributes.
> 
> Not all such requirements can be expressed as code, not data, but when
> possible, Mozilla does. That consuming applications do not make use of that
> information is something that consuming applications should deal with.

One thing that doesn’t happen today but would likely be broadly compatible 
would be to replace certain self-signed root certs in the trust store with 
certs that appear to be cross-signed with restrictions.  They could in reality 
have fixed values in the signature section, but this would allow adding 
constraints directly in the certificate structure.  Examples could include 
adding or modifying extensions such as extendedKeyUsage, nameConstraints, or 
privateKeyUsagePeriod.
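
A rough sketch of that idea, using the Python cryptography package purely for 
illustration: it copies the subject and key of an existing root into a new 
certificate that layers on an EKU and name constraints, and signs it with a 
throwaway key (a real trust store build would more likely splice in a fixed 
placeholder signature, as noted above); the placeholder issuer name and the 
example constraints are assumptions, not a worked-out proposal:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

def constrained_variant(root_pem: bytes) -> x509.Certificate:
    root = x509.load_pem_x509_certificate(root_pem)
    # Throwaway key: relying trust stores would treat this entry as an
    # anchor, so the signature does not need to verify against anything.
    dummy_key = ec.generate_private_key(ec.SECP256R1())
    placeholder_issuer = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME,
                            "Trust Store Constraints (placeholder)")]
    )
    builder = (
        x509.CertificateBuilder()
        .subject_name(root.subject)          # same subject as the real root
        .issuer_name(placeholder_issuer)     # appears cross-signed
        .public_key(root.public_key())       # same key as the real root
        .serial_number(x509.random_serial_number())
        .not_valid_before(root.not_valid_before)
        .not_valid_after(root.not_valid_after)
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        # Constraints layered on top of the original root:
        .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]),
                       critical=False)
        .add_extension(
            x509.NameConstraints(permitted_subtrees=[x509.DNSName("example.com")],
                                 excluded_subtrees=None),
            critical=True,
        )
    )
    return builder.sign(dummy_key, hashes.SHA256())

The point is that the constraints ride along in the certificate structure 
itself, so path building code picks them up without any new attribute parsing.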

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New undisclosed intermediates

2017-06-08 Thread Peter Bowen via dev-security-policy
On Thu, Jun 8, 2017 at 7:09 PM, Jonathan Rudenberg via
dev-security-policy  wrote:
>
>> On Jun 8, 2017, at 20:43, Ben Wilson via dev-security-policy 
>>  wrote:
>>
>> I don't believe that disclosure of root certificates is the responsibility
>> of a CA that has cross-certified a key.  For instance, the CCADB interface
>> talks in terms of "Intermediate CAs".  Root CAs are the responsibility of
>> browsers to upload.  I don't even have access to upload a "root"
>> certificate.
>
> I think the Mozilla Root Store policy is pretty clear on this point:
>
>> All certificates that are capable of being used to issue new certificates, 
>> and which directly or transitively chain to a certificate included in 
>> Mozilla’s CA Certificate Program, MUST be operated in accordance with this 
>> policy and MUST either be technically constrained or be publicly disclosed 
>> and audited.
>
> The self-signed certificates in the present set are all in scope for the 
> disclosure policy because they are capable of being used to issue new 
> certificates and chain to a certificate included in Mozilla’s CA Certificate 
> Program. From the perspective of the Mozilla root store they look like 
> intermediates because they can be used as intermediates in a valid path to a 
> root certificate trusted by Mozilla.

There are two important things about self-issued certificates:

1) They cannot expand the scope of what is allowed.
Cross-certificates can create alternative paths with different
restrictions.  Self-issued certificates do not provide alternative
paths that may have fewer constraints.

2) There is no way for a "parent" CA to prevent them from existing.
Even if the only cross-sign has a path length constraint of zero, the
"child" CA can issue self-issued certificates all day long.  If they
are self-signed there is no real value in disclosing them, given #1.

I think that it is reasonable to say that self-signed certificates are
out of scope.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New undisclosed intermediates

2017-06-08 Thread Peter Bowen via dev-security-policy
On Thu, Jun 8, 2017 at 7:02 PM, Matthew Hardeman via
dev-security-policy  wrote:
> On Thursday, June 8, 2017 at 7:44:08 PM UTC-5, Ben Wilson wrote:
>> I don't believe that disclosure of root certificates is the responsibility
>> of a CA that has cross-certified a key.  For instance, the CCADB interface
>> talks in terms of "Intermediate CAs".  Root CAs are the responsibility of
>> browsers to upload.  I don't even have access to upload a "root"
>> certificate.
>
> At least in terms of intention of disclosing the intermediates, I don't think 
> you've made a fair assessment of the situation.
>
> The responsibility to disclose must fall upon the signer.  Not the one who 
> was signed.
>
> Cross-signature certificates are, effectively, intermediates granting an 
> alternate / enhanced validation path to trust to a distinct, separate 
> hierarchy.
>
> While IdenTrust signs Let's Encrypt's intermediates rather than a cross-sign 
> of their root, the principle is ultimately the same.  The browser programs 
> clearly wish to have those who are positioned to grant trust accountable for 
> any such trust that they grant.
>
> It's one question if the other root is already in the trust store, but 
> imagine it's some large enterprise root that's been running, perhaps under 
> appropriate audits but maybe not, cross-signed by a widely trusted program 
> participant.
>
> Perhaps the text needs clarifying, but I find it hard to believe that any of 
> the browser programs is of the opinion that you can cross-sign someone else's 
> root cert and not disclose that.

I don't think that is the question at hand.  I think Ben means
"self-signed" or "self-issued" when he says "root" certificate.

I agree with Ben that self-signed certificates should be out of scope.
Self-issued certificates that are not self-signed probably should be
in scope.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Peter Bowen via dev-security-policy
On Wed, Jun 21, 2017 at 7:15 AM, Gervase Markham via
dev-security-policy  wrote:
> On 21/06/17 13:13, Doug Beattie wrote:
>>> Do they have audits of any sort?
>>
>> There had not been any audit requirements for EKU technically
>> constrained CAs, so no, there are no audits.
>
> In your view, having an EKU limiting the intermediate to just SSL or to
> just email makes it a technically constrained CA, and therefore not
> subject to audit under any root program?
>
> I ask because Microsoft's policy at http://aka.ms/auditreqs says:
>
> "Microsoft requires that every CA submit evidence of a Qualifying Audit
> on an annual basis for the CA and any non-limited root within its PKI
> chain."
>
> In your view, are these two intermediates, which are constrained only by
> having the email and client auth EKUs, "limited" or "non-limited"?

What is probably not obvious is that there is a very specific
definition of non-limited with respect to the Microsoft policy.  The
definition is unfortunately contained in the contract, which is
confidential, but the definition makes it clear that these CAs are out
of scope for audits.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ETSI auditors still not performing full annual audits?

2017-06-19 Thread Peter Bowen via dev-security-policy
On Mon, Jun 19, 2017 at 12:14 PM, Kathleen Wilson via
dev-security-policy  wrote:
> I just filed https://bugzilla.mozilla.org/show_bug.cgi?id=1374381 about an 
> audit statement that I received for SwissSign. I have copied the bug 
> description below, because I am concerned that there still may be ETSI 
> auditors (and CAs?) who do not understand the audit requirements, see below.
>
> ~~~
> SwissSign provided their annual audit statement:
> https://bug1142323.bmoattachments.org/attachment.cgi?id=8853299
>
> Problems noted in it:
> -- "Agreed-upon procedures engagement" - special words for audits - does not 
> necessarily encompass the full scope
> -- "surveillance certification audits" - does not necessarily mean a full 
> audit (which the BRs require annually)
> -- "point in time audit" -- this means that the auditor's evaluation only 
> covered that point in time (not a period in time)
> -- "only intended for the client" -- Doesn't meet Mozilla's requirement for 
> public-facing audit statement.
> -- "We were not engaged to and did not conduct an examination, the objective 
> of which would be the expression of an opinion on the Application for 
> Extended Validation (EV) Certificate. Accordingly, we do not express such an 
> opinion. Had we performed additional procedures, other matters might have 
> come to our attention that would have been reported to you." -- some of the 
> included root certs are enabled for EV treatment, so need an EV audit as well.
>
>
> According to section 8.1 of the CA/Browser Forum's Baseline Requirements:
> "Certificates that are capable of being used to issue new certificates MUST 
> ... be ... fully audited in line with all remaining requirements from this 
> section.
> ...
> The period during which the CA issues Certificates SHALL be divided into an 
> unbroken sequence of audit periods. An audit period MUST NOT exceed one year 
> in duration."
>
> So, a full period-in-time audit is required every year.
>
> After I voiced concern 
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1142323#c27) the CA provided an 
> updated audit statement to address the concerns I had raised in the bug:
> https://bugzilla.mozilla.org/attachment.cgi?id=8867948
> I do not understand how the audit statement can magically change from 
> point-in-time to a period-in-time.
> ~~~
>
> I will greatly appreciate thoughtful and constructive input into this 
> discussion about what to do about this SwissSign audit situation, and if this 
> is an indicator that ETSI auditors are still not performing full annual 
> audits that satisfy the CA/Browser Forum's Baseline Requirements.

Kathleen,

It seems there is some confusion. The document presented would appear
to be a Verified Accountant Letter (as defined in the EV Guidelines)
and can be used as part of the process to validate a request for an EV
certificate.  It is not an audit report and is not something normally
submitted to browsers.

I suspect someone simply attached the wrong document to an email or
uploaded the wrong document.  It makes no sense as part of an
audit report.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Unknown Intermediates

2017-06-23 Thread Peter Bowen via dev-security-policy
On Fri, Jun 23, 2017 at 6:17 AM, Rob Stradling via dev-security-policy
 wrote:
> On 23/06/17 14:10, Kurt Roeckx via dev-security-policy wrote:
>>
>> On 2017-06-23 14:59, Rob Stradling wrote:
>>>
>>> Reasons:
>>>- Some are only trusted by the old Adobe CDS program.
>>>- Some are only trusted for Microsoft Kernel Mode Code Signing.
>>>- Some are very old roots that are no longer trusted.
>>
>>
>> I wonder if Google's daedalus would like to see some of those.
>
>
> Daedalus only accepts expired certs.  Most of these haven't expired.
>
> If there's interest, I could add these to our Dodo log.

For those three, I would be interested in seeing them.  I wonder if
any match submariner as well.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Fri, May 19, 2017 at 6:47 AM, Gervase Markham via
dev-security-policy  wrote:
> We need to have a discussion about the appropriate scope for:
>
> 1) the applicability of Mozilla's root policy
> 2) required disclosure in the CCADB
>
> The two questions are related, with 2) obviously being a subset of 1).
> It's also possible we might decide that for some certificates, some
> subset of the Mozilla policy applies, but not all of it.
>
> I'm not even sure how best to frame this discussion, so let's have a go
> from this angle, and if it runs into the weeds, we can try again another
> way.
>
> The goal of scoping the Mozilla policy is, to my mind, to have Mozilla
> policy sufficiently broadly applicable that it covers all
> publicly-trusted certs and also doesn't leave unregulated sufficiently
> large number of untrusted certs inside publicly-trusted hierarchies that
> it will hold back forward progress on standards and security.
>
> The goal of CCADB disclosure is to see what's going on inside the WebPKI
> in sufficient detail that we don't miss important things. Yes, that's vague.
>
> Here follow a list of scenarios for certificate issuance. Which of these
> situations should be in full Mozilla policy scope, which should be in
> partial scope (if any), and which of those should require CCADB
> disclosure? Are there scenarios I've missed?

You seem to be assuming each of A-I has a path length constraint of
0, as your scenarios don't include CA-certs below each category.

> A) Unconstrained intermediate
>   AA) EE below
> B) Intermediate constrained to id-kp-serverAuth
>   BB) EE below
> C) Intermediate constrained to id-kp-emailProtection
>   CC) EE below
> D) Intermediate constrained to anyEKU
>   DD) EE below
> E) Intermediate usage-constrained some other way
>   EE) EE below
> F) Intermediate name-constrained (dnsName/ipAddress)
>   FF) EE below
> G) Intermediate name-constrained (rfc822Name)
>   GG) EE below
> H) Intermediate name-constrained (srvName)
>   HH) EE below
> I) Intermediate name-constrained some other way
>   II) EE below
>
> If a certificate were to only be partially in scope, one could imagine
> it being exempt from one or more of the following sections of the
> Mozilla policy:
>
> * BR Compliance (2.3)
> * Audit (3.1) and auditors (3.2)
> * CP and CPS (3.3)
> * CCADB (4)
> * Revocation (6)

I would say that any CA-certificate signed by a CA that does not have
name constraints and is not constrained to EKUs outside the set
{id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
This would mean that the top level of all constrained hierarchies is
disclosed but subordinate CAs further down the tree and EE certs are
not.  I think that this is a reasonable trade off of privacy vs
disclosure.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 1:02 PM, Matthew Hardeman via
dev-security-policy  wrote:
> On Monday, May 22, 2017 at 2:43:14 PM UTC-5, Peter Bowen wrote:
>
>>
>> I would say that any CA-certificate signed by a CA that does not have
>> name constraints and not constrained to things outside the set
>> {id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
>> This would mean that the top level of all constrained hierarchies is
>> disclosed but subordinate CAs further down the tree and EE certs are
>> not.  I think that this is a reasonable trade off of privacy vs
>> disclosure.
>
> I would agree that those you've identified as "should be disclosed" 
> definitely should be disclosed.  I am concerned, however, that SOME of the 
> remaining certificates beyond those should probably also be disclosed.  For 
> safety sake, it may be better to start with an assumption that all CA and 
> SubCA certificates require full disclosure to CCADB and then define 
> particular specific rule sets for those which don't require that level.

Right now the list excludes anything with a certain set of name
constraints and anything that has EKU constraints outside the in-scope
set.  I'm suggesting that the first "layer" of CA certs always should
be disclosed.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 12:21 PM, Ryan Sleevi via dev-security-policy
 wrote:
> Consider, on one extreme, if every of the Top 1 sites used TCSCs to
> issue their leaves. A policy, such as deprecating SHA-1, would be
> substantially harder, as now there's a communication overhead of O(1 +
> every root CA) rather than O(# of root store CAs).

Why do you need to add 10,000 communication points?  A TCSC is, by
definition, a subordinate CA.  The WebPKI is not a single PKI; it is a
set of parallel PKIs which do not share a common anchor.  The browser
to CA relationship is between the browser vendor and each root CA.
This is O(root CA operators), not even O(every root CA).  If a root CA
issues 10,000 subordinate CAs, then they better have a compliance plan
in place to have assurance that all of them will do the necessary
things.

> It may be that the benefits of TCSCs are worth such risk - after all, the
> Web Platform and the evolution of its related specs (URL, Fetch, HTML)
> deals with this problem routinely. But it's also worth noting the
> incredible difficulty and friction of deprecating insecure, dangerous APIs
> - and the difficulty in SHA-1 (or commonNames) for "enterprise" PKIs - and
> as such, may represent a significant slowdown in progress, and a
> corresponding significant increase in user-exposed risk.
>
> This is why it may be more useful to take a principled approach, and to, on
> a case by case basis, evaluate the risk of reducing requirements for TCSCs
> (which are already required to abide by the BRs, and simply exempted from
> auditing requirements - and this is independent of any Mozilla
> dispensations), both in the short-term and in the "If every site used this"
> long-term.

It seems this discussion is painting TCSCs with a broad brush.  I
don't see anything in this discussion that makes the TCSC relationship
any different from any other subordinate CA.  Both can be operated
either by the same organization that operates the root CA or an
unrelated organization.  The Apple and Google subordinate CAs are
clearly not TCSCs but raise the same concerns.  If there were 10,000
subordinates all with WebTrust audits, you would have the exact same
problem.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-24 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 9:33 AM, Gervase Markham via
dev-security-policy  wrote:
> On 19/05/17 21:04, Kathleen Wilson wrote:
>> - What validity periods should be allowed for SSL certs being issued
>> in the old PKI (until the new PKI is ready)?
>
> Symantec is required only to be issuing in the new PKI by 2017-08-08 -
> in around ten weeks time. In the mean time, there is no restriction
> beyond the normal one on the length they can issue. This makes sense,
> because if certs issued yesterday will expire 39 months from yesterday,
> then certs issued in 10 weeks will only expire 10 weeks after that - not
> much difference.

Can you clarify the meaning of "new PKI"?  I can see two reasonable
interpretations:

1) The systems and processes used to issue end-entity certificates
(server authentication and email protection) must be distinct from the
existing systems.  This implies that a new set of subordinate CAs
under the existing Symantec-owned roots would meet the requirements.
These new subordinate CAs could be owned and operated by either
Symantec or owned and operated by a third party who has their own
WebTrust audit.

2) The new PKI includes both new offline CAs that meet the
requirements to be Root CAs and new subordinate CAs that issue
end-entity certificates. the The new root CAs could be cross-signed by
existing CAs (regardless of owner), but the new subordinate CAs must
not be directly signed by any Symantec-owned root CA that currently
exists.

Can you also clarify the expectations with regards to the existing
roots?  You say "only to be issuing in the new PKI".  Does Mozilla
intend to require that all CAs that chain to a specific set of roots
cease issuing all server authentication and email protection certificates
after a
certain date, unless they are also under one of the "new" roots?  If
so, will issuance be allowed from CAs that chain to the "old" roots
once certain actions take place (e.g. removed from the trust stores in
all supported versions of Mozilla products)?

>> - I'm not sold on the idea of requiring Symantec to use third-party
>> CAs to perform validation/issuance on Symantec's behalf. The most
>> serious concerns that I have with Symantec's old PKI is with their
>> third-party subCAs and third-party RAs. I don't have particular
>> concern about Symantec doing the validation/issuance in-house. So, I
>> think it would be better/safer for Symantec to staff up to do the
>> validation/re-validation in-house rather than using third parties. If
>> the concern is about regaining trust, then add auditing to this.
>
> Of course, if we don't require something but Google do (or vice versa)
> then Symantec will need to do it anyway. But I will investigate in
> discussions whether some scheme like this might be acceptable to both
> the other two sides and might lead to a quicker migration timetable to
> the new PKI.

Google has proposed adding some indication to certificates of whether
the information validation was performed by Symantec or another party.
If Mozilla does not require a third-party to perform validation, would
it make sense to have a concept of validations performed by the "new"
RA and validations performed by the "old" RA or validations performed
in the scope of Symantec audits versus validations performed in the
scope of another audit?

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Update

2017-05-19 Thread Peter Bowen via dev-security-policy
On Fri, May 19, 2017 at 7:25 AM, Gervase Markham via
dev-security-policy  wrote:
> On 15/05/17 21:06, Michael Casadevall wrote:
>
>>> Are there any RA's left for Symantec?
>>
>> TBH, I'm not sure. I think Gervase asked for clarification on this
>> point, but its hard to keep track of who could issue as an RA. I know
>> quite a few got killed, but I'm not sure if there are any other subCAs
>> based off re-reading posts in this thread.
>
> Symantec say they have closed their RA program, only Apple and Google
> are left in their GeoRoot program, and they have no other programs which
> allow third parties to have issuance capability.

This is not accurate.  They have indicated that the SSP customers have
some level of issuance capability.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-01 Thread Peter Bowen via dev-security-policy
On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
 wrote:
> On Thu, Jun 1, 2017 at 4:35 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On 31/05/17 18:02, Matthew Hardeman wrote:
>> > Perhaps some reference to technologically incorrect syntax (i.e. an
>> incorrectly encoded certificate) being a mis-issuance?
>>
>> Well, if it's so badly encoded Firefox doesn't recognise it, we don't
>> care too much (apart from how it speaks to incompetence). If Firefox
>> does recognise it, then I'm not sure "misissuance" is the right word if
>> all the data is correct.
>>
>
> I would encourage you to reconsider this, or perhaps I've misunderstood
> your position. To the extent that Mozilla's mission includes "The
> effectiveness of the Internet as a public resource depends upon
> interoperability (protocols, data formats, content) ", the
> well-formedness and encoding directly affects Mozilla users (sites working
> in Vendors A, B, C but not Mozilla) and the broader ecosystem (sites
> Mozilla users are protected from that vendors A, B, C are not).
>
> I think considering this in the context of "CA problematic practices" may
> help make this clearer - they are all things that speak to either
> incompetence or confusion (and a generous dose of Hanlon's Razor) - but
> their compatibility issues presented both complexity and risk to Mozilla
> users.
>
> So I would definitely encourage that improper application of the protocols
> and data formats constitutes misissuance, as they directly affect
> interoperability and indirectly affect security :)

I think the policy needs to be carefully thought out here, as there is
no limitation to what can be signed with the key used to sign
certificates.   What is a malformed certificate to one person might be
a valid document to someone else.  Maybe you could disallow signing
things that are not valid ASN.1 DER?

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-02 Thread Peter Bowen via dev-security-policy
On Fri, Jun 2, 2017 at 4:27 AM, Ryan Sleevi <r...@sleevi.com> wrote:
>
>
> On Thu, Jun 1, 2017 at 10:19 PM, Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>>
>> On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
>> > So I would definitely encourage that improper application of the
>> > protocols
>> > and data formats constitutes misissuance, as they directly affect
>> > interoperability and indirectly affect security :)
>>
>> I think the policy needs to be carefully thought out here, as there is
>> no limitation to what can be signed with the key used to sign
>> certificates.   What is a malformed certificate to one person might be
>> a valid document to someone else.  Maybe you could disallow signing
>> things that are not valid ASN.1 DER?
>
>
> I suspect you're raising a concern since a CA can use a SIGNED{ToBeSigned}
> construct from RFC 6025[1] to express a signature over a structure defined
> by "ToBeSigned", and wanting to distinguish that, for example, a certificate
> is not a CRL, as they're distinguished from their ToBeSigned construct. I
> would argue here that any signatures produced / structures provided should
> have an appropriate protocol or data format definition to justify the
> application of that signature, and that it would be misissuance in the
> absence of that support. Logically, I'm suggesting it's misissuance to, for
> example, expose a prehash signing oracle using a CA key, or to sign
> arbitrary data if it's not encoded 'like' a certificate (without having an
> equivalent appropriate standard defining what the CA is signing)

Yes, my concern is that this could make SIGNED{ToBeSigned} considered
misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
could sign an ASN.1 sequence which had the following syntax:

TBSNotCertificate ::= SEQUENCE {
   notACertificate   UTF8String,
   COMPONENTS OF TBSCertificate
}

Someone could argue that this is mis-issuance because the resulting
"certificate" is clearly corrupt, as it fails to start with an
INTEGER.  On the other hand, I think that this is clearly not
mis-issuance of a certificate, as there is no sane implementation that
would accept this as a certificate.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Make it clear that Mozilla policy has wider scope than the BRs

2017-06-02 Thread Peter Bowen via dev-security-policy
On Fri, Jun 2, 2017 at 8:50 AM, Gervase Markham via
dev-security-policy  wrote:
> On 02/06/17 12:24, Kurt Roeckx wrote:
>> Should that be "all certificates" instead of "all SSL certificates"?
>
> No; the Baseline Requirements apply only to SSL certificates.

Should Mozilla include a clear definition of "SSL certificates" in the
policy?  And should it be based on technical attributes rather than
intent of the issuer?

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-02 Thread Peter Bowen via dev-security-policy
On Fri, Jun 2, 2017 at 8:12 AM, Ryan Sleevi wrote:
> On Fri, Jun 2, 2017 at 10:09 AM Jakob Bohm wrote:
>
>> On 02/06/2017 15:54, Ryan Sleevi wrote:
>> > On Fri, Jun 2, 2017 at 9:33 AM, Peter Bowen wrote:
>> >
>> >> Yes, my concern is that this could make SIGNED{ToBeSigned} considered
>> >> misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
>> >> could sign an ASN.1 sequence which had the following syntax:
>> >>
>> >> TBSNotCertificate ::= SEQUENCE {
>> >>    notACertificate   UTF8String,
>> >>    COMPONENTS OF TBSCertificate
>> >> }
>> >>
>> >> Someone could argue that this is mis-issuance because the resulting
>> >> "certificate" is clearly corrupt, as it fails to start with an
>> >> INTEGER.  On the other hand, I think that this is clearly not
>> >> mis-issuance of a certificate, as there is no sane implementation that
>> >> would accept this as a certificate.
>> >>
>> >
>> > Would it be a misissuance of a certificate? Hard to argue, I think.
>> >
>> > Would it be a misuse of key? I would argue yes, unless the
>> > TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
>> > WD, at the least).
>> >
>> >
>> > The general principle I was trying to capture was one of "Only sign these
>> > defined structures, and only do so in a manner conforming to their
>> > appropriate encoding, and only do so after validating all the necessary
>> > information. Anything else is 'misissuance' - of a certificate, a CRL, an
>> > OCSP response, or a Signed-Thingy"
>> >
>>
>> Thing is, that there is still serious work involving the definition of
>> new CA-signed things, such as the recent (2017) paper on a super-
>> compressed CRL-equivalent format (available as a Firefox plugin).
>
>
> This does not rely on CA signatures - but also perfectly demonstrates the
> point - that these things should be getting widely reviewed before
> implementing.
>>
>> Banning those by policy would be as bad as banning the first OCSP
>> responder because it was not yet on the old list {Certificate, CRL}.
>
>
> This argument presumes technical competence of CAs, for which collectively
> there is no demonstrable evidence.
>
> Functionally, this is identical to banning the "any other method" for
> domain validation. Yes, it allowed flexibility - but at the extreme cost to
> security.
>
> If there are new and compelling thing to sign, the community can review and
> the policy be updated. I cannot understand the argument against this basic
> security sanity check.
>
>
>>
>> Hence my suggested phrasing of "Anything that resembles a certificate"
>> (my actual wording a few posts up was more precise of cause).
>
>
> Yes, and I think that wording is insufficient and dangerous, despite your
> understandable goals, for the reasons I outlined.
>
> There is little objective technical or security reason to distinguish the
> thing that is signed - it should be a closed set (whitelists, not
> blacklists), just like algorithms, keysizes, or validation methods - due to
> the significant risk to security and stability.

Back in November 2016, I suggested that we try to create stricter
rules around CAs:
https://cabforum.org/pipermail/public/2016-November/008966.html and
https://groups.google.com/d/msg/mozilla.dev.security.policy/UqjD1Rff4pg/8sYO2uzNBwAJ.
It generated some discussion but I never pushed things forward.  Maybe
the following portion should be part of Mozilla policy?

Private Keys which are CA private keys must only be used to generate signatures
that meet the following requirements:

1. The signature must be over a SHA-256, SHA-384, or SHA-512 hash
2. The data being signed must be one of the following:
  * CA Certificate (a signed TBSCertificate, as defined in [RFC
5280](https://tools.ietf.org/html/rfc5280), with a
id-ce-basicConstraints extension with the cA component set to true)
  * End-entity Certificate (a signed TBSCertificate, as defined in
[RFC 5280](https://tools.ietf.org/html/rfc5280), that is not a CA
Certificate)
  * Certificate Revocation Lists (a signed TBSCertList as defined in
[RFC 5280](https://tools.ietf.org/html/rfc5280))
  * OCSP response (a signed ResponseData as defined in [RFC
6960](https://tools.ietf.org/html/rfc6960))
  * Precertificate (as defined in draft-ietf-trans-rfc6962-bis)
3. Data that does not meet the above requirements must not be signed
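
To make the intent concrete, here is a minimal Python sketch of a gate a CA
could put in front of its signing key; the hsm_sign callback, the structure
names, and the SEQUENCE check are illustrative assumptions rather than any
real HSM interface:

import hashlib
from enum import Enum

class SignedStructure(Enum):
    CA_CERTIFICATE = "CA tbsCertificate"
    EE_CERTIFICATE = "end-entity tbsCertificate"
    CRL = "tbsCertList"
    OCSP_RESPONSE = "OCSP ResponseData"
    PRECERTIFICATE = "precertificate tbsCertificate"

ALLOWED_HASHES = {"sha256", "sha384", "sha512"}

def ca_sign(hsm_sign, structure_type, der_payload, hash_name):
    """Refuse to sign anything that is not a whitelisted, DER-encoded
    structure hashed with an allowed algorithm.  hsm_sign is a
    hypothetical callback that performs the raw signature."""
    if not isinstance(structure_type, SignedStructure):
        raise ValueError("refusing to sign: structure type not whitelisted")
    if hash_name not in ALLOWED_HASHES:
        raise ValueError("refusing to sign: hash algorithm not allowed")
    if not der_payload or der_payload[0] != 0x30:
        # All of the whitelisted structures are DER SEQUENCEs.
        raise ValueError("refusing to sign: payload is not a DER SEQUENCE")
    digest = hashlib.new(hash_name, der_payload).digest()
    return hsm_sign(digest, hash_name)

The point is the whitelist shape: anything not explicitly enumerated is
refused, rather than trying to blacklist known-bad payloads.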

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla requirements of Symantec

2017-06-08 Thread Peter Bowen via dev-security-policy
On Thu, Jun 8, 2017 at 9:38 AM, Jakob Bohm via dev-security-policy
 wrote:
>
> As the linked proposal was worded (I am not on Blink mailing lists), it
> seemed obvious that the original timeline was:
>
>   Later: Once the new roots are generally accepted, Symantec can actually
> issue from the new SubCAs.
>
>   Long term: CRL and OCSP management for the managed SubCAs remain with the
> third party CAs.  This continues until the managed SubCAs expire or are
> revoked.

I don't see this last part in the proposal.  Instead the proposal
appears to specifically contemplate the SubCAs being transferred to
Symantec once the new roots are accepted in the required trust stores.

Additionally, there is no policy, as far as I know, that governs
transfer of non-Root CAs.  This is possibly a gap, but an existing
one.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New undisclosed intermediates

2017-06-09 Thread Peter Bowen via dev-security-policy
On Fri, Jun 9, 2017 at 9:11 AM, Matthew Hardeman via
dev-security-policy  wrote:
> For these self-signed roots which have a certificate subject and key which 
> match to a different certificate which is in a trusted path (like an 
> intermediate to a trusted root), the concern is that the mere existence of 
> the certificate speaks to a signature produced by a private key which DOES 
> have the privileged status of extending the trust of the Web PKI.
>
> The question then is whether that signature was properly accounted for, 
> audited, etc.
>
> Additionally, if said root is in active use, are the issuances descending 
> from _that_ self-signed root being audited?  If not, that's a problem, 
> because those certificates could just be served up with the same-subject, 
> same-key trusted intermediate and chain to publicly trusted roots, all 
> without having been actually issued from the trusted intermediate.

I think there is some confusion here.  Certificates do not sign
certificates.  The existence of mulitiple self-signed certificates
with the same {subject, public key} combination does not imply there
are multiple issuers.  Further, audits do not audit root
_certificates_; they audit CA operations.  The audit will look at
practices for signing certificates but you cannot audit an object
itself.

Additionally, there is nothing that says a CA operator may not have
multiple issuers that have the same private key and use the same
issuer name.  The only requirement is that they avoid serial number
collision and that the CRL contain the union of both revocations.

The mere existence of multiple self-signed certificates does not
change any of this.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Draft Proposal

2017-05-05 Thread Peter Bowen via dev-security-policy
On Fri, May 5, 2017 at 9:02 AM, Gervase Markham via
dev-security-policy  wrote:
> On 04/05/17 21:58, Ryan Sleevi wrote:
>
> I asked Symantec what fields CrossCert had control over. Their answer is
> here on page 3:
> https://bug1334377.bmoattachments.org/attachment.cgi?id=8838825
> It says CrossCert (and so, presumably, the other RAs in the program) had
> no control over the CP field, which is (AIUI) the one they'd need to
> change in order to add an EV OID. If I've got this wrong, please tell me
> ASAP.

Note footnote (1): "These attributes and extensions are static values
configured in the certificate profile"

We know that the RAs could use different certificate profiles, as
certificates they approved had varying issuers, and "Issuer DN" has
the same "No(1)" that CP has in the table in the doc you linked.  I
don't see any indication of what profiles each RA was allowed to use.
It could be that Symantec provided one or more profiles to the RA that
contained EV OIDs.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Draft Proposal

2017-05-05 Thread Peter Bowen via dev-security-policy
On Fri, May 5, 2017 at 9:18 AM, Gervase Markham  wrote:
> On 05/05/17 17:09, Peter Bowen wrote:
>> We know that the RAs could use different certificate profiles, as
>> certificates they approved had varying issuers, and "Issuer DN" has
>> the same "No(1)" that CP has in the table in the doc you linked.  I
>> don't see any indication of what profiles each RA was allowed to use.
>> It could be that Symantec provided one or more profiles to the RA that
>> contained EV OIDs.
>
> So the question to Symantec is: "did any of the RAs in your program have
> EV issuance capability? If not, given that they had issuance capability
> from intermediates which chained up to EV-enabled roots, what technical
> controls prevented them from having this capability?" Is that right?

I do not see answers to those questions in any of the documents
Symantec has attached to the bug.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Peter Bowen via dev-security-policy
On Fri, May 5, 2017 at 11:44 AM, Dimitris Zacharopoulos via
dev-security-policy  wrote:
>
> Looking at https://github.com/mozilla/pkipolicy/issues/69
>
> do you have a proposed language that takes all comments into account? From
> what I understand, the Subordinate CA Certificate to be considered
> Technically Constrained only for S/MIME:
>
>  * MUST include an EKU that has the id-kp-emailProtection value AND
>  * MUST include a nameConstraints extension with
>  o a permittedSubtrees with
>  + rfc822Name entries scoped in the Domain (@example.com) or
>Domain Namespace (@example.com, @.example.com) controlled by
>an Organization and
>  + dirName entries scoped in the Organizational name and location
>  o an excludedSubtrees with
>  + a zero‐length dNSName
>  + an iPAddress GeneralName of 8 zero octets (covering the IPv4
>address range of 0.0.0.0/0)
>  + an iPAddress GeneralName of 32 zero octets (covering the
>IPv6 address range of ::0/0)

Why do we need to address dNSName and iPAddress if the only EKU is
id-kp-emailProtection?

Can we simplify this to just requiring at least one rfc822Name entry
in the permittedSubtrees?

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Email sub-CAs

2017-05-05 Thread Peter Bowen via dev-security-policy
(Resending as the attached file was too large)

On Fri, May 5, 2017 at 10:46 AM, Peter Bowen  wrote:
> On Thu, Apr 20, 2017 at 3:01 AM, Gervase Markham via
> dev-security-policy  wrote:
>> On 15/04/17 17:05, Peter Bowen wrote:
>>> Should the Mozilla policy change to require disclosure of all CA
>>> certificates issued by an unconstrained CA (but not necessarily
>>> require audits, CP/CPS, etc)? This would help identify unintentional
>>> gaps in policy.
>>
>> https://github.com/mozilla/pkipolicy/issues/73
>>
>> I think I understand your point but if you could expand a bit in the
>> bug, that would be most welcome.
>
> Right now the policy does not require disclosure of CA-certificates
> that the CA deems are technically constrained.  We have seen numerous
> cases where the CA misunderstood the rules or where the rules had
> unintentional gaps an disclosing the certificate as constrained will
> allow discovery of these problems.  For example the current policy
> says "an Extended Key Usage (EKU) extension which does not contain
> either of the id-kp-serverAuth and id-kp-emailProtection EKUs" which
> means a certificate that has an EKU extension with only the
> anyExtendedKeyUsage KeyPurposeId falls outside of the scope.  This is
> obviously wrong, but would not be discovered today.
>
> The flow chart at https://imagebin.ca/v/3LRcaKW9t2Qt shows my proposal for 
> disclosure; it is a
> revised version from the one I posted to the CA/Browser Forum list and
> depends on the same higher level workflow
> (https://cabforum.org/pipermail/public/attachments/20170430/0e692c4d/attachment-0002.png
> ).
>
> Thanks,
> Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Peter Bowen via dev-security-policy
On Fri, May 5, 2017 at 11:58 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>
>
> On 5/5/2017 9:49 μμ, Peter Bowen via dev-security-policy wrote:
>>
>> On Fri, May 5, 2017 at 11:44 AM, Dimitris Zacharopoulos via
>> dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>>>
>>> Looking at https://github.com/mozilla/pkipolicy/issues/69
>>>
>>> do you have a proposed language that takes all comments into account?
>>> From
>>> what I understand, the Subordinate CA Certificate to be considered
>>> Technically Constrained only for S/MIME:
>>>
>>>   * MUST include an EKU that has the id-kp-emailProtection value AND
>>>   * MUST include a nameConstraints extension with
>>>   o a permittedSubtrees with
>>>   + rfc822Name entries scoped in the Domain (@example.com) or
>>> Domain Namespace (@example.com, @.example.com) controlled by
>>> an Organization and
>>>   + dirName entries scoped in the Organizational name and
>>> location
>>>   o an excludedSubtrees with
>>>   + a zero‐length dNSName
>>>   + an iPAddress GeneralName of 8 zero octets (covering the IPv4
>>> address range of 0.0.0.0/0)
>>>   + an iPAddress GeneralName of 32 zero octets (covering the
>>> IPv6 address range of ::0/0)
>>
>> Why do we need to address dNSName and iPAddress if the only EKU is
>> id-kp-emailProtection?
>>
>> Can we simplify this to just requiring at least one rfc822Name entry
>> in the permittedSubtrees?
>
>
> I would be fine with this but there may be implementations that ignore the
> EKU at the Intermediate CA level.

I've only ever heard of people saying that adding EKU at the
intermediate level breaks things, not that things ignore it.

> So, if we want to align with both the CA/B
> Forum BRs section 7.1.5 and the Mozilla Policy for S/MIME, perhaps we should
> keep the excludedSubtrees.

The BRs cover serverAuth.  If you look at
https://imagebin.ca/v/3LRcaKW9t2Qt, you will see that TCSC will end up
being two independent tests.

Thanks,
Peter


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Peter Bowen via dev-security-policy
On Fri, May 5, 2017 at 2:21 PM, Jakob Bohm via dev-security-policy
 wrote:
> On 05/05/2017 22:45, Dimitris Zacharopoulos wrote:
>>
>>
>>
>> On 5/5/2017 10:58 μμ, Peter Bowen wrote:
>>>
>>
>> I don't know if all implementations doing path validation use the EKUs
>> at the CA level, but it seems that the most popular applications use them.
>>
>
> The issue would be implementations that only check the EE cert for
> their desired EKU (such as ServerAuth checking for a TLS client or
> EmailProtection checking for a mail client).  In other words, relying
> parties whose software would accept a chain such as
>
> root CA (no EKUs) => SubCA (EmailProtection) => EE cert (ServerAuth).

This is the Mozilla policy and Mozilla does not do that, so I think we
should be fine there.
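
To make the difference concrete, here is a rough sketch (Python's
"cryptography" library; the function names are made up) of an
end-entity-only EKU check versus a check in the spirit of what the
thread describes Mozilla as doing, requiring the purpose at every CA
level where an EKU extension is present:

# Sketch only.  "chain" is assumed to be a list of
# cryptography.x509.Certificate objects ordered EE first, root last.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def ekus_of(cert):
    try:
        ext = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
        return set(ext.value)
    except x509.ExtensionNotFound:
        return None  # no EKU extension: no restriction at this level

def ee_only_check(chain, purpose=ExtendedKeyUsageOID.SERVER_AUTH):
    ekus = ekus_of(chain[0])
    return ekus is None or purpose in ekus

def full_chain_check(chain, purpose=ExtendedKeyUsageOID.SERVER_AUTH):
    # The quoted chain "root (no EKU) -> SubCA (EmailProtection) ->
    # EE (ServerAuth)" passes ee_only_check but fails here, because the
    # SubCA's EKU does not include serverAuth.
    return all(ekus is None or purpose in ekus
               for ekus in map(ekus_of, chain))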

>>> If you look at
>>> https://imagebin.ca/v/3LRcaKW9t2Qt, you will see that TCSC will end up
>>> being two independent tests.
>>>
>
>
> One other question: Does your proposal allow a TCSC that covers both
> ServerAuth and EmailProtection for the domains of the same organization?
>
> Or put another way, would your proposed language force an organization
> wanting to run under its own TCSC(s) to obtain two TCSCs, one for their
> S/MIME needs and another for their TLS needs?

Yes, it allows a single TCSC that does both.  The little three diamond
symbol means parallel, so both legs are evaluated at the same time.
If both get to "Goto B", then it is a single TCSC that can issue both
serverAuth and emailProtection certs.

Also note that there is no check for pathlen:0 on the TCSC, so it
could be a policy CA that has multiple issuing CAs below it.

Thanks,
Peter


Re: Changing CCADB domains

2017-05-04 Thread Peter Bowen via dev-security-policy
On Wed, May 3, 2017 at 10:52 AM, Kathleen Wilson via
dev-security-policy  wrote:
> All,
>
> I think it is time for us to change the domains that we are using for the 
> CCADB as follows.
>
> Change the links for...
>
> 1)  CAs to login to the CCADB
> from
> https://mozillacacommunity.force.com/
> to
> https://ccadb.force.com/
>
> 2) all published reports
> from
> https://mozillacaprogram.secure.force.com/
> to
> https://ccadb.secure.force.com/
>
>
> We asked Salesforce for a temporary redirect from the old to the new URLs, 
> but that was declined because we're not paying for premium support for the 
> CCADB. (Other than this change, I do not currently see the need for us to pay 
> for premium support.)

Is it also a "premium" feature to use custom domain names?  I think it
would probably make sense to use ccadb.org (which seems to belong to
Mozilla) rather than force.com.

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-09-20 Thread Peter Bowen via dev-security-policy
On Tue, Sep 19, 2017 at 8:39 PM, Jeremy Rowley via dev-security-policy
 wrote:
>
> The current end-state plan for root cross-signing is provided at 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1401384. The diagrams there show 
> all of the existing sub CAs along with the new Sub CAs and root signings 
> planned for post-close. Some of these don’t have names so they are lumped in 
> a general “Intermediate” box.
>
> The Global G2 root will become the transition root to DigiCert for customers 
> who can’t move fully to operational DigiCert roots prior to September 
> 2018. Any customers that require a specific root can use the transition root 
> for as long as they want, realizing that path validation may be an issue as 
> Symantec roots are removed by platform operators. Although we cannot 
> currently move to a single root because of the lack of EV support and trust 
> in non-Mozilla platforms, we can move to the existing three roots in an 
> orderly fashion.
>
> If the agreement closes prior to Dec 1, the Managed CA will never exist. 
> Instead, all issuance will occur through one of the three primary DigiCert 
> roots mentioned above with the exception of customers required to use a 
> Symantec root for certain platforms or pinning. The cross-signed Global root 
> will be only transitory, meaning we’d hope customers would migrate to the 
> DigiCert roots once the systems requiring specific Symantec roots are 
> deprecated or as path validation errors arise.

Jeremy,

Am I correct that a key input into this plan was the Mozilla plan to
fully remove the Symantec roots from the trust store before the end
of 2018?  Google seemed to suggest they would keep trusting them for a
longer period with a restriction on which subordinate CAs are trusted.

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-09-21 Thread Peter Bowen via dev-security-policy
On Thu, Sep 21, 2017 at 7:17 PM, Ryan Sleevi via dev-security-policy
 wrote:
> I think we can divide the discussion into two parts, similar to the
> previous mail: How to effectively transition Symantec customers with
> minimum disruption, whether acting as the Managed CA or as the future
> operator of Symantec’s PKI, and how to effectively transition DigiCert’s
> infrastructure. This is a slightly different order than your e-mail
> message, but given the time sensitivity of the Symantec transition, it
> seems more effective to discuss that first.
>
> I think there may have been some confusion on the Managed CA side. It’s
> excellent that DigiCert plans to transition Symantec customers to DigiCert
> roots, as that helps with an expedient reduction in risk, but the plan
> outlined may create some of the compatibility risks that I was trying to
> highlight. In the discussions of the proposed remediations, one of the big
> concerns we heard raised by both Symantec and site operators was related to
> pinning - both in the Web and in mobile applications. We also heard about
> embedded or legacy devices, and their needs for particular chains.
>
> It sounds like this plan may have been based on a concern that I’d tried to
> address in the previous message. That is, the removal of the existing
> Symantec roots defines a policy goal - the elimination in trust in these
> legacy roots, due to the unknown scope of issues. However, that goal could
> be achieved by a number of technical means - for example, ‘whitelisting’ a
> set of Managed CAs (as proposed by Chrome), or replacing the existing
> Symantec roots with these new Managed CA roots in a 1:1 swap. Both of these
> approaches achieve the same policy objective, while reducing the
> compatibility risk.

Ryan,

As an existing Symantec customer, I'm not clear that this really
addresses the challenges we face.

So far we have found several different failure modes.  We hope that
any solution deployed will assure that these don't trigger.

First, we found that some clients have a limited set of roots in their
trust store.   The "VeriSign Class 3 Public Primary Certification
Authority - G5" root with SPKI SHA-256 hash of
25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058 is
the only root trusted by some clients. They do, somewhat
unfortunately, check the certificate issuer, issuer key id, and
signature, so they changing any will break things.  However they don't
update their trust store.  So the (DN, key id, public key) tuple needs
to be in the chain for years to come.

Second, we have found that some applications use the system trust
store but implement additional checks on the built and validated
chain.  The most common case is  checking that at least one public key
in the chain matches a list of keys the application has internally.
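
For concreteness, a pin of this second kind is usually a hash over the
certificate's SubjectPublicKeyInfo.  Below is a rough sketch (Python's
"cryptography" library; the file handling and the pin set are
placeholders, and the hash value is the VeriSign G5 one quoted above),
not any particular application's implementation:

# Sketch only: compare the SHA-256 hash of each chain certificate's
# SubjectPublicKeyInfo against a built-in allow list.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

PINNED_SPKI_SHA256 = {
    "25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058",
}

def spki_sha256(cert):
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest()

def chain_satisfies_pin(chain_pem_files):
    certs = [x509.load_pem_x509_certificate(open(f, "rb").read())
             for f in chain_pem_files]
    return any(spki_sha256(c) in PINNED_SPKI_SHA256 for c in certs)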

As there is an assumption that the current root (DN, public key)
tuples will be replaced relatively soon by some trust store
maintainers, there needs to be a way that both of these cases can
work.  The only way I can see this working long term on both devices
with updated trust stores as well as devices that have not updated the
trust store is to do a little bit of hackery and create new (DN,
public key) tuples with the existing public key.  This way apps with
pinning will work on systems with old trust stores and on systems
with updated trust stores.

As a specific example, again using the Class 3 G5 root, today a chain
looks like:

1) End-entity info
2) 
spkisha256:f67d22cd39d2445f96e16e094eae756af49791685007c76e4b66f154b7f35ec6,KeyID:5F:60:CF:61:90:55:DF:84:43:14:8A:60:2A:B2:F5:7A:F4:43:18:EF,
DN:CN=Symantec Class 3 Secure Server CA - G4, OU=Symantec Trust
Network, O=Symantec Corporation, C=US,
3) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058,
KeyID:7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33,
DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
OU=(c) 2006 VeriSign, Inc. - For authorized use only, OU=VeriSign
Trust Network, O=VeriSign\, Inc., C=US

If there is a desire to (a) remove the Class 3 G5 root and (b) keep
the pin to its key working, the only solution I can see is to create a
new root that uses the same key.  This would result in a chain that
looks something like:

1) End-entity info
2b) spkisha256:,KeyID:, DN:CN=New Server Issuing CA, O=DigiCert, C=US,
3b) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058,
KeyID:6c:e5:3f:7b:45:1f:66:b4:e6:7c:70:05:86:19:79:4f:a6,
DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
OU=DigiCert Compatibility Root, OU=(c) 2006 VeriSign, Inc. - For
authorized use only, OU=VeriSign Trust Network, O=VeriSign\, Inc.,
C=US
3) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058,
KeyID:7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33,
DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
OU=(c) 2006 VeriSign, Inc. - For authorized use only, OU=VeriSign
Trust Network, O=VeriSign\, Inc., C=US

Re: DigiCert-Symantec Announcement

2017-09-22 Thread Peter Bowen via dev-security-policy
On Fri, Sep 22, 2017 at 6:22 AM, Nick Lamb via dev-security-policy
 wrote:
> On Friday, 22 September 2017 05:01:03 UTC+1, Peter Bowen  wrote:
>> I realize this is somewhat more complex than what you, Ryan, or Jeremy
>> proposed, but it is the only way I see root pins working across both
>> "old" and "new" trust stores.
>
> I would suggest that a better way to spend the remaining time would be 
> remedial work so that your business isn't dependent on a single third party 
> happening to make choices that are compatible with your existing processes. 
> Trust agility should be built into existing processes and systems, where it 
> doesn't exist today it must be retro-fitted, systems which can't be 
> retrofitted are an ongoing risk to the company's ability to deliver.
>
> Trust agility doesn't have to mean you give up all control, but if you were 
> in a situation where the business trusted roots from Symantec, Comodo and 
> say, GlobalSign then you would have an obvious path forwards in today's 
> scenario without also needing to trust dozens of organisations you've no 
> contact with.
>
> I know the Mozilla organisation has made this mistake itself in the past, and 
> I'm sure Google has too, but I don't want too much sympathy here to get in 
> the way of actually making us safer.

Nick,

I agree with pretty much everything you said :)

However, as you point out, many organisations have run into problems
in this area.  As a community, we saw similar issues come up during
the SHA-1 deprecation phase and seemed surprised.  I want to try to
make sure there is not surprise, especially when it comes to
configurations that are not obvious.

For example, on some mobile platforms it is common to have the app
enforce pinning but the OS handle chain building and validation.  This
can interact poorly if the OS updates the trust store, as
the returned chain may no longer include the pinned CA.

Consider what Jeremy drew:

GeoTrust Primary Certification Authority -> DigiCert Global G2 -> (new
issuing CA) -> (end entity)

If the platform trusts DigiCert Global G2, then the chain that is
returned to the application will be:

DigiCert Global G2 -> (new issuing CA) -> (end entity)

In this case, any application pinned to GeoTrust will fail.

Even if it was a new Root:

GeoTrust Primary Certification Authority -> DigiCert GeoTrust G2 ->
(new issuing CA) -> (end entity)

The same problem will occur if the OS updates the trust store but the
application does not update.

One notable thing is that the server operator, application vendor, OS
vendor, and CA may be four unrelated parties.  If the application is
expected to work with "new" and "old" OS versions, this will take some
careful work if the keys in the built chain change over time.

Thanks,
Peter


Re: Public trust of VISA's CA

2017-09-20 Thread Peter Bowen via dev-security-policy
On Wed, Sep 20, 2017 at 12:37 AM, Martin Rublik via
dev-security-policy  wrote:
> On Tue, Sep 19, 2017 at 5:22 PM, Alex Gaynor via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> https://crt.sh/mozilla-certvalidations?group=version=896972 is a very
>> informative graph for me -- this is the number of validations performed by
>> Firefox for certs under this CA. It looks like at the absolute peak, there
>> were 1000 validations in a day. That's very little value for our users, in
>> return for an awful lot of risk.
>>
>> Alex
>
>
> Hi,
>
> I agree that 1000 validations in a day is not much, or better said, a really
> low number. Anyway, I was wondering what the minimum value should be, or
> whether this number is a good metric at all. I went through the Mozilla
> validations telemetry and there are more CAs with similar numbers of
> validations.

Note that Firefox 55 had a regression in how it does chain building
(https://bugzilla.mozilla.org/show_bug.cgi?id=1400913) that causes it
to prefer the longest chain rather than the shortest chain.  This means,
for Root CAs that are cross-signed, Firefox 55 will frequently
attribute validations to the wrong bucket.  The total across the buckets does not
change, but the validations per day did shift.  For example, Firefox
55 shows "AddTrust External CA Root" is a super popular root while
prior versions had "COMODO RSA Certification Authority" as a top root.
"Go Daddy Class 2 CA" and "Go Daddy Root Certificate Authority - G2"
also flipped in Firefox 55.

This does not impact the Visa bucket, as far as I know, as the Visa
root is not cross-signed by any other root.

Thanks,
Peter


Re: New Version Notification for draft-belyavskiy-certificate-limitation-policy-04.txt

2017-10-07 Thread Peter Bowen via dev-security-policy
On Tue, Sep 12, 2017 at 5:59 AM, Dmitry Belyavsky via
dev-security-policy  wrote:
> Here is the new version of the draft updated according to the discussion on
> mozilla-dev-security list.

Given that RFC 5914 already defines a TrustAnchorList and
TrustAnchorInfo object and that the Trust Anchor List object is
explicitly contemplated as being included in a signed CMS message,
would it not make more sense to start from 5914 and define new
extensions to encode constraints not currently defined?

Thanks,
Peter


Re: Mozilla’s Plan for Symantec Roots

2017-10-16 Thread Peter Bowen via dev-security-policy
On Mon, Oct 16, 2017 at 10:32 AM, Gervase Markham via
dev-security-policy  wrote:
> As per previous discussions and
> https://wiki.mozilla.org/CA:Symantec_Issues, a consensus proposal[0] was
> reached among multiple browser makers for a graduated distrust of
> Symantec roots.
>
> Here is Mozilla’s planned timeline for the graduated distrust of
> Symantec roots (subject to change):
>
> * October 2018 (Firefox 63): Removal/distrust of Symantec roots, with
> caveats described below.
>
> However, there are some subCAs of the Symantec roots that are
> independently operated by companies whose operations have not been
> called into question, and they will experience significant hardship if
> we do not provide a longer transition period for them. For both
> technical and non-technical reasons, a year is an extremely unrealistic
> timeframe for these subCAs to transition to having their certificates
> cross-signed by another CA. For example, the subCA may have implemented
> a host of pinning solutions in their products that would fail with
> non-Symantec-chaining certificates, or the subCA may have large numbers
> of devices that would need to be tested for interoperability with any
> potential future vendor. And, of course contractual negotiations may
> take a significant amount of time.

This pattern also exists for companies that have endpoints whose
clients are pinned to the Symantec-owned roots.  These endpoints
may also be used by browser clients. It was my understanding that the
intent was that existing roots would cross-sign new managed CAs that
would be used for the transition.

> Add code to Firefox to disable the root such that only certain subCAs
> will continue to function. So, the final dis-trust of Symantec roots may
> actually involve letting one or two of the root certs remain in
> Mozilla’s trust store, but having special code to distrust all but
> specified subCAs. We would document the information here:
> https://wiki.mozilla.org/CA/Additional_Trust_Changes
> And Mozilla would add tooling to the CCADB to track these special subCAs
> to ensure proper CP/CPS/audits until they have been migrated and
> disabled, and the root certs removed. Mozilla will need to also follow
> up with these subCAs to ensure they are moving away from these root
> certificates and are getting cross-signed by more than one CA in order
> to avoid repeating this situation.

Will the new managed CAs, which will be operated by DigiCert under a
CP/CPS/audit independent from the current Symantec ones, also be
included on the list of subCAs that will continue to function?

Thanks,
Peter


Re: CAs not compliant with CAA CP/CPS requirement

2017-09-08 Thread Peter Bowen via dev-security-policy
On Fri, Sep 8, 2017 at 12:24 PM, Andrew Ayer via dev-security-policy
 wrote:
> The BRs state:
>
> "Effective as of 8 September 2017, section 4.2 of a CA's Certificate
> Policy and/or Certification Practice Statement (section 4.1 for CAs
> still conforming to RFC 2527) SHALL state the CA's policy or practice
> on processing CAA Records for Fully Qualified Domain Names; that policy
> shall be consistent with these Requirements. It shall clearly specify
> the set of Issuer Domain Names that the CA recognises in CAA 'issue' or
> 'issuewild' records as permitting it to issue. The CA SHALL log all
> actions taken, if any, consistent with its processing practice."
>
> Since it is now 8 September 2017, I decided to spot check the CP/CPSes
> of some CAs.
>
> At time of writing, the latest published CP/CPSes of the following CAs
> are not compliant with the above provision of the BRs:
>
> Amazon (https://www.amazontrust.com/repository/) - Does not check CAA
>
>
> It would be nice to hear confirmation from the non-compliant CAs that they
> really are checking CAA as required, and if so, why they overlooked the
> requirement to update their CP/CPS.

Amazon Trust Services is checking CAA prior to issuance of
certificates.  We provided the domain list in our responses to the
last Mozilla communication and will be updating our externally
published policy and practice documentation to match shortly.

Thanks,
Peter


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 3:57 AM, Jonathan Rudenberg
<jonat...@titanous.com> wrote:
>
>> On Sep 9, 2017, at 06:19, Peter Bowen via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>>
>> In all three of these cases, the "domain's zone does not have a DNSSEC
>> validation chain to the ICANN root" -- I requested SOA, DNSKEY, NS,
>> and CAA records types for each zone and in no case did I get a
>> response that had a valid DNSSEC chain to the ICANN root.
>
> This comes down to what exactly “does not have a valid DNSSEC chain” means.
>
> I had assumed that given the reference to DNSSEC in the BRs that the relevant 
> DNSSEC RFCs were incorporated by reference via RFC 6844 and that DNSSEC 
> validation is required. However, this is not entirely the case, using DNSSEC 
> for CAA lookups is only RECOMMENDED in section 4.1 and explicitly “not 
> required.” Which means this is all pretty pointless. The existence or 
> non-existence of DNSSEC records doesn’t matter if there is no requirement to 
> use them.
>
> Given this context, I think that your interpretation of this clause is not 
> problematic since there is no requirement anywhere to use DNSSEC.
>
> I think this should probably be taken to the CAB Forum for a ballot to either:
>
> 1) purge this reference to DNSSEC from the BRs making it entirely optional 
> instead of just having this pointless check; or
> 2) add a requirement to the BRs that DNSSEC validation be used from the ICANN 
> root for CAA lookups and then tweak the relevant clause to only allow lookup 
> failures if there is a valid non-existence proof of DNSSEC records in the 
> chain that allows an insecure lookup.
>
> None of my comments in this thread should be interpreted as support for 
> DNSSEC :)

My recollection from the discussion that led to the ballot was that
this line in the BRs was specifically to create a special hard fail if
the zone was properly signed but the server returned an error when
looking up CAA records.

As a bit of background, in order to be properly signed, the zone must
have unexpired signatures over at least the SOA record (as this is the
minimal allowed signature when using NSEC3 with Opt-Out).
Additionally this case never exists with zones signed using NSEC or
NSEC3 without opt-out, as they will provide either a denial of
existence or a signature that disclaims CAA record type existence.

So this bullet in the BRs only triggers when:
- SOA record has a valid signature
- There is a DNSKEY for the zone that matches the DS record in the parent zone
- The DS record in the parent zone is signed
- The above three are true for all zones back to the root zone
- The request for a CAA record for the QNAME returns an error
- The request for DNSSEC information for the QNAME succeeds
- The DNSSEC information does not provide information on the name
(e.g. is for records before and after but the opt-out flag is set)

If all of these are present, the CA may not issue.  If the DNSSEC
information is valid and says there is a CAA record in the type
bitmaps but the server returned an error for CAA records, then the CA
must not issue.
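
Restated as a predicate, purely for illustration: the flags below are
inputs a DNSSEC-validating CAA checker would have to establish for
itself, and this is only my reading of the bullets above, not a
resolver implementation.

# Sketch only: the "may not issue" condition from the bullets above.
def caa_lookup_failure_blocks_issuance(
        soa_signature_valid,          # SOA RRSIG validates
        dnskey_matches_parent_ds,     # zone DNSKEY matches the signed DS in the parent
        chain_signed_to_root,         # the above holds for every zone up to the root
        caa_query_returned_error,     # CAA query for the QNAME returned an error
        dnssec_query_succeeded,       # DNSSEC (NSEC/NSEC3) query for the QNAME succeeded
        dnssec_covers_qname):         # that response actually speaks to the QNAME
    properly_signed = (soa_signature_valid
                       and dnskey_matches_parent_ds
                       and chain_signed_to_root)
    return (properly_signed
            and caa_query_returned_error
            and dnssec_query_succeeded
            and not dnssec_covers_qname)
    # Separately, if the DNSSEC information says a CAA record exists in
    # the type bitmap but the CAA query errored, the CA must not issue.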

I don't think your tests cover either of these cases.  I think any
other case allows issuance as it follows the path of no CAA record.

Thanks,
Peter


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 11:50 AM, Andrew Ayer <a...@andrewayer.name> wrote:
> On Sat, 9 Sep 2017 08:49:01 -0700
> Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>
>> On Sat, Sep 9, 2017 at 3:57 AM, Jonathan Rudenberg
>> <jonat...@titanous.com> wrote:
>> >
>> >> On Sep 9, 2017, at 06:19, Peter Bowen via dev-security-policy
>> >> <dev-security-policy@lists.mozilla.org> wrote:
>> >>
>> >> In all three of these cases, the "domain's zone does not have a
>> >> DNSSEC validation chain to the ICANN root" -- I requested SOA,
>> >> DNSKEY, NS, and CAA records types for each zone and in no case did
>> >> I get a response that had a valid DNSSEC chain to the ICANN root.
>> >
>> > This comes down to what exactly “does not have a valid DNSSEC
>> > chain” means.
>> >
>> > I had assumed that given the reference to DNSSEC in the BRs that
>> > the relevant DNSSEC RFCs were incorporated by reference via RFC
>> > 6844 and that DNSSEC validation is required. However, this is not
>> > entirely the case, using DNSSEC for CAA lookups is only RECOMMENDED
>> > in section 4.1 and explicitly “not required.” Which means this is
>> > all pretty pointless. The existence or non-existence of DNSSEC
>> > records doesn’t matter if there is no requirement to use them.
>> >
>> > Given this context, I think that your interpretation of this clause
>> > is not problematic since there is no requirement anywhere to use
>> > DNSSEC.
>> >
>> > I think this should probably be taken to the CAB Forum for a ballot
>> > to either:
>> >
>> > 1) purge this reference to DNSSEC from the BRs making it entirely
>> > optional instead of just having this pointless check; or
>> > 2) add a requirement to the BRs that DNSSEC validation be used from
>> > the ICANN root for CAA lookups and then tweak the relevant clause
>> > to only allow lookup failures if there is a valid non-existence
>> > proof of DNSSEC records in the chain that allows an insecure lookup.
>> >
>> > None of my comments in this thread should be interpreted as support
>> > for DNSSEC :)
>>
>> My recollection from the discussion that led to the ballot was that
>> this line in the BRs was specifically to create a special hard fail if
>> the zone was properly signed but the server returned an error when
>> looking up CAA records.
>
> Your recollection is not consistent with the most recent cabfpub thread
> on the topic: https://cabforum.org/pipermail/public/2017-August/011800.html
>
>> As a bit of background, in order to be properly signed [...]
>
> The BRs do not say that the zone has to be "properly signed" for this
> line to trigger.  Nor do they require a "valid chain" of signatures
> from particular records in the zone to the root, as you suggested in
> another email.
>
> Rather, the BRs say the line triggers if there is "a DNSSEC validation
> chain to the ICANN root."  A "validation chain" doesn't mean signatures,
> but rather the information needed to validate the zone.  "Validation
> chain" is not the precise term that DNSSEC uses, but the synonymous term
> "authentication chain" is defined by RFC 4033 (incorporated by reference
> from RFC 6844) as follows:
>
> An alternating sequence of DNS public key
> (DNSKEY) RRsets and Delegation Signer (DS) RRsets forms a chain of
> signed data, with each link in the chain vouching for the next.  A
> DNSKEY RR is used to verify the signature covering a DS RR and
> allows the DS RR to be authenticated.  The DS RR contains a hash
> of another DNSKEY RR and this new DNSKEY RR is authenticated by
> matching the hash in the DS RR.  This new DNSKEY RR in turn
> authenticates another DNSKEY RRset and, in turn, some DNSKEY RR in
> this set may be used to authenticate another DS RR, and so forth
> until the chain finally ends with a DNSKEY RR whose corresponding
> private key signs the desired DNS data.  For example, the root
> DNSKEY RRset can be used to authenticate the DS RRset for
> "example."  The "example." DS RRset contains a hash that matches
> some "example." DNSKEY, and this DNSKEY's corresponding private
> key signs the "example." DNSKEY RRset.  Private key counterparts
> of the "example." DNSKEY RRset sign data records such as
> 

Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 1:50 PM, Andrew Ayer  wrote:
>
> drill is buggy and insecure.  Obviously, such implementations can
> be found.  Note that drill is just a "debugging/query" tool, not a
> resolver you would actually use in production.  You'll find that the
> production-grade resolver from that family (unbound) correctly reports
> an error when you try to query the CAA record for
> refused.caatestsuite-dnssec.com: https://unboundtest.com/

Just as I received this, I finished testing with unbound, to see what
it does.  See the results below.  For your blackhole, servfail, and
refused cases it clearly says insecure, not bogus.

[ec2-user@ip-10-0-0-18 ~]$ unbound-host -h
Usage: unbound-host [-vdhr46] [-c class] [-t type] hostname
 [-y key] [-f keyfile] [-F namedkeyfile]
 [-C configfile]
  Queries the DNS for information.
  The hostname is looked up for IP4, IP6 and mail.
  If an ip-address is given a reverse lookup is done.
  Use the -v option to see DNSSEC security information.
-t type what type to look for.
-c class what class to look for, if not class IN.
-y 'keystring' specify trust anchor, DS or DNSKEY, like
-y 'example.com DS 31560 5 1 1CFED8478...'
-D DNSSEC enable with default root anchor
from /usr/local/etc/unbound/root.key
-f keyfile read trust anchors from file, with lines as -y.
-F keyfile read named.conf-style trust anchors.
-C config use the specified unbound.conf (none read by default)
-r read forwarder information from /etc/resolv.conf
  breaks validation if the forwarder does not do DNSSEC.
-v be more verbose, shows nodata and security.
-d debug, traces the action, -d -d shows more.
-4 use ipv4 network, avoid ipv6.
-6 use ipv6 network, avoid ipv4.
-h show this usage help.
Version 1.6.5
BSD licensed, see LICENSE in source package for details.
Report bugs to unbound-b...@nlnetlabs.nl
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key expired.caatestsuite-dnssec.com.
expired.caatestsuite-dnssec.com. has no CAA record (BOGUS (security failure))
validation failure :
signature expired from 96.126.110.12 for key
expired.caatestsuite-dnssec.com. while building chain of trust
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key missing.caatestsuite-dnssec.com.
missing.caatestsuite-dnssec.com. has no CAA record (BOGUS (security failure))
validation failure : no
signatures from 96.126.110.12 for key missing.caatestsuite-dnssec.com.
while building chain of trust
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key blackhole.caatestsuite-dnssec.com.
Host blackhole.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key servfail.caatestsuite-dnssec.com.
Host servfail.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key refused.caatestsuite-dnssec.com.
Host refused.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t NS -D -f
/usr/local/etc/unbound/root.key blackhole.caatestsuite-dnssec.com.
Host blackhole.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t NS -D -f
/usr/local/etc/unbound/root.key servfail.caatestsuite-dnssec.com.
Host servfail.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t NS -D -f
/usr/local/etc/unbound/root.key refused.caatestsuite-dnssec.com.
Host refused.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 1:59 PM, Andrew Ayer <a...@andrewayer.name> wrote:
> On Sat, 9 Sep 2017 13:53:52 -0700
> Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>
>> On Sat, Sep 9, 2017 at 1:50 PM, Andrew Ayer <a...@andrewayer.name>
>> wrote:
>> >
>> > drill is buggy and insecure.  Obviously, such implementations can
>> > be found.  Note that drill is just a "debugging/query" tool, not a
>> > resolver you would actually use in production.  You'll find that the
>> > production-grade resolver from that family (unbound) correctly
>> > reports an error when you try to query the CAA record for
>> > refused.caatestsuite-dnssec.com: https://unboundtest.com/
>>
>> Just as I received this, I finished testing with unbound, to see what
>> it does.  See the results below.  For your blackhole, servfail, and
>> refused cases it clearly says insecure, not bogus.
>
> That is very clearly against RFC 4033, which defines Insecure as:
>
> The validating resolver has a trust anchor, a chain
> of trust, and, at some delegation point, signed proof of the
> non-existence of a DS record.  This indicates that subsequent
> branches in the tree are provably insecure.  A validating resolver
> may have a local policy to mark parts of the domain space as
> insecure.
>
> There is no "signed proof of the non-existence of a DS record" for
> blackhole, servfail, and refused, so it cannot possibly be insecure.

I just found another tool that does checks and has a similar but
distinct response set:

https://portfolio.sidnlabs.nl/check/expired.caatestsuite-dnssec.com/CAA (bogus)
https://portfolio.sidnlabs.nl/check/missing.caatestsuite-dnssec.com/CAA (bogus)
https://portfolio.sidnlabs.nl/check/blackhole.caatestsuite-dnssec.com/CAA
(error)
https://portfolio.sidnlabs.nl/check/servfail.caatestsuite-dnssec.com/CAA (error)
https://portfolio.sidnlabs.nl/check/refused.caatestsuite-dnssec.com/CAA (error)
https://portfolio.sidnlabs.nl/check/sigfail.verteiltesysteme.net/A (bogus)
https://portfolio.sidnlabs.nl/check/bogus.d4a16n3.rootcanary.net/A (insecure)
https://portfolio.sidnlabs.nl/check/www.google.com/A (insecure)
https://portfolio.sidnlabs.nl/check/www.dnssec-failed.org/A (bogus)

Given that there does not seem to be a consistent definition of how
"broken" DNSSEC should be handled, I think it is reasonable that CAs
should be given the benefit of the doubt on the broken DNSSEC tests.

Thanks,
Peter


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
> Certificate 3 contains a single DNS identifier for
> refused.caatestsuite-dnssec.com.
> Attempts to query the CAA record for this DNS name result in a REFUSED DNS
> response.  Since there is a DNSSEC validation chain from this zone to the
> ICANN root, CAs are not permitted to treat the lookup failure as permission
> to issue.
>
>
> Certificate 4 contains a single DNS identifier for
> missing.caatestsuite-dnssec.com.
> This DNS name has no CAA records, but the zone is missing RRSIG records.
> Since there is a DNSSEC validation chain from this zone to the ICANN root,
> the DNS lookup should fail and this failure cannot be treated by the CA as
> permission to issue.
>
> Certificate 6 contains a single DNS identifier for
> blackhole.caatestsuite-dnssec.com.  All DNS requests for this DNS name
> will be dropped, causing a lookup
> failure.  Since there is a DNSSEC validation chain from this zone to the
> ICANN root, CAs are not permitted to treat the lookup failure as permission
> to issue.

Based on my own queries, I do not believe the statement that there is
"a DNSSEC validation chain from this zone to the ICANN root" is
correct for these.

All of these names have NS records in the parent zone, indicating they
are zones themselves:

refused.caatestsuite-dnssec.com. 60 IN NS nsrefused.caatestsuite-dnssec.com.
blackhole.caatestsuite-dnssec.com. 60 IN NS nsblackhole.caatestsuite-dnssec.com.
missing.caatestsuite-dnssec.com. 60 IN NS ns0.caatestsuite-dnssec.com.
missing.caatestsuite-dnssec.com. 60 IN NS ns1.caatestsuite-dnssec.com.

In all three of these cases, the "domain's zone does not have a DNSSEC
validation chain to the ICANN root" -- I requested SOA, DNSKEY, NS,
and CAA records types for each zone and in no case did I get a
response that had a valid DNSSEC chain to the ICANN root.

This leads me to believe these tests are incorrect and I agree with
Jeremy's conclusion for these.

Thanks,
Peter


Re: Verisign signed speedport.ip ?

2017-12-09 Thread Peter Bowen via dev-security-policy
On Sat, Dec 9, 2017 at 11:42 AM, Lewis Resmond via dev-security-policy
 wrote:
> I was researching some older routers by Telekom, and I found out that 
> some of them had SSL certificates for their (LAN) configuration interface, 
> issued by Verisign for the fake-domain "speedport.ip".
>
> They (all?) are logged here: https://crt.sh/?q=speedport.ip
>
> I wonder, since this domain and even the TLD is non-existent, how could 
> Verisign sign these? Isn't this violating the rules, if they sign anything 
> just because a router manufacturer tells them to do so?
>
> Although they have all been expired for several years, I am interested in 
> how this could happen, and whether such incidents of signing non-existent 
> domains could still happen today.

Before the CA/Browser Forum Baseline Requirements were created, this
was not explicitly forbidden.  Since approximately July 1, 2012 no new
certificates have been allowed for unqualified names or names for
which the TLD does not exist in the IANA root zone.

So, to answer your questions:

Q: How could Verisign sign these?
A: These were all issued prior to the Baseline Requirements coming into effect

Q: Could [...] such incidents of signing non-existent domains
still happen today?
A: Not like this.  All Domain Names in certificates now must be Fully
Qualified Domain Names and the CA must validate that the FQDN falls in a
valid namespace.  It is allowable for me to get a certificate for
nonexistent.home.peterbowen.org, even though that FQDN does not exist,
as I am the registrant of peterbowen.org.

Thanks,
Peter


Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-15 Thread Peter Bowen via dev-security-policy
I don't think that is true.  Remember that for OV/IV/EV certificates,
the Subscriber is the natural person or Legal Entity identified in the
certificate Subject.  If the Subscriber is using the certificate on a
CDN, it is probably better to have the CDN generate the key rather
than the Subscriber.  The key is never passed around, in PKCS#12
format or otherwise, even though the Subscriber isn't generating the
key.
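
For reference, generating the key pair at the end entity (whether that
is the Subscriber or a CDN acting on their behalf) and sending the CA
only a CSR avoids any key transport at all.  A minimal sketch with
Python's "cryptography" library; the names and key size are
illustrative, not a recommendation:

# Sketch only: the Subscriber (or its operator) creates the key pair
# and sends the CA a CSR, so no PKCS#12 or other key transport is needed.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([
           x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
       .add_extension(
           x509.SubjectAlternativeName([x509.DNSName("www.example.com")]),
           critical=False)
       .sign(key, hashes.SHA256()))

# Only the CSR (public key + requested names) goes to the CA; the
# private key stays where it was generated.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)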

On Tue, May 15, 2018 at 9:17 PM, Tim Hollebeek via dev-security-policy
 wrote:
> My only objection is that this will cause key generation to shift to partners 
> and
> affiliates, who will almost certainly do an even worse job.
>
> If you want to ban key generation by anyone but the end entity, ban key
> generation by anyone but the end entity.
>
> -Tim
>
>> -Original Message-
>> From: dev-security-policy [mailto:dev-security-policy-
>> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
>> Thayer via dev-security-policy
>> Sent: Tuesday, May 15, 2018 4:10 PM
>> To: Dimitris Zacharopoulos 
>> Cc: mozilla-dev-security-policy 
>> 
>> Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
>> generation to policy)
>>
>> I'm coming to the conclusion that this discussion is about "security 
>> theater"[1].
>> As long as we allow CAs to generate S/MIME key pairs, there are gaping holes
>> in the PKCS#12 requirements, the most obvious being that a CA can just
>> transfer the private key to the user in pem format! Are there any objections 
>> to
>> dropping the PKCS#12 requirements altogether and just forbidding key
>> generation for TLS certificates as follows?
>>
>> CAs MUST NOT generate the key pairs for end-entity certificates that have an
>> EKU extension containing the KeyPurposeIds id-kp-serverAuth or
>> anyExtendedKeyUsage.
>>
>> - Wayne
>>
>> [1] https://en.wikipedia.org/wiki/Security_theater
>>
>> On Tue, May 15, 2018 at 10:23 AM Dimitris Zacharopoulos 
>> wrote:
>>
>> >
>> >
>> > On 15/5/2018 6:51 μμ, Wayne Thayer via dev-security-policy wrote:
>> >
>> > Did you consider any changes based on Jakob’s comments?  If the
>> > PKCS#12 is distributed via secure channels, how strong does the password
>> need to be?
>> >
>> >
>> >
>> >
>> >
>> > I think this depends on our threat model, which to be fair is not
>> > something we've defined. If we're only concerned with protecting the
>> > delivery of the
>> > PKCS#12 file to the user, then this makes sense. If we're also
>> > concerned with protection of the file while in possession of the user,
>> > then a strong password makes sense regardless of the delivery mechanism.
>> >
>> >
>> > I think once the key material is securely delivered to the user, it is
>> > no longer under the CA's control and we shouldn't assume that it is.
>> > The user might change the passphrase of the PKCS#12 file to whatever,
>> > or store the private key without any encryption.
>> >
>> >
>> > Dimitris.
>> >


Re: Mozilla’s Plan for Symantec Roots

2017-10-27 Thread Peter Bowen via dev-security-policy
On Tue, Oct 17, 2017 at 2:06 AM, Gervase Markham  wrote:
> On 16/10/17 20:22, Peter Bowen wrote:
>> Will the new managed CAs, which will be operated by DigiCert under a
>> CP/CPS/audit independent from the current Symantec ones, also be
>> included on the list of subCAs that will continue to function?
>
> AIUI we are still working out the exact configuration of the new PKI but
> my understanding is that the new managed CAs will be issued by DigiCert
> roots and cross-signed by old Symantec roots. Therefore, they will be
> trusted in Firefox using a chain up to the DigiCert roots.

Gerv,

I'm hoping you can clarify the Mozilla position a little, given a hypothetical.

For this, please assume that DigiCert is the owner and operator of the
VeriSign, Thawte, and GeoTrust branded roots currently included in NSS
and that they became the owner and operator on 15 November 2017 (i.e.
unquestionably before 1 December 2017).

If DigiCert generates a new online issuing CA on 20 March 2018 and
cross-signs it using their VeriSign Class 3 Public Primary
Certification Authority - G5 offline root CA, will certificates from
this new issuing CA be trusted by Firefox?  If so, what are the
parameters of trust, for example not trusted until the new CA is
whitelisted by Mozilla or only trusted until a certain date?

What about the same scenario except the new issuing CA is generated on
30 June 2019?

Thanks,
Peter


Re: Certificates with shared private keys by gaming software (EA origin, Blizzard battle.net)

2017-12-25 Thread Peter Bowen via dev-security-policy
On Mon, Dec 25, 2017 at 7:10 AM, Adrian R. via dev-security-policy
 wrote:
> Since it's a webserver running on the local machine and is using that 
> certificate key pair, I think that someone more capable than me can easily 
> extract the key from it.
>
> From my point of view as an observer it's plainly obvious that the private 
> key must be on my local machine too, even if I haven't actually got to the 
> key itself yet.

The problem is that this is not true.  I've not investigated this
software at all, but there are two designs I have seen in other
software:

1) TCP Proxy: A pure TCP proxy could be forwarding all the packets to
another host which has the key.

2) "Keyless" SSL: https://www.cloudflare.com/ssl/keyless-ssl/ - they
key is on a different host from the content

I'm sure there are other designs which would end up with the same
result: 127.0.0.1 does not have the private key.  Given this, the
conjecture that there "must" be a private key compromise seems
exaggerated.

Thanks,
Peter


Re: Updating Root Inclusion Criteria

2018-01-17 Thread Peter Bowen via dev-security-policy
On Tue, Jan 16, 2018 at 3:45 PM, Wayne Thayer via dev-security-policy
 wrote:
> I would like to open a discussion about the criteria by which Mozilla
> decides which CAs we should allow to apply for inclusion in our root store.
>
> Section 2.1 of Mozilla’s current Root Store Policy states:
>
> CAs whose certificates are included in Mozilla's root program MUST:
>> 1.provide some service relevant to typical users of our software
>> products;
>>
>
> Further non-normative guidance for which organizations may apply to the CA
> program is documented in the ‘Who May Apply’ section of the application
> process at https://wiki.mozilla.org/CA/Application_Process . The original
> intent of this provision in the policy and the guidance was to discourage a
> large number of organizations from applying to the program solely for the
> purpose of avoiding the difficulties of distributing private roots for
> their own internal use.
>
> Recently, we’ve encountered a number of examples that cause us to question
> the usefulness of the currently-vague statement(s) we have that define
> which CAs to accept, along a number of different axes:
>
[snip]
>
> There are many potential options for resolving this issue. Ideally, we
> would like to establish some objective criteria that can be measured and
> applied fairly. It’s possible that this could require us to define
> different categories of CAs, each with different inclusion criteria. Or it
> could be that we should remove the existing ‘relevance’ requirement and
> inclusion guidelines and accept any applicant who can meet all of our other
> requirements.
>
> With this background, I would like to encourage everyone to provide
> constructive input on this topic.

Wayne,

In the interest of transparency, I would like to add one more example
to your list:

* Amazon Trust Services is a current program member.  Amazon applied
independently but then subsequently bought a root from Go Daddy
(obvious disclosure: Wayne was VP at Go Daddy at the time).  So far
there is no public path to bring Amazon a public key/CSR you generate
on your own server and have Amazon issue a certificate containing that
public key.  The primary path to getting a certificate issued by
Amazon is to use AWS Certificate Manager.  That being said, we have
issued certificates to hundreds of thousands of domains and Mozilla
telemetry data shows they are being widely used by users of Mozilla
software products.

Thanks,
Peter

P.S. I'm very much looking forward to the Firefox ESR 60 release, as
that will mark Amazon inclusion for EV in all Mozilla products.


Re: Retirement of RSA-2048

2018-01-20 Thread Peter Bowen via dev-security-policy
On Sat, Jan 20, 2018 at 8:31 AM, James Burton via dev-security-policy
 wrote:
> Approximate date of retirement of RSA-2048?

This is a very broad question, as you don't specify the usage.  If you
look at the US National Institute of Standards and Technology's SP
800-57 part 1 rev 4
(http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf),
they discuss the difference between "applying" and "processing".
Applying would usually be either encrypting or signing and processing
would usually be decrypting or verifying.

Given that RSA is used by Mozilla products for signing long term data
(intermediate CA certificates, for example), encrypting data (for
example, encrypting email), as part of key exchange (in TLS), and for
signing for instant authentication (signature during a TLS handshake),
the appropriate retirement date may vary.

That being said, the NIST publication above uses the assumption that
RSA with a 2048-bit modulus, where the two factors are each 1024-bit
long prime numbers, provides approximately 112 bits of strength.
Later on it states that 112 bits of strength is acceptable until 2030.

The German Federal Office for Information Security (BSI) reportedly
recommends using a modulus length of at least 3000 bits starting in
2023 [1].

Does that help answer your question?

Thanks,
Peter

[1] My German is very poor.  If yours is better than mine, you can
read the original doc from the BSI at
https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102.pdf?__blob=publicationFile
and confirm that Google Translate did not cause me to misunderstand
the recommendation.


Re: TLS-SNI-01 and compliance with BRs

2018-01-19 Thread Peter Bowen via dev-security-policy


> On Jan 19, 2018, at 7:22 AM, Doug Beattie via dev-security-policy 
>  wrote:
> 
> Many CAs haven’t complied with the Mozilla requirement to list the methods 
> they use (including Google btw), so it’s hard to tell which CAs are using 
> method 10.  Of the CA CPSs I checked, only Symantec has method 10 listed, and 
> with the DigiCert acquisition, it’s not clear if that CPS is still active.  
> We should find out on January 31st who else uses it.
> 
> In the meantime, we should ban anyone from using TLS-SNI as a non-compliant 
> implementation, even outside shared hosting environments.  There could well 
> be other implementations that comply with method 10, so I’m not suggesting we 
> remove that from the BRs yet (those that don’t allow SNI when validating the 
> presence of the random number within the certificate of a TLS handshake are 
> better).
[snip]

> Personally, I think the use of TLS-SNI-01  should be banned immediately, 
> globally (not just by Let’s Encrypt), but without knowing which CAs use it, 
> it’s difficult to enforce.

Doug,

I don’t agree that TLS-SNI-01 should be banned immediately, globally.  Amazon 
does not use TLS-SNI-01 today, so it would not directly impact Amazon 
operations.

I think we need to look back to the Mozilla Root Store Policy.  The relevant 
portions are:

"2.1 CA Operations

prior to issuing certificates, verify certificate requests in a manner that we 
deem acceptable for the stated purpose(s) of the certificates;

2.2 Validation Practices
We consider verification of certificate signing requests to be acceptable if it 
meets or exceeds the following requirements:

For a certificate capable of being used for SSL-enabled servers, the CA must 
ensure that the applicant has registered the domain(s) referenced in the 
certificate or has been authorized by the domain registrant to act on their 
behalf. This must be done using one or more of the 10 methods documented in 
section 3.2.2.4 of version 1.4.1 (and not any other version) of the CA/Browser 
Forum Baseline Requirements. The CA's CP/CPS must clearly specify the 
procedure(s) that the CA employs, and each documented procedure should state 
which subsection of 3.2.2.4 it is complying with. Even if the current version 
of the BRs contains a method 3.2.2.4.11, CAs are not permitted to use this 
method.”

While this clearly does call out that the methods are acceptable, it isn’t a 
results-oriented statement.  The BRs also do not have clear results 
requirements for validation methods.

What does Mozilla expect to be verified?  We know the 10 methods allow issuance 
where "the applicant has registered the domain(s) referenced in the certificate 
or has been authorized by the domain registrant to act on their behalf” is not 
true.

I think the next step should be for Mozilla to clearly lay out the requirements 
for CAs; then the validation methods can be compared to see if they meet the 
bar.

Thanks,
Peter


Re: Updating Root Inclusion Criteria (organizations)

2018-01-17 Thread Peter Bowen via dev-security-policy
On Wed, Jan 17, 2018 at 11:49 AM, Jakob Bohm via dev-security-policy
 wrote:
> 4. Selected company CAs for a handful of too-big-to-ignore companies
>   that refuse to use a true public CA.  This would currently probably
>   be Microsoft, Amazon and Google.  These should be admitted only on
>   a temporary basis to pressure such companies to use generally trusted
>   independent CAs.

Jakob,

Can you please explain how you define "true public CA"?  How long
should new CAs have to meet these criteria?   I don't like carve-outs
for "too-big-to-ignore".

Thanks,
Peter


  1   2   >