Re: Policy 2.7.1: MRSP Issue #218: Clarify CRL requirements for End Entity Certificates

2021-01-11 Thread Ryan Hurst via dev-security-policy
On Thursday, January 7, 2021 at 5:00:46 PM UTC-8, Ben Wilson wrote:
> This is the last issue that I have marked for discussion in relation to 
> version 2.7.1 of the Mozilla Root Store Policy. 
>  
> It is identified and discussed in GitHub Issue #218 for the MRSP. 
> 
> I will soon update everyone on the status of the other 13 discussion items 
> already presented, as some of them are in need of revision based on 
> comments received thus far. 
> 
> While subsection (b) of section 7.1.2.3 of the Baseline Requirements makes 
> a cRLDistributionPoint (CDP) in end entity certificates optional, Mozilla 
> still desires that CRL-based revocation information be available because 
> CRLite uses CRLs to construct its revocation filters. (Apple also uses 
> such CRL information in its certificate validation processes and, as I 
> understand, is making a similar request of CAs with respect to the new 
> CCADB field, discussed below.) 
> 
> While all such CRL information is needed, large CRLs are disfavored because 
> of the time they take to download and process. Thus, CAs shard, partition, 
> or "scope" their CRLs into smaller chunks. Section 5 of RFC 5280 explains, 
> "Each CRL has a particular scope. The CRL scope is the set of certificates 
> that could appear on a given CRL. … A complete CRL lists all unexpired 
> certificates, within its scope, that have been revoked for one of the 
> revocation reasons covered by the CRL scope. A *full and complete CRL* 
> lists all unexpired certificates issued by a CA that have been revoked for 
> any reason." (Emphasis added.) 
> 
> There is a new field in the CCADB for CAs to include information needed for 
> browsers or others to construct a "full and complete CRL", i.e. to gather 
> information from CAs that don't include the CRL path to their "full and 
> complete CRL" in end entity certificates they issue. This new CCADB field 
> is called "Full CRL Issued By This CA" and is located under the heading 
> "Pertaining to Certificates Issued by this CA." Rather than condition the 
> requirement that CAs fill in this information in the CCADB only when they 
> don't include a CDP to a full and complete CRL, I propose that this new 
> CCADB field be populated in all situations where the CA is enabled for 
> server certificate issuance. In cases where the CA shards or partitions its 
> CRL, the CA must provide a JSON-based list of CRLs that when combined are 
> the equivalent of the full and complete CRL. 
> 
> Proposed language to add to section 6 of the Mozilla Root Store Policy is 
> as follows: 
> 
> *CAs SHOULD place the URL for the associated CRL within the 
> crlDistributionPoints extension of issued certificates. A CA MAY omit the 
> crlDistributionPoint extension, if permitted by applicable requirements and 
> policies, such as the Baseline Requirements. * 
> 
> *A CA technically capable of issuing server certificates MUST ensure that 
> the CCADB field "Full CRL Issued By This CA" contains either the URL for 
> the full and complete CRL or the URL for the JSON file containing all URLs 
> for CRLs that when combined are the equivalent of the full and complete CRL*. 
> 
> 
> I look forward to your comments and suggestions. 
> 
> Ben
I think this text strikes a good balance.
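
For readers trying to picture the mechanics, here is a minimal sketch of how a 
consumer such as CRLite could assemble the equivalent of a full and complete CRL 
from the shards. It assumes Python with the requests and cryptography libraries, 
a simple JSON array of CRL URLs (the exact schema is whatever CCADB ultimately 
specifies), DER-encoded CRLs, and a hypothetical CA URL:

    # Sketch only: assumes the CCADB field points at a JSON array of CRL URLs,
    # e.g. ["https://ca.example/crl/shard-1.crl", "https://ca.example/crl/shard-2.crl"].
    # The URL below is hypothetical.
    import requests
    from cryptography import x509

    CRL_LIST_URL = "https://ca.example/full-crl-list.json"  # hypothetical

    def collect_revoked_serials(crl_list_url):
        """Download every CRL shard listed in the JSON document and return the
        union of revoked serial numbers, i.e. the equivalent of a full and
        complete CRL for the issuing CA."""
        shard_urls = requests.get(crl_list_url, timeout=30).json()
        revoked = set()
        for url in shard_urls:
            der = requests.get(url, timeout=30).content
            crl = x509.load_der_x509_crl(der)
            for entry in crl:
                revoked.add(entry.serial_number)
        return revoked

    if __name__ == "__main__":
        serials = collect_revoked_serials(CRL_LIST_URL)
        print(f"{len(serials)} revoked serial numbers across all shards")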
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CCADB Proposal: Add field called Full CRL Issued By This CA

2020-11-20 Thread Ryan Hurst via dev-security-policy
On Thursday, November 19, 2020 at 3:13:58 PM UTC-8, Ben Wilson wrote:
> FWIW - Here is a recent post on this issue from JC Jones - 
> https://github.com/mozilla/crlite/issues/43#issuecomment-726493990
> On Thu, Nov 19, 2020 at 4:00 PM Ryan Hurst via dev-security-policy < 
> dev-secur...@lists.mozilla.org> wrote: 
> 
> > On Wednesday, November 18, 2020 at 8:26:50 PM UTC-8, Ryan Sleevi wrote:
> > > On Wed, Nov 18, 2020 at 7:57 PM Ryan Hurst via dev-security-policy < 
> > > dev-secur...@lists.mozilla.org> wrote: 
> > > 
> > > > Kathleen, 
> > > > 
> > > > This introduces an interesting question, how might Mozilla want to see 
> > > > partial CRLs be discoverable? Of course, they are pointed to by the 
> > > > associated CRLdp but is there a need for a manifest of these CRL 
> > shards 
> > > > that can be picked up by CCADB? 
> > > > 
> > > What's the use case for sharding a CRL when there's no CDP in the issued 
> > > certificates and the primary downloader is root stores? 
> >
> > I think there may be some confusion. In my response to Kathleen's mail I 
> > stated " Of course, they are pointed to by the associated CRLdp", as such I 
> > am not suggesting there is a value to sharded/partitioned CRLs if not 
> > referenced by the CRLdp. 
> > 
> > The origin of my question is that as I remember the requirements, CAs do 
> > not have to produce a full and complete CRL. Specifically today, I believe 
> > they are allowed to produce partitioned CRLs, this is good because in some 
> > cases a full and complete CRL can be gigabytes in size. I assume the reason 
> > for adding the URL to a full, and I imagine complete, CRL is that Mozilla 
> > would like to use this information in its CRLite feature. 
> > 
> > If so, and a CA partitions CRLs and does not produce a full and complete 
> > CRL how should the CA ensure Mozilla has the entire set of information it 
> > wants? 
> > 
> > Ryan
> > ___ 
> > dev-security-policy mailing list 
> > dev-secur...@lists.mozilla.org 
> > https://lists.mozilla.org/listinfo/dev-security-policy 
> >

I think the JSON array approach works and it addresses the concerns I had, 
specifically:
1. How do we make sure Mozilla has all the revocation data when a 
sharded/partitioned CRL approach is used?
2. How do we avoid forcing those CAs that are doing sharded/partitioned CRLs to 
also maintain full CRLs, which can be VERY big and which pose logistical 
challenges to distribute reliably and usably?

Maybe we can say such CAs provide a link to this JSON document in the CCADB Full 
CRL field?

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CCADB Proposal: Add field called Full CRL Issued By This CA

2020-11-19 Thread Ryan Hurst via dev-security-policy
On Wednesday, November 18, 2020 at 8:26:50 PM UTC-8, Ryan Sleevi wrote:
> On Wed, Nov 18, 2020 at 7:57 PM Ryan Hurst via dev-security-policy < 
> dev-secur...@lists.mozilla.org> wrote: 
> 
> > Kathleen, 
> > 
> > This introduces an interesting question, how might Mozilla want to see 
> > partial CRLs be discoverable? Of course, they are pointed to by the 
> > associated CRLdp but is there a need for a manifest of these CRL shards 
> > that can be picked up by CCADB? 
> >
> What's the use case for sharding a CRL when there's no CDP in the issued 
> certificates and the primary downloader is root stores?

I think there may be some confusion. In my response to Kathleen's mail I stated 
"Of course, they are pointed to by the associated CRLdp"; as such, I am not 
suggesting there is value in sharded/partitioned CRLs that are not referenced by 
the CRLdp.

The origin of my question is that, as I remember the requirements, CAs do not 
have to produce a full and complete CRL. Specifically, today I believe they are 
allowed to produce partitioned CRLs; this is good because in some cases a full 
and complete CRL can be gigabytes in size. I assume the reason for adding the 
URL to a full, and I imagine complete, CRL is that Mozilla would like to use 
this information in its CRLite feature.

If so, and a CA partitions its CRLs and does not produce a full and complete CRL, 
how should the CA ensure Mozilla has the entire set of information it wants?

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CCADB Proposal: Add field called Full CRL Issued By This CA

2020-11-18 Thread Ryan Hurst via dev-security-policy
On Wednesday, November 18, 2020 at 3:07:32 PM UTC-8, Kathleen Wilson wrote:
> All, 
> 
> The following changes have been made in the CCADB: 
> 
> On Intermediate Cert pages: 
> - Renamed section heading ‘Revocation Information’ to ‘Revocation 
> Information for this Certificate’ 
> - Added section called ‘Pertaining to Certificates Issued by this CA’ 
> - Added 'Full CRL Issued By This CA' field to this new section. 
> Note: CAs modify this field directly on intermediate cert pages. 
> 
> On Root Cert pages: 
> - Added section called ‘Pertaining to Certificates Issued by this CA’ 
> - Added 'Full CRL Issued By This CA' field to this new section. 
> Note: Only root store operators may directly update root cert pages, so 
> send email to your root store operator if you would like a URL added to 
> this new field for a root cert. 
> 
> 
> Coming soon: 
> Add 'Full CRL Issued By This CA' column to report: 
> http://ccadb-public.secure.force.com/ccadb/AllCertificateRecordsCSVFormat 
> 
> 
> Thanks, 
> Kathleen


Kathleen,

This introduces an interesting question: how might Mozilla want to see partial 
CRLs be discoverable? Of course, they are pointed to by the associated CRLdp, 
but is there a need for a manifest of these CRL shards that can be picked up by 
CCADB?

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Ryan Hurst via dev-security-policy
On Saturday, July 4, 2020 at 3:43:22 PM UTC-7, Ryan Sleevi wrote:
> > Thank you for explaining that.  We need to hear the official position from
> > Google.  Ryan Hurst are you out there?

Although Ryan Sleevi has already pointed this out, since I was named 
explicitly, I wanted to respond and re-affirm that I am not responsible for 
Chrome's (or anyone else's) root program. I represent Google Trust Services 
(GTS), a Certificate Authority (CA) that is subject to the same requirements as 
any other WebPKI CA.

While I am watching this issue closely, as I do all WebPKI related incidents, 
since this is not an issue that directly impacts GTS I have chosen to be a 
quiet observer.

With that said, as a long-time member of the WebPKI, and in a personal 
capacity, I would say one of the largest challenges in operating a CA is how to 
handle incidents when they occur. In every incident, what I try to keep in mind 
is that a CA's ultimate responsibility is to the users that rely on the 
certificates it issues.

This means when balancing the impact of decisions a CA should give weight to 
protecting those users. This reality unfortunately also means that sometimes it 
is necessary to take actions that may cause pain for the subscribers they 
provide services to.

Wherever possible a CA should minimize pain for the relying party, but more often 
than not the decision to use the WebPKI for these non-browser TLS use cases was 
made to externalize the costs of deploying a dedicated PKI that is fit for 
purpose, and as with most trade-offs there may be later consequences to that 
decision.

As for my take on this topic, I think Peter Bowen has done an excellent job 
capturing the issue, its risks, origins, and the choices available.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-05-15 Thread Ryan Hurst via dev-security-policy
On Wednesday, May 15, 2019 at 10:36:00 AM UTC-7, Ryan Sleevi wrote:
> On Wed, May 15, 2019 at 1:18 PM Ryan Hurst via dev-security-policy <
> > Specifically where Wayne suggested:
> > "CAs MUST NOT delegate validation of the domain name part of an email
> > address to a 3rd party."
> >
> > Are you suggesting with that change mail providers cannot get certificates
> > for their users without the CA validating the local part?
> >
> 
> As Wayne noted in his existing message, there is an existing restriction
> Forbidden Practices:
> 
> Delegation of email address validation is already addressed by Mozilla's
> > Forbidden Practices [1] state:
> > "Domain and Email validation are core requirements of the Mozilla's Root
> > Store Policy and should always be incorporated into the issuing CA's
> > procedures. Delegating this function to 3rd parties is not permitted."
> 
> [1]
> > https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices#Delegation_of_Domain_.2F_Email_Validation_to_Third_Parties
> 
> 
> So I'm stating that the proposed change is functionally more liberal than
> the existing requirement.
> 
> I'm suggesting that, as it stands today, CAs cannot be issuing S/MIME
> certificates to end users without first performing the validation of the
> domain name portion themselves (new policy), and potentially the local part
> as well (existing policy)
As I stated above multiple times, this new change does clarify that the domain 
owner is authoritative for the local part and CAs can directly rely on them as 
such.


> Thanks. I think this is desirable to forbid, as it is insecure, and I
> believe it's already forbidden, because the process of step (4) is relying
> on GMAIL to act as a Delegated Third Party for the validation of the e-mail
> address.
> 
> There are a host of security issues here in the described flow. As
> demonstrated, Step (6) and (7) entirely absent any validation by the CA of
> the e-mail address, which should be a dead-ringer for why it's problematic.
> If you replace "SAAS" with "Attacker", this should be clear and obvious.

[rmh] It is a diagram that shows a delegated RA; it is not insecure. It is not 
allowed under the current policy, but my point is that a delegated RA agreement 
that limited the RA to use cases where these federated authentication providers 
attest that the user controls an email address they manage seems desirable to 
accommodate, given the nature of email and authentication, if we believe client 
certificates should be used more.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-05-15 Thread Ryan Hurst via dev-security-policy


> I think this bears expansion because I don't think it's been clearly
> documented what flow you believe is currently permitted today that will be
> prevented tomorrow with this change. 

To be clear, in that statement I was referring to that scenario being allowed 
under the proposed change, where the mail provider who is authoritative for a 
domain can get certificates for its users.

Specifically where Wayne suggested:
"CAs MUST NOT delegate validation of the domain name part of an email 
address to a 3rd party." 

Are you suggesting with that change that mail providers cannot get certificates 
for their users without the CA validating the local part?

> The level of abstraction here doesn't
> help, because understanding the state diagram of what the SAAS is
> requesting, and who it's requesting it of, is vital to understanding the
> security properties.

I put together a quick diagram to try to visually explain the flow:
https://www.dropbox.com/s/ocfow995aluowyl/auth%20redirect%20cert%20flow.png?dl=0


> I'm still at an absolute loss for understanding your flow and what you
> believe is validated, so I do not feel able to evaluate these alternatives,
> other than to note that I find problems with all of them. I'm hoping you
> can, focusing solely on the CA validation process, describe who is
> validating what, and when.

Hopefully the diagram helps to clarify; if not, let me know.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-05-15 Thread Ryan Hurst via dev-security-policy
> I must admit, I'm confused. Based on your concerns as I understand them,
> either the scenario you're describing is already prohibited today (and thus
> no change from existing policy), or its already permitted today and would
> continue to be permitted with this change. I'm hoping you can succinctly
> explain where we might disagree.

Given the inconsistent interpretations I have heard from many in this area of 
what is and is not allowed, I will abstain from inserting my own opinion on that 
matter.

Instead, I just want to make sure that whatever changes are made make it clear 
what is and is not allowed. 

I also want to make sure that, while that is done, the whole problem is 
looked at.


> However, I don't think the new or old language prohibits this. The flow
> you've described is functionally a relationship between Google (as the
> GMail operator) and the CA. Google requests certificates on the users'
> behalf - much like a reseller does - and provides them to the user. The CA
> validates that Google is authorized for the gmail.com domain for each of
> these requests, potentially relying on previously completed domain
> validations (e.g. the reuse of data), and then lets Google validate the
> local-part as appropriate.

I believe the case where Google requests a certificate from the CA is 
accommodated, but not the case where a SaaS provider requests a certificate from 
the CA based on the authentication of the user it performed with Google.

I believe this is an important case to support, as it allows the development of 
scenarios where seamless use of certificates happens. 

I also believe that the nature of how email addresses are used, and how many 
there are (billions), suggests that delegation should be allowed if scoped very 
narrowly.

> Hopefully, this analysis avoids the emotive aspects of the previous posts,
> and focuses purely on what technical steps are being provided.

I was not trying to be emotive; I was trying to make sure the consequences of 
the proposed wording are clear.

> Perhaps I overlooked something, but I don't see the requirement for 'ping' 
> emails or
> the like, so I do not understand why it's relevant to the policy
> discussion. If I'm missing something, though, hopefully, you'll be able to
> point it out :)

It is true that ping mails could be replaced with users being redirected from 
the application they use to a CA, where they authenticate to the CA via one of 
these federated authentication schemes and are then federated back. From a 
usability standpoint this has all the same issues and is not materially 
different from a ping mail, which is why I omitted it, but as you point out it 
is still possible.

It is also possible for a mail service provider to become a CA, or to provide 
certificates through a CA by proving control of the base domain and then being 
authoritative for the local part of the address, but this limits the use of 
certificates in this case to email providers that have built this.

These options leave SaaS providers with the following choices:
a) use private trust certificates,
b) use public trust certificates and accept that it makes your user experience 
non-competitive, or
c) do not use certificates because it makes your user experience non-competitive.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-05-15 Thread Ryan Hurst via dev-security-policy
Pedro,

That scenario is addressed by Wayne's proposed change.

That same change does not allow applications that use GMail or other federated 
authentication providers to use client certificates without sending each user to 
the CA.

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-05-14 Thread Ryan Hurst via dev-security-policy
> Does replacing the existing "require practice" language by adding the
> following sentence to the Root Store Policy achieve the clarity you're
> seeking and avoid the problems you've pointed out?
> 
> "CAs MUST NOT delegate validation of the domain name part of an email
> address to a 3rd party."


If Mozilla wishes to preclude modern web applications from using digital 
certificates that contain their users' email addresses without:
a) having to know that a certificate is in use, 
b) having to know that the certificate is coming from a specific CA, and
c) stopping the associated transaction to enroll for a certificate or requiring 
pre-enrollment of a certificate,
then this does make things better. I say this because my read of that text is 
that it allows the CA to:
1) delegate the local part of the validation to the mail providers via 
mechanisms like DNS or MX records,
2) do their own OAUTH based email validation workflows, and
3) continue to do ping mail based validation.

With that said, I do not think that this goes far enough.

Let me provide some background.

The two most common cloud email providers represent around 1.5 billion users, and 
OAUTH based authentication into third-party services has become the norm. 
Basically, nearly every email provider on the planet likely supports OAUTH (or 
similar) federation at this point.

To put this in perspective, there are less than 340 million domains.

I bring this up because federated authentication flows like these are really the 
best way to validate that a user is in control of an email address. This is the 
case because it happens silently on every authentication and is asserted by the 
entity that knows best, the mail operator.

So, if we wanted to allow a SAAS service to enroll a client for a certificate 
at login, or transaction time, via a federated login, what would such an RA 
agreement look like? 

In my mind it would say:

"if the mail operator, via an approved mechanism, which includes ONLY the use 
of OAUTH based federated authentication to a mail provider (MS, Google, Yahoo, 
etc) says the user controls that address then you can get the certificate, 
under no other circumstances can you".

The value the CA provides here is that:
a) they are trusted, 
b) they enforce this contract.
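
To make the shape of such an arrangement concrete, here is a minimal sketch, in 
Python with the PyJWT library, of the kind of check an RA integration might 
perform on an OpenID Connect ID token before treating the email address as 
validated. It is my own illustration, not proposed policy text; JWKS key 
discovery, nonce handling, and the approved-issuer list are simplified, and the 
names used are illustrative:

    # Sketch only: checks an OpenID Connect ID token from a mail provider and
    # extracts the verified email address. Key discovery, nonce checks and the
    # approved-issuer list are simplified; names here are illustrative.
    import jwt  # PyJWT

    APPROVED_MAIL_ISSUERS = {"https://accounts.google.com"}  # per the RA agreement

    def validated_email_from_id_token(id_token, issuer_public_key, client_id):
        claims = jwt.decode(
            id_token,
            key=issuer_public_key,   # fetched out-of-band from the issuer's JWKS
            algorithms=["RS256"],
            audience=client_id,      # the integration's own OAuth client_id
        )
        if claims.get("iss") not in APPROVED_MAIL_ISSUERS:
            raise ValueError("token not issued by an approved mail operator")
        if not claims.get("email_verified", False):
            raise ValueError("mail operator does not assert control of this address")
        return claims["email"]       # only now acceptable for the certificate request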


> > This is because out of context ping emails to individual users (what many
> > CAs offer today) is essentially a deal breaker.
> >
> >
> A deal breaker due to the poor usability involved in interrupting a task
> while the user retrieves an email and confirms receipt?
Yes, I think so.

As an example, in the EU document signing solutions have used certificates 
since their inception; in the US, on the other hand, we insert a picture of a 
signature derived from a font.

When we look at the solutions in the EU, up until very recently they forced 
people through the path of interacting with the CA to get a certificate.

As a result (at least in part) we saw that signing was not adopted anywhere near 
as much as it was in the US, where you could just "click" and move on with your 
life, despite extensive marketing.

As a user, the reality is that if you are using a signing service you have no 
need to know:
a) that a CA is involved, or
b) which CA is involved.

I also believe all signing certificates need to have either an email address or 
a phone number, and all SHOULD have an email address. This means that a decision 
not to allow RA agreements precludes any Mozilla CA from offering certificates 
to a SaaS that uses certs like this, even if other root programs allow for it.

I should note, I created such a service prior to joining Google, so I say this 
with a bias, but I do think there is value in having solutions where:
a) the user is the only one in control of their signing key,
b) the user doesn't have to know certificates are in use at all, and
c) there is no unnecessary third party interacting with the user.

If you're interested, here is a quick video of the signing experience in that 
solution:
https://www.dropbox.com/s/z5omfzew15g5bb7/Hancock%20for%20Wayne.mov?dl=0

While I talked about the document signing use case above, this is not limited 
to those use cases.

There are lots of use cases where SAAS applications could make their offerings 
more secure with end-user certificates if doing so did not ruin the user 
experience.

Ryan
(personal hat)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-05-13 Thread Ryan Hurst via dev-security-policy
On Monday, May 13, 2019 at 10:25:18 AM UTC-7, Wayne Thayer wrote:
> The BRs forbid delegation of domain and IP address validation to third
> parties. However, the BRs don't forbid delegation of email address
> validation nor do they apply to S/MIME certificates.
> 
> Delegation of email address validation is already addressed by Mozilla's
> Forbidden Practices [1] state:
> 
> "Domain and Email validation are core requirements of the Mozilla's Root
> Store Policy and should always be incorporated into the issuing CA's
> procedures. Delegating this function to 3rd parties is not permitted."
> 
> I propose that we move this statement (changing "the Mozilla's Root Store
> Policy" to "this policy") into policy section 2.2 "Validation Practices".
> 
> This is https://github.com/mozilla/pkipolicy/issues/175
> 
> I will appreciate everyone's input on this proposal.
> 
> - Wayne
> 
> [1]
> https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices#Delegation_of_Domain_.2F_Email_Validation_to_Third_Parties

Though it seems the thread has largely expressed my concerns, I do want to chime 
in and stress that I believe it is important that this text gets clarified.

Email addresses are, as has been pointed out, tricky. 

Today it is common practice for CAs to send "ping mails" for every certificate 
that is issued; this has been a common interpretation of what "email 
certificate" validation has to look like.

This, however, excludes things like:
- Using MX records as a means to look at which mail service is authoritative 
  for that domain and delegating the local part to the entity operating the 
  mail service (a small sketch of this follows below).
- Using DNS records as a means to determine who is authoritative for that 
  domain and delegating the local part to that entity.
- Relying on OAUTH based redirection flows from mail service providers such as 
  Google, Microsoft, and others.

These options all offer strong and friction-free user experiences for the 
associated use cases.
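
As an illustration of the first option, here is a minimal sketch, assuming 
Python with the dnspython library and a placeholder domain, of using MX records 
to see which mail operator is authoritative for a domain. The policy and 
contractual controls around actually delegating the local part are the hard 
part and are not shown:

    # Sketch only: looks up the MX records for a domain to see which mail
    # operator is authoritative for it. A CA would still need policy and
    # contractual controls before delegating local-part validation to that
    # operator.
    import dns.resolver  # dnspython

    def authoritative_mail_hosts(domain):
        answers = dns.resolver.resolve(domain, "MX")
        # Sort by preference; the lowest preference value is the primary exchanger.
        return [str(r.exchange).rstrip(".")
                for r in sorted(answers, key=lambda r: r.preference)]

    if __name__ == "__main__":
        for host in authoritative_mail_hosts("example.com"):
            print(host)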

I also think that, since emails have become the most common account identifier, 
excluding the ability for a CA to enter into an RA agreement essentially 
precludes the use of email certificates by anyone other than a) the CA or b) 
the mail service provider.

This means, as an example, one could not use Mozilla trusted certificates at 
scale for mail or document signing unless it was provided by one of those two 
entities.

This is because out of context ping emails to individual users (what many CAs 
offer today) is essentially a deal breaker.

The scale and nature of email validation are such that RA style agreements 
should, in my personal opinion, be within reason to accommodate.

This is particularly problematic in that even if other root stores allowed the 
use of RA agreements for email certificates it would no longer be allowed; in 
essence, precluding the adoption of publicly trusted client certificates for 
mainstream SaaS applications.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-17 Thread Ryan Hurst via dev-security-policy
For what it is worth I agree with Brian.

I would go a bit further and say certificates need to be issued for explicit 
usages; anything else produces potentially unknown behaviors.

What's most important, though, is that any certificate that is trusted as a 
result of membership in the Mozilla root program and that can technically be 
used for SSL on the public web is subject to the program requirements, intent 
or not.

It seems since MSFT already requires leaves to have an EKU it wouldn't be 
breaking to apply the same rule in Mozilla's program.

Ryan
On Wednesday, April 17, 2019 at 12:27:49 PM UTC-7, Brian Smith wrote:
> Wayne Thayer via dev-security-policy 
> wrote:
> 
> > My conclusion from this discussion is that we should not add an explicit
> > requirement for EKUs in end-entity certificates. I've closed the issue.
> >
> 
> What will happen to all the certificates without an EKU that currently
> exist, which don't conform to the program requirements?
> 
> For what it's worth, I don't object to a requirement for having an explicit
> EKU in certificates covered by the program. Like I said, I think every
> certificate that is issued should be issued with a clear understanding of
> what applications it will be used for, and having an EKU extension does
> achieve that.
> 
> The thing I am attempting to avoid is the implication that a missing EKU
> implies a certificate is not subject to the program's requirements.
> 
> Cheers,
> Brian

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Arabtec Holding public key?

2019-04-11 Thread Ryan Hurst via dev-security-policy
True, we don't know their intentions but we can at least assume they would
need private keys to use said certificates with any properly implemented
user agent.

Ryan Hurst
(personal capacity)


On Thu, Apr 11, 2019 at 6:12 PM Peter Gutmann 
wrote:

> admin--- via dev-security-policy 
> writes:
>
> >The risk here, of course, is low in that having a certificate you do not
> >control a key for doesn't give you the ability to do anything.
>
> As far as we know.  Presumably someone has an interesting (mis)use for it
> otherwise they wouldn't have bothered obtaining it.
>
> Peter.
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-25 Thread Ryan Hurst via dev-security-policy
While it may be true that the certificates in question do not contain SANs, 
unfortunately, the certificates may still be trusted for SSL since they do not 
have EKUs.

For an example see "The most dangerous code in the world: validating SSL 
certificates in non-browser software" which is available at 
https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html

What you will see is that hostname verification is one of the most common areas 
applications have a problem getting right. Oftentimes they silently skip 
hostname verification, or use libraries that provide options to disable hostname 
verification that are either off by default, or turned off for testing and 
never re-enabled in production.

One of the few checks you can count on being right with any level of 
predictability in my experience is the server EKU check where absence is 
interpreted as an entitlement.
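
To make that failure mode concrete, here is a minimal sketch, using Python's 
cryptography library and written by way of illustration rather than taken from 
any particular TLS stack, of the logic many validators effectively implement, 
where a missing EKU extension is treated as permission for serverAuth:

    # Sketch only: mirrors the common validator behaviour described above, in
    # which the absence of an EKU extension is treated as an entitlement to act
    # as a TLS server certificate.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    def permits_server_auth(cert: x509.Certificate) -> bool:
        try:
            eku = cert.extensions.get_extension_for_oid(
                ExtensionOID.EXTENDED_KEY_USAGE).value
        except x509.ExtensionNotFound:
            # No EKU at all: many implementations fall through to "allowed",
            # which is exactly why EKU-less document-signing certs are risky.
            return True
        return ExtendedKeyUsageOID.SERVER_AUTH in eku

    # Usage: permits_server_auth(x509.load_pem_x509_certificate(pem_bytes))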

Ryan Hurst
(writing in a personal capacity)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services and EJBCA serial number behavior

2019-03-11 Thread Ryan Hurst via dev-security-policy
Dear m.d.s.p,

We wanted to follow-up to this thread and give a brief update.

We have revoked all but 26 of the affected certificates and are working with 
the associated subscribers to enable a smooth transition prior to revocation, 
which will occur as each certificate is replaced or by 2019-03-31, whichever 
happens first.

Ryan Hurst
Product Manager
Google Trust Services
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services and EJBCA serial number behavior

2019-03-06 Thread Ryan Hurst via dev-security-policy
We have attached two files to the bug 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1532842), one that provides a 
list of all certificates issued after ballot 164 that contain 63 bit serial 
numbers and one that lists all certificates in that set that have not yet been 
revoked.

Ryan Hurst
Google Trust Services
Product Manager
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services and EJBCA serial number behavior

2019-03-05 Thread Ryan Hurst via dev-security-policy
Posting from a personal account but commenting in a professional capacity.

Our decision not to include the list was intended for brevity sake only. It is 
a reasonable request to provide a CSV and we will do that within 24 hours.

Regarding the number of subscribers, yes, in this case it is appropriate to say 
there is a single subscriber, Alphabet and its affiliates, including Google.

Ryan Hurst
Google Trust Services
Product Manager 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services and EJBCA serial number behavior

2019-03-05 Thread Ryan Hurst via dev-security-policy
I have created a bug to track this issue: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1532842
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services and EJBCA serial number behavior

2019-03-05 Thread Ryan Hurst via dev-security-policy
Sleevi,

Thank you for the links to both the reporting requirements and the underscore 
issue with DigiCert. 

Regarding the statement about the severity of the issue, it was not intended to 
diminish the non-compliance. Instead it was an attempt to frame the issue with 
sufficient context to help others who follow this thread answer the question of 
impact on the community.

As for the incident response, we have been making every effort to gather 
complete and accurate information to enable us to provide a useful and 
actionable incident report. This has unfortunately taken longer than we had 
hoped. We now have that report ready and you can find it below. 

You are right that some affected certificates could not be revoked within the 5 
day requirement. In the last section of the report you'll find more information 
about the reasons for this along with our plan for revoking the remaining 
certificates.

Ryan Hurst
Google Trust Services
Product Manager


Summary
---
Some certificates issued by GTS utilize EJBCA and as a result had serial 
numbers with an effective entropy of 63 bits. These serial numbers were created 
from a 64 bit CSPRNG output and were believed to be in compliance with Section 
7.1 of the Baseline Requirements. Upon closer investigation we learned that 
EJBCA’s logic for serial number generation only selected output numbers having 
a leading 0 bit, which reduced their effective entropy to 63 bits.
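
For readers less familiar with the mechanics, the following simplified Python 
sketch illustrates the difference; it is an illustration of the behaviour 
described above, not EJBCA's or GTS's actual code:

    # Sketch only: shows why forcing a positive 64-bit serial yields 63 bits of
    # effective entropy, and one compliant alternative using longer serials.
    import secrets

    def serial_63_bits_effective():
        """Draw 64 random bits but require the leading bit to be 0 so the DER
        INTEGER stays positive; only 2^63 values remain possible."""
        while True:
            candidate = secrets.randbits(64)
            if candidate >> 63 == 0:      # leading bit must be 0
                return candidate

    def serial_compliant(num_bytes=16):
        """Draw more bytes (here 16, as in the backported fix) so that even
        after clearing the top bit for a positive DER INTEGER, well over 64
        bits of CSPRNG output remain."""
        value = int.from_bytes(secrets.token_bytes(num_bytes), "big")
        return value & ((1 << (num_bytes * 8 - 1)) - 1)   # clear the sign bit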

Though GTS agrees that the issuance of the certificates based on the above 
behavior qualifies as misissuance, we also believe that this issue does not 
represent a material security risk to the community.

To ensure that all of our certificates comply with the community’s 
interpretation of the Baseline Requirements (BR) we have updated the associated 
EJBCA CAs to mitigate the problematic behaviour. 

At this time approximately 95% of the affected certificates have been replaced 
and revoked. The remaining certificates expire over the next 3 months. 

We are actively working with the subscribers of these remaining certificates to 
facilitate a replacement, with the goal of minimizing disruption of services. 
Should this not be possible, we will revoke these certificates no later than 
2019-03-31.

Certificates issued from non-EJBCA CAs have been checked and are not affected.

Incident Report
---

1. How your CA first became aware of the problem
We have been following the thread discussing Dark Matter’s root inclusion 
request. When concerns regarding the EJBCA serial number generation logic were 
raised, we analyzed the behaviour of our EJBCA installations and found that 
they were affected as well.

2. A timeline of the actions your CA took in response. 

2019-02-22 - A thread on m.d.s.p. mentions serial entropy issue of Dark Matter 
certificates.
2019-02-26 - GTS begins reviewing the serial number generation behaviour of its 
CAs.
2019-02-27 - A third-party reports that serial numbers in all certificates 
issued from a specific GTS CA have a leading bit of 0 and suggests that we may 
have the same issue as Dark Matter.
2019-02-27 - GTS requests clarification from PrimeKey. It is confirmed that 
EJBCA serial generation logic causes the issue and that in order to create 
compliant serial numbers, the logic has to be replaced.
2019-02-27 - The associated CAs used an earlier version of EJBCA where the 
serial number logic was not configurable. As a result, code from a newer 
version of EJBCA that supports configurable serial number length is backported 
and configured to use 16-byte serials.
2019-02-28 - The backported code is deployed to production.
2019-02-28 - Ongoing discussion on m.d.s.p. revolves around interpretation of 
Section 7.1 BR. A consensus emerges that the affected certificates must be 
considered to have been misissued.
2019-02-28 - We inventory the number of certificates issued since Section 7.1 
BR went into effect in September 2016, the number that were currently valid as 
well as their validity period. The results are provided in the section on 
remediation actions below.
2019-03-01 - GTS decides to replace and revoke all affected certificates. 
Customers are contacted to work out revocation plans.
2019-03-01 - Issuance of replacement certificates begins.
2019-03-02 - A first notification is posted to m.d.s.p 
2019-03-04 - Certificate revocation begins.
2019-03-05 - An update on progress is posted to m.d.s.p
2019-03-05 - This post mortem is posted to m.d.s.p

3. Whether your CA has stopped, or has not yet stopped, issuing certificates 
with the problem.
GTS has stopped using the incorrect serial number generation logic. As of 
2019-02-28, all GTS certificates issued from its EJBCA CAs have serial numbers 
with at least 64 bits of entropy.

4. A summary of the problematic certificates.
All certificates issued from GIAG3 (https://crt.sh/?id=109354897, 
https://crt.sh/?id=158511650) between 2016-09-30 and 2019-02-28 were affected.

5. The complete certificate data for the 

Re: Google Trust Services and EJBCA serial number behavior

2019-03-05 Thread Ryan Hurst via dev-security-policy
Dear m.d.s.p,

We wanted to follow-up to this thread and give an update. 

We have decided to replace and revoke the certificates with 63 bit serial 
numbers; so far we have finished about 95% of the affected certificates. 

We are actively working with the remaining subscribers to replace their 
certificates as soon as possible without creating a service disruption. We have 
made the decision to work with subscribers to enable a smooth transition prior 
to revocation since the issue in question does not reflect a material security 
issue.

We will share more information as we have it and will publish a complete post 
mortem once the associated response is complete.

Ryan Hurst
Product Manager
Google Trust Services
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Google Trust Services and EJBCA serial number behavior

2019-03-01 Thread Ryan Hurst via dev-security-policy
Dear m.d.s.p,

We at Google Trust Services have been following the thread discussing Dark 
Matter’s root inclusion request.  In particular the elements of the thread that 
discuss the EJBCA serial number generation logic stood out to us.

This is because we use EJBCA for some of our own CAs. This element of the 
thread spurred us to review how our EJBCA based CAs were generating serial 
numbers. As a result of this review we determined our legacy EJBCA CAs were 
exhibiting the same behavior.

Though we believe this not to represent a material security issue and we 
believe that this issue is systemic given it is a result of behavior of the 
most common CA software in use in the WebPKI, we are actively working on a post 
mortem and are evaluating if and how to replace the affected certificates. 

It is noteworthy that the associated EJBCA CAs have been patched and any new 
certificates will not have this issue. Additionally, these CAs were already 
actively being deprecated in favor of a new generation of EJBCA and a bespoke 
CA code base that do not exhibit this behavior.

We will follow up with a post mortem when our investigation is complete.

Ryan Hurst
Product Manager
Google Trust Services
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Online exposed keys database

2018-12-18 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 18, 2018 at 2:44:22 AM UTC-8, Matt Palmer wrote:
> Hi all,
> 
> I'd like to make everyone aware of a service I've just stood up, called
> pwnedkeys.com.  It's intended to serve as a clearinghouse of known-exposed
> private keys, so that services that accept public keys from external
> entities (such as -- relevant to mdsp's interests -- CAs) can make one call
> to get a fairly authoritative answer to the question "has the private key
> I'm being asked to interact with in some way been exposed?".
> 
> It's currently loaded with great piles of Debian weak keys (from multiple
> architectures, etc), as well as some keys I've picked up at various times. 
> I'm also developing scrapers for various sites where keys routinely get
> dropped.
> 
> The eventual intention is to be able to go from "private key is on The
> Public Internet somewhere" to "shows up in pwnedkeys.com" automatically and
> in double-quick time.
> 
> I know there are a number of very clever people on this list who have found
> and extracted keys from more esoteric places than Google search, and I'd be
> really interested in talking to you (privately, I'd imagine) about getting
> specimens of those keys to add to the database.
> 
> I'd also welcome comments from anyone about the query API, the attestation
> format, the documentation, or anything else vaguely relevant to the service. 
> Probably best to take that off-list, though.
> 
> I do have plans to develop a PR against (the AWS Labs') certlint to cause it
> to query the API, so there's no need for anyone to get deep into that unless
> they're feeling especially frisky.  Other linting tools will *probably* have
> to do their own development, as my Go skills are... rudimentary at best,
> shall we say.  I'd be happy to give guidance or any other necessary help to
> anyone looking at building those, though.
> 
> Finally, if any CAs are interested in integrating the pwnedkeys database
> into their issuance pipelines, I'd love to discuss how we can work together.
> 
> Thanks,
> - Matt

This is great. I purchased keycompromise.com ages ago to build something just 
like this. I'm very glad to see you took the time to make this.

My first thought is that by using SPKI you have limited the service unnecessarily 
to X.509-related keys; I imagined something like this covering PGP and JWT as 
well as other formats. It would be nice to see the scope increased accordingly.

It would be ideal if it were possible to download the database also; the 
latency of using a third-party service while issuing certs is potentially too 
much for a CA to eat at issuance time. Something that could optionally be used 
on-prem wouldn't leak affiliation and would address this.

As long as it's limited to X.509, or at least as long as it supports it and uses 
SPKI, it would be interesting to have the website use PKIjs to let you browse 
to a cert, CSR, or key and have the SPKI calculated for you. Happy to help with 
that if you're interested.
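
For anyone wanting to experiment, here is a minimal Python sketch (using the 
cryptography library rather than PKIjs, and assuming the service keys lookups on 
the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo) of computing that 
fingerprint from a certificate; the same digest can be computed from a CSR or a 
bare key:

    # Sketch only: computes the SHA-256 digest of a certificate's DER-encoded
    # SubjectPublicKeyInfo, assuming that is the identifier the service keys on.
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def spki_sha256_hex(pem_bytes: bytes) -> str:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        spki_der = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return hashlib.sha256(spki_der).hexdigest()

    # Usage: print(spki_sha256_hex(open("cert.pem", "rb").read()))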

Personally I prefer https://api.pwnedkeys.com/v1/ to 
https://v1.pwnedkeys.com/.

I see you're using JWS; I had been planning on building mine on top of Trillian 
(https://github.com/google/trillian) so you could have an auditable, low-trust 
mechanism to do this. Let me know if you're interested in that and I would be 
happy to help there.

Anyways thanks for doing this.

Ryan Hurst
(personal)


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: No Russian CAs

2018-08-27 Thread Ryan Hurst via dev-security-policy
On Friday, August 24, 2018 at 11:23:37 AM UTC-7, Caju Mihai wrote:
> Greetings,
> I would like to ask why there are no root certificate authorities from 
> organizations in the Russian Federation. Specifically I haven't found any 
> with the country code RU in the NSS CA bundle. Is it due to political 
> pressure? Or does the Russian government have a bad history with forcing CAs 
> to issue certificates? As far as I know Yandex has it's own intermediate CA, 
> signed by Certum. So I can't see the issue? Also can you point me to a few 
> bugs where Russian CAs have attempted inclusion? Bugzilla search isn't very 
> helpful, and I have tried searching in "CA Certificates Code", "CA 
> Certificate Mis-Issuance" and "CA Certificate Root Program"

The Russian market (really the whole FSU) is notably different from other 
markets, at least in the context of the WebPKI. Most notably, the government 
mandate for the use of GOST approved algorithms and implementations conflicts 
with the WebTrust mandate of RSA and the global standard ECC curves.

This is meaningful because many CAs make a large portion of their revenue not 
off SSL certificates but off other services (digital signatures, enterprise use 
cases, etc). Many of these other use cases are covered by the numerous 
government-licensed CAs (hundreds, last I heard) that are used for these cases 
while using GOST approved algorithms.

Above and beyond that I would say the cost realities of commercial WebPKI 
offerings make it difficult to justify that particular business in the Russian 
market.

With that said, I think your real question is: could a Russian CA become a 
WebTrust-audited and browser-trusted CA? I personally think the answer is yes 
(though I doubt the business viability), if they could get clarity from the FSB 
on approval to operate such a CA given the current guidance regarding approved 
GOST algorithms.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Disallowed company name

2018-06-04 Thread Ryan Hurst via dev-security-policy
I apologize; I originally wrote in haste and did not clearly state what I
was suggesting.

Specifically, while it is typical for a given jurisdiction (state, etc.) to
require a name to be unique, it is typically not a requirement that the name be
distinct enough that it cannot be confused with another name. For example,
I have seen businesses registered with punctuation and without; I have also
seen non-Latin characters in use in business names. This clearly has the
potential to introduce name confusion.
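
As a rough illustration of the kind of confusability screening a CA could layer
on top of a registry's uniqueness check, here is a small Python sketch of my
own; it relies only on Unicode normalization, and any real screening would need
far more (UTS #39 skeletons, brand lists, human review):

    # Sketch only: flags names containing stylised or compatibility characters
    # (which NFKC folds to something else) or a mix of ASCII and non-ASCII
    # characters, so they can be routed for closer review.
    import unicodedata

    def needs_review(name: str) -> bool:
        folded = unicodedata.normalize("NFKC", name)
        mixed_scripts = (not name.isascii()) and any(ch.isascii() for ch in name)
        return folded != name or mixed_scripts

    # The circled-letter example quoted elsewhere in this thread folds to plain
    # ASCII under NFKC:
    #   unicodedata.normalize("NFKC", "ⒶⓅⓅⓁⒺ")  ->  "APPLE"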

Ryan

On Fri, Jun 1, 2018 at 11:55 PM, Matthew Hardeman 
wrote:

>
>
> On Fri, Jun 1, 2018 at 10:28 AM, Ryan Hurst via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>>
>> re: Most of the government offices responsible for approving entity
>> creation are concerned first and foremost with ensuring that a unique name
>> within their jurisdiction is chosen
>>
>> What makes you say that, most jurisdictions have no such requirement.
>>
>>
> This was anecdotal, based on my own experience with formation of various
> limited liability entities in several US states.
>
> Even my own state of Alabama, for example, (typically regarded as pretty
> backwards) has strong policies and procedures in place for this.
>
> In Alabama, formation of a limited liability entity whether a Corporation
> or LLC, etc, begins with a filing in the relevant county probate court of
> an Articles of Incorporation, Articles or Organization, trust formation
> documents, or similar.  As part of the mandatory filing package for those
> document types, a name reservation certificate (which will be validated by
> the probate court) from the Alabama Secretary of State will be required.
> The filer must obtain those directly from the appropriate office of the
> Alabama Secretary of State.  (It can be done online, with a credit card.
> The system enforces entity name uniqueness.)
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Disallowed company name

2018-06-01 Thread Ryan Hurst via dev-security-policy
On Thursday, May 31, 2018 at 3:07:36 PM UTC-7, Matthew Hardeman wrote:
> On Thu, May 31, 2018 at 4:18 PM, Peter Saint-Andre via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> >
> > We can also think of many business types (e.g., scammers) that would
> > love to have names like ⒶⓅⓅⓁⒺ but that doesn't mean it's smart to issue
> > certificates with such names. The authorities who approve of company
> > names don't necessarily have certificate handling in mind...
> >
> 
> Indeed.  Most of the government offices responsible for approving entity
> creation are concerned first and foremost with ensuring that a unique name
> within their jurisdiction is chosen and that a public record of the entity
> creation exists.  They are not concerned with risk management or
> legitimacy, broadly speaking.
> 
> Anyone at any level of risk management in the rest of the ecosystem around
> a business will be concerned with such matters.  Banks, trade vendors, etc,
> tend to reject accounts with names like this.  Perhaps CAs should look upon
> this similarly.

re: Most of the government offices responsible for approving entity creation 
are concerned first and foremost with ensuring that a unique name within their 
jurisdiction is chosen

What makes you say that? Most jurisdictions have no such requirement.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Ryan Hurst via dev-security-policy

> True, but CAs can put technical constraints on that to limit the acceptable 
> passwords to a certain strength. (hopefully with a better strength-testing 
> algorithm than the example Tim gave earlier)

Tim is the best of us -- this is hard to do well :)

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Ryan Hurst via dev-security-policy

> 
> What about "or a user supplied password"?
> -carl

User-supplied passwords will (in real-world scenarios) not be as good as one 
generated for them; this is in part why I suggested earlier that if a user 
password is to be used, it be mixed with a server-provided value.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Ryan Hurst via dev-security-policy
On Friday, May 4, 2018 at 1:00:03 PM UTC-7, Doug Beattie wrote:
> First comments on this: "MUST be encrypted and signed; or, MUST have a 
> password that..."
> - Isn't the password the key used for encryption?  I'm not sure if the "or" 
> makes sense since in both cases the password is the key for encryption

There are modes of PKCS#12 that do not use passwords.

> - In general, I don't think PKCS#12 files are signed, so I'd leave that out, 
> a signature isn't necessary.  I could be wrong...

They may be, see: http://unmitigatedrisk.com/?p=543

> 
> I'd still like to see a modification on the requirement: "password MUST be 
> transferred using a different channel than the PKCS#12 file".  A user should 
> be able to download the P12 and password via HTTP.  Can we add an exception 
> for that?

Why do you want to allow the use of HTTP?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Ryan Hurst via dev-security-policy
On Tuesday, May 1, 2018 at 1:00:20 PM UTC-7, Tim Hollebeek wrote:
> I get that, but any CA that can securely erase and forget the user’s 
> contribution to the password and certainly do the same thing to the entire 
> password, so I’m not seeing the value of the extra complexity and interaction.

It forces a conscious decision to violate a core premise.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Ryan Hurst via dev-security-policy
> I'm not sure I agree with this as a recommendation; if you want both parties
> to provide inputs to the generation of the password, use a well-established
> and vetted key agreement scheme instead of ad hoc mixing.

> Of course, at that point you have a shared transport key, and you should
> probably
> just use a stronger, more modern authenticated key block than PKCS#12,
> but that's a conversation for another day.

I say this because it is desirable that the CA plausibly not be able to
decrypt the key even if it holds the encrypted key blob.



On Tue, May 1, 2018 at 12:40 PM, Tim Hollebeek 
wrote:

>
> > - What is sufficient? I would go with a definition tied to the effective
> > strength of
> > the keys it protects; in other words, you should protect a 2048bit RSA
> key
> > with
> > something that offers similar properties or that 2048bit key does not
> live
> > up to
> > its 2048 bit properties.
>
> Yup, this is the typical position of standards bodies for crypto stuff.  I
> noticed that
> the 32 got fixed to 64, but it really should be 112.
>
> > - The language should recommend that the "password" be a value that is a
> mix
> > of a user-supplied value and the CSPRNG output and that the CA can not
> store
> > the user-supplied value for longer than necessary to create the PKCS#12.
>
> I'm not sure I agree with this as a recommendation; if you want both
> parties
> to provide inputs to the generation of the password, use a well-established
> and vetted key agreement scheme instead of ad hoc mixing.
>
> Of course, at that point you have a shared transport key, and you should
> probably
> just use a stronger, more modern authenticated key block than PKCS#12,
> but that's a conversation for another day.
>
> > - The language requires the use of a password when using PKCS#12s but
> > PKCS#12 supports both symmetric and asymmetric key based protection also.
> > While these are not broadly supported the text should not probit the use
> of
> > stronger mechanisms than 3DES and a password.
>
> Strongly agree.
>
> -Tim
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Ryan Hurst via dev-security-policy
A few problems I see with the proposed text:

- What is sufficient? I would go with a definition tied to the effective strength of the keys it protects; in other words, you should protect a 2048-bit RSA key with something that offers similar properties, otherwise that 2048-bit key does not live up to its 2048-bit security properties. This is basically the same CSPRNG conversation, but it's worth looking at https://www.keylength.com/
- The language should recommend that the "password" be a value that is a mix of a user-supplied value and the CSPRNG output, and that the CA cannot store the user-supplied value for longer than necessary to create the PKCS#12.
- The strength of the password is discussed, but PKCS#12 supports a number of weak cipher suites and it is common to find them in use in PKCS#12s. The minimum should be specified to be what Microsoft supports, which is pbeWithSHAAnd3-KeyTripleDES-CBC for “privacy” of keys and pbeWithSHAAnd40BitRC2-CBC for the privacy of certificates.
- The language requires the use of a password when using PKCS#12s, but PKCS#12 also supports both symmetric and asymmetric key based protection. While these are not broadly supported, the text should not prohibit the use of mechanisms stronger than 3DES and a password.
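
As an illustration of the cipher suite point, here is a minimal sketch of producing a PKCS#12 protected with PBESv2, SHA-256, and AES-256-CBC rather than the legacy 3DES and 40-bit RC2 modes. It assumes the third-party pyca/cryptography library (version 38 or later, built against OpenSSL 3); the file names, iteration count, and passphrase are placeholders, not recommended parameters.

```
# Sketch: build a PKCS#12 with explicitly modern protection parameters
# instead of the legacy SHA1/3DES (keys) and 40-bit RC2 (certificates) modes.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs12

key = serialization.load_pem_private_key(open("key.pem", "rb").read(), password=None)
cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())

encryption = (
    serialization.PrivateFormat.PKCS12.encryption_builder()
    .kdf_rounds(200_000)
    .key_cert_algorithm(pkcs12.PBES.PBESv2SHA256AndAES256CBC)
    .hmac_hash(hashes.SHA256())
    .build(b"passphrase-derived-from-a-csprng")
)

p12 = pkcs12.serialize_key_and_certificates(
    name=b"subscriber", key=key, cert=cert, cas=None,
    encryption_algorithm=encryption,
)
open("bundle.p12", "wb").write(p12)
```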

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-26 Thread Ryan Hurst via dev-security-policy
On Thursday, April 26, 2018 at 11:45:15 AM UTC, Tim Hollebeek wrote:
> > > which is why in the near future we can hopefully use RDAP over TLS
> > > (RFC
> > > 7481) instead of WHOIS, and of course since the near past, DNSSEC :)
> > 
> > I agree moving away from WHOIS to RDAP over TLS is a good low hanging fruit
> > mitigator once it is viable.
> 
> My opinion is it is viable now, and the time to transition to optionally 
> authenticated RDAP over TLS is now.  It solves pretty much all the problems 
> we are currently having in a straightforward, standards-based way.  
> 
> The only opposition I've seem comes from people who seem to want to promote 
> alternative models that destroy the WHOIS ecosystem, leading to proprietary 
> distribution and monetization of WHOIS data.
> 
> I can see why that is attractive to some people, but I don’t think it's best 
> for everyone.
> 
> I also agree that DNSSEC is a lost cause, though I understand why Paul 
> doesn't want to give up   I've wanted to see it succeed for basically my 
> entire career, but it seems to be making about as much progress as fusion 
> energy.
> 
> -Tim

Moving to RDAP does not solve "all the problems we are currently having" in that it does not do anything for DCV, which is what I think this thread was about (e.g. the BGP implications for DCV).

That said, if in fact RDAP is viable today, I agree we should deprecate the use of WHOIS and mandate the use of RDAP in the associated scenarios.
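
For anyone unfamiliar with it, RDAP is just JSON over HTTPS, so a lookup is straightforward. A minimal sketch follows; the rdap.org redirector used here is only one convenient way to find the authoritative server, and the IANA bootstrap registry is the normative mechanism.

```
# Sketch: RDAP domain lookup over TLS (RFC 7480/7481); rdap.org forwards the
# request to the registry's authoritative RDAP server.
import json
import urllib.request

def rdap_domain(domain):
    with urllib.request.urlopen("https://rdap.org/domain/" + domain, timeout=10) as resp:
        return json.load(resp)

data = rdap_domain("example.com")
print(data.get("ldhName"), data.get("status"))
```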

Ryan Hurst
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-26 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 25, 2018 at 3:48:07 PM UTC+2, Paul Wouters wrote:
> On Wed, 25 Apr 2018, Ryan Hurst via dev-security-policy wrote:
> 
> > Multiple perspectives is useful when relying on any insecure third-party 
> > resource; for example DNS or Whois.
> >
> > This is different than requiring multiple validations of different types; 
> > an attacker that is able to manipulate the DNS validation at the IP layer 
> > is also likely going to be able to do the same for HTTP and Whois.
> 
> which is why in the near future we can hopefully use RDAP over TLS (RFC
> 7481) instead of WHOIS, and of course since the near past, DNSSEC :)
> 
> I'm not sure how useful it would be to have multiple network points for
> ACME testing - it will just lead to the attackers doing more then one
> BGP hijack at once. In the end, that's a numbers game with a bunch of
> race conditions. But hey, it might lead to actual BGP security getting
> deployed :)
> 
> Paul

I agree moving away from WHOIS to RDAP over TLS is a good, low-hanging-fruit mitigation once it is viable.

Having been responsible for a very popular/mainstream DNS server and worked on 
implementing/deploying DNSSEC in enterprises I am of the opinion this is a lost 
cause and do not have the patience or energy to try to engage in all the 
reasons why this is not a viable solution.

As for multi-perspective domain control validation and the idea that an attacker who can attack one perspective can attack all perspectives: that may be true, but the larger your quorum set is, the harder that becomes. Making it impossible to cheat is not a realistic goal; the goal is to raise the bar so that cheating is meaningfully harder.
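
To illustrate the quorum idea, here is a minimal sketch (my illustration only, not any CA's implementation) assuming the third-party dnspython library. The public resolver IPs below merely stand in for genuinely diverse network vantage points; a real deployment would use remote agents in different networks and autonomous systems.

```
# Sketch: ask several vantage points for the same TXT record and only accept a
# value set that at least QUORUM perspectives observed identically.
import dns.resolver  # third-party: dnspython

PERSPECTIVES = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # placeholders for remote vantage points
QUORUM = 2

def multi_perspective_txt(name):
    observations = []
    for ip in PERSPECTIVES:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        try:
            answers = frozenset(r.to_text() for r in resolver.resolve(name, "TXT"))
        except Exception:
            answers = frozenset()
        observations.append(answers)
    for candidate in set(observations):
        if candidate and observations.count(candidate) >= QUORUM:
            return set(candidate)
    return None  # no quorum; treat the validation as failed

print(multi_perspective_txt("_acme-challenge.example.com"))
```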

Ryan

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-25 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 25, 2018 at 1:28:43 PM UTC+2, Buschart, Rufus wrote:
> Hi Ryan!
> 
> The "multiple perspective validations" is an interesting idea. Did you think 
> about combining it with CAA checking? I could imagine having a new tag, e.g. 
> "allowedMethods", in which the legitimate owner of  a domain can specify the 
> set of allowed methods to validate his domain. As an example the value 
> "(3.2.2.4.1 AND 3.2.2.4.5) OR 3.2.2.4.9" in the new "allowedMethods" tag 
> could mean, that a certificate may only be issued, if two validations acc. 
> 3.2.2.4.1 and 3.2.2.4.1 were successful or if one validation acc. 3.2.2.4.9 
> was successful. Any other method of validation would be not allowed. I see 
> here the benefit, that the owner of a domain can choose how to verify 
> according his business needs and select the appropriate level of security for 
> his domains.
> 
> With best regards,
> Rufus Buschart
> 

Multiple perspectives are useful when relying on any insecure third-party resource, for example DNS or WHOIS.

This is different from requiring multiple validations of different types; an attacker that is able to manipulate the DNS validation at the IP layer is likely also able to do the same for HTTP and WHOIS.

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Regional BGP hijack of Amazon DNS infrastructure

2018-04-25 Thread Ryan Hurst via dev-security-policy
On Tuesday, April 24, 2018 at 5:29:05 PM UTC+2, Matthew Hardeman wrote:
> This story is still breaking, but early indications are that:
> 
> 1.  An attacker at AS10297 (or a customer thereof) announced several more
> specific subsets of some Amazon DNS infrastructure prefixes:
> 
> 205.251.192-.195.0/24 205.251.197.0/24 205.251.199.0/24
> 
> 2.  It appears that AS10297 via peering arrangement with Google got
> Google's infrastructure to buy (accept) the hijacked advertisements.
> 
> 3.  It has been suggested that at least one of the any cast 8.8.8.8
> resolvers performed resolutions of some zones via the hijacked targets.
> 
> It seems prudent for CAs to look into this deeper and scrutinize any domain
> validations reliant in DNS from any of those ranges this morning.

This is an example of why ALL CAs should either already be doing multi-perspective domain control validation or be working towards it in the very near future.

These types of attacks are far from new; we had discussions about them back in the early 2000s while I was at Microsoft, and I know we were not the only ones. One of the earlier papers I recall discussing this topic is from the 2008 timeframe, from CMU - https://www.cs.cmu.edu/~dga/papers/perspectives-usenix2008/

The most recent work on this I am aware of is the Princeton paper from last 
year: http://www.cs.princeton.edu/~jrex/papers/bamboozle18.pdf

As the approved validation mechanisms are cleaned up, and hopefully reduced to a limited few with known security properties, the natural next step is to require those that utilize these methods to also use multiple perspective validations to mitigate this class of risk.

Ryan Hurst (personal)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Sigh. stripe.ian.sh back with EV certificate for Stripe, Inc of Kentucky....

2018-04-13 Thread Ryan Hurst via dev-security-policy
On Friday, April 13, 2018 at 2:15:47 PM UTC-7, Matthew Hardeman wrote:
As a parent it is not uncommon for me to have to explain to my children that 
something they ask for is not reasonable. In some cases I joke and say things 
like “well I want a pony” or “and I wish water wasn't wet”.

When I look at arguments that support the idea of policing name squatting on the internet by way of the WebPKI, I immediately think of these conversations with my kids.

The topic of trademark rights has numerous professions dedicated to it, combined with both international and domestic laws that define the rights, obligations, and dispute resolution processes that trademark claims must use. I do not see how it would be effective or reasonable to place CAs as the arbiters of this. Instead, should there be a trademark violation, the existing legal system would seem to be the appropriate way to address such concerns.

If we accept that, which seems reasonable to me, then the question becomes: in the event of a trademark dispute, where should remediation happen? Since the CA is neither the owner of the trademark nor responsible for the registration of the name, it seems misplaced to think it should be the initiator of this process. It also seems wrong that a CA would even be the first place you would go for trademark enforcement; the registration of the name happens at the DNS layer, and revoking the certificate does not change the fact that the domain is still out there.

To that end, ICANN actually has specific policies and procedures on how that process is supposed to work (see: https://www.icann.org/resources/pages/dispute-resolution-2012-02-25-en). The WebPKI ecosystem does not; it is, as has been discussed in this thread, effectively acting arbitrarily when revoking for trademark infringement.

Based on the above, it seems clear to me that the only potentially reasonable situation is one where a CA revokes on the basis of the outcome of a trademark claim resolved through the aforementioned processes.

On the topic of revoking a certificate because it is “deceiving”: this idea sounds a lot like book burning to me (https://www.ushmm.org/wlc/en/article.php?ModuleId=10005852).

```
Book burning refers to the ritual destruction by fire of books or other written 
materials. Usually carried out in a public context, the burning of books 
represents an element of censorship and usually proceeds from a cultural, 
religious, or political opposition to the materials in question.
```

This is a great example of that: what we have here is a legitimate business publishing information into the public domain that some people find offensive. Those people happen to control the doors to the library and have used that fact to censor the information so others cannot access it.

As a technologist who has spent a good chunk of his career working to secure the internet and make it more accessible, this gives me great pause, and if you don't come to the same conclusion I suggest you take a few minutes to look at how many CAs are operated by, or in, countries that have a bad history on freedom of speech.

I strongly hope that Mozilla, and the other browsers, take a hard look at the 
topic of how CAs are expected to handle cases like this. The current situation 
may have been acceptable 10 years ago but as we approach 100% encryption on the 
web do we really want the WebPKI to be used as a censorship tool?

Ryan Hurst
(Speaking as an individual)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Sigh. stripe.ian.sh back with EV certificate for Stripe, Inc of Kentucky....

2018-04-13 Thread Ryan Hurst via dev-security-policy
On Thursday, April 12, 2018 at 5:39:39 PM UTC-7, Tim Hollebeek wrote:
> > Independent of EV, the BRs require that a CA maintain a High Risk
> Certificate
> > Request policy such that certificate requests are scrubbed against an
> internal
> > database or other resources of the CAs discretion.
> 
> Unless you're Let's Encrypt, in which case you can opt out of this
> requirement via a blog post.
> 
> -Tim

As you know, that is not what that post says, nor does it reflect what Let's 
Encrypt does.

The BRs define the High Risk Certificate Request as:

```
High Risk Certificate Request: A Request that the CA flags for additional 
scrutiny by reference to internal criteria and databases maintained by the CA, 
which may include names at higher risk for phishing or other fraudulent usage, 
names contained in previously rejected certificate requests or revoked 
Certificates, names listed on the Miller Smiles phishing list or the Google 
Safe Browsing list, or names that the CA identifies using its own 
risk-mitigation criteria.
```

It also explicitly allows phishing lists, such as the Google Safe Browsing list, to be used.

The blog post in question 
(https://letsencrypt.org/2015/10/29/phishing-and-malware.html) states that 
Let's Encrypt (rightfully in my mind) believes that CAs are not the right place 
to try to protect users from Phishing. They state this for a variety of 
reasons, including one brought up in this thread about making CAs censors on 
the web.

They go on to state that, despite thinking CAs are not the right place to solve this problem:

```
At least for the time being, Let’s Encrypt is going to check with the Google 
Safe Browsing API before issuing certificates, and refuse to issue to sites 
that are flagged as phishing or malware sites. Google’s API is the best source 
of phishing and malware status information that we have access to, and 
attempting to do more than query this API before issuance would almost 
certainly be wasteful and ineffective.
```

They have also publicly stated that they maintain a blacklist of domains they 
will not issue for.
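
For illustration, the kind of pre-issuance check described in that quote, against the Safe Browsing Lookup API (v4), looks roughly like the sketch below. This is my illustration, not Let's Encrypt's code; the API key, client identifiers, and domain are placeholders.

```
# Sketch: query the Google Safe Browsing v4 Lookup API before issuance.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder

def flagged_by_safe_browsing(domain):
    body = {
        "client": {"clientId": "example-ca", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": "http://" + domain + "/"}],
        },
    }
    req = urllib.request.Request(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return bool(json.load(resp).get("matches"))

# Refuse issuance when flagged_by_safe_browsing(requested_domain) is True.
```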

Ryan Hurst
(speaking for myself, not Google or Let's Encrypt)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-05 Thread Ryan Hurst via dev-security-policy
On Thursday, April 5, 2018 at 9:55:39 AM UTC-7, Wayne Thayer wrote:
> On Thu, Apr 5, 2018 at 3:15 AM, Dimitris Zacharopoulos 
> wrote:
> 
> > My proposal is "CAs MUST NOT distribute or transfer private keys and
> > associated certificates in PKCS#12 form through insecure physical or
> > electronic channels " and remove the rest.
> >
> > +1 - I support this proposal.

That seems an appropriate level of detail for policy. +1
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Require English Language Audit Reports

2018-04-04 Thread Ryan Hurst via dev-security-policy

> An authoritative English language version of the publicly-available audit
> information MUST be supplied by the Auditor.
> 
> it would be helpful for auditors that issue report in languages other than
> English to confirm that this won't create any issues.

That would address my concern.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: FW: Complying with Mozilla policy on email validation

2018-04-04 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 4, 2018 at 3:39:46 PM UTC-7, Wayne Thayer wrote:
> On Wed, Apr 4, 2018 at 2:44 PM, Ryan Hurst via dev-security-policy <
> > My opinion on this method and on Adrian's comments is that the CA/Browser
> Forum, with it's new-found ability to create an S/MIME Working Group, is a
> better venue for formulating secure email validation methods. Does it make
> sense for us to define more specific email validation methods in this forum
> when it's likely the CA/Browser Forum will do the same in the next year or
> two?

I understand that position, and maybe this is acceptable, but I believe the 
removal of "business controls" (which to be clear I like) prohibits this 
practice when it is reasonable and even desirable.

I was thinking that, until an S/MIME policy is established, some accommodation of federated login in the Mozilla policy, accompanying the removal of "business controls", would address that.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-04 Thread Ryan Hurst via dev-security-policy
Some thoughts:

1 - Should additional text be included to mandate that strong cipher suites (http://unmitigatedrisk.com/?p=543) be used? It is not uncommon for me to find PKCS#12s with very weak cryptographic algorithms in use. Such guidance would be limited by Windows, which does not support modern cryptographic algorithms for key protection, but having some standard would be better than none, though it would potentially hurt interoperability for those use cases if the chosen suites were not uniform.

2 - Should additional text be included to mandate that CA resellers cannot be used as an escape from this requirement? E.g., today a CA may simply rely on a third party to implement this practice to stay in conformance with the policy.

3 - Should additional text be included to require that the user provide part or all of the secret used as the "password" on the PKCS#12 file, and that the CA cannot store the user-provided value?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Require English Language Audit Reports

2018-04-04 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 4, 2018 at 1:58:35 PM UTC-7, Wayne Thayer wrote:
> Mozilla needs to be able to read audit reports in the English language
> without relying on machine translations that may be inaccurate or
> misleading.
> 
> I suggest adding the following sentence to the end of policy section 3.1.4
> “Public Audit Information”:
> 
> An English language version of the publicly-available audit information
> MUST be supplied by the Auditor.
> 
> This is: https://github.com/mozilla/pkipolicy/issues/106
> 
> ---
> 
> This is a proposed update to Mozilla's root store policy for version
> 2.6. Please keep discussion in this group rather than on GitHub. Silence
> is consent.
> 
> Policy 2.5 (current version):
> https://github.com/mozilla/pkipolicy/blob/2.5/rootstore/policy.md

Should the text require the English version to be the authoritative version?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: FW: Complying with Mozilla policy on email validation

2018-04-04 Thread Ryan Hurst via dev-security-policy
On Tuesday, April 3, 2018 at 1:17:50 PM UTC-7, Wayne Thayer wrote:
> > I agree that name constraints would be difficult to implement in this
> scenario, but I'm less convinced that section 2.2(2) doesn't permit this.
> It says:
> 
> 
> *For a certificate capable of being used for digitally signing or
> encrypting email messages, the CA takes reasonable measures to verify that
> the entity submitting the request controls the email account associated
> with the email address referenced in the certificate or has been authorized
> by the email account holder to act on the account holder’s behalf.*

I can see that covering it. Maybe this could be provided as an explicit example 
of how that might happen?

> > Another case I think is interesting is that of a delegation of email
> > verification to a third-party. For example, when you do a OAUTH
> > authentication to Facebook it will return the user’s email address if it
> > has been verified. The same is true for a number of related scenarios, for
> > example, you can tell via Live Authentication and Google Authentication if
> > the user's email was verified.
> >
> > The business controls text plausibly would have allowed this use case also.
> >
> > I'm not a fan of expanding the scope of such a vague requirement as
> "business controls", and I'd prefer to have the CA/Browser Forum define
> more specific validation methods, but if section 2.2(2) of our current
> policy is too limiting, we can consider changing it to accommodate this use
> case.

I dislike "business controls" also; however, in this case the large majority of authentication on the web happens via OAUTH, and federated user authentication is not something we will see going away.

It seems broken to have a policy that prohibits this in the case of secure email or other related use cases of these certificates.

Maybe this can be addressed through an explicit carve-out for the case of federated authentication systems that provide a reliable verification of control of an email address.

Alternatively, maybe Mozilla should maintain a list of common providers for which Mozilla says this is allowable (Google, Microsoft, Facebook, and Twitter, for example).
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: FW: Complying with Mozilla policy on email validation

2018-04-03 Thread Ryan Hurst via dev-security-policy
On Monday, April 2, 2018 at 1:10:13 PM UTC-7, Wayne Thayer wrote:
> I'm forwarding this for Tim because the list rejected it as SPAM.
> 
> 
> 
> *From:* Tim Hollebeek
> *Sent:* Monday, April 2, 2018 2:22 PM
> *To:* 'mozilla-dev-security-policy'  lists.mozilla.org>
> *Subject:* Complying with Mozilla policy on email validation
> 
> 
> 
> 
> 
> Mozilla policy currently has the following to say about validation of email
> addresses in certificates:
> 
> 
> 
> “For a certificate capable of being used for digitally signing or
> encrypting email messages, the CA takes reasonable measures to verify that
> the entity submitting the request controls the email account associated
> with the email address referenced in the certificate or has been authorized
> by the email account holder to act on the account holder’s behalf.”
> 
> 
> 
> “If the certificate includes the id-kp-emailProtection extended key usage,
> then all end-entity certificates MUST only include e-mail addresses or
> mailboxes that the issuing CA has confirmed (via technical and/or business
> controls) that the subordinate CA is authorized to use.”
> 
> 
> 
> “Before being included and periodically thereafter, CAs MUST obtain certain
> audits for their root certificates and all of their intermediate
> certificates that are not technically constrained to prevent issuance of
> working server or email certificates.”
> 
> 
> 
> (Nit: Mozilla policy is inconsistent in it’s usage of email vs e-mail.  I’d
> fix the one hyphenated reference)
> 
> 
> 
> This is basically method 1 for email certificates, right?  Is it true that
> Mozilla policy today allows “business controls” to be used for validating
> email addresses, which can essentially be almost anything, as long as it is
> audited?
> 
> 
> 
> (I’m not talking about what the rules SHOULD be, just what they are.  What
> they should be is a discussion we should have in a newly created CA/* SMIME
> WG)
> 
> 
> 
> -Tim

Reading this thread, I think the current text, based on the interpretation discussed, does not accommodate a few cases that I think are useful.

For example, consider a CA supporting a large mail provider in providing S/MIME certificates to all of its customers. In this model, the mail provider is the authoritative namespace owner.

In the context of mail, you can imagine gmail.com or peculiarventures.com as examples; both are Gmail (as determined by MX records). It seems reasonable to me (speaking as Ryan and not Google here) to allow such a mail provider to leverage this internet reality (expressed via MX records) and work with a CA to get S/MIME certificates for all of its customers without forcing them through an email challenge.

In this scenario, you could not rely on name constraints because the onboarding of custom domains (like peculiarventures.com) happens in real time as part of account creation. The prior business controls text seemed to allow this case, but it seems the interpretation discussed here would prohibit it.
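
A minimal sketch of the kind of MX-based check I mean follows (my illustration only, not a proposed policy mechanism), assuming the third-party dnspython library; the provider MX suffixes are placeholders.

```
# Sketch: confirm a customer domain's mail is hosted by the cooperating mail
# provider by checking that every MX record points at that provider.
import dns.resolver  # third-party: dnspython

PROVIDER_MX_SUFFIXES = (".google.com.", ".googlemail.com.")  # placeholders

def domain_hosted_by_provider(domain):
    exchanges = [r.exchange.to_text().lower() for r in dns.resolver.resolve(domain, "MX")]
    return bool(exchanges) and all(ex.endswith(PROVIDER_MX_SUFFIXES) for ex in exchanges)

print(domain_hosted_by_provider("peculiarventures.com"))
```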


Another case I think is interesting is that of delegation of email verification to a third party. For example, when you do an OAUTH authentication to Facebook, it will return the user's email address if it has been verified. The same is true for a number of related scenarios; for example, you can tell via Live Authentication and Google Authentication whether the user's email was verified.

The business controls text plausibly would have allowed this use case also.
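
To make the delegation idea concrete, here is a minimal sketch of reading the "email" and "email_verified" claims from an OpenID Connect ID token. Signature, issuer, audience, and expiry validation are elided here and would of course be mandatory before trusting the claims.

```
# Sketch: extract the email claims from an OIDC ID token payload.
import base64
import json

def email_claims(id_token):
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("email"), bool(claims.get("email_verified"))

# Usage: email, verified = email_claims(id_token_from_google_or_facebook)
```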

I think a policy that does not allow a CA to support these use cases would severely limit where S/MIME could be used, and I would like to see them considered.

Ryan Hurst
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Following up on Trustico: reseller practices and accountability

2018-03-05 Thread Ryan Hurst via dev-security-policy
On Monday, March 5, 2018 at 11:38:31 AM UTC-8, Ryan Sleevi wrote:
> While these are interesting questions, I think it gets to the heart of
> policy questions, which is how is policy maintained and enforced. Today,
> there’s only one method - distrust.
> 
> So are you suggesting the CA should be distrusted if these “other parties”
> (which may have no observable relationship with the CA) don’t adhere to
> this policy? Are you suggesting the certificates these “other parties” are
> involved with get distrusted?  Or something else?
> 
> Because without teeth, the policy suggestions themselves are hollow.

That is a very valid point. 

Well, since I do not have a concrete proposal it is hard to say at this point if a CA should be kicked out for non-conformance with a given criterion. With that said, today there are over 20 SHOULDs in the BRs, and I can imagine failures to meet those SHOULDs being considered in aggregate when looking at a distrust event.

If nothing else, addressing any potential ambiguity would be useful.

> 
> I disagree on that venue suggestion, since here we can actually have
> widespread public participation. I would also suggest that Section 1.3 of
> the Bylaws would no doubt be something constantly having to be pointed out
> in such discussions.
> 

Fair enough; as I am on a plane to a CA/Browser Forum event, maybe that is why I had this venue on my mind. I agree this is a fine venue for this discussion.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Following up on Trustico: reseller practices and accountability

2018-03-05 Thread Ryan Hurst via dev-security-policy
I agree with Sleevi on this, the real question on what can and should be done 
here is dependent on who the reseller is an agent of and what role they play in 
the overall ecosystem.

While it is easy to say that resellers are pure marketers with no vested interest in security outcomes, and there is some truth to this, the reality is far more complex. For one, there is no one-size-fits-all definition of a reseller, for example:

- Hosting “reseller” - As a hosting provider, for example one that utilizes CPANEL, you may be responsible for enrolling for certificates and generating keys for users as well as managing the lifecycle. You are clearly acting “as a reseller” if you are selling “a certificate”, but you are also acting as a delegate of the user if you are configuring and managing SSL for them.
- SaaS “reseller” - As a SaaS provider, for example one that hosts Wordpress, you may be responsible for enrolling for certificates and generating keys for users as well as managing the lifecycle. You are clearly acting “as a reseller” if you are selling “a certificate”, but you are also acting as a delegate of the user if you are configuring and managing SSL for them.
- Marketing “resellers” - As a pure reseller, for example one that offers regional sales and marketing support, you are clearly acting as a delegate of the CA by providing marketing and sales support for a vertical, region, or market segment, but you could very well be providing value-added services to the user (such as simplifying enrollment and/or SSL configuration) and as such are again a delegate of both parties.

As I look at this non-exhaustive list, it seems to me the difference between the reseller and the more typical SaaS service provider, where SSL is possibly a paid feature, is the sale of a certificate.

With that said, since there are so many different types of “other parties” it 
is probably better to avoid discussing resellers directly and focus on 
responsibilities of “other parties” instead.

For example, today the BRs require that CAs and RAs:
- Require consent to subscriber key archival (section 6.1.2),
- Require the encryption of the subscriber's private key in transport (section 6.1.2).

In no particular order here are some questions I have for myself on this topic:

- Should we provide a definition of “other parties”, and “reseller” and make 
sure they are clear so responsibilities of parties are unambiguous?
- In the BRs we currently say “Parties other than the Subscriber SHALL NOT 
archive the Subscriber Private Key” (in Section 6.1.2) should we also say that 
the CAs should be required to demonstrate they have communicated this 
requirement to the other party and get affirmative acknowledgement from the 
“other party” during their audits?
- The BRs currently state that subscriber authorization is required for archival, but there is no text covering what minimal level of authorization must be expressed. I would have thought this unnecessary, but Trustico has been arguing that users should have implicitly known they had this practice, even though it was not disclosed and there was no explicit consent for archival. While I think that is an irresponsible position, the text could be made clearer.
- The current BR text talks about RAs generating keys on behalf of the subscriber (section 6.1.2), but it says nothing about other parties.
- Should the BRs be revised to require CAs to have the “other parties” publicly 
disclose if they generate keys for users, how they protect them at rest and in 
transport, and how they capture consent for these practices if at all?
- Though the concept of key archival for RAs and CAs is allowed in the BRs (section 6.1.2), it does not require keys be encrypted while in archive. Should this be changed? At the same time, should we mandate some minimal level of protection that would prevent all user keys from being accessed without user consent, as was done here?
- Today the BRs inconsistently discuss private key archival; for example, section 6.2.5 talks about CA key archival but not subscriber archival. Should we fix this?
- Should we formalize a proof of possession mechanism, such as what is done in ACME, as an alternative to sharing the actual key, so as to encourage this approach instead of distribution of the actual key? (A minimal sketch follows this list.)
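
As an illustration of that last point, here is a minimal sketch of CSR-based proof of possession, where the verifier checks the requester's self-signature instead of ever receiving the private key. It assumes the pyca/cryptography library; the file name is a placeholder.

```
# Sketch: CSR-based proof of possession; the verifier checks the requester's
# self-signature rather than ever handling the private key itself.
from cryptography import x509

def csr_proves_possession(csr_pem):
    csr = x509.load_pem_x509_csr(csr_pem)
    # True only if the CSR was signed by the private key matching its public key.
    return csr.is_signature_valid

# Usage: csr_proves_possession(open("request.csr", "rb").read())
```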

One thing for us to keep in mind while looking at these issues is that we are moving to a world where SSL is the default, and for that to be true, automation and permissionless SSL deployment are necessary.

This discussion is probably better for the CABFORUM public list but since the 
thread started here I thought it best to share my thoughts here.

Ryan Hurst
Google Trust Services
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla’s Plan for Symantec Roots

2018-03-01 Thread Ryan Hurst via dev-security-policy
> >
> > Google requests that certain subCA SPKIs are whitelisted, to ensure
> > continued trust of Symantec-issued certificates that are used by
> > infrastructure that is operated by Google.
> >
> > Is whitelisting the SPKI found in the Google subCA sufficient to achieve
> > the need of trusting Google's server infrastructure?

Kai,

I will do my best to answer this question.

Alphabet has a policy that all of its companies should be getting certificates 
from the Google PKI infrastructure. Right now in the context of certificate 
chains you see that manifested as certificates issued under GIAG2 and GIAG3.

We are actively migrating from GIAG2 (issued under a Symantec owned Root) to 
GIAG3 (issued under a root we own and operate). This transition will be 
complete in August 2018.

Given the size and nature of the Google organization, sometimes other CAs are used, either by accident because the team did not know any better, because the organization is part of an acquisition that is not yet integrated, or because there is some sort of exceptional requirement or situation that necessitates it.

For this, and other reasons, we tell partners that we reserve the right to use other roots should the need arise, and we publish a list of root certificates we may use (https://pki.goog/faq.html, see “what roots to trust”).

With that background, nearly all certificates for Alphabet (and Google) properties will be issued by a Google operated CA.

In the context of the whitelist, we believe the SPKI approach should be sufficient for those applications that also need to whitelist the associated CA(s).

I am also not aware of any Alphabet properties utilizing DigiCert's Managed Partner Infrastructure (beyond one subCA they operate that is not in use).

In summary, while an SPKI whitelist should work for the current situation, applications communicating with Alphabet properties should still trust (and periodically update to) the more complete list of roots listed in the FAQ.
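
For anyone implementing such a whitelist, here is a minimal sketch of computing an SPKI hash (SHA-256 over the DER-encoded subjectPublicKeyInfo, the same value used for public key pinning), assuming the pyca/cryptography library; the file name is a placeholder.

```
# Sketch: compute a base64-encoded SPKI pin for a subCA certificate.
import base64
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

cert = x509.load_pem_x509_certificate(open("subca.pem", "rb").read())  # placeholder file
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
print(base64.b64encode(hashlib.sha256(spki).digest()).decode())
```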

Ryan Hurst
Google
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Deadline for whitelisting of the Apple/Google subCAs issued by Symantec?

2018-03-01 Thread Ryan Hurst via dev-security-policy
On Thursday, March 1, 2018 at 7:15:52 AM UTC-8, Kai Engert wrote:

> Are the owners of the Apple and Google subCAs able to announce a date,
> after which they will no longer require their Symantec-issued subCAs to
> be whitelisted?

Kai,

We are actively migrating to the Google Trust Services operated root certificates, and while we would love to provide a concrete date, the nature of these sorts of deployments makes that hard to do.

What I can say is that our plan is to be migrated off by the time the Equifax root expires on August 22nd, 2018.

Ryan Hurst
Google

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Allowing WebExtensions to Override Certificate Trust Decisions

2018-02-28 Thread Ryan Hurst via dev-security-policy
On Wednesday, February 28, 2018 at 10:42:25 AM UTC-8, Alex Gaynor wrote:
> If the "fail verification only" option is not viable, I personally think we
> shouldn't expose this to extensions.
> 

I agree, there are far too many ways this will be abused and the cases in which 
it would be useful are not worth the negative consequences to the average 
browser user, at least in my opinion.

Ryan Hurst
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: How do you handle mass revocation requests?

2018-02-28 Thread Ryan Hurst via dev-security-policy
On Wednesday, February 28, 2018 at 11:56:04 AM UTC-8, Ryan Sleevi wrote:
> Assuming Trustico sent the keys to DigiCert, it definitely sounds like even
> if Trustico was authorized to hold the keys (which is a troubling argument,
> given all things), they themselves compromised the keys of their customers,
> and revocation is both correct and necessary. That is, whether or not
> Trustico believed they were compromised before, they compromised their
> customers keys by sending them, and it's both correct and accurate to
> notify the Subscribers that their keys have been compromised by their
> Reseller.

That seems to be the case to me as well.

It also seems that this situation should result in the UAs and/or CABFORUM revisiting section 6.1.2 of the BRs (https://github.com/cabforum/documents/blob/master/docs/BR.md).

Specifically, this section states:

```
Parties other than the Subscriber SHALL NOT archive the Subscriber Private Key 
without authorization by the Subscriber.

If the CA or any of its designated RAs generated the Private Key on behalf of 
the Subscriber, then the CA SHALL encrypt the Private Key for transport to the 
Subscriber.
```

In this case, Trustico is not the subscriber, and there is no indication in their terms and conditions (https://www.trustico.com/terms/terms-and-conditions.php) that they are authorized to archive the private key. Yet clearly, if they were able to provide 20k+ private keys to DigiCert, they are archiving them. This text seems to cover this case clearly, but as worded I do not see how audits would catch this behavior. I think it may make sense for CAs to be responsible for demonstrating how they, and other non-subscribers in the lifecycle flow, handle this case.

Additionally, if the private keys were provided to DigiCert in a way that made them verifiable by DigiCert, they may have been stored in a non-encrypted fashion; at a minimum, they were likely not generated and protected on an HSM. The BRs should probably be revised to specify some minimum level of security to be provided in these cases, or for these cases to simply be disallowed altogether.

Finally, the associated text speaks to RAs but not to the non-subscriber (reseller) case; at a minimum, this gap should be addressed.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-02-25 Thread Ryan Hurst via dev-security-policy
Tim,

I can see value in a ballot on how to clarify incident reporting and other
contact related issues, right now 1.5.2 is pretty sparse in regards to how
to handle this. I would be happy to work with you on a proposal here.

Ryan

On Sun, Feb 25, 2018 at 6:41 AM, Tim Hollebeek <tim.holleb...@digicert.com>
wrote:

> Ryan,
>
> Wayne and I have been discussing making various improvements to 1.5.2
> mandatory for all CAs.  I've made a few improvements to DigiCert's CPSs in
> this area, but things probably still could be better.  There will probably
> be
> a CA/B ballot in this area soon.
>
> DigiCert's 1.5.2 has our support email address, and our Certificate Problem
> Report email (which I recently added).  That doesn't really cover
> everything
> (yet).
>
> It looks like GTS 1.5.2 splits things into security (including CPRs),
> non-security
> requests.
>
> I didn't chase down any other 1.5.2's yet, but it'd be interesting to hear
> what
> other CAs have here.  I suspect most only have one address for everything.
>
> Something to keep in mind once the CA/B thread shows up.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Ryan
> > Hurst via dev-security-policy
> > Sent: Wednesday, February 21, 2018 9:53 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: Google OCSP service down
> >
> > I wanted to follow up with our findings and a summary of this issue for
> the
> > community.
> >
> > Bellow you will see a detail on what happened and how we resolved the
> issue,
> > hopefully this will help explain what hapened and potentially others not
> > encounter a similar issue.
> >
> > Summary
> > ---
> > January 19th, at 08:40 UTC, a code push to improve OCSP generation for a
> > subset of the Google operated Certificate Authorities was initiated. The
> change
> > was related to the packaging of generated OCSP responses. The first time
> this
> > change was invoked in production was January 19th at 16:40 UTC.
> >
> > NOTE: The publication of new revocation information to all geographies
> can
> > take up to 6 hours to propagate. Additionally, clients and middle-boxes
> > commonly implement caching behavior. This results in a large window where
> > clients may have begun to observe the outage.
> >
> > NOTE: Most modern web browsers “soft-fail” in response to OCSP server
> > availability issues, masking outages. Firefox, however, supports an
> advanced
> > option that allows users to opt-in to “hard-fail” behavior for revocation
> > checking. An unknown percentage of Firefox users enable this setting. We
> > believe most users who were impacted by the outage were these Firefox
> users.
> >
> > About 9 hours after the deployment of the change began (2018-01-20 01:36
> > UTC) a user on Twitter mentions that they were having problems with their
> > hard-fail OCSP checking configuration in Firefox when visiting Google
> > properties. This tweet and the few that followed during the outage
> period were
> > not noticed by any Google employees until after the incident’s
> post-mortem
> > investigation had begun.
> >
> > About 1 day and 22 hours after the push was initiated (2018-01-21 15:07
> UTC),
> > a user posted a message to the mozilla.dev.security.policy mailing list
> where
> > they mention they too are having problems with their hard-fail
> configuration in
> > Firefox when visiting Google properties.
> >
> > About two days after the push was initiated, a Google employee
> discovered the
> > post and opened a ticket (2018-01-21 16:10 UTC). This triggered the
> > remediation procedures, which began in under an hour.
> >
> > The issue was resolved about 2 days and 6 hours from the time it was
> > introduced (2018-01-21 22:56 UTC). Once Google became aware of the
> issue, it
> > took 1 hour and 55 minutes to resolve the issue, and an additional 4
> hours and
> > 51 minutes for the fix to be completely deployed.
> >
> > No customer reports regarding this issue were sent to the notification
> > addresses listed in Google's CPSs or on the repository websites for the
> duration
> > of the outage. This extended the duration of the outage.
> >
> > Background
> > --
> > Google's OCSP Infrastructure works by generating OCSP responses in
> batches,
> > with each batch being made up of the certificates issued by an
> individual CA.
> >
> > In the case of GI

Re: Google OCSP service down

2018-02-21 Thread Ryan Hurst via dev-security-policy
I wanted to follow up with our findings and a summary of this issue for the 
community. 

Below you will find detail on what happened and how we resolved the issue; hopefully this will help explain what happened and help others avoid encountering a similar issue.

Summary
---
January 19th, at 08:40 UTC, a code push to improve OCSP generation for a subset 
of the Google operated Certificate Authorities was initiated. The change was 
related to the packaging of generated OCSP responses. The first time this 
change was invoked in production was January 19th at 16:40 UTC. 

NOTE: The publication of new revocation information to all geographies can take 
up to 6 hours to propagate. Additionally, clients and middle-boxes commonly 
implement caching behavior. This results in a large window where clients may 
have begun to observe the outage.

NOTE: Most modern web browsers “soft-fail” in response to OCSP server 
availability issues, masking outages. Firefox, however, supports an advanced 
option that allows users to opt-in to “hard-fail” behavior for revocation 
checking. An unknown percentage of Firefox users enable this setting. We 
believe most users who were impacted by the outage were these Firefox users.

About 9 hours after the deployment of the change began (2018-01-20 01:36 UTC) a 
user on Twitter mentions that they were having problems with their hard-fail 
OCSP checking configuration in Firefox when visiting Google properties. This 
tweet and the few that followed during the outage period were not noticed by 
any Google employees until after the incident’s post-mortem investigation had 
begun. 

About 1 day and 22 hours after the push was initiated (2018-01-21 15:07 UTC), a 
user posted a message to the mozilla.dev.security.policy mailing list where 
they mention they too are having problems with their hard-fail configuration in 
Firefox when visiting Google properties.

About two days after the push was initiated, a Google employee discovered the 
post and opened a ticket (2018-01-21 16:10 UTC). This triggered the remediation 
procedures, which began in under an hour.

The issue was resolved about 2 days and 6 hours from the time it was introduced 
(2018-01-21 22:56 UTC). Once Google became aware of the issue, it took 1 hour 
and 55 minutes to resolve the issue, and an additional 4 hours and 51 minutes 
for the fix to be completely deployed.

No customer reports regarding this issue were sent to the notification 
addresses listed in Google's CPSs or on the repository websites for the 
duration of the outage. This extended the duration of the outage. 

Background
--
Google's OCSP Infrastructure works by generating OCSP responses in batches, 
with each batch being made up of the certificates issued by an individual CA.

In the case of GIAG2, this batch is produced in chunks of certificates issued 
in the last 370 days. For each chunk, the GIAG2 CA is asked to produce the 
corresponding OCSP responses, the results of which are placed into a separate 
.tar file.

The issuer of GIAG2 has chosen to issue new certificates to GIAG2 periodically, 
as a result GIAG2 has multiple certificates. Two of these certificates no 
longer have unexpired certificates associated with them. As a result, and as 
expected, the CA does not produce responses for the corresponding periods.

All .tar files produced during this process are then concatenated with the 
-concatenate command in GNU tar. This produces a single .tar file containing 
all of the OCSP responses for the given Certificate Authority, then this .tar 
file is distributed to our global CDN infrastructure for serving.

A change was made in how we batch these responses: specifically, instead of outputting many .tar files within a batch, a concatenation of all tar files was produced.

The change in question triggered an unexpected behaviour in GNU tar which then 
manifested as an empty tarball. These "empty" updates ended up being 
distributed to our global CDN, effectively dropping some responses, while 
continuing to serve responses for other CAs.

During testing of the change, this behaviour was not detected, as the tests did 
not cover the scenario in which some chunks did not contain unexpired 
certificates.

Findings

- The outage only impacted sites with TLS certificates issued by the GIAG2 CA 
as it was the only CA that met the required pre-conditions of the bug. 
- The bug that introduced this failure manifested itself as an empty container 
of OCSP responses. The root cause of the issue was an unexpected behavior of 
GNU tar relating to concatenating tar files.
- The outage was observed by revocation service monitoring as  “unknown 
certificate” (HTTP 404) errors. HTTP 404 errors are expected in OCSP responder 
operations; they typically are the result of poorly configured clients. These 
events are monitored and a threshold does exist for an on-call escalation.
- Due to a configuration error the designated Google team did 

Re: Google OCSP service down

2018-01-22 Thread Ryan Hurst via dev-security-policy
On Monday, January 22, 2018 at 1:26:01 AM UTC-8, ihave...@gmail.com wrote:
> Hi,
> 
> Just as an FYI, I am still getting 404. My geographic location is UAE if that 
> helps at all.
> 
> My openssl command:
> openssl ocsp -issuer gtsx1.pem -cert goodr1demopkigoog.crt -url 
> http://ocsp.pki.goog/GTSGIAG3  -CAfile gtsrootr1.pem 
> Error querying OCSP responder
> 77317:error:27075072:OCSP routines:PARSE_HTTP_LINE1:server response 
> error:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59.60.1/src/crypto/ocsp/ocsp_ht.c:224:Code=404,Reason=Not
>  Found

Tham,

It seems you are not specifying the Host header, which is required by HTTP/1.1 and therefore by RFC 2560's HTTP transport:

Here is what a command for that root would look like:
openssl ocsp -issuer r1goodissuer.cer -cert r1good.cer -no_nonce -text -url "http://ocsp.pki.goog/GTSGIAG3" -header host ocsp.pki.goog

Ryan
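
P.S. For anyone scripting this check rather than using the openssl CLI, here is a rough equivalent sketch using the pyca/cryptography OCSP helpers. It assumes the files referenced in the command above are PEM-encoded, and urllib supplies the HTTP/1.1 Host header (here ocsp.pki.goog) automatically.

```
# Sketch: build and POST an OCSP request, then read the response status.
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

cert = x509.load_pem_x509_certificate(open("r1good.cer", "rb").read())
issuer = x509.load_pem_x509_certificate(open("r1goodissuer.cer", "rb").read())

der_request = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())
    .build()
    .public_bytes(serialization.Encoding.DER)
)

http_request = urllib.request.Request(
    "http://ocsp.pki.goog/GTSGIAG3",
    data=der_request,
    headers={"Content-Type": "application/ocsp-request"},
)
with urllib.request.urlopen(http_request) as resp:
    ocsp_response = ocsp.load_der_ocsp_response(resp.read())
print(ocsp_response.response_status)
```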
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
On Sunday, January 21, 2018 at 1:42:59 PM UTC-8, Ryan Hurst wrote:
> On Sunday, January 21, 2018 at 1:29:58 PM UTC-8, s...@gmx.ch wrote:
> > Hi
> > 
> > Thanks for investigating.
> > 
> > I can confirm that the service is now working again for me most of the
> > time, but some queries still fail (may be due load balancing in the
> > backend?).
> > 
> 
> Thank you for your report and confirming you are seeing things starting to 
> work.
> 
> Google operates a global network utilizing many redundant servers and the 
> nature of the way that works is one connection to the next you may be hitting 
> a different cluster of servers. 
> 
> It can take a while for all of these different clusters to receive the 
> associated updates.
> 
> This would explain your inconsistent results.
> 
> I am actively watching this deployment to ensure it completes successfully 
> but at this point, it seems all will continue to roll out as expected.
> 
> As an aside, We are still continuing our post-mortem.

The issue should be 100% resolved now.

As per earlier posts, we will complete the post-mortem and report to the 
community with our findings.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
On Sunday, January 21, 2018 at 1:29:58 PM UTC-8, s...@gmx.ch wrote:
> Hi
> 
> Thanks for investigating.
> 
> I can confirm that the service is now working again for me most of the
> time, but some queries still fail (may be due load balancing in the
> backend?).
> 

Thank you for your report and confirming you are seeing things starting to work.

Google operates a global network utilizing many redundant servers, and the nature of the way that works is that from one connection to the next you may be hitting a different cluster of servers.

It can take a while for all of these different clusters to receive the 
associated updates.

This would explain your inconsistent results.

I am actively watching this deployment to ensure it completes successfully but 
at this point, it seems all will continue to roll out as expected.

As an aside, we are still continuing our post-mortem.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
> > Is there a known contact to report it (or is someone with a Google hat
> > reading this anyway)?
> 

David,

I am sorry you experienced difficulty in contacting us about this issue. 

We maintain contact details both within our CPS (like other CAs) and at 
https://pki.goog so that people can reach us expeditiously. In the future if 
anyone needs to reach us please use those details.

Google is a large organization and when other teams are contacted (such as DNS) 
we do not have control over when and if those issues will reach us. 

We are actively working on a post mortem on this issue and when it is complete 
we will share it in this thread.

Thanks for your help in this matter,

Ryan Hurst
Product Manager
Google
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy

> 
> We are investigating the issue and will provide a update when that 
> investigation is complete.
> 
> Thank you for letting us know.
> 
> Ryan Hurst
> Product Manager
> Google

I wanted to provide an update to the group. The issue has been identified and a 
roll out of the fix is in progress across all geographies.

I have personally verified the fix in several geographies.

A post mortem will be created and shared with the group as soon as it is ready.

Ryan Hurst
Product Manager
Google
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
On Sunday, January 21, 2018 at 8:13:30 AM UTC-8, David E. Ross wrote:
> On 1/21/2018 7:47 AM, Paul Kehrer wrote:
> > Is there a known contact to report it (or is someone with a Google hat
> > reading this anyway)?
> 
> On Friday (two days ago), I reported this to dns-ad...@google.com, the
> only E-mail address in the WhoIs record for google.com.
> 
> I received an automated reply indicating that security issues should
> instead be reported to secur...@google.com. I immediately resent
> (Thunderbird's Edit As New Message) to secur...@google.com.
> 
> I then received an automated reply from secur...@google.com that listed
> a variety of Web addresses for reporting various problems.  I replied
> via E-mail to secur...@google.com:
> > Because of the OCSP failure, I am unable to reach any of the google.com
> > Web site cited in your reply.
> 
> Yes, I could disable OCSP checking.  But I my need for Google is
> insufficient for me to browse insecurely.
> 
> By the way, in SeaMonkey 2.49.1 (the latest version) the Google Internet
> Authority G2 certificate appears to be an intermediate, signed by the
> GeoTrust Global CA root.
> 
> There is a pending request (bug #1325532) from Google to add a Google
> root certificate to NSS.  Given the inadequacy of Google's current
> information on reporting security problems, I have doubts whether this
> request should be approved.
> 
> See .
> 
> -- 
> David E. Ross
> 
> 
> President Trump:  Please stop using Twitter.  We need
> to hear your voice and see you talking.  We need to know
> when your message is really your own and not your attorney's.


We are investigating the issue and will provide an update when that investigation is complete.

Thank you for letting us know.

Ryan Hurst
Product Manager
Google
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Updating Root Inclusion Criteria

2018-01-17 Thread Ryan Hurst via dev-security-policy
On Tuesday, January 16, 2018 at 3:46:03 PM UTC-8, Wayne Thayer wrote:
> I would like to open a discussion about the criteria by which Mozilla
> decides which CAs we should allow to apply for inclusion in our root store.
> 
> Section 2.1 of Mozilla’s current Root Store Policy states:
> 
> CAs whose certificates are included in Mozilla's root program MUST:
> > 1.provide some service relevant to typical users of our software
> > products;
> >
> 
> Further non-normative guidance for which organizations may apply to the CA
> program is documented in the ‘Who May Apply’ section of the application
> process at https://wiki.mozilla.org/CA/Application_Process . The original
> intent of this provision in the policy and the guidance was to discourage a
> large number of organizations from applying to the program solely for the
> purpose of avoiding the difficulties of distributing private roots for
> their own internal use.
> 
> Recently, we’ve encountered a number of examples that cause us to question
> the usefulness of the currently-vague statement(s) we have that define
> which CAs to accept, along a number of different axes:
> 
> * Visa is a current program member that has an open request to add another
> root. They only issue a relatively small number of certificates per year to
> partners and for internal use. They do not offer certificates to the
> general public or to anyone with whom they do not have an existing business
> relationship.
> 
> * Google is also a current program member, admitted via the acquisition of
> an existing root, but does not currently, to the best of our knowledge,
> meet the existing inclusion criteria, even though it is conceivable that
> they would issue certificates to the public in the future.
> 
> * There are potential applicants for CA status who deploy a large number of
> certificates, but only on their own infrastructure and for their own
> domains, albeit that this infrastructure is public-facing rather than
> company-internal.
> 
> * We have numerous government CAs in the program or in the inclusion
> process that only intend to issue certificates to their own institutions.
> 
> * We have at least one CA applying for the program that (at least, it has
> been reported in the press) is controlled by an entity which may wish to
> use it for MITM.
> 
> There are many potential options for resolving this issue. Ideally, we
> would like to establish some objective criteria that can be measured and
> applied fairly. It’s possible that this could require us to define
> different categories of CAs, each with different inclusion criteria. Or it
> could be that we should remove the existing ‘relevance’ requirement and
> inclusion guidelines and accept any applicant who can meet all of our other
> requirements.
> 
> With this background, I would like to encourage everyone to provide
> constructive input on this topic.
> 
> Thanks,
> 
> Wayne

Wayne,

I recall facing this topic at Microsoft when I was defining the root policy for 
them. At the time I failed to come up with language that effectively captured 
all of the use cases we felt were important, which is why we ended up with what 
was then a vague statement about broad value to Microsoft consumers.

With that said, despite the challenges associated with the task, I agree this 
is an area where clarity is needed.

Since Google's PKI was mentioned as an example, I can publicly state that the 
plan is for Google to utilize the Google Trust Services infrastructure to 
satisfy its SSL certificate needs. While I cannot announce specific product 
roadmaps, I can say that this includes the issuance of certificates for Google 
offerings involving hosting of products and services for customers.

Ryan Hurst
Product Manager 
Google
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Ryan Hurst via dev-security-policy
Sleevi,

Valid point; there was no intention to confuse. I have no current affiliation with
GlobalSign, though I once did.

The documentation that described the protocol no longer seems to be online, but
the behavior is observable and has been discussed in the validation working
group within the CABFORUM, so it is not a secret.

Ryan

On Sun, Jan 14, 2018 at 7:10 AM, Ryan Sleevi <r...@sleevi.com> wrote:

>
>
> On Sat, Jan 13, 2018 at 8:46 PM, Ryan Hurst via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Friday, January 12, 2018 at 6:10:00 PM UTC-8, Matt Palmer wrote:
>> > On Fri, Jan 12, 2018 at 02:52:54PM +, Doug Beattie via
>> dev-security-policy wrote:
>> > > I’d like to follow up on our investigation and provide the community
>> with some more information about how we use Method 9.
>> > >
>> > > 1)  Client requests a test certificate for a domain (only one
>> FQDN)
>> >
>> > Does this test certificate chain to a publicly-trusted root?  If so, on
>> what
>> > basis are you issuing a publicly-trusted certificate for a name which
>> > doesn't appear to have been domain-control validated?  If not, doesn't
>> this
>> > test certificate break the customer's SSL validation for the period the
>> > certificate is installed, while you do the validation?
>> >
>> > - Matt
>>
>> The certificate comes from a private PKI, not public one.
>
>
> Matt: The Baseline Requirements provide a definition of Test Certificate
> that applies to 3.2.2.4.9 that already addresses your concerns:
>
> Test Certificate: A Certificate with a maximum validity period of 30 days
> and which: (i) includes a critical
> extension with the specified Test Certificate CABF OID (2.23.140.2.1), or
> (ii) is issued under a CA where there
> are no certificate paths/chains to a root certificate subject to these
> Requirements.
>
> Ryan: I think it'd be good to let GlobalSign answer, or, if the answer is
> available publicly, to point them out. This hopefully helps avoid confusion
> :)
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-13 Thread Ryan Hurst via dev-security-policy
On Friday, January 12, 2018 at 6:10:00 PM UTC-8, Matt Palmer wrote:
> On Fri, Jan 12, 2018 at 02:52:54PM +, Doug Beattie via 
> dev-security-policy wrote:
> > I’d like to follow up on our investigation and provide the community with 
> > some more information about how we use Method 9.
> > 
> > 1)  Client requests a test certificate for a domain (only one FQDN)
> 
> Does this test certificate chain to a publicly-trusted root?  If so, on what
> basis are you issuing a publicly-trusted certificate for a name which
> doesn't appear to have been domain-control validated?  If not, doesn't this
> test certificate break the customer's SSL validation for the period the
> certificate is installed, while you do the validation?
> 
> - Matt

The certificate comes from a private PKI, not a public one.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Dashboard and Study on CAA Adoption

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Friday, December 15, 2017 at 7:10:11 AM UTC-8, Quirin Scheitle wrote:
> Dear all,
> 
> some colleagues and I want to share an academic study on CAA we have been 
> working on in the past months. 
> We hope that our findings can provide quantitative data to assist further 
> discussion, such as the “CAA-simplification” draft at IETF and work at the 
> validation-wg at CABF.
> We also give specific recommendations how *we think* that CAA can be improved.
> 
> The results, paper, and a dashboard tracking CAA adoption are available under 
> 
> https://caastudy.github.io/
> 
> [Please note that the paper discusses facts as of Nov 30]
> We will be happy to elaborate some aspects further, the paper does not 
> discuss all the details. 
> We have discussed previous drafts with various individuals in this community 
> and thank them for their inputs.
> 
> Kind regards
> Quirin and team

This is great work. Thank you.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Friday, December 15, 2017 at 1:34:30 PM UTC-8, Matthew Hardeman wrote:
> On Friday, December 15, 2017 at 3:21:54 PM UTC-6, Ryan Hurst wrote:
>  
> > Unfortunately, the PKCS#12 format, as supported by UAs and Operating 
> > Systems is not a great candidate for the role of carrying keys anymore. You 
> > can see my blog post on this topic here: http://unmitigatedrisk.com/?p=543
> > 
> > The core issue is the use of old cryptographic primitives that barely live 
> > up to the equivalent cryptographic strengths of keys in use today. The 
> > offline nature of the protection involved also enables an attacker to grind 
> > any value used as the password as well.
> > 
> > Any plan to allow a CA to generate keys on behalf of users, which I am not 
> > against as long as there are strict and auditable practices associated with 
> > it, needs to take into consideration the protection of those keys in 
> > transit and storage.
> > 
> > I also believe any language that would be adopted here would clearly 
> > addresses cases where a organization that happens to operate a CA but is 
> > also a relying party. For example Amazon, Google and Apple both operate 
> > WebTrust audited CAs but they also operate cloud services where they are 
> > the subscriber of that CA. Any language used would need to make it clear 
> > the relative scopes and responsibilities in such a case.
> 
> I had long wondered about the PKCS#12 issue.  To the extent that any file 
> format in use today is convenient for delivering a package of certificates 
> including a formal validation chain and associated private key(s), PKCS#12 is 
> so convenient and fairly ubiquitous.
> 
> It is a pain that the cryptographic and integrity portions of the format are 
> showing their age -- at least, as you point out, in the manner in which 
> they're actually implemented in major software today.

So I have read this thread in its entirety now, and I think it makes sense to 
reset to first principles, specifically:

- What are the technological and business goals trying to be achieved?
- What are the requirements derived from those goals?
- What are the negative consequences of those goals?

My feeling is there is simply an abstract desire to allow the CA, on behalf of 
the subject, to generate the keys, but we have not sufficiently articulated a 
business case for this.

In my experience building and working with embedded systems I, like Peter, have 
found it is possible to build a sufficient pseudo-random number generator on 
these devices. In practice, however, deployed devices commonly either do not do 
so or seed them poorly.
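
To make the seeding pitfall concrete, here is a toy sketch in Python; it is not 
drawn from any real firmware, and both patterns are illustrative only:

```
# Toy illustration only: contrast a poorly seeded deterministic PRNG with the
# OS CSPRNG. Neither line is from any real product.
import random
import secrets
import time

# Anti-pattern: seeding a deterministic PRNG from a low-entropy value such as
# boot time. A fleet of identical devices booted at similar times can end up
# generating predictable, or even identical, key material.
weak_rng = random.Random(int(time.time()))
weak_key_material = weak_rng.getrandbits(256).to_bytes(32, "big")

# Preferred: draw key material directly from the operating system's CSPRNG.
strong_key_material = secrets.token_bytes(32)
```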

This use case is one where transport would likely not need to be PKCS#12 given 
the custom nature of these solutions.

At the same time, these devices are often provisioned in a production line and 
the key generation could just as easily (and probably more appropriately) 
happen there.

In my experience as a CA, the desire to do server-side key generation almost 
always stems from a desire to reduce the friction for customers to acquire 
certificates for use in regular old web servers. Seldom does this case come up 
with network appliances, as they do not normally support the PKCS#12 format. 
While the reduction of friction is a laudable goal, it seems the better way to 
do that would be to adopt a protocol like ACME for certificate lifecycle 
management.

As I said in an earlier response, I am not against the idea of server-side key 
generation as long as:
- there is a legitimate business need,
- it can be done in a way that the CA does not have access to the key,
- the process by which this is done is fully transparent and auditable,
- the transfer of the key is done in a way that is sufficiently secure,
- the storage of the key is done in a way that is sufficiently secure,
- and we are extremely clear in how this can be done securely.

Basically, given the varying degrees of technical background and skill in the CA 
operator ecosystem, I believe allowing this without being extremely clear about 
the requirements is probably a case where the cure is worse than the ailment.

With that background I wonder, is this even worth exploring?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 12, 2017 at 1:08:24 PM UTC-8, Jakob Bohm wrote:
> On 12/12/2017 21:39, Wayne Thayer wrote:
> > On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> > 
> >> On 12/12/2017 19:39, Wayne Thayer wrote:
> >>
> >>> The outcome to be avoided is a CA that holds in escrow thousands of
> >>> private keys used for TLS. I don’t think that a policy permitting a CA to
> >>> generate the key pair is bad as long as the CA doesn’t hold on to the key
> >>> (unless  the certificate was issued to the CA or the CA is hosting the
> >>> site).
> >>>
> >>> What if the policy were to allow CA key generation but require the CA to
> >>> deliver the private key to the Subscriber and destroy the CA’s copy prior
> >>> to issuing a certificate? Would that make key generation easier? Tim, some
> >>> examples describing how this might be used would be helpful here.
> >>>
> >>>
> >> That would conflict with delivery in PKCS#12 format or any other format
> >> that delivers the key and certificate together, as users of such
> >> services commonly expect.
> >>
> >> Yes, it would. But it's a clear policy. If the requirement is to deliver
> > the key at the same time as the certificate, then how long can the CA hold
> > the private key?
> > 
> > 
> 
> Point is that many end systems (including Windows IIS) are designed to
> either import certificates from PKCS#12 or use a specific CSR generation
> procedure.  If the CA delivered the key and cert separately, then the
> user (who is apparently not sophisticated enough to generate their own
> CSR) will have a hard time importing the key+cert into their system.
> 
> > 
> >> It would also conflict with keeping the issuing CA key far removed from
> >> public web interfaces, such as the interface used by users to pick up
> >> their key and certificate, even if separate, as it would not be fun to
> >> have to log in twice with 1 hour in between (once to pick up key, then
> >> once again to pick up certificate).
> >>
> >> I don't think I understand this use case, or how the proposed policy
> > relates to the issuing CA.
> > 
> 
> If the issuing CA HSM is kept away from online systems and processes
> vetted issuance requests only in a batched offline manner, then a user
> responding to a message saying "your application has been accepted,
> please log in with your temporary password to retrieve your key and
> certificate" would have to download the key, after which the CA can
> delete key and queue the actual issuance to the offline CA system, and
> only after that can the user actually download their certificate.
> 
> Another thing with similar effect is the BR requirement that all the
> OCSP responders must know about issued certificates, which means that
> both the serial number and a hash of the signed certificate must be
> replicated to all the OCSP machines before the certificate is delivered.
> (One of the good OCSP extensions is to include a hash of the valid
> certificate in the OCSP response, thus allowing the relying party
> software to check that a "valid" response is actually for the
> certificate at hand).
> 
> 
> 
> 
> > 
> >> It would only really work with a CSR+key generation service where the
> >> user receives the key at application time, then the cert after vetting.
> >> And many end systems cannot easily import that.
> >>
> >> Many commercial CAs could accommodate a workflow where they deliver the
> > private key at application time. Maybe you are thinking of IOT scenarios?
> > Again, some use cases describing the problem would be helpful.
> > 
> 
> One major such use case is IIS or Exchange at the subscriber end.
> Importing the key and cert at different times is just not a feature of
> Windows server.
> 
> > 
> >> A policy allowing CAs to generate key pairs should also include provisions
> >>> for:
> >>> - The CA must generate the key in accordance with technical best practices
> >>> - While in possession of the private key, the CA must store it securely
> >>>
> >>> Wayne
> >>>
> >>>
> >>
> 
> 
> 
> Enjoy
> 
> Jakob
> -- 
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded

I agree that the "right way(tm)" is to have the keys generated in a HSM, the 
keys exported in ciphertext and for this to be done in a way that the CA can 
not decrypt the keys.

Technically the PKCS#12 format would allow for such a model as you can encrypt 
the keybag to a public key (in a certificate. You could, for example generate a 
key in a HSM, export it encrypted to a public key, and the CA would never see 
the key. 

This has several issues, the first is, of course, you must trust the CA not to 
use a different key; this could be addressed by requiring the code performing 
this logic to be made public, 
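
As a rough illustration of that model, here is a minimal sketch using Python's 
cryptography package; software key generation stands in for the HSM, and every 
name in it is hypothetical. The point is only that the CA handles nothing but 
ciphertext it cannot decrypt:

```
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Subscriber key pair; only the public half would ever be given to the CA/HSM.
subscriber_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
subscriber_pub = subscriber_key.public_key()

# Key generated "inside the HSM" (plain software generation here, for illustration).
generated_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
key_der = generated_key.private_bytes(
    serialization.Encoding.DER,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)

# Hybrid wrap: encrypt the key under a fresh AES-256-GCM content key, then wrap
# that content key to the subscriber's public key with RSA-OAEP.
cek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
wrapped_key = AESGCM(cek).encrypt(nonce, key_der, None)
wrapped_cek = subscriber_pub.encrypt(
    cek,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# The CA only ever stores or transmits (wrapped_cek, nonce, wrapped_key); only
# the subscriber's private key can recover the content key and the generated key.
```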

Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 12, 2017 at 11:31:18 AM UTC-8, Tim Hollebeek wrote:
> > A policy allowing CAs to generate key pairs should also include provisions
> > for:
> > - The CA must generate the key in accordance with technical best practices
> > - While in possession of the private key, the CA must store it securely
> 
> Don't forget appropriate protection for the key while it is in transit.  I'll 
> look a bit closer at the use cases and see if I can come up with some 
> reasonable suggestions.
> 
> -Tim

Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems is 
not a great candidate for the role of carrying keys anymore. You can see my 
blog post on this topic here: http://unmitigatedrisk.com/?p=543

The core issue is the use of old cryptographic primitives that barely live up 
to the equivalent cryptographic strengths of keys in use today. The offline 
nature of the protection involved also enables an attacker to grind any value 
used as the password as well.
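
For what it is worth, a modern library can at least apply its strongest 
supported PKCS#12 protection together with a long random password, which blunts 
the offline grinding described above. A minimal sketch with Python's 
cryptography package follows; it is illustrative only, and whether a given UA or 
OS will actually import the result is exactly the compatibility problem at 
issue:

```
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
from cryptography.hazmat.primitives.serialization.pkcs12 import serialize_key_and_certificates

# Throwaway key; a real bundle would also carry the end-entity cert and chain.
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# A long, randomly generated password resists offline grinding; the library
# then applies its best supported PKCS#12 protection algorithm.
password = secrets.token_urlsafe(32).encode()
p12_bytes = serialize_key_and_certificates(
    name=b"example-bundle",
    key=key,
    cert=None,
    cas=None,
    encryption_algorithm=BestAvailableEncryption(password),
)

with open("example-bundle.p12", "wb") as f:
    f.write(p12_bytes)
```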

Any plan to allow a CA to generate keys on behalf of users, which I am not 
against as long as there are strict and auditable practices associated with it, 
needs to take into consideration the protection of those keys in transit and 
storage.

I also believe any language that would be adopted here would need to clearly 
address cases where an organization that happens to operate a CA is also a 
relying party. For example, Amazon, Google, and Apple all operate 
WebTrust-audited CAs, but they also operate cloud services where they are the 
subscriber of that CA. Any language used would need to make clear the relative 
scopes and responsibilities in such a case.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: On the value of EV

2017-12-11 Thread Ryan Hurst via dev-security-policy
On Monday, December 11, 2017 at 12:41:02 PM UTC-8, Paul Wouters wrote:
> On Mon, 11 Dec 2017, James Burton via dev-security-policy wrote:
> 
> > EV is on borrowed time
> 
> You don't explain why?
> 
> I mean domain names can be confusing or malicious too. Are domain names
> on borrowed time?
> 
> If you remove EV, how will the users react when paypal or their bank is
> suddenly no longer "green" ? Are we going to teach them again that
> padlocks and green security come and go and to ignore it?
> 
> Why is your cure (remove EV) better than fixing the UI parts of EV?
> 
> Paul

The issues with EV are much larger than UI. It needs to be revisited: an honest 
and achievable set of goals needs to be established, and the processes and 
procedures used pre-issuance and post-issuance need to be defined in support of 
those goals. Until that's been done I cannot imagine any browser would invest 
in new UI and education of users for this capability.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: On the value of EV

2017-12-11 Thread Ryan Hurst via dev-security-policy
Stripe, Inc. could very well be a road striping company.

This may have situationally been the equivalent of a misleading certificate, but 
the scenario of name collisions is real.

Ryan Hurst
On Monday, December 11, 2017 at 11:39:57 AM UTC-8, Tim Hollebeek wrote:
> Nobody is disputing the fact that these certificates were legitimate given 
> the rules that exist today.
> 
> However, I don't believe "technically correct, but intentionally misleading" 
> information should be included in certificates.  The question is how best to 
> accomplish that.
> 
> -Tim
> 
> -Original Message-
> From: Jonathan Rudenberg [mailto:jonat...@titanous.com] 
> Sent: Monday, December 11, 2017 12:34 PM
> To: Tim Hollebeek 
> Cc: Ryan Sleevi ; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: On the value of EV
> 
> 
> > On Dec 11, 2017, at 14:14, Tim Hollebeek via dev-security-policy 
> >  wrote:
> > 
> > 
> > It turns out that the CA/Browser Validation working group is currently 
> > looking into how to address these issues, in order to tighten up 
> > validation in these cases.
> 
> This isn’t a validation issue. Both certificates were properly validated and 
> have correct (but very misleading information) in them. Business entity names 
> are not unique, so it’s not clear how validation changes could address this.
> 
> I think it makes a lot of sense to get rid of the EV UI, as it can be 
> trivially used to present misleading information to users in the most 
> security-critical browser UI area. My understanding is that the research done 
> to date shows that EV does not help users defend against phishing attacks, it 
> does not influence decision making, and users don’t understand or are 
> confused by EV.
> 
> Jonathan

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Welcome Wayne Thayer to Mozilla!

2017-11-27 Thread Ryan Hurst via dev-security-policy
That is great!

On Monday, November 27, 2017 at 4:04:09 PM UTC-8, Kathleen Wilson wrote:
> All,
> 
> I am pleased to announce that Wayne Thayer is now a Mozilla employee, 
> and will be working with me on our CA Program!
> 
> Many of you know Wayne from his involvement in this discussion forum and 
> in the CA/Browser Forum, as a representative for the Go Daddy CA. Wayne 
> was involved in Go Daddy's CA program from the beginning, so he has a 
> deep understanding of CA policies, audits, and standards.
> 
> Some of the things Wayne will be working on in his new role include:
> + Review of root inclusion/update requests in discussion.
> + Investigate more complex root inclusion/update requests.
> + Help with CA mis-issuance investigations, bugs, and discussions.
> + Lead prioritization, effort, and discussions to update Mozilla Root 
> Store Policy and CCADB Policy. (transition from Gerv over time)
> + Represent Mozilla in the CA/Browser Forum, along with Gerv.
> 
> I have added Wayne to the Policy_Participants wiki page:
> https://wiki.mozilla.org/CA/Policy_Participants
> 
> Welcome, Wayne!
> 
> Thanks,
> Kathleen

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CAs not compliant with CAA CP/CPS requirement

2017-09-08 Thread Ryan Hurst via dev-security-policy
I am responding from my personal account, but I can confirm that Google Trust 
Services does check CAA, and our policy was updated earlier today to reflect 
that.
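
For readers unfamiliar with what a CAA check involves, a heavily simplified 
sketch follows (Python with dnspython). This is not GTS's implementation; the 
real processing rules in RFC 6844 handle CNAMEs, wildcards, and critical flags 
that this toy omits, and the issuer domain shown is just a placeholder:

```
import dns.resolver  # dnspython

def caa_permits_issuance(domain: str, issuer_domain: str) -> bool:
    """Walk up the DNS tree looking for the closest CAA record set (simplified)."""
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        try:
            answers = dns.resolver.resolve(name, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue
        issue_values = [
            rr.value.decode() for rr in answers if rr.tag.decode() == "issue"
        ]
        # No "issue" tags at the relevant node: this toy treats that as permitted.
        if not issue_values:
            return True
        return any(v.split(";")[0].strip() == issuer_domain for v in issue_values)
    return True  # no CAA records anywhere in the tree: issuance is unrestricted

# "ca.example" is a placeholder issuer domain, not any real CA's CAA identity.
print(caa_permits_issuance("www.example.com", "ca.example"))
```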
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-08-29 Thread Ryan Hurst via dev-security-policy
On Monday, August 28, 2017 at 1:15:55 AM UTC-7, Nick Lamb wrote:
> I think that instead Ryan H is suggesting that (some) CAs are taking 
> advantage of multiple geographically distinct nodes to run the tests from one 
> of the Blessed Methods against an applicant's systems from several places on 
> the Internet at once. This mitigates against attacks that are able to disturb 
> routing only for the CA or some small corner of the Internet containing the 
> CA. For example my hypothetical 17 year-old at the ISP earlier in the thread 
> can't plausibly also be working at four other ISPs around the globe.
> 
> This is a mitigation not a fix because a truly sophisticated attacker can 
> obtain other certificates legitimately to build up intelligence about the 
> CA's other perspective points on the Internet and then attack all of them 
> simultaneously. It doesn't involve knowing much about Internet routing, 
> beyond the highest level knowledge that connections from very distant 
> locations will travel by different routes to reach the "same" destination.

Thanks, Nick, that is exactly what I was saying.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-08-25 Thread Ryan Hurst via dev-security-policy
Dimitris,

I think it is not accurate to characterize this as being outside of the CAs' 
control. Several CAs utilize multiple network perspectives and consensus to 
mitigate these risks. While this is not a total solution, it is fairly effective 
if the consensus pool is well thought out.
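
To make the idea concrete, a toy sketch follows (Python with dnspython). The 
perspective list, record name, and quorum are all hypothetical, and a real 
deployment would issue the lookups from vantage points in distinct networks 
rather than merely through different public resolvers:

```
import dns.resolver  # dnspython

# Hypothetical "perspectives"; a real deployment would run these lookups from
# network vantage points in different ASNs, not just via different resolvers.
PERSPECTIVES = {
    "perspective-a": "8.8.8.8",
    "perspective-b": "1.1.1.1",
    "perspective-c": "9.9.9.9",
}
QUORUM = 3  # require every perspective to agree in this toy example

def observed_tokens(domain: str, resolver_ip: str) -> set:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    try:
        answers = resolver.resolve(f"_validation-challenge.{domain}", "TXT")
        return {b"".join(rr.strings).decode() for rr in answers}
    except Exception:
        return set()  # unreachable or missing record counts as disagreement

def consensus_reached(domain: str, expected_token: str) -> bool:
    agreeing = sum(
        expected_token in observed_tokens(domain, ip)
        for ip in PERSPECTIVES.values()
    )
    return agreeing >= QUORUM
```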

Ryan
On Thursday, August 24, 2017 at 5:45:11 AM UTC-7, Dimitris Zacharopoulos wrote:
> On 26/7/2017 3:38 πμ, Matthew Hardeman via dev-security-policy wrote:
> > On Tuesday, July 25, 2017 at 1:00:39 PM UTC-5,birg...@princeton.edu  wrote:
> >> We have been considering research in this direction. PEERING controls 
> >> several ASNs and may let us use them more liberally with some convincing. 
> >> We also have the ASN from Princeton that could be used with cooperation 
> >> from Princeton OIT (the Office of Information Technology) where we have 
> >> several contracts. The problem is not the source of the ASNs but the 
> >> network anomaly the announcement would cause. If we were to hijack the 
> >> prefix of a cooperating organization, the PEERING ASes might have their 
> >> announcements filtered because they are seemingly launching BGP attacks. 
> >> This could be fixed with some communication with ISPs, but regardless 
> >> there is a cost to launching such realistic attacks. Matthew Hardeman 
> >> would probably know more detail about how this would be received by the 
> >> community, but this is the general impression I have got from engaging 
> >> with the people who run the PEERING framework.
> > I have some thoughts on how to perform such experiments while mitigating 
> > the likelihood of significant lasting consequence to the party helping 
> > ingress the hijack to the routing table, but you correctly point out that 
> > the attack surface is large and the one consistent feature of all 
> > discussion up to this point on the topic of BGP hijacks for purpose of 
> > countering CA domain validation is that none of those discuss have, up to 
> > this point, expressed doubt as to the risks or the feasibility of carrying 
> > out these risks.  To that ends, I think the first case that would need to 
> > be made to further that research is whether anything of significance is 
> > gained in making the attack more tangible.
> >
> >> So far we have not been working on such an attack very much because we are 
> >> focusing our research more on countermeasures. We believe that the attack 
> >> surface is large and there are countless BGP tricks an adversary could use 
> >> to get the desired properties in an attack. We are focusing our research 
> >> on simple and countermeasures CAs can implement to reduce this attack 
> >> space. We also aim to use industry contacts to accurately asses the false 
> >> positive rates of our countermeasures and develop example implementations.
> >>
> >> If it appears that actually launching such a realistic attack would be 
> >> valuable to the community, we certainty could look into it further.
> > This is the question to answer before performing such an attack.  In 
> > effect, who is the audience that needs to be impressed?  What criteria must 
> > be met to impress that audience?  What benefits in furtherance of the work 
> > arise from impressing that audience?
> >
> > Thanks,
> >
> > Matt Hardeman
> > ___
> > dev-security-policy mailing list
> > dev-security-policy@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-security-policy
> 
> That was a very interesting topic to read. Unfortunately, CAs can't do 
> much to protect against network hijacking because most of the 
> counter-measures lie in the ISPs' side. However, the CAs could request 
> some counter-measures from their ISPs.
> 
> Best practices for ISPs state that for each connected peer, the ISP need 
> to apply a prefix filter that will allow announcements for only 
> legitimate prefixes that the peer controls/owns. We can easily imagine 
> that this is not performed by all ISPs. Another solution that has been 
> around for some time, is RPKI 
>  
> along with BGP Origin Validation 
> .
>  
> Of course, we can't expect all ISPs to check for Route Origin 
> Authorizations (ROAs) but if the major ISPs checked for ROAs, it would 
> improve things a lot in terms of securing the Internet.
> 
> So, in order to minimize the risk for a CA or a site owner network from 
> being hijacked, if a CA/site owner has an address space that is Provider 
> Aggregatable (PA) (this means the ISP "owns" the IP space), they should 
> check that their upstream network provider has properly created the ROAs 
> for the CA/site operator's network prefix(es) in the RIR authorized 
> list, and that they have configured their routers to validate ROAs for 
> each prefix. If the CA/site operator has a Provider Independent (PI) 
> 

Re: Criticism of Mozilla Re: Google Trust Services roots

2017-03-10 Thread Ryan Hurst via dev-security-policy

Most are not directed at me, so I won’t respond to each item, but for several
I think I can provide some additional context; see below:

> * Manner of transfer:  As we learned from Ryan H., a second HSM was 
> introduced for the transfer of the private key meaning that for a period of 
> time 2 copies of the private key were in existence.  Presumably one copy 
> was destroyed at some point, but I'm not familiar with any relevant 
> standards or requirements to know when/how that takes place.  Whatever the 
> case may be, this situation seems to fall outside of the Root Transfer 
> Policy as I now read it.  Also, did GlobalSign ever confirm to Mozilla that 
> they are no longer in possession of or otherwise have access to the private 
> key for those 2 roots? 

A few things are relevant to this comment. First, when designing a key 
management program for keys that may live ten or twenty years, it is extremely 
important that one builds a disaster recovery plan. Such plans require that 
duplicate copies of keys exist; basically, no responsible CA would be without 
backups of its keys.

Additionally, given the reliability and performance requirements, issuing CAs 
are also almost always deployed with a cluster of HSMs.

The point of mentioning the above is that having multiple copies of keys is a 
standard practice.

Regarding who has control over the associated keys, you are correct. As is 
standard practice (this is my 8th transfer in my professional career), the 
process of transfer involved reviewing the history and associated artifacts of 
the keys and ensuring, in the presence of our auditors, that all copies not 
belonging to Google were destroyed.

While I cannot speak for GlobalSign, I can state that I do know they notified 
all relevant root programs that they no longer have control of the associated 
keys.


> * Conduct of the transfer:  I think an expectation should be set that the 
> "current holder of the trust" must be the one to drive the transfer.  Trust 
> can be handed to someone else; reaching in and taking trust doesn't sound 
> very...trustworthy?  To that end, I think the policy should state that the 
> current root holder must do more than merely notify Mozilla about the 
> change in ownership; the holder (and their auditor) must provide the 
> audits, attestations, and answers to questions that come up.  Only after 
> the transfer is complete would the "new holder" step in to perform those 
> duties. 

It is the expectation of the Mozilla Program, as well as the Microsoft Program 
(and others), that the current holder of the trust drives the transfer. That is 
what happened in this case as well.

As was noted in the original thread, Mozilla does not publicly require 
permission to be secured but does so privately, and in this case that permission 
was secured, at least implicitly, since we discussed our purchase with Mozilla 
numerous times before terms were reached. Other programs, such as Microsoft, 
make this requirement public, so we explicitly secured their permission before 
finalizing terms as well.

While securing such permission complicates the process, I think the value to 
the ecosystem warrants the complication, and I think it makes sense for Mozilla 
to formalize their requirement to secure permission before a transfer.


> * Public notification:  I appreciate that confidentiality is required when 
> business transactions are being discussed but at some point, in the 
> interest of transparency, the public must be notified of the transfer.  I 
> think this is implied (or assumed) in the current policy, but it might be 
> good to state explicitly that a public announcement must be made.  I would 
> add that making an announcement at a CABF meeting is all well and good, but 
> considering that most people on the Internet are not able to attend those 
> meetings it would be good if an announcement could be made in other forums 
> as well. 

This misrepresents what notification has taken place; for others, I suggest 
reading the other thread for a more accurate picture.

To the specific policy suggestion: the fact that changes in the Mozilla program 
are all tracked via public channels, like the bug database and this forum, means 
that public notice is already mandated today.

There may be value in requiring something “larger” than that, but defining that 
in a concrete way is hard. In our case, when we published our blog post it was 
picked up by many technical publications, but that is because we are Google. In 
historic transfers of keys, the actors in the transfer were not as visible as 
Google, and as such their public notices were, well, not noticed.

One thing that could be a reasonable step is to require that, for some period of 
time after a transfer, they maintain a notice in their document repository. I am 
not sure this materially moves the bar forward in that, I can say I have seen 
the web traffic for many repository pages for some of the larger 

Re: Google Trust Services roots

2017-03-09 Thread Ryan Hurst via dev-security-policy

> Of all these, Starfield seems to be the only case where a single CA
> name now refers to two different current CA operators (GoDaddy and
> Amazon).  All the others are cases of complete takeover.  None are
> cases where the name in the certificate is a still operating CA
> operator, but the root is actually operated by a different entity
> entirely.

That is true, but my point is that one cannot rely on the name in root 
certificates; when certs are made to be good for well over a decade, the concept 
of name continuity just doesn't hold.

> Also, I don't see Google on that list.

I noticed that too; I'll be reaching out to Microsoft to make sure it's updated.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-09 Thread Ryan Hurst via dev-security-policy
On Thursday, March 9, 2017 at 9:00:21 PM UTC-8, Peter Kurrasch wrote:
> By definition, a CPS is the authoritative document on what root
> certificates a CA operates and how they go about that operation.  If the
> GlobalSign CPS has been updated to reflect the loss of their 2 roots,
> that's fine.  Nobody is questioning that.
> 
> What is being questioned is whether updating the GlobalSign CPS is
> sufficient to address the needs, concerns, questions, or myriad other
> issues that are likely to come up in the minds of GlobalSign subscribers
> and relying parties--and, for that matter, Google's own subscribers and
> relying parties.  To that, I think the answer must be: "no, it's not
> enough".  Most people on the internet have never heard of a CPS and of
> those who have, few will have ever read one and fewer still will have read
> the GlobalSign CPS.

Again, while I cannot speak for GlobalSign, I can say that there has been far 
more public notice than a simple CP/CPS update.

In addition to the Google Blog post about the acquisition 
(https://security.googleblog.com/2017/01/the-foundation-of-more-secure-web.html),
 the purchase was picked up by many high profile technology news sources, some 
of which included:
-  https://www.theregister.co.uk/2017/01/27/google_root_ca/
-  
http://www.infoworld.com/article/3162102/security/google-moves-into-root-certificate-authority-business.html
- http://www.securityweek.com/google-launches-its-own-root-certificate-authority

Also this topic has been discussed at great length in numerous forums around 
the web. 

This is above and beyond the public notification that is built into the various 
root programs, such as:
- The Google Trust Services CP/CPS lists GlobalSign as subordinates.
- The Google Trust Services website has a link to the GlobalSign CP/CPS as well 
  as their audit reports.
- The Mozilla bug on this topic discusses the change in ownership.
- The Mozilla CA registry will also reference the change in ownership.
- The Microsoft CA registry will also reference the change in ownership.
- The Mozilla Salesforce instance will reference the change in ownership.
- This public thread discusses the change in ownership.

I am not sure there are many more meaningful options for notification left.

Additionally as stated, EV badges will still correctly reflect that it is 
GlobalSign who issues the associated certificates, and not Google.

The only opportunity for confusion comes from those who look at the 
certificates themselves and missed all of the above notifications.

It is also important to note that this is a very common situation, to see how 
common it is visit the page Microsoft maintains for Root Program members - 
https://social.technet.microsoft.com/wiki/contents/articles/37425.microsoft-trusted-root-certificate-program-participants-as-of-march-9-2017.aspx

You will notice the first column is the name of the current owner and the 
second column is the name in the certificate.

A few you will notice are:

Amazon,   Starfield Services Root Certificate Authority - G2
Asseco Data Systems S.A. (previously Unizeto Certum), Certum CA
Entrust, Trend Micro 1
Entrust, Trend Micro 2
Entrust, Trend Micro 3
Entrust, Trend Micro 4  
Comodo, The USERTrust Network™
Comodo, USERTrust (Client Authentication / Secure Email)
Comodo, USERTrust (Code Signing)
Comodo, USERTrust RSA Certification Authority
Comodo, UTN-USERFirst-Hardware
Symantec / GeoTrust
Symantec / Thawte   
Symantec / VeriSign
Trustwave, XRamp Global Certification Authority

And more...

While I sincerely want to make sure there are no surprises, given how common it 
is for names in root certificates not to match the current owner, those who are 
looking at certificate chains should not be relying on the name in the root 
certificate in the first place; doing so would be wrong in very significant 
situations.
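
To illustrate why, here is a tiny sketch with Python's cryptography package (the 
file path is a placeholder): the DN printed below is just whatever string was 
embedded when the root was minted, and it says nothing about who operates the 
private key today.

```
from cryptography import x509

# Load any root certificate; the path here is a placeholder.
with open("some-root.pem", "rb") as f:
    root = x509.load_pem_x509_certificate(f.read())

# The subject DN is static metadata chosen at key-generation time; it carries
# no information about the current operator of the key.
print(root.subject.rfc4514_string())
```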

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> Jakob: An open question is how revocation and OCSP status for the 
> existing intermediaries issued by the acquired roots is handled. 

Google is responsible for producing CRLs for these roots. We are also currently
relying on the OCSP responder infrastructure of GlobalSign for this root, but we
are in the process of migrating that in-house.

> Jakob: Does GTS sign regularly updated CRLs published at the (GlobalSign) 
> URLs 
> listed in the CRL URL extensions in the GlobalSign operated non-expired 
> intermediaries? 

At this time Google produces CRLs and works with GlobalSign to publish those 
CRLs.

> Jakob: Hopefully these things are answered somewhere in the GTS CP/CPS for 
> the 
> acquired roots. 

This level of detail is not typically included in a CPS; for example, a service 
may change which internet service provider or CDN service they use and not need 
to update their CP/CPS.


> Jakob: Any relying party seeing the existing root in a chain would see the 
> name GlobalSign in the Issuer DN and naturally look to GlobalSign's 
> website and CP/CPS for additional information in trying to decide if 
> the chain should be trusted. 

The GlobalSign CPS indicates that the R2 and R4 are no longer under their 
control.

Additionally, given the long-term nature of CA keys, it is common for the DN not 
to accurately represent the organization that controls it. As I mentioned in an 
earlier response, in the 90’s I created roots for a company called Valicert that 
has changed hands several times; additionally, Verisign (now Symantec in this 
context) has a long history of acquiring CAs, and as such they have CA 
certificates with many different names within them.

> Jakob: A relying party might assume, without detailed checks, that these 
> roots 
> are operated exclusively by GlobalSign in accordance with GlobalSign's 
> good reputation. 

As the former CTO of GlobalSign I love hearing about their good reputation ;)

However, I would say the CP/CPS is the authoritative document here, and since 
the GMO GlobalSign CP/CPS clearly states the keys are no longer in their 
control, I believe this should not be an issue.

> Jakob: Thus a clear notice that these "GlobalSign roots" are no longer 
> operated by GlobalSign at any entrypoint where a casual relying party 
> might go to check who "GlobalSign R?" is would be appropriate. 

I would argue the CAs’ CP/CPSs are the authoritative documents here and would 
satisfy this requirement.

> Jakob: If possible, making Mozilla products present these as "Google", not 
> "GlobalSign" in short-form UIs (such as the certificate chain tree-like 
> display element).  Similarly for other root programs (for example, the 
> Microsoft root program could change the "friendly name" of these). 

I agree with Jakob here; given the frequency with which roots change hands, it 
would make sense to have an ability to do this. Microsoft maintains this 
capability, which is made available to the owner.

There are some limitations relative to where this information is used. For 
example, in the case of an EV certificate, if Google were to request that 
Microsoft use this capability, the EV badge would say verified by Google, 
because they display the root name for the EV badge. However, it is the 
subordinate CA, in accordance with its CP/CPS, that is responsible for vetting, 
so the name displayed in this case should be GlobalSign.

Despite these limitations, it may make sense in the case of Firefox to maintain 
a similar capability.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> pzb: Policy Suggestion A) When transferring a root that is EV enabled, it 
> should be clearly stated whether the recipient of the root is also 
> receiving the EV policy OID(s). 

> Gerv: I agree with this suggestion; we should update 
> https://wiki.mozilla.org/CA:RootTransferPolicy , and eventually 
> incorporate it into the main policy when we fix 
> https://github.com/mozilla/pkipolicy/issues/57 . 

I think this is good.


> Gerv: https://wiki.mozilla.org/CA:RootTransferPolicy says that "The 
> organization who is transferring ownership of the root certificate’s 
> private key must ensure that the transfer recipient is able to fully 
> comply with Mozilla’s CA Certificate Policy. The original organization 
> will continue to be responsible for the root certificate's private key 
> until the transfer recipient has provided Mozilla with their Primary 
> Point of Contact, CP/CPS documentation, and audit statement (or opinion 
> letter) confirming successful transfer of the root certificate and key." 

> Gerv: I would say that an organization which has acquired a root certificate 
> in the program and which has provided Mozilla with the above-mentioned 
> information is thereby a member of the program. As the policy says that 
> the transferring entity continues to be responsible until the 
> information is provided, that seems OK to me. 

This seems reasonable to me also.

> Gerv: This position would logically lead to the position that a root 
> inclusion 
> request from an organization which does not have any roots is also, 
> implicitly, an application to become a member of the program but the two 
> things are distinct. One can become a member of the program in other 
> ways. Membership is sort of something that happens to one automatically 
> when one successfully achieves ownership of an included root. 

This seems reasonable to me also.


> pzb: Policy Suggestion B) Require that any organization wishing to become 
> a member of the program submit a bug with links to content 
> demonstrating compliance with the Mozilla policy.  Require that this 
> be public prior to taking control of any root in the program. 

> Gerv: We do require this, but not publicly. I note and recognise Ryan's 
> concern about requiring advance disclosure of private deals. I could see 
> a requirement that a transferred root was not allowed to issue anything 
> until the appropriate paperwork was publicly in place. Would that be 
> suitable? 

Could you clarify what you mean by appropriate paperwork?


> pzb: Policy Suggestion C) Recognize that root transfers are distinct from 
> the acquisition of a program member.  Acquisition of a program 
> member (meaning purchase of the company) is a distinctly different 
> activity from moving only a private key, as the prior business 
> controls no longer apply in the latter case. 

> Gerv: https://wiki.mozilla.org/CA:RootTransferPolicy does make this 
distinction, I feel - how could it be better made? 

After re-reading this text I personally think this is clear.


> pzb: Policy Suggestion D) Moving from being a RA to a CA or moving from 
> being a single tier/online (i.e. Subordinate-only) CA to being a 
> multi-tier/root CA requires a PITRA 

> Gerv: Again, would this be covered by a requirement that no issuance was 
> permitted from a transferred root until all the paperwork was in place, 
> including appropriately-scoped audits? This might lead to a PITRA, but 
> would not have to. 

This seems reasonable to me also.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> jacob: Could a reasonably condition be that decision authority, actual and 
> physical control for a root are not moved until proper root program 
> coordination has been done (an action which may occur after/before the 
> commercial conclusion of a transaction).  From a business perspective 
> this could be comparable to similar requirements imposed on some 
> physical objects that can have public interest implications. 

Microsoft has a similar requirement in their program; we had to get permission
from them before we could finalize commercial terms for this acquisition.
I personally think this is a good policy and one that Mozilla should adopt 
as well.

It adds more complexity to these acquisitions, in that one needs to get 
approvals from multiple parties, but I think the value to the ecosystem warrants 
this complexity.


> Jacob: For clarity could Google and/or GTS issue a dedicated CP/CPS pair for 
> the brief period where Google (not GTS) had control of the former 
> GlobalSign root (such a CP/CPS would be particularly simple given that 
> no certificates were issued).  Such as CP/CPS should also clarify any 
> practices and procedures for signing revocation related data (CRLs, 
> OCSP responses, OCSP responder certificates) from that root during the 
> transition.  The CP/CPS would also need to somehow state that the 
> former GlobalSign issued certificates remain valid, though no further 
> such certificates were issued in this interim period. 

> Similarly could Google and/or GTS issue a dedicated CP/CPS pair for the 
> new roots during the brief period where Google (not GTS) had control of 
> those new roots. 

While we want to work with the community to provide assurances that we followed
best practices and the required policies in this transfer, I do not think this 
would provide any further insights.

Before the transfer we, and our auditors, reviewed the CP/CPS, as well as the 
policies and procedures associated with the management of these keys, and found 
them to be compliant with both the requirements and best practices. In other 
words, both we and our auditors are stating, as supported by the opinion letter, 
that we believe the Google CP/CPS covered these keys during this period.

If we created a new CP/CPS for that period it would, at best, be a subset of the 
Google CP/CPS and offer no new information other than the omission of a few 
details.

Could you clarify what your goals are with this request? With that, we can 
potentially propose an alternate approach to address those concerns.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> pzb: According to the opinion letter:
> "followed the CA key generation and security requirements in its:
> Google Internet Authority G2 CPS v1.4" (hyperlink omitted)

> According to that CPS, "Key Pairs for the Google Internet Authority
> are generated and installed in accordance with the contract between
> Google and GeoTrust, Inc., the Root CA."

> Are you asserting that the authority for the key generation process
> the new Google roots is "the contract between Google and GeoTrust,
> Inc."?

No, that is not the intent of that statement; it is a good catch. This is 
simply a poorly worded statement.

To clarify, our acquisition of these keys and certificates is independent of 
our agreement with GeoTrust, Inc.

The intent of that statement is to say that the technical requirements of that 
contract, which in essence refer to meeting the WebTrust requirements, were 
followed.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-07 Thread Ryan Hurst via dev-security-policy
> pzb: I appreciate you finally sending responses.  I hope you appreciate
> that they are clearly not adequate, in my opinion.  Please see the
> comments inline.

Again, sorry for the delay in responding, I will be more prompt moving
forward.

> pzb: This does not resolve the concern.  The BRs require "an unbroken
> sequence of audit periods".  Given that GlobalSign clearly cannot make
> any assertion about the roots after 11 August 2016, you would have a
> gap from 11 August 2016 to 30 September 2016 in your sequence of audit
> periods if your next report runs 1 October 2016 to 30 September 2017.


I understand your point but this is not entirely accurate. Our strategy, to
ensure a smooth transition, which was reviewed with the auditors and root
program administrators was that we take possession of the root key material
and manage it offline, in accordance with our existing WebTrust audit and
the “Key Storage, Backup and Recovery Criterion”.  It was our, and EY's
opinion that the existing controls and ongoing WebTrust audits were
sufficient given this plan and scope.

As such, during the period in question, the existing audits provide an unbroken 
sequence of audit periods.

That said, we will follow-up with our auditors to see if it is possible to
extend the scope of our 2017 audit to also cover this interval to ensure
the community has further assurances of continuity.

> pzb: Based on my personal experience, it is possible to negotiate a deal
> and set a closing date in the future.  This is standard for many
> acquisitions; you frequently see purchases announced with a closing
> date in the future for all kinds of deals.  The gap between signing
> the deal and closing gives acquirers the opportunity to complete the
> steps in B.

As I stated, I think that moving forward this could be a good policy change, 
but I am hesitant to see any user agent adopt policies that are overly 
prescriptive of commercial terms between two independent parties.


> pzb: You appear to be confusing things here.  "Subordinate CA Certificate
> Life Cycle Management" is the portion of the WebTrust criteria that
> covers the controls around issuing certificates with the cA component
> of the basicConstraints extension set to true.  It has nothing to do
> with operating a subordinate CA.

I am familiar with the "Subordinate CA Certificate Life Cycle Management" 
controls; I just should have been more explicit in my earlier response.

These keys were generated and stored in accordance with Asset
Classification and Management Criterion, and Key Storage, Backup and
Recovery Criterion.

Before utilizing the associated keys in any activity covered by the 
"Subordinate CA Certificate Life Cycle Management" criterion, all associated 
policies and procedures were created, tested, and then reviewed by our auditors. 
Additionally, those auditors were present during the associated ceremony. All 
such activities will be covered under our 2017 audit.

This is similar to how a CA can, and does, revise and extend its policies 
between audits to cover new products and services.

This is consistent with the approach we discussed with, and had approved by, the 
various root program administrators.

> pzb: You have stated that the Google CPS (not the GTS CP/CPS) was the
> applicable CPS for your _root CAs_ between 11 August 2016 and 8
> December 2016.  The Google CPS makes these statements.  Therefore, you
> are stating that the roots (not just GIA G2) were only permitted to
> issue Certificates to Google and Google Affiliates.

Correct; these roots were not used to issue certificates at all until last 
week, and when one was used, it was used to issue a subordinate CA certificate 
to Google.

Though we do not have a product or service to announce currently, we can
say we will expand the  use of GTS beyond GIAG2, at which time policies,
procedures, CP and CPS will be updated accordingly. This progression makes
sense as we're moving from a constrained intermediate to a Root.

> Mozilla has consistently taken the position that roots that exclusively
> issue to a single company are not acceptable in the root program.

Google and its affiliate companies are more than a single company.

Additionally, clearly the intent of this rule is to prevent thousands of
organizations issuing a handful of certificates polluting the root store.

In the case of Google and its affiliate companies, we operate products and 
services for our customers. This is similar to how Amazon and a number of other 
root operators operate products and services for their customers, the core 
difference being the breadth of user-facing products we have.

> This does not address the question.  The Google CPS clearly states
> that it only covers the GIA G2 CA.  You have stated that the Google
> CPS (not the GTS CP/CPS) was the applicable CPS for your _root CAs_
> between 11 August 2016 and 8 December 2016.  This puts your statement
> at odds with what is written in 

Re: Google Trust Services roots

2017-03-06 Thread Ryan Hurst via dev-security-policy
> Gerv: Which EV OID are you referring to, precisely? 

I was referring to the GlobalSign EV Certificate Policy OID 
(1.3.6.1.4.1.4146.1.1) but more concretely I meant any and all EV related OIDs, 
including the CAB Forum OID of 2.23.140.1.1.

> Gerv: Just to be clear: GlobalSign continues to operate at least one subCA 
> under a root which Google has purchased, and that root is EV-enabled, 
> and the sub-CA continues to do EV issuance (and is audited as such) but 
> the root is no longer EV audited, and nor is the rest of the hierarchy? 

Yes, that is correct.

> Gerv: Can you tell us what the planned start/end dates for the audit period 
> of 
> that annual audit are/will be? 

Our audit period is October 1st to the end of September. The associated report 
will be released between October and November, depending on our auditors' 
schedules.

> Gerv: Are the Google roots and/or the GlobalSign-acquired roots currently 
> issuing EE certificates? Were they issuing certificates between 11th 
> August 2016 and 8th December 2016? 

No, they were not issuing certificates between 11th August 2016 and 8th December 
2016.

We generated our first certificate, a subordinate CA, last week, that CA is not 
yet in use.

Ryan Hurst
Product Manager 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-06 Thread Ryan Hurst via dev-security-policy
[Trying to resend without the quoted email to get through the spam filter]

First, let me apologize for the delay in my response. I have had a draft of
this letter in my inbox for a while but have been unable to get back to it
and finish it due to scheduling conflicts. I promise to address all other
questions in a more prompt manner.

> pzb: Mozilla recognizes 2.23.140.1.1 as being a valid OID for EV
> certificates for all EV-enabled roots
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1243923).

> 1) Do you consider it mis-issuance for Google to issue a certificate
> containing the 2.23.140.1.1 OID?

> Policy Suggestion A) When transferring a root that is EV enabled, it
> should be clearly stated whether the recipient of the root is also
> receiving the EV policy OID(s).

rmh: Yes. We believe that until we have:
- completed the associated policies, procedures, and other associated work,
- successfully completed an EV audit, and
- been approved by one or more of the various root programs as an EV issuer,

it would be an example of mis-issuance for us to issue such a
certificate.



> pzb: Second, according to the GTS CPS v1.3, "Between 11 August 2016 and
> 8 December 2016, Google Inc. operated these Roots according to Google
> Inc.’s Certification Practice Statement."  The basic WebTrust for CA and
> WebTrust BR audit reports for the period ending September 30, 2016
> explicitly state they are for "subordinate CA under external Root CA" and
> do not list the roots in the GTS CPS at all.
>
> rmh: I believe this will be answered by my responses to your third and
> fourth observations.

> It was not.

rmh: I have just attached two opinion letters from our auditors. I had
previously provided these to the root programs directly, but it took some
time to get permission to release them publicly. One letter covers the key
generation ceremony of the new roots, and the other covers the transfer of
the keys to our facilities. In the second report you will find the
following statement:

```
In our opinion, as of November 17, 2016, Google Trust Services LLC
Management’s Assertion, as referred to above, is fairly stated, in all
material respects, based on Certification Practices Statement Management
Criterion 2.2, Asset Classification and Management Criterion 3.2, and Key
Storage, Backup and Recovery Criterion 4.2 of the WebTrust Principles and
Criteria for Certification Authorities v2.0.
```

Based on our conversations with the various root program operators prior to
our acquisition, it has been our plan and understanding that we can utilize
these opinion letters to augment the WebTrust audit with the material
details relating to these activities. It is our hope that this also
addresses your specific concern here.


> 2) Will Google be publishing an audit report for a period starting 11
> August 2016 that covers the transferred GS roots?  If so, can you
> estimate the end of period date?

rmh: It is our belief, based on our conversations with the various root
store operators as well as our own auditors, that the transfer itself is
covered by the opinion letters. With that said, our audit period is October
1st to the end of September. The associated report will be released between
October and November, depending on our auditors' schedules.


> pzb: I think that this is the key issue.  In my reading, "root
> certificates" are not members of the program.  Rather organizations
> (legal entities) are members and each member has some number of root
> certificates.

> Google was not a member of the program and had not applied to be a
> member of the program at the time they received the roots already in
> the program.  This seems problematic to me.

> Policy Suggestion B) Require that any organization wishing to become a
> member of the program submit a bug with links to content demonstrating
> compliance with the Mozilla policy.  Require that this be public prior
> to taking control of any root in the program.

> Policy Suggestion C) Recognize that root transfers are distinct from
> the acquisition of a program member.  Acquisition of a program member
> (meaning purchase of the company) is a distinctly different activity
> from moving only a private key, as the prior business controls no
> longer apply in the latter case.

We discussed the topic of disclosure with the root program administrators
prior to our acquisition. Our goal was to tell the community as soon as
possible, but the complexity of this transaction made it hard to set a firm
date for that announcement. Based on our conversations with root program
administrators, we were told the policy did not require disclosure to be
public, which left the timing of that notification up to us.

As for the recommendation to clarify the policy in this area, I think it
would be valuable to do that.

Both of your recommendations seem reasonable; my concern with B) is how to
do so in a way that does not make it impossible, or even more complicated,
to successfully negotiate such a 

Re: Google Trust Services roots

2017-02-09 Thread Ryan Hurst via dev-security-policy
Peter,

Thank you very much for your, as always, thorough review.

Let me start by saying I agree there is an opportunity for improving the
policies around how key transfers such as your recent transfer and Google's
are handled.

It is my hope we can, through our respective recent experiences performing
such transfers, help Mozilla revise their policy to provide better guidance
for such cases in the future.

As for your specific questions, my responses follow:

pzb: First, according to the GTS website, there is no audit using the
WebTrust Principles and Criteria for Certification Authorities – Extended
Validation SSL. However, the two roots in the Mozilla CA program currently
are EV enabled and at least one subordinate CA under them is issuing EV
certificates.

rmh: Prior to our final stage of the acquisition we contacted both Mozilla
and Microsoft about this particular situation.

At this time, we do not have any interest in the issuance of EV SSL
certificates, however GlobalSign does. Based on our conversations with
representatives from both organizations we were told that since:

- The EV OID associated with this permission is associated with GlobalSign
and not Google and,
- GlobalSign is an active member in good standing with the respective root
programs and,
- Google will not be issuing EV SSL certificates,
- Google will operate these roots under their own CP/CPS’s and associated
OIDs,
- Google issuing a certificate with the GlobalSign OIDs would qualify as
mis-issuance.

it would be acceptable for us not to undergo an EV SSL audit, and that
GlobalSign could keep the EV rights for the associated subordinate CA for
the remaining validity period to facilitate the transition (assuming
continued compliance).

As a former manager of a root program, I believe this is an appropriate
position to take. And as someone who has been involved in several such root
transfers, I think differences in intended use are common enough that they
should be explicitly handled by policy.

pzb:  Second, according to the GTS CPS v1.3, "Between 11 August 2016 and 8
December 2016, Google Inc. operated these Roots according to Google Inc.’s
Certification Practice Statement."  The basic WebTrust for CA and WebTrust
BR audit reports for the period ending September 30, 2016 explicitly state
they are for "subordinate CA under external Root CA" and do not list the
roots in the GTS CPS at all.

rmh: I believe this will be answered by my responses to your third and
fourth observations.

pzb: Third, the Google CPS says Google took control of these roots on
August 11, 2016.  The Mozilla CA policy explicitly says that a bug report
must be filed to request to be included in the Mozilla CA program.  It was
not until December 22, 2016 that Google requested inclusion as a CA in
Mozilla's CA program (https://bugzilla.mozilla.org/show_bug.cgi?id=1325532).
This does not appear to align with Mozilla requirements for public
disclosure.

rmh: As has been mentioned, timing for a transaction like this is very
complicated. The process of identifying candidates that could meet our
needs took many months with several false starts with different
organizations. That said, prior to beginning this process we proactively
reached out to both Microsoft and Mozilla root programs to let them know we
were beginning the process. Once it looked like we would be able to come to
an agreement with GlobalSign we again reached out and notified both
programs of our intent to secure these specific keys. Then once the
transaction was signed we again notified the root programs that the deal
was done.

As you know, the process of ensuring a secure, audited, and well-structured
key migration is also non-trivial. Once this migration was performed, we
again notified both root programs.

Our intention was to notify all parties, including the public, shortly
after the transfer but it took some time for our auditors, for reasons
unrelated to our audit, to produce the associated audit letters.

Once we received said letters, we then filed the bugs.

Although this is not our ideal timeline, our understanding is that it is in
compliance with the Mozilla root program: since these roots were already
members, we were not required to publicly disclose the above negotiation,
contracting, planning, migration, and other intermediate steps.

pzb: Fourth, the audit reports linked in the bug explicitly set the scope
of "subordinate CA operated under external Root CA" and do not include any
indication of controls around the issuance of subordinate CA certificates.
These audit reports do not have an appropriate scope for a root CA.

rmh: Yes, we were also concerned about this topic, especially with the
recent scope issues with audits. As such, we discussed this with both our
auditors and the root programs prior to acquisition of the key material.

When looking at this issue, it is important to keep in mind that Google has
operated a WebTrust audited