Re: Online exposed keys database

2018-12-18 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 18, 2018 at 2:44:22 AM UTC-8, Matt Palmer wrote:
> Hi all,
> 
> I'd like to make everyone aware of a service I've just stood up, called
> pwnedkeys.com.  It's intended to serve as a clearinghouse of known-exposed
> private keys, so that services that accept public keys from external
> entities (such as -- relevant to mdsp's interests -- CAs) can make one call
> to get a fairly authoritative answer to the question "has the private key
> I'm being asked to interact with in some way been exposed?".
> 
> It's currently loaded with great piles of Debian weak keys (from multiple
> architectures, etc), as well as some keys I've picked up at various times. 
> I'm also developing scrapers for various sites where keys routinely get
> dropped.
> 
> The eventual intention is to be able to go from "private key is on The
> Public Internet somewhere" to "shows up in pwnedkeys.com" automatically and
> in double-quick time.
> 
> I know there are a number of very clever people on this list who have found
> and extracted keys from more esoteric places than Google search, and I'd be
> really interested in talking to you (privately, I'd imagine) about getting
> specimens of those keys to add to the database.
> 
> I'd also welcome comments from anyone about the query API, the attestation
> format, the documentation, or anything else vaguely relevant to the service. 
> Probably best to take that off-list, though.
> 
> I do have plans to develop a PR against (the AWS Labs') certlint to cause it
> to query the API, so there's no need for anyone to get deep into that unless
> they're feeling especially frisky.  Other linting tools will *probably* have
> to do their own development, as my Go skills are... rudimentary at best,
> shall we say.  I'd be happy to give guidance or any other necessary help to
> anyone looking at building those, though.
> 
> Finally, if any CAs are interested in integrating the pwnedkeys database
> into their issuance pipelines, I'd love to discuss how we can work together.
> 
> Thanks,
> - Matt

This is great. I purchased keycompromise.com ages ago to build something just 
like this. I'm very glad to see you took the time to make this.

My first thought is that by using SPKI you have unnecessarily limited the 
service to X.509-related keys; I imagined something like this covering PGP and 
JWT as well as other formats. It would be nice to see the scope broadened 
accordingly.

It would also be ideal if the database could be downloaded in full; the 
latency of calling a third-party service while issuing certs is potentially 
too much for a CA to eat at issuance time. Something that could optionally be 
run on-prem would avoid leaking affiliation and address this.

As long as it's limited to X.509, or at least as long as it supports X.509 and 
uses SPKI, it would be interesting to have the website use PKIjs to let you 
browse to a cert, CSR, or key and have the SPKI hash calculated for you. Happy 
to help with that if you're interested.
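For anyone curious what an SPKI-based lookup might look like in practice, here is a minimal sketch in Python. The URL shape and the hashing choice (SHA-256 over the DER-encoded SubjectPublicKeyInfo) are my assumptions; consult the pwnedkeys documentation for the actual query format.

```python
import hashlib

def spki_fingerprint(spki_der: bytes) -> str:
    """Hex SHA-256 digest of a DER-encoded SubjectPublicKeyInfo structure."""
    return hashlib.sha256(spki_der).hexdigest()

def pwnedkeys_query_url(spki_der: bytes) -> str:
    # Hypothetical endpoint shape; the real API may differ.
    return "https://v1.pwnedkeys.com/" + spki_fingerprint(spki_der) + ".json"

# Placeholder bytes for illustration; in practice you would extract the
# SPKI from a certificate, CSR, or key with a library such as pyca/cryptography.
url = pwnedkeys_query_url(b"\x30\x82\x01\x22")
```

A client would then fetch that URL and treat a hit as "this key is known-compromised."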

Personally I prefer https://api.pwnedkeys.com/v1/ to 
https://v1.pwnedkeys.com/.

I see you're using JWS; I had been planning on building mine on top of Trillian 
(https://github.com/google/trillian) so you could have an auditable, low-trust 
mechanism for doing this. Let me know if you're interested in that and I would 
be happy to help there.

Anyway, thanks for doing this.

Ryan Hurst
(personal)


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: No Russian CAs

2018-08-27 Thread Ryan Hurst via dev-security-policy
On Friday, August 24, 2018 at 11:23:37 AM UTC-7, Caju Mihai wrote:
> Greetings,
> I would like to ask why there are no root certificate authorities from 
> organizations in the Russian Federation. Specifically I haven't found any 
> with the country code RU in the NSS CA bundle. Is it due to political 
> pressure? Or does the Russian government have a bad history with forcing CAs 
> to issue certificates? As far as I know Yandex has it's own intermediate CA, 
> signed by Certum. So I can't see the issue? Also can you point me to a few 
> bugs where Russian CAs have attempted inclusion? Bugzilla search isn't very 
> helpful, and I have tried searching in "CA Certificates Code", "CA 
> Certificate Mis-Issuance" and "CA Certificate Root Program"

The Russian market (really the whole FSU) is notably different from other 
markets, at least in the context of the WebPKI. Most notably, the government 
mandate for the use of GOST-approved algorithms and implementations conflicts 
with the WebTrust mandate of RSA and the global standard ECC curves.

This is meaningful because many CAs make a large portion of their revenue not 
off SSL certificates but off other services (digital signatures, enterprise 
use cases, etc.). Many of these other use cases are covered by the numerous 
government-licensed CAs (hundreds, last I heard) that serve these cases using 
GOST-approved algorithms.

Above and beyond that I would say the cost realities of commercial WebPKI 
offerings make it difficult to justify that particular business in the Russian 
market.

With that said, I think your real question is: could a Russian CA become a 
WebTrust-audited and browser-trusted CA? I personally think the answer is yes 
(though I doubt the business viability), provided they could get clarity from 
the FSB on approval to operate such a CA given the current guidance regarding 
approved GOST algorithms.


Re: Disallowed company name

2018-06-04 Thread Ryan Hurst via dev-security-policy
I apologize, I originally wrote in haste and did not clearly state what I
was suggesting.

Specifically, while it is typical for a given jurisdiction (state, etc.) to
require a name to be unique, it is typically not required that the name be
distinct enough that it cannot be confused with another name. For example,
I have seen businesses registered with punctuation and without; I have also
seen non-Latin characters in use in business names. This clearly has the
potential to introduce name confusion.

Ryan

On Fri, Jun 1, 2018 at 11:55 PM, Matthew Hardeman 
wrote:

>
>
> On Fri, Jun 1, 2018 at 10:28 AM, Ryan Hurst via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>>
>> re: Most of the government offices responsible for approving entity
>> creation are concerned first and foremost with ensuring that a unique name
>> within their jurisdiction is chosen
>>
>> What makes you say that, most jurisdictions have no such requirement.
>>
>>
> This was anecdotal, based on my own experience with formation of various
> limited liability entities in several US states.
>
> Even my own state of Alabama, for example, (typically regarded as pretty
> backwards) has strong policies and procedures in place for this.
>
> In Alabama, formation of a limited liability entity whether a Corporation
> or LLC, etc, begins with a filing in the relevant county probate court of
> an Articles of Incorporation, Articles or Organization, trust formation
> documents, or similar.  As part of the mandatory filing package for those
> document types, a name reservation certificate (which will be validated by
> the probate court) from the Alabama Secretary of State will be required.
> The filer must obtain those directly from the appropriate office of the
> Alabama Secretary of State.  (It can be done online, with a credit card.
> The system enforces entity name uniqueness.)
>


Re: Disallowed company name

2018-06-01 Thread Ryan Hurst via dev-security-policy
On Thursday, May 31, 2018 at 3:07:36 PM UTC-7, Matthew Hardeman wrote:
> On Thu, May 31, 2018 at 4:18 PM, Peter Saint-Andre via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> >
> > We can also think of many business types (e.g., scammers) that would
> > love to have names like ⒶⓅⓅⓁⒺ but that doesn't mean it's smart to issue
> > certificates with such names. The authorities who approve of company
> > names don't necessarily have certificate handling in mind...
> >
> 
> Indeed.  Most of the government offices responsible for approving entity
> creation are concerned first and foremost with ensuring that a unique name
> within their jurisdiction is chosen and that a public record of the entity
> creation exists.  They are not concerned with risk management or
> legitimacy, broadly speaking.
> 
> Anyone at any level of risk management in the rest of the ecosystem around
> a business will be concerned with such matters.  Banks, trade vendors, etc,
> tend to reject accounts with names like this.  Perhaps CAs should look upon
> this similarly.

re: Most of the government offices responsible for approving entity creation 
are concerned first and foremost with ensuring that a unique name within their 
jurisdiction is chosen

What makes you say that? Most jurisdictions have no such requirement.


Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Ryan Hurst via dev-security-policy

> True, but CAs can put technical constraints on that to limit the acceptable 
> passwords to a certain strength. (hopefully with a better strength-testing 
> algorithm than the example Tim gave earlier)

Tim is the best of us -- this is hard to do well :)



Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Ryan Hurst via dev-security-policy

> 
> What about "or a user supplied password"?
> -carl

User-supplied passwords will (in real-world scenarios) not be as good as ones 
generated for them; this is in part why I suggested earlier that if a user 
password is to be used, it be mixed with a server-provided value.
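A rough sketch of that mixing idea (my own illustration, not a vetted scheme; as noted elsewhere in the thread, a proper key-agreement construction would be preferable in production):

```python
import hashlib
import secrets

def derive_p12_password(user_password: str) -> tuple:
    """Mix a user-supplied password with a server CSPRNG value via PBKDF2.

    Returns (derived password hex, server contribution). The CA would
    discard the user-supplied input once the PKCS#12 has been created.
    """
    server_part = secrets.token_bytes(16)  # server's CSPRNG contribution
    derived = hashlib.pbkdf2_hmac(
        "sha256", user_password.encode("utf-8"), server_part, 600_000
    )
    return derived.hex(), server_part
```

The derived value becomes the PKCS#12 password; neither party alone knows enough to reproduce it once the CA forgets the user's input.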



Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Ryan Hurst via dev-security-policy
On Friday, May 4, 2018 at 1:00:03 PM UTC-7, Doug Beattie wrote:
> First comments on this: "MUST be encrypted and signed; or, MUST have a 
> password that..."
> - Isn't the password the key used for encryption?  I'm not sure if the "or" 
> makes sense since in both cases the password is the key for encryption

There are modes of PKCS#12 that do not use passwords.

> - In general, I don't think PKCS#12 files are signed, so I'd leave that out, 
> a signature isn't necessary.  I could be wrong...

They may be, see: http://unmitigatedrisk.com/?p=543

> 
> I'd still like to see a modification on the requirement: "password MUST be 
> transferred using a different channel than the PKCS#12 file".  A user should 
> be able to download the P12 and password via HTTP.  Can we add an exception 
> for that?

Why do you want to allow the use of HTTP?


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Ryan Hurst via dev-security-policy
On Tuesday, May 1, 2018 at 1:00:20 PM UTC-7, Tim Hollebeek wrote:
> I get that, but any CA that can securely erase and forget the user’s 
> contribution to the password and certainly do the same thing to the entire 
> password, so I’m not seeing the value of the extra complexity and interaction.

It forces a conscious decision to violate a core premise.


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Ryan Hurst via dev-security-policy
> I'm not sure I agree with this as a recommendation; if you want both
> parties to provide inputs to the generation of the password, use a
> well-established and vetted key agreement scheme instead of ad hoc mixing.

> Of course, at that point you have a shared transport key, and you should
> probably just use a stronger, more modern authenticated key block than
> PKCS#12, but that's a conversation for another day.

I say this because it is desirable that the CA plausibly not be able to
decrypt the key even if it holds the encrypted key blob.



On Tue, May 1, 2018 at 12:40 PM, Tim Hollebeek 
wrote:

>
> > - What is sufficient? I would go with a definition tied to the effective
> > strength of the keys it protects; in other words, you should protect a
> > 2048bit RSA key with something that offers similar properties or that
> > 2048bit key does not live up to its 2048 bit properties.
>
> Yup, this is the typical position of standards bodies for crypto stuff.  I
> noticed that
> the 32 got fixed to 64, but it really should be 112.
>
> > - The language should recommend that the "password" be a value that is a
> > mix of a user-supplied value and the CSPRNG output and that the CA can
> > not store the user-supplied value for longer than necessary to create
> > the PKCS#12.
>
> I'm not sure I agree with this as a recommendation; if you want both
> parties
> to provide inputs to the generation of the password, use a well-established
> and vetted key agreement scheme instead of ad hoc mixing.
>
> Of course, at that point you have a shared transport key, and you should
> probably
> just use a stronger, more modern authenticated key block than PKCS#12,
> but that's a conversation for another day.
>
> > - The language requires the use of a password when using PKCS#12s but
> > PKCS#12 supports both symmetric and asymmetric key based protection also.
> > While these are not broadly supported the text should not prohibit the
> > use of stronger mechanisms than 3DES and a password.
>
> Strongly agree.
>
> -Tim
>


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Ryan Hurst via dev-security-policy
A few problems I see with the proposed text:

- What is sufficient? I would go with a definition tied to the effective 
strength of the keys it protects; in other words, you should protect a 
2048-bit RSA key with something that offers similar properties, or that 
2048-bit key does not live up to its 2048-bit properties. This is basically 
the same CSPRNG conversation, but it's worth looking at https://www.keylength.com/ 
- The language should recommend that the "password" be a value that is a mix 
of a user-supplied value and CSPRNG output, and that the CA cannot store the 
user-supplied value for longer than necessary to create the PKCS#12.
- The strength of the password is discussed, but PKCS#12 supports a number of 
weak cipher suites and it is common to find them in use in PKCS#12s. The 
minimum should be specified as what Microsoft supports, which is 
pbeWithSHAAnd3-KeyTripleDES-CBC for "privacy" of keys; for the privacy of 
certificates it uses pbeWithSHAAnd40BitRC2-CBC.
- The language requires the use of a password when using PKCS#12s, but 
PKCS#12 also supports both symmetric and asymmetric key-based protection. 
While these are not broadly supported, the text should not prohibit the use 
of mechanisms stronger than 3DES and a password.
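To make the "similar properties" point concrete, here is a small sketch mapping RSA modulus sizes to their commonly cited symmetric-equivalent strengths (figures along the lines of NIST SP 800-57 / keylength.com) and the random-password length needed to match, assuming a 62-character alphanumeric alphabet:

```python
import math

# Approximate symmetric-equivalent strength in bits (NIST-style estimates).
RSA_EQUIV_BITS = {1024: 80, 2048: 112, 3072: 128, 7680: 192, 15360: 256}

def min_password_length(rsa_bits: int, alphabet_size: int = 62) -> int:
    """Random-password length whose entropy matches the RSA key's strength."""
    target_bits = RSA_EQUIV_BITS[rsa_bits]
    return math.ceil(target_bits / math.log2(alphabet_size))
```

For example, matching a 2048-bit RSA key's roughly 112-bit strength requires a 19-character random alphanumeric password; a human-chosen password falls far short of this.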

Ryan


Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-26 Thread Ryan Hurst via dev-security-policy
On Thursday, April 26, 2018 at 11:45:15 AM UTC, Tim Hollebeek wrote:
> > > which is why in the near future we can hopefully use RDAP over TLS
> > > (RFC
> > > 7481) instead of WHOIS, and of course since the near past, DNSSEC :)
> > 
> > I agree moving away from WHOIS to RDAP over TLS is a good low hanging fruit
> > mitigator once it is viable.
> 
> My opinion is it is viable now, and the time to transition to optionally 
> authenticated RDAP over TLS is now.  It solves pretty much all the problems 
> we are currently having in a straightforward, standards-based way.  
> 
> The only opposition I've seem comes from people who seem to want to promote 
> alternative models that destroy the WHOIS ecosystem, leading to proprietary 
> distribution and monetization of WHOIS data.
> 
> I can see why that is attractive to some people, but I don’t think it's best 
> for everyone.
> 
> I also agree that DNSSEC is a lost cause, though I understand why Paul 
> doesn't want to give up   I've wanted to see it succeed for basically my 
> entire career, but it seems to be making about as much progress as fusion 
> energy.
> 
> -Tim

Moving to RDAP does not solve "all the problems we are currently having": it 
does not do anything for DCV, which is what I think this thread was about 
(e.g., the BGP implications for DCV).

That said, if in fact RDAP is viable today, I agree we should deprecate the 
use of WHOIS and mandate the use of RDAP in the associated scenarios.

Ryan Hurst


Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-26 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 25, 2018 at 3:48:07 PM UTC+2, Paul Wouters wrote:
> On Wed, 25 Apr 2018, Ryan Hurst via dev-security-policy wrote:
> 
> > Multiple perspectives is useful when relying on any insecure third-party 
> > resource; for example DNS or Whois.
> >
> > This is different than requiring multiple validations of different types; 
> > an attacker that is able to manipulate the DNS validation at the IP layer 
> > is also likely going to be able to do the same for HTTP and Whois.
> 
> which is why in the near future we can hopefully use RDAP over TLS (RFC
> 7481) instead of WHOIS, and of course since the near past, DNSSEC :)
> 
> I'm not sure how useful it would be to have multiple network points for
> ACME testing - it will just lead to the attackers doing more then one
> BGP hijack at once. In the end, that's a numbers game with a bunch of
> race conditions. But hey, it might lead to actual BGP security getting
> deployed :)
> 
> Paul

I agree moving away from WHOIS to RDAP over TLS is good low-hanging-fruit 
mitigation once it is viable.

Having been responsible for a very popular/mainstream DNS server and having 
worked on implementing/deploying DNSSEC in enterprises, I am of the opinion 
that this is a lost cause, and I do not have the patience or energy to engage 
in all the reasons why it is not a viable solution.

As for multi-perspective domain control validation and the idea that an 
attacker who can attack one perspective can attack all perspectives: that may 
be true, but the larger your quorum set is, the harder that becomes. The goal 
of making it impossible to cheat is not realistic; the goal is to raise the 
bar so that cheating is meaningfully harder.
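The quorum idea can be illustrated in a few lines (my own sketch; real multi-perspective deployments have considerably more nuance around vantage-point diversity and retry policy):

```python
from collections import Counter
from typing import List, Optional

def quorum_result(answers: List[str], quorum: int) -> Optional[str]:
    """Return the validation answer agreed on by at least `quorum`
    independent network perspectives, or None when no quorum is reached."""
    if not answers:
        return None
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= quorum else None
```

An attacker who hijacks the path seen from one vantage point corrupts one answer; to win they must now subvert routing toward enough topologically diverse perspectives to reach quorum simultaneously.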

Ryan



Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-25 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 25, 2018 at 1:28:43 PM UTC+2, Buschart, Rufus wrote:
> Hi Ryan!
> 
> The "multiple perspective validations" is an interesting idea. Did you think 
> about combining it with CAA checking? I could imagine having a new tag, e.g. 
> "allowedMethods", in which the legitimate owner of  a domain can specify the 
> set of allowed methods to validate his domain. As an example the value 
> "(3.2.2.4.1 AND 3.2.2.4.5) OR 3.2.2.4.9" in the new "allowedMethods" tag 
> could mean, that a certificate may only be issued, if two validations acc. 
> 3.2.2.4.1 and 3.2.2.4.1 were successful or if one validation acc. 3.2.2.4.9 
> was successful. Any other method of validation would be not allowed. I see 
> here the benefit, that the owner of a domain can choose how to verify 
> according his business needs and select the appropriate level of security for 
> his domains.
> 
> With best regards,
> Rufus Buschart
> 

Multiple perspectives is useful when relying on any insecure third-party 
resource; for example DNS or Whois. 

This is different than requiring multiple validations of different types; an 
attacker that is able to manipulate the DNS validation at the IP layer is also 
likely going to be able to do the same for HTTP and Whois.

Ryan


Re: Regional BGP hijack of Amazon DNS infrastructure

2018-04-25 Thread Ryan Hurst via dev-security-policy
On Tuesday, April 24, 2018 at 5:29:05 PM UTC+2, Matthew Hardeman wrote:
> This story is still breaking, but early indications are that:
> 
> 1.  An attacker at AS10297 (or a customer thereof) announced several more
> specific subsets of some Amazon DNS infrastructure prefixes:
> 
> 205.251.192-.195.0/24 205.251.197.0/24 205.251.199.0/24
> 
> 2.  It appears that AS10297 via peering arrangement with Google got
> Google's infrastructure to buy (accept) the hijacked advertisements.
> 
> 3.  It has been suggested that at least one of the any cast 8.8.8.8
> resolvers performed resolutions of some zones via the hijacked targets.
> 
> It seems prudent for CAs to look into this deeper and scrutinize any domain
> validations reliant in DNS from any of those ranges this morning.

This is an example of why all CAs should either already be doing 
multi-perspective domain control validation or be working towards it in the 
very near future.

These types of attacks are far from new; we had discussions about them back in 
the early 2000s while at Microsoft, and I know we were not the only ones. One 
of the earlier papers I recall discussing this topic was from the late-2008 
timeframe, from CMU: https://www.cs.cmu.edu/~dga/papers/perspectives-usenix2008/

The most recent work on this I am aware of is the Princeton paper from last 
year: http://www.cs.princeton.edu/~jrex/papers/bamboozle18.pdf

As the approved validation mechanisms are cleaned up, and hopefully reduced to 
a limited few with known security properties, the natural next step is to 
require those that utilize these methods to also use multiple-perspective 
validation to mitigate this class of risk.

Ryan Hurst (personal)


Re: Sigh. stripe.ian.sh back with EV certificate for Stripe, Inc of Kentucky....

2018-04-13 Thread Ryan Hurst via dev-security-policy
On Friday, April 13, 2018 at 2:15:47 PM UTC-7, Matthew Hardeman wrote:
As a parent it is not uncommon for me to have to explain to my children that 
something they ask for is not reasonable. In some cases I joke and say things 
like “well I want a pony” or “and I wish water wasn't wet”.

When I look at arguments that support the idea of name squatting on the 
internet and trying to solve that problem via the WebPKI, I immediately think 
of these conversations with my kids.

The topic of trademark rights has numerous professions dedicated to it, 
combined with both international and domestic laws that define the rights, 
obligations, and dispute-resolution processes that claims associated with 
trademarks must use. I do not see how it would be effective or reasonable to 
place CAs as the arbitrators of this. Instead, should there be a trademark 
violation, the existing legal system seems the appropriate way to address 
such concerns.

If we accept that, which seems reasonable to me, then the question becomes: in 
the event of a trademark dispute, where should remediation happen? Since the 
CA is not the owner of the trademark or responsible for the registration of 
the name, it seems misplaced to think it should be the initiator of this 
process. Additionally, it seems wrong that a CA would even be the first place 
you would go for trademark enforcement; the registration of the name happens 
at the DNS layer, and revoking the certificate does not change the fact that 
the domain is still out there.

To that end, ICANN actually has specific policies and procedures on how that 
process is supposed to work (see: 
https://www.icann.org/resources/pages/dispute-resolution-2012-02-25-en). The 
WebPKI ecosystem does not; it is, as has been discussed in this thread, 
effectively acting arbitrarily when revoking for trademark infringement.

Based on the above, it seems clear to me that the only potentially reasonable 
situation in which a CA should revoke is on the basis of the outcome of a 
trademark claim made through the aforementioned processes.

To the topic of revoking a certificate because it is "deceiving": this idea 
sounds a lot like book burning to me 
(https://www.ushmm.org/wlc/en/article.php?ModuleId=10005852).

```
Book burning refers to the ritual destruction by fire of books or other written 
materials. Usually carried out in a public context, the burning of books 
represents an element of censorship and usually proceeds from a cultural, 
religious, or political opposition to the materials in question.
```

This is a great example of that: what we have here is a legitimate business 
publishing information into the public domain that some people find offensive. 
Those people happen to control the doors to the library and have used that 
fact to censor that information so others cannot access it.

As a technologist who has spent a good chunk of his career working to secure 
the internet and make it more accessible, this gives me great pause; if you 
don't come to the same conclusion, I suggest you take a few minutes to look at 
how many CAs are operated by, or in, countries with a bad history of freedom 
of speech.

I strongly hope that Mozilla and the other browsers take a hard look at how 
CAs are expected to handle cases like this. The current situation may have 
been acceptable 10 years ago, but as we approach 100% encryption on the web, 
do we really want the WebPKI to be used as a censorship tool?

Ryan Hurst
(Speaking as an individual)


Re: Sigh. stripe.ian.sh back with EV certificate for Stripe, Inc of Kentucky....

2018-04-13 Thread Ryan Hurst via dev-security-policy
On Thursday, April 12, 2018 at 5:39:39 PM UTC-7, Tim Hollebeek wrote:
> > Independent of EV, the BRs require that a CA maintain a High Risk
> > Certificate Request policy such that certificate requests are scrubbed
> > against an internal database or other resources of the CAs discretion.
> 
> Unless you're Let's Encrypt, in which case you can opt out of this
> requirement via a blog post.
> 
> -Tim

As you know, that is not what that post says, nor does it reflect what Let's 
Encrypt does.

The BRs define the High Risk Certificate Request as:

```
High Risk Certificate Request: A Request that the CA flags for additional 
scrutiny by reference to internal criteria and databases maintained by the CA, 
which may include names at higher risk for phishing or other fraudulent usage, 
names contained in previously rejected certificate requests or revoked 
Certificates, names listed on the Miller Smiles phishing list or the Google 
Safe Browsing list, or names that the CA identifies using its own 
risk-mitigation criteria.
```

It also explicitly allows for phishing lists, such as the Google Safe Browsing 
list to be used.

The blog post in question 
(https://letsencrypt.org/2015/10/29/phishing-and-malware.html) states that 
Let's Encrypt (rightfully in my mind) believes that CAs are not the right place 
to try to protect users from Phishing. They state this for a variety of 
reasons, including one brought up in this thread about making CAs censors on 
the web.

They go on to state that, despite their view that CAs are not the right place 
to solve this problem:

```
At least for the time being, Let’s Encrypt is going to check with the Google 
Safe Browsing API before issuing certificates, and refuse to issue to sites 
that are flagged as phishing or malware sites. Google’s API is the best source 
of phishing and malware status information that we have access to, and 
attempting to do more than query this API before issuance would almost 
certainly be wasteful and ineffective.
```

They have also publicly stated that they maintain a blacklist of domains they 
will not issue for.
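A pre-issuance gate along the lines described might look like the following sketch (the names and the lookup hook are mine for illustration; Let's Encrypt's actual implementation differs):

```python
from typing import Callable, Set

def may_issue(domain: str,
              blocked: Set[str],
              flagged_by_safe_browsing: Callable[[str], bool]) -> bool:
    """Refuse issuance for blocklisted domains or for domains flagged as
    phishing/malware by a source such as the Safe Browsing API."""
    if domain in blocked:
        return False
    return not flagged_by_safe_browsing(domain)
```

In production the Safe Browsing check would be a network call made at issuance time, with the CA deciding how to fail (open or closed) when the lookup is unavailable.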

Ryan Hurst
(speaking for myself, not Google or Let's Encrypt)


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-05 Thread Ryan Hurst via dev-security-policy
On Thursday, April 5, 2018 at 9:55:39 AM UTC-7, Wayne Thayer wrote:
> On Thu, Apr 5, 2018 at 3:15 AM, Dimitris Zacharopoulos 
> wrote:
> 
> > My proposal is "CAs MUST NOT distribute or transfer private keys and
> > associated certificates in PKCS#12 form through insecure physical or
> > electronic channels " and remove the rest.
> >
> > +1 - I support this proposal.

That seems an appropriate level of detail for policy. +1


Re: Policy 2.6 Proposal: Require English Language Audit Reports

2018-04-04 Thread Ryan Hurst via dev-security-policy

> An authoritative English language version of the publicly-available audit
> information MUST be supplied by the Auditor.
> 
> it would be helpful for auditors that issue report in languages other than
> English to confirm that this won't create any issues.

That would address my concern.


Re: FW: Complying with Mozilla policy on email validation

2018-04-04 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 4, 2018 at 3:39:46 PM UTC-7, Wayne Thayer wrote:
> On Wed, Apr 4, 2018 at 2:44 PM, Ryan Hurst via dev-security-policy <
> > My opinion on this method and on Adrian's comments is that the CA/Browser
> Forum, with it's new-found ability to create an S/MIME Working Group, is a
> better venue for formulating secure email validation methods. Does it make
> sense for us to define more specific email validation methods in this forum
> when it's likely the CA/Browser Forum will do the same in the next year or
> two?

I understand that position, and maybe this is acceptable, but I believe the 
removal of "business controls" (which, to be clear, I like) prohibits this 
practice even when it is reasonable and even desirable.

I was thinking that until an S/MIME policy is established, some accommodation 
of federated login in the Mozilla policy, accompanying the removal of 
"business controls", would address that.


Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-04 Thread Ryan Hurst via dev-security-policy
Some thoughts:

1 - Should additional text be included to mandate strong cipher suites 
(http://unmitigatedrisk.com/?p=543)? It is not uncommon for me to find PKCS#12 
files protected with very weak cryptographic algorithms. Such guidance would be 
limited by Windows, which does not support modern cryptographic algorithms for 
key protection, but having some standard would be better than none, though it 
could hurt interoperability for those use cases if the chosen suites were not 
uniform.

2 - Should additional text be included to mandate that CA resellers cannot be 
used as an escape hatch from this requirement? E.g., today a CA may simply rely 
on a third party to implement this practice to stay in conformance with the 
policy.

3 - Should additional text be included to require that the user provide part or 
all of the secret used as the "password" on the PKCS#12 file, and that the CA 
cannot store the user-provided value?


Re: Policy 2.6 Proposal: Require English Language Audit Reports

2018-04-04 Thread Ryan Hurst via dev-security-policy
On Wednesday, April 4, 2018 at 1:58:35 PM UTC-7, Wayne Thayer wrote:
> Mozilla needs to be able to read audit reports in the English language
> without relying on machine translations that may be inaccurate or
> misleading.
> 
> I suggest adding the following sentence to the end of policy section 3.1.4
> “Public Audit Information”:
> 
> An English language version of the publicly-available audit information
> MUST be supplied by the Auditor.
> 
> This is: https://github.com/mozilla/pkipolicy/issues/106
> 
> ---
> 
> This is a proposed update to Mozilla's root store policy for version
> 2.6. Please keep discussion in this group rather than on GitHub. Silence
> is consent.
> 
> Policy 2.5 (current version):
> https://github.com/mozilla/pkipolicy/blob/2.5/rootstore/policy.md

Should the text require the English version to be the authoritative version?


Re: FW: Complying with Mozilla policy on email validation

2018-04-04 Thread Ryan Hurst via dev-security-policy
On Tuesday, April 3, 2018 at 1:17:50 PM UTC-7, Wayne Thayer wrote:
> > I agree that name constraints would be difficult to implement in this
> scenario, but I'm less convinced that section 2.2(2) doesn't permit this.
> It says:
> 
> 
> *For a certificate capable of being used for digitally signing or
> encrypting email messages, the CA takes reasonable measures to verify that
> the entity submitting the request controls the email account associated
> with the email address referenced in the certificate or has been authorized
> by the email account holder to act on the account holder’s behalf.*

I can see that covering it. Maybe this could be provided as an explicit example 
of how that might happen?

> > Another case I think is interesting is that of delegation of email
> > verification to a third party. For example, when you do an OAuth
> > authentication to Facebook it will return the user’s email address if it
> > has been verified. The same is true for a number of related scenarios; for
> > example, you can tell via Live Authentication and Google Authentication if
> > the user's email was verified.
> >
> > The business controls text plausibly would have allowed this use case also.
> >
> > I'm not a fan of expanding the scope of such a vague requirement as
> "business controls", and I'd prefer to have the CA/Browser Forum define
> more specific validation methods, but if section 2.2(2) of our current
> policy is too limiting, we can consider changing it to accommodate this use
> case.

I dislike "business controls" too; however, the large majority of 
authentication on the web happens via OAuth, and federated user authentication 
is not going away.

It seems broken to have a policy that prohibits this in the case of secure 
email or other related use cases of these certificates.

Maybe this can be addressed through an explicit carve-out for federated 
authentication systems that provide reliable verification of control of an 
email address.

Alternatively, maybe Mozilla should maintain a list of common providers for 
which this is allowable (Google, Microsoft, Facebook, and Twitter, for 
example).


Re: FW: Complying with Mozilla policy on email validation

2018-04-03 Thread Ryan Hurst via dev-security-policy
On Monday, April 2, 2018 at 1:10:13 PM UTC-7, Wayne Thayer wrote:
> I'm forwarding this for Tim because the list rejected it as SPAM.
> 
> 
> 
> *From:* Tim Hollebeek
> *Sent:* Monday, April 2, 2018 2:22 PM
> *To:* 'mozilla-dev-security-policy' <mozilla-dev-security-policy@
> lists.mozilla.org>
> *Subject:* Complying with Mozilla policy on email validation
> 
> 
> 
> 
> 
> Mozilla policy currently has the following to say about validation of email
> addresses in certificates:
> 
> 
> 
> “For a certificate capable of being used for digitally signing or
> encrypting email messages, the CA takes reasonable measures to verify that
> the entity submitting the request controls the email account associated
> with the email address referenced in the certificate or has been authorized
> by the email account holder to act on the account holder’s behalf.”
> 
> 
> 
> “If the certificate includes the id-kp-emailProtection extended key usage,
> then all end-entity certificates MUST only include e-mail addresses or
> mailboxes that the issuing CA has confirmed (via technical and/or business
> controls) that the subordinate CA is authorized to use.”
> 
> 
> 
> “Before being included and periodically thereafter, CAs MUST obtain certain
> audits for their root certificates and all of their intermediate
> certificates that are not technically constrained to prevent issuance of
> working server or email certificates.”
> 
> 
> 
> (Nit: Mozilla policy is inconsistent in its usage of email vs e-mail.  I’d
> fix the one hyphenated reference)
> 
> 
> 
> This is basically method 1 for email certificates, right?  Is it true that
> Mozilla policy today allows “business controls” to be used for validating
> email addresses, which can essentially be almost anything, as long as it is
> audited?
> 
> 
> 
> (I’m not talking about what the rules SHOULD be, just what they are.  What
> they should be is a discussion we should have in a newly created CA/* SMIME
> WG)
> 
> 
> 
> -Tim

Reading this thread, I think the current text, based on the interpretation 
discussed, does not accommodate a few cases that I believe are useful.

For example, consider a CA supporting a large mail provider in providing 
S/MIME certificates to all of its customers. In this model, the mail provider 
is the authoritative namespace owner.

In the context of mail, you can imagine gmail.com or peculiarventures.com as 
examples; both are Gmail (as determined by MX records). It seems reasonable to 
me (speaking as Ryan and not Google here) to allow a mail provider to leverage 
this internet reality (expressed via MX records) and work with a CA to get 
S/MIME certificates for all of its customers without forcing them through an 
email challenge.

In this scenario, you could not rely on name constraints because the onboarding 
of custom domains (like peculiarventures.com) happens in real time as part of 
account creation. The prior "business controls" text seemed to allow this case, 
but the interpretation discussed here would prohibit it.
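The MX-based delegation described above amounts to a suffix check of a domain's 
MX targets against the provider's published mail hosts. A sketch; the host 
names are illustrative, and the live DNS lookup is shown only as a comment:

```shell
# Succeeds when an MX target matches the provider's mail hosts;
# here a single Google MX target is tested.
is_provider_hosted() {
  printf '%s\n' "$1" | grep -qiE '(^|\.)aspmx\.l\.google\.com\.?$'
}

# In production the targets would come from DNS, e.g.:
#   mx=$(dig +short MX peculiarventures.com | awk '{print $2}')
mx='aspmx.l.google.com.'
if is_provider_hosted "$mx"; then
  echo "mail hosted by provider: eligible for delegated issuance"
fi
```

A real policy would also need to address MX-record churn, i.e. re-checking the 
delegation periodically rather than only at onboarding.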


Another case I think is interesting is delegation of email verification to a 
third party. For example, when you do an OAuth authentication to Facebook, it 
will return the user’s email address if it has been verified. The same is true 
for a number of related scenarios; for example, you can tell via Live 
Authentication and Google Authentication whether the user's email was 
verified.

The business controls text plausibly would have allowed this use case also.

I think a policy that does not allow a CA to support these use cases would 
severely limit where S/MIME could be used, and I would like to see them 
considered.

Ryan Hurst


Re: Following up on Trustico: reseller practices and accountability

2018-03-05 Thread Ryan Hurst via dev-security-policy
On Monday, March 5, 2018 at 11:38:31 AM UTC-8, Ryan Sleevi wrote:
> While these are interesting questions, I think it gets to the heart of
> policy questions, which is how is policy maintained and enforced. Today,
> there’s only one method - distrust.
> 
> So are you suggesting the CA should be distrusted if these “other parties”
> (which may have no observable relationship with the CA) don’t adhere to
> this policy? Are you suggesting the certificates these “other parties” are
> involved with get distrusted?  Or something else?
> 
> Because without teeth, the policy suggestions themselves are hollow.

That is a very valid point. 

Well, since I do not have a concrete proposal, it is hard to say at this point 
whether a CA should be kicked out for non-conformance with a given criterion. 
With that said, today there are over 20 SHOULDs in the BRs, and I can imagine 
failure to meet them being considered in aggregate when looking at a distrust 
event.

If nothing else addressing any potential ambiguity would be useful.

> 
> I disagree on that venue suggestion, since here we can actually have
> widespread public participation. I would also suggest that Section 1.3 of
> the Bylaws would no doubt be something constantly having to be pointed out
> in such discussions.
> 

Fair enough. As I am on a plane to the CA/Browser Forum event, perhaps that is 
why I had that venue on my mind; I agree this is a fine venue for this 
discussion.


Re: Following up on Trustico: reseller practices and accountability

2018-03-05 Thread Ryan Hurst via dev-security-policy
I agree with Sleevi on this; the real question of what can and should be done 
here depends on whose agent the reseller is and what role they play in the 
overall ecosystem.

While it is easy to say that resellers are pure marketers with no vested 
interest in security outcomes, and there is some truth to this, the reality is 
far more complex. For one, there is no one-size-fits-all definition of a 
reseller; for example:

- Hosting “reseller” - As a hosting provider, for example one that utilizes 
CPANEL, you may be responsible for enrolling for certificates and generating 
keys for users as well as managing the lifecycle, you are clearly acting “as a 
reseller” if you are selling “a certificate” but you are also acting as a 
delegate of the user if you are configuring and managing SSL for them.
- SaaS “reseller” - As a SaaS provider, for example one that hosts Wordpress, 
you may be responsible for enrolling for certificates and generating keys for 
users as well as managing the lifecycle, you are clearly acting “as a reseller” 
if you are selling “a certificate” but you are also acting as a delegate of the 
user if you are configuring and managing SSL for them.
- Marketing “resellers” - As a pure reseller, for example one that offers 
regional sales and marketing support, you are again clearly acting as a 
delegate of the CA by providing marketing and sales support for a vertical, 
region, or market segment, but you could very well be providing value-added 
services to the user (such as simplifying enrollment and/or SSL configuration) 
and as such are again a delegate of both parties.

As I look at this non-exhaustive list, it seems to me the difference between 
the reseller and the more typical SaaS service provider, where SSL is possibly 
a paid feature, is the sale of a certificate.

With that said, since there are so many different types of “other parties” it 
is probably better to avoid discussing resellers directly and focus on 
responsibilities of “other parties” instead.

For example, today the BRs require that CAs and RAs:
- Obtain consent for subscriber key archival (section 6.1.2),
- Encrypt the subscriber's private key in transport (section 6.1.2).

In no particular order here are some questions I have for myself on this topic:

- Should we provide a definition of “other parties”, and “reseller” and make 
sure they are clear so responsibilities of parties are unambiguous?
- In the BRs we currently say “Parties other than the Subscriber SHALL NOT 
archive the Subscriber Private Key” (in Section 6.1.2); should we also require 
CAs to demonstrate that they have communicated this requirement to the other 
party and obtained affirmative acknowledgement from that “other party” during 
their audits?
- The BRs currently state subscriber authorization is required for archival, 
but there is no text covering what minimal level of authorization must be 
expressed. I would have thought this unnecessary, but Trustico has been arguing 
that users should have implicitly known about this practice even though it was 
not disclosed and there was no explicit consent for archival. While I think 
that is an irresponsible position, the text could be made clearer.
- The current BR text talks about RAs generating keys on behalf of the 
subscriber (section 6.1.2), but it says nothing about other parties.
- Should the BRs be revised to require CAs to have the “other parties” publicly 
disclose whether they generate keys for users, how they protect those keys at 
rest and in transport, and how they capture consent for these practices, if at 
all?
- Though key archival by RAs and CAs is allowed in the BRs (section 6.1.2), 
they do not require keys to be encrypted while in archive. Should this be 
changed? At the same time, should we mandate some minimal level of protection 
that would prevent all user keys from being accessed without user consent, as 
happened here?
- Today the BRs discuss private key archival inconsistently; for example, 
section 6.2.5 talks about CA key archival but not subscriber key archival. 
Should we fix this?
- Should we formalize a proof-of-possession mechanism, such as what is done in 
ACME, as an alternative to sharing the actual key, to encourage that approach 
over distribution of the key itself?
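On the proof-of-possession point: even outside ACME, a CSR's self-signature 
already proves possession of the key, so the key itself never needs to leave 
the subscriber. A sketch with illustrative names:

```shell
# Subscriber side: generate a key and a CSR signed with it.
cd "$(mktemp -d)"
openssl req -new -newkey rsa:2048 -nodes -keyout subscriber.key \
  -out subscriber.csr -subj "/CN=pop.example.test"

# CA/reseller side: verify the CSR's self-signature against the
# embedded public key -- possession is proven without ever seeing
# the private key.
openssl req -in subscriber.csr -verify -noout
```

ACME's challenge mechanisms add freshness guarantees on top of this, but even 
the plain CSR check removes any need for a reseller to handle raw keys.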

One thing for us to keep in mind while looking at these issues is that we are 
moving to a world where SSL is the default, and for that to be true, 
permissionless SSL deployment (i.e., automation) is necessary.

This discussion is probably better for the CABFORUM public list but since the 
thread started here I thought it best to share my thoughts here.

Ryan Hurst
Google Trust Services


Re: Mozilla’s Plan for Symantec Roots

2018-03-01 Thread Ryan Hurst via dev-security-policy
> >
> > Google requests that certain subCA SPKIs are whitelisted, to ensure
> > continued trust of Symantec-issued certificates that are used by
> > infrastructure that is operated by Google.
> >
> > Is whitelisting the SPKI found in the Google subCA sufficient to achieve
> > the need of trusting Google's server infrastructure?

Kai,

I will do my best to answer this question.

Alphabet has a policy that all of its companies should be getting certificates 
from the Google PKI infrastructure. Right now in the context of certificate 
chains you see that manifested as certificates issued under GIAG2 and GIAG3.

We are actively migrating from GIAG2 (issued under a Symantec owned Root) to 
GIAG3 (issued under a root we own and operate). This transition will be 
complete in August 2018.

Given the size and nature of the Google organization, other CAs are sometimes 
used: by accident because a team did not know better, because the organization 
is part of an acquisition that is not yet integrated, or because some 
exceptional requirement or situation necessitates it.

For this, and other reasons, we tell partners that we reserve the right to use 
other roots should the need arise, and we publish a list of root certificates 
we may use (https://pki.goog/faq.html; see "what roots to trust").

With that background, nearly all certificates for Alphabet (and Google) 
properties will be issued by a Google-operated CA.

In the context of the whitelist, we believe the SPKI approach should be 
sufficient for those applications that also need to whitelist the associated 
CA(s).

I am also not aware of any Alphabet properties utilizing DigiCert's Managed 
Partner Infrastructure (beyond one subCA they operate that is not in use).

In summary, while an SPKI whitelist should work for the current situation, 
applications communicating with Alphabet properties should still trust (and 
periodically update to) the more complete list of roots in the FAQ.

Ryan Hurst
Google


Re: Deadline for whitelisting of the Apple/Google subCAs issued by Symantec?

2018-03-01 Thread Ryan Hurst via dev-security-policy
On Thursday, March 1, 2018 at 7:15:52 AM UTC-8, Kai Engert wrote:

> Are the owners of the Apple and Google subCAs able to announce a date,
> after which they will no longer require their Symantec-issued subCAs to
> be whitelisted?

Kai,

We are actively migrating to the Google Trust Services operated root 
certificates and while we would love to provide a concrete date the nature of 
these sorts of deployments makes that hard to provide.

What I can say is that our plan is to be migrated off by the time the Equifax 
root expires August 22nd 2018.

Ryan Hurst
Google



Re: Allowing WebExtensions to Override Certificate Trust Decisions

2018-02-28 Thread Ryan Hurst via dev-security-policy
On Wednesday, February 28, 2018 at 10:42:25 AM UTC-8, Alex Gaynor wrote:
> If the "fail verification only" option is not viable, I personally think we
> shouldn't expose this to extensions.
> 

I agree, there are far too many ways this will be abused and the cases in which 
it would be useful are not worth the negative consequences to the average 
browser user, at least in my opinion.

Ryan Hurst


Re: How do you handle mass revocation requests?

2018-02-28 Thread Ryan Hurst via dev-security-policy
On Wednesday, February 28, 2018 at 11:56:04 AM UTC-8, Ryan Sleevi wrote:
> Assuming Trustico sent the keys to DigiCert, it definitely sounds like even
> if Trustico was authorized to hold the keys (which is a troubling argument,
> given all things), they themselves compromised the keys of their customers,
> and revocation is both correct and necessary. That is, whether or not
> Trustico believed they were compromised before, they compromised their
> customers keys by sending them, and it's both correct and accurate to
> notify the Subscribers that their keys have been compromised by their
> Reseller.

That seems to be the case to me as well.

It also seems that this situation should lead the UAs and/or the CA/Browser 
Forum to revisit section 6.1.2 of the BRs 
(https://github.com/cabforum/documents/blob/master/docs/BR.md).

Specifically, this section states:

```
Parties other than the Subscriber SHALL NOT archive the Subscriber Private Key 
without authorization by the Subscriber.

If the CA or any of its designated RAs generated the Private Key on behalf of 
the Subscriber, then the CA SHALL encrypt the Private Key for transport to the 
Subscriber.
```

In this case, Trustico is not the subscriber, and there is no indication in 
their terms and conditions 
(https://www.trustico.com/terms/terms-and-conditions.php) that they are 
authorized to archive the private key. Yet clearly, if they were able to 
provide 20k+ private keys to DigiCert, they are archiving them. This text seems 
to cover this case, but as worded I do not see how audits would catch this 
behavior. I think it may make sense for CAs to be responsible for demonstrating 
how they, and other non-subscribers in the lifecycle, handle this case.

Additionally, if the private keys were provided to DigiCert in a form that was 
verifiable by them, they may have been stored unencrypted; at a minimum, they 
were likely not generated and protected on an HSM. The BRs should probably be 
revised to specify some minimum level of security for these cases, or to 
disallow them altogether.
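Where a party does generate the key, the BRs' transport-encryption requirement 
can be met by wrapping the key as encrypted PKCS#8 under a passphrase delivered 
out of band. A sketch; the file names and passphrase are illustrative:

```shell
cd "$(mktemp -d)"
# Key generation on behalf of the subscriber (the practice under discussion).
openssl genrsa -out subscriber.key 2048

# Encrypt for transport with PBES2/AES-256-CBC; the passphrase must travel
# over a different channel than the encrypted file itself.
openssl pkcs8 -topk8 -in subscriber.key -out subscriber.p8.enc \
  -v2 aes-256-cbc -passout pass:out-of-band-secret
```

Note that this only protects the key in transit; it does nothing about the 
archival and consent questions raised above.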

Finally, the associated text speaks to RAs but not to the non-subscriber 
(reseller) case; at a minimum, this gap should be addressed.


Re: Google OCSP service down

2018-02-25 Thread Ryan Hurst via dev-security-policy
Tim,

I can see value in a ballot on how to clarify incident reporting and other
contact related issues, right now 1.5.2 is pretty sparse in regards to how
to handle this. I would be happy to work with you on a proposal here.

Ryan

On Sun, Feb 25, 2018 at 6:41 AM, Tim Hollebeek <tim.holleb...@digicert.com>
wrote:

> Ryan,
>
> Wayne and I have been discussing making various improvements to 1.5.2
> mandatory for all CAs.  I've made a few improvements to DigiCert's CPSs in
> this area, but things probably still could be better.  There will probably
> be
> a CA/B ballot in this area soon.
>
> DigiCert's 1.5.2 has our support email address, and our Certificate Problem
> Report email (which I recently added).  That doesn't really cover
> everything
> (yet).
>
> It looks like GTS 1.5.2 splits things into security (including CPRs),
> non-security
> requests.
>
> I didn't chase down any other 1.5.2's yet, but it'd be interesting to hear
> what
> other CAs have here.  I suspect most only have one address for everything.
>
> Something to keep in mind once the CA/B thread shows up.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Ryan
> > Hurst via dev-security-policy
> > Sent: Wednesday, February 21, 2018 9:53 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: Google OCSP service down
> >
> > I wanted to follow up with our findings and a summary of this issue for
> the
> > community.
> >
> > Below you will see details on what happened and how we resolved the
> > issue; hopefully this will help explain what happened and help others
> > avoid a similar issue.
> >
> > Summary
> > ---
> > January 19th, at 08:40 UTC, a code push to improve OCSP generation for a
> > subset of the Google operated Certificate Authorities was initiated. The
> change
> > was related to the packaging of generated OCSP responses. The first time
> this
> > change was invoked in production was January 19th at 16:40 UTC.
> >
> > NOTE: The publication of new revocation information to all geographies
> can
> > take up to 6 hours to propagate. Additionally, clients and middle-boxes
> > commonly implement caching behavior. This results in a large window where
> > clients may have begun to observe the outage.
> >
> > NOTE: Most modern web browsers “soft-fail” in response to OCSP server
> > availability issues, masking outages. Firefox, however, supports an
> advanced
> > option that allows users to opt-in to “hard-fail” behavior for revocation
> > checking. An unknown percentage of Firefox users enable this setting. We
> > believe most users who were impacted by the outage were these Firefox
> users.
> >
> > About 9 hours after the deployment of the change began (2018-01-20 01:36
> > UTC) a user on Twitter mentions that they were having problems with their
> > hard-fail OCSP checking configuration in Firefox when visiting Google
> > properties. This tweet and the few that followed during the outage
> period were
> > not noticed by any Google employees until after the incident’s
> post-mortem
> > investigation had begun.
> >
> > About 1 day and 22 hours after the push was initiated (2018-01-21 15:07
> UTC),
> > a user posted a message to the mozilla.dev.security.policy mailing list
> where
> > they mention they too are having problems with their hard-fail
> configuration in
> > Firefox when visiting Google properties.
> >
> > About two days after the push was initiated, a Google employee
> discovered the
> > post and opened a ticket (2018-01-21 16:10 UTC). This triggered the
> > remediation procedures, which began in under an hour.
> >
> > The issue was resolved about 2 days and 6 hours from the time it was
> > introduced (2018-01-21 22:56 UTC). Once Google became aware of the
> issue, it
> > took 1 hour and 55 minutes to resolve the issue, and an additional 4
> hours and
> > 51 minutes for the fix to be completely deployed.
> >
> > No customer reports regarding this issue were sent to the notification
> > addresses listed in Google's CPSs or on the repository websites for the
> duration
> > of the outage. This extended the duration of the outage.
> >
> > Background
> > --
> > Google's OCSP Infrastructure works by generating OCSP responses in
> batches,
> > with each batch being made up of the certificates issued by an
> individual CA.
> >
> > In the case of GI

Re: Google OCSP service down

2018-02-21 Thread Ryan Hurst via dev-security-policy
I wanted to follow up with our findings and a summary of this issue for the 
community. 

Below you will see details on what happened and how we resolved the issue; 
hopefully this will help explain what happened and help others avoid a similar 
issue.

Summary
---
January 19th, at 08:40 UTC, a code push to improve OCSP generation for a subset 
of the Google operated Certificate Authorities was initiated. The change was 
related to the packaging of generated OCSP responses. The first time this 
change was invoked in production was January 19th at 16:40 UTC. 

NOTE: The publication of new revocation information to all geographies can take 
up to 6 hours to propagate. Additionally, clients and middle-boxes commonly 
implement caching behavior. This results in a large window where clients may 
have begun to observe the outage.

NOTE: Most modern web browsers “soft-fail” in response to OCSP server 
availability issues, masking outages. Firefox, however, supports an advanced 
option that allows users to opt-in to “hard-fail” behavior for revocation 
checking. An unknown percentage of Firefox users enable this setting. We 
believe most users who were impacted by the outage were these Firefox users.
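For reference, the hard-fail behavior referred to here is controlled by Firefox 
about:config preferences (names as of Firefox at the time of this incident):

```
security.OCSP.enabled    1      query OCSP responders (default)
security.OCSP.require    true   hard-fail: treat a missing/unreachable
                                OCSP response as a fatal error
```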

About 9 hours after the deployment of the change began (2018-01-20 01:36 UTC) a 
user on Twitter mentions that they were having problems with their hard-fail 
OCSP checking configuration in Firefox when visiting Google properties. This 
tweet and the few that followed during the outage period were not noticed by 
any Google employees until after the incident’s post-mortem investigation had 
begun. 

About 1 day and 22 hours after the push was initiated (2018-01-21 15:07 UTC), a 
user posted a message to the mozilla.dev.security.policy mailing list where 
they mention they too are having problems with their hard-fail configuration in 
Firefox when visiting Google properties.

About two days after the push was initiated, a Google employee discovered the 
post and opened a ticket (2018-01-21 16:10 UTC). This triggered the remediation 
procedures, which began in under an hour.

The issue was resolved about 2 days and 6 hours from the time it was introduced 
(2018-01-21 22:56 UTC). Once Google became aware of the issue, it took 1 hour 
and 55 minutes to resolve the issue, and an additional 4 hours and 51 minutes 
for the fix to be completely deployed.

No customer reports regarding this issue were sent to the notification 
addresses listed in Google's CPSs or on the repository websites for the 
duration of the outage. This extended the duration of the outage. 

Background
--
Google's OCSP Infrastructure works by generating OCSP responses in batches, 
with each batch being made up of the certificates issued by an individual CA.

In the case of GIAG2, this batch is produced in chunks of certificates issued 
in the last 370 days. For each chunk, the GIAG2 CA is asked to produce the 
corresponding OCSP responses, the results of which are placed into a separate 
.tar file.

The issuer of GIAG2 has chosen to issue new certificates to GIAG2 periodically, 
as a result GIAG2 has multiple certificates. Two of these certificates no 
longer have unexpired certificates associated with them. As a result, and as 
expected, the CA does not produce responses for the corresponding periods.

All .tar files produced during this process are then concatenated with the 
-concatenate command in GNU tar. This produces a single .tar file containing 
all of the OCSP responses for the given Certificate Authority, then this .tar 
file is distributed to our global CDN infrastructure for serving.

A change was made in how we batch these responses: specifically, instead of 
outputting many .tar files within a batch, a single concatenation of all tar 
files was produced.

The change in question triggered an unexpected behaviour in GNU tar, which 
manifested as an empty tarball. These "empty" updates were distributed to our 
global CDN, effectively dropping some responses while continuing to serve 
responses for other CAs.

During testing of the change, this behaviour was not detected, as the tests did 
not cover the scenario in which some chunks did not contain unexpired 
certificates.
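The exact failing invocation isn't shown above, but the end-of-archive handling 
that makes tar concatenation error-prone is easy to demonstrate. A sketch 
assuming GNU tar, with illustrative file names (not necessarily the exact bug 
encountered here):

```shell
# Every tar archive ends in zero-filled end-of-archive blocks. Joining
# archives with plain cat leaves the first terminator in place, so a
# default read stops early; GNU tar's --concatenate (-A) strips it.
cd "$(mktemp -d)"
echo a > a.txt
echo b > b.txt
tar -cf a.tar a.txt
tar -cf b.tar b.txt

cat a.tar b.tar > joined.tar   # naive join: b.txt becomes invisible
tar -tf joined.tar             # lists only a.txt (without --ignore-zeros)

tar -cf proper.tar a.txt
tar -Af proper.tar b.tar       # --concatenate strips the terminator
tar -tf proper.tar             # lists both members
```

Tests that only checked non-empty inputs would miss edge cases like the 
all-expired chunks described above, which is exactly the gap noted in the 
post-mortem.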

Findings

- The outage only impacted sites with TLS certificates issued by the GIAG2 CA 
as it was the only CA that met the required pre-conditions of the bug. 
- The bug that introduced this failure manifested itself as an empty container 
of OCSP responses. The root cause of the issue was an unexpected behavior of 
GNU tar relating to concatenating tar files.
- The outage was observed by revocation service monitoring as “unknown 
certificate” (HTTP 404) errors. HTTP 404 errors are expected in OCSP responder 
operations; they are typically the result of poorly configured clients. These 
events are monitored, and a threshold exists for an on-call escalation.
- Due to a configuration error the designated Google team did 

Re: Google OCSP service down

2018-01-22 Thread Ryan Hurst via dev-security-policy
On Monday, January 22, 2018 at 1:26:01 AM UTC-8, ihave...@gmail.com wrote:
> Hi,
> 
> Just as an FYI, I am still getting 404. My geographic location is UAE if that 
> helps at all.
> 
> My openssl command:
> openssl ocsp -issuer gtsx1.pem -cert goodr1demopkigoog.crt -url 
> http://ocsp.pki.goog/GTSGIAG3  -CAfile gtsrootr1.pem 
> Error querying OCSP responder
> 77317:error:27075072:OCSP routines:PARSE_HTTP_LINE1:server response 
> error:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59.60.1/src/crypto/ocsp/ocsp_ht.c:224:Code=404,Reason=Not
>  Found

Tham,

It seems you are not specifying the Host header, which is required by HTTP/1.1, 
which in turn is required by RFC 2560.

Here is what a command for that root would look like:
openssl ocsp -issuer r1goodissuer.cer -cert r1good.cer -no_nonce -text -url 
"http://ocsp.pki.goog/GTSGIAG3" -header host ocsp.pki.goog

Ryan
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
On Sunday, January 21, 2018 at 1:42:59 PM UTC-8, Ryan Hurst wrote:
> On Sunday, January 21, 2018 at 1:29:58 PM UTC-8, s...@gmx.ch wrote:
> > Hi
> > 
> > Thanks for investigating.
> > 
> > I can confirm that the service is now working again for me most of the
> > time, but some queries still fail (may be due load balancing in the
> > backend?).
> > 
> 
> Thank you for your report and confirming you are seeing things starting to 
> work.
> 
> Google operates a global network utilizing many redundant servers and the 
> nature of the way that works is one connection to the next you may be hitting 
> a different cluster of servers. 
> 
> It can take a while for all of these different clusters to receive the 
> associated updates.
> 
> This would explain your inconsistent results.
> 
> I am actively watching this deployment to ensure it completes successfully 
> but at this point, it seems all will continue to roll out as expected.
> 
> As an aside, We are still continuing our post-mortem.

The issue should be 100% resolved now.

As per earlier posts, we will complete the post-mortem and report to the 
community with our findings.


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
On Sunday, January 21, 2018 at 1:29:58 PM UTC-8, s...@gmx.ch wrote:
> Hi
> 
> Thanks for investigating.
> 
> I can confirm that the service is now working again for me most of the
> time, but some queries still fail (may be due load balancing in the
> backend?).
> 

Thank you for your report and confirming you are seeing things starting to work.

Google operates a global network with many redundant servers; by the nature of that design, from one connection to the next you may hit a different cluster of servers.

It can take a while for all of these different clusters to receive the associated updates, which would explain your inconsistent results.

I am actively watching this deployment to ensure it completes successfully, but at this point it seems all will continue to roll out as expected.

As an aside, we are still continuing our post-mortem.


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
> > Is there a known contact to report it (or is someone with a Google hat
> > reading this anyway)?
> 

David,

I am sorry you experienced difficulty in contacting us about this issue. 

We maintain contact details both within our CPS (like other CAs) and at 
https://pki.goog so that people can reach us expeditiously. In the future, if 
anyone needs to reach us, please use those details.

Google is a large organization, and when other teams (such as DNS) are 
contacted, we do not have control over whether and when those issues will reach us.

We are actively working on a post mortem on this issue and when it is complete 
we will share it in this thread.

Thanks for your help in this matter,

Ryan Hurst
Product Manager
Google


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy

> 
> We are investigating the issue and will provide a update when that 
> investigation is complete.
> 
> Thank you for letting us know.
> 
> Ryan Hurst
> Product Manager
> Google

I wanted to provide an update to the group. The issue has been identified and a 
roll out of the fix is in progress across all geographies.

I have personally verified the fix in several geographies.

A post mortem will be created and shared with the group as soon as it is ready.

Ryan Hurst
Product Manager
Google


Re: Google OCSP service down

2018-01-21 Thread Ryan Hurst via dev-security-policy
On Sunday, January 21, 2018 at 8:13:30 AM UTC-8, David E. Ross wrote:
> On 1/21/2018 7:47 AM, Paul Kehrer wrote:
> > Is there a known contact to report it (or is someone with a Google hat
> > reading this anyway)?
> 
> On Friday (two days ago), I reported this to dns-ad...@google.com, the
> only E-mail address in the WhoIs record for google.com.
> 
> I received an automated reply indicating that security issues should
> instead be reported to secur...@google.com. I immediately resent
> (Thunderbird's Edit As New Message) to secur...@google.com.
> 
> I then received an automated reply from secur...@google.com that listed
> a variety of Web addresses for reporting various problems.  I replied
> via E-mail to secur...@google.com:
> > Because of the OCSP failure, I am unable to reach any of the google.com
> > Web site cited in your reply.
> 
> Yes, I could disable OCSP checking.  But I my need for Google is
> insufficient for me to browse insecurely.
> 
> By the way, in SeaMonkey 2.49.1 (the latest version) the Google Internet
> Authority G2 certificate appears to be an intermediate, signed by the
> GeoTrust Global CA root.
> 
> There is a pending request (bug #1325532) from Google to add a Google
> root certificate to NSS.  Given the inadequacy of Google's current
> information on reporting security problems, I have doubts whether this
> request should be approved.
> 
> See <https://bugzilla.mozilla.org/show_bug.cgi?id=1325532>.
> 
> -- 
> David E. Ross
> <http://www.rossde.com/>
> 
> President Trump:  Please stop using Twitter.  We need
> to hear your voice and see you talking.  We need to know
> when your message is really your own and not your attorney's.


We are investigating the issue and will provide an update when that 
investigation is complete.

Thank you for letting us know.

Ryan Hurst
Product Manager
Google


Re: TLS-SNI-01 and compliance with BRs

2018-01-18 Thread Ryan Hurst via dev-security-policy

> I would presume that the CABforum would be the place to explore further
> details, but it seems that the specifications for the #10 method should be
> reexamined as to what assurances they actually provide with a view to
> revising those specifications.  At least 1 CA so far has found that the
> real world experience of a (presumably) compliant application of method #10
> as it exists today was deficient in mitigating the provision of
> certificates to incorrect/unauthorized parties.

I agree CABFORUM seems to be the right place to get this text clarified.

More concretely, I have recently re-reviewed the validation methods and, in 
general, think most of them need fairly significant clarification.


Re: Updating Root Inclusion Criteria

2018-01-17 Thread Ryan Hurst via dev-security-policy
On Tuesday, January 16, 2018 at 3:46:03 PM UTC-8, Wayne Thayer wrote:
> I would like to open a discussion about the criteria by which Mozilla
> decides which CAs we should allow to apply for inclusion in our root store.
> 
> Section 2.1 of Mozilla’s current Root Store Policy states:
> 
> CAs whose certificates are included in Mozilla's root program MUST:
> > 1.provide some service relevant to typical users of our software
> > products;
> >
> 
> Further non-normative guidance for which organizations may apply to the CA
> program is documented in the ‘Who May Apply’ section of the application
> process at https://wiki.mozilla.org/CA/Application_Process . The original
> intent of this provision in the policy and the guidance was to discourage a
> large number of organizations from applying to the program solely for the
> purpose of avoiding the difficulties of distributing private roots for
> their own internal use.
> 
> Recently, we’ve encountered a number of examples that cause us to question
> the usefulness of the currently-vague statement(s) we have that define
> which CAs to accept, along a number of different axes:
> 
> * Visa is a current program member that has an open request to add another
> root. They only issue a relatively small number of certificates per year to
> partners and for internal use. They do not offer certificates to the
> general public or to anyone with whom they do not have an existing business
> relationship.
> 
> * Google is also a current program member, admitted via the acquisition of
> an existing root, but does not currently, to the best of our knowledge,
> meet the existing inclusion criteria, even though it is conceivable that
> they would issue certificates to the public in the future.
> 
> * There are potential applicants for CA status who deploy a large number of
> certificates, but only on their own infrastructure and for their own
> domains, albeit that this infrastructure is public-facing rather than
> company-internal.
> 
> * We have numerous government CAs in the program or in the inclusion
> process that only intend to issue certificates to their own institutions.
> 
> * We have at least one CA applying for the program that (at least, it has
> been reported in the press) is controlled by an entity which may wish to
> use it for MITM.
> 
> There are many potential options for resolving this issue. Ideally, we
> would like to establish some objective criteria that can be measured and
> applied fairly. It’s possible that this could require us to define
> different categories of CAs, each with different inclusion criteria. Or it
> could be that we should remove the existing ‘relevance’ requirement and
> inclusion guidelines and accept any applicant who can meet all of our other
> requirements.
> 
> With this background, I would like to encourage everyone to provide
> constructive input on this topic.
> 
> Thanks,
> 
> Wayne

Wayne,

I recall facing this topic at Microsoft when I was defining the root policy for 
them. At the time, I failed to come up with language that effectively captured 
all of the use cases we felt were important. This is why we ended up with what 
was, at the time, a vague statement about broad value to Microsoft consumers.

With that said, despite the challenges associated with the task, I agree this 
is an area where clarity is needed.

Since Google's PKI was mentioned as an example, I can publicly state that the 
plan is for Google to utilize the Google Trust Services infrastructure to 
satisfy its SSL certificate needs. While I cannot announce specific product 
roadmaps, I can say that this includes the issuance of certificates for Google 
offerings involving hosting of products and services for customers.

Ryan Hurst
Product Manager 
Google


Re: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Ryan Hurst via dev-security-policy
Sleevi,

Valid point, no intention to confuse, I have no current affiliation with
GlobalSign, though I once did.

The documentation that described the protocol seems to no longer be online, but
the behavior is observable and has been discussed in the validation working
group within the CABFORUM, so it is not a secret.

Ryan

On Sun, Jan 14, 2018 at 7:10 AM, Ryan Sleevi <r...@sleevi.com> wrote:

>
>
> On Sat, Jan 13, 2018 at 8:46 PM, Ryan Hurst via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Friday, January 12, 2018 at 6:10:00 PM UTC-8, Matt Palmer wrote:
>> > On Fri, Jan 12, 2018 at 02:52:54PM +, Doug Beattie via
>> dev-security-policy wrote:
>> > > I’d like to follow up on our investigation and provide the community
>> with some more information about how we use Method 9.
>> > >
>> > > 1)  Client requests a test certificate for a domain (only one
>> FQDN)
>> >
>> > Does this test certificate chain to a publicly-trusted root?  If so, on
>> what
>> > basis are you issuing a publicly-trusted certificate for a name which
>> > doesn't appear to have been domain-control validated?  If not, doesn't
>> this
>> > test certificate break the customer's SSL validation for the period the
>> > certificate is installed, while you do the validation?
>> >
>> > - Matt
>>
>> The certificate comes from a private PKI, not public one.
>
>
> Matt: The Baseline Requirements provide a definition of Test Certificate
> that applies to 3.2.2.4.9 that already addresses your concerns:
>
> Test Certificate: A Certificate with a maximum validity period of 30 days
> and which: (i) includes a critical
> extension with the specified Test Certificate CABF OID (2.23.140.2.1), or
> (ii) is issued under a CA where there
> are no certificate paths/chains to a root certificate subject to these
> Requirements.
>
> Ryan: I think it'd be good to let GlobalSign answer, or, if the answer is
> available publicly, to point them out. This hopefully helps avoid confusion
> :)
>


Re: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-13 Thread Ryan Hurst via dev-security-policy
On Friday, January 12, 2018 at 6:10:00 PM UTC-8, Matt Palmer wrote:
> On Fri, Jan 12, 2018 at 02:52:54PM +, Doug Beattie via 
> dev-security-policy wrote:
> > I’d like to follow up on our investigation and provide the community with 
> > some more information about how we use Method 9.
> > 
> > 1)  Client requests a test certificate for a domain (only one FQDN)
> 
> Does this test certificate chain to a publicly-trusted root?  If so, on what
> basis are you issuing a publicly-trusted certificate for a name which
> doesn't appear to have been domain-control validated?  If not, doesn't this
> test certificate break the customer's SSL validation for the period the
> certificate is installed, while you do the validation?
> 
> - Matt

The certificate comes from a private PKI, not public one.


Re: Dashboard and Study on CAA Adoption

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Friday, December 15, 2017 at 7:10:11 AM UTC-8, Quirin Scheitle wrote:
> Dear all,
> 
> some colleagues and I want to share an academic study on CAA we have been 
> working on in the past months. 
> We hope that our findings can provide quantitative data to assist further 
> discussion, such as the “CAA-simplification” draft at IETF and work at the 
> validation-wg at CABF.
> We also give specific recommendations how *we think* that CAA can be improved.
> 
> The results, paper, and a dashboard tracking CAA adoption are available under 
> 
> https://caastudy.github.io/
> 
> [Please note that the paper discusses facts as of Nov 30]
> We will be happy to elaborate some aspects further, the paper does not 
> discuss all the details. 
> We have discussed previous drafts with various individuals in this community 
> and thank them for their inputs.
> 
> Kind regards
> Quirin and team

This is great work. Thank you.


Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Friday, December 15, 2017 at 1:34:30 PM UTC-8, Matthew Hardeman wrote:
> On Friday, December 15, 2017 at 3:21:54 PM UTC-6, Ryan Hurst wrote:
>  
> > Unfortunately, the PKCS#12 format, as supported by UAs and Operating 
> > Systems is not a great candidate for the role of carrying keys anymore. You 
> > can see my blog post on this topic here: http://unmitigatedrisk.com/?p=543
> > 
> > The core issue is the use of old cryptographic primitives that barely live 
> > up to the equivalent cryptographic strengths of keys in use today. The 
> > offline nature of the protection involved also enables an attacker to grind 
> > any value used as the password as well.
> > 
> > Any plan to allow a CA to generate keys on behalf of users, which I am not 
> > against as long as there are strict and auditable practices associated with 
> > it, needs to take into consideration the protection of those keys in 
> > transit and storage.
> > 
> > I also believe any language that would be adopted here would clearly 
> > addresses cases where a organization that happens to operate a CA but is 
> > also a relying party. For example Amazon, Google and Apple both operate 
> > WebTrust audited CAs but they also operate cloud services where they are 
> > the subscriber of that CA. Any language used would need to make it clear 
> > the relative scopes and responsibilities in such a case.
> 
> I had long wondered about the PKCS#12 issue.  To the extent that any file 
> format in use today is convenient for delivering a package of certificates 
> including a formal validation chain and associated private key(s), PKCS#12 is 
> so convenient and fairly ubiquitous.
> 
> It is a pain that the cryptographic and integrity portions of the format are 
> showing their age -- at least, as you point out, in the manner in which 
> they're actually implemented in major software today.

So I have read this thread in its entirety now, and I think it makes sense to 
reset to first principles, specifically:

- What are the technological and business goals trying to be achieved?
- What are the requirements derived from those goals?
- What are the negative consequences of those goals?

My feeling is there is simply an abstract desire to allow the CA, on behalf of 
the subject, to generate the keys, but we have not sufficiently articulated a 
business case for this.

In my experience building and working with embedded systems I, like Peter, have 
found it is possible to build a sufficient pseudo-random number generator on 
these devices. In practice, however, deployed devices commonly either do not do 
so or seed them poorly.

This use case is one where transport would likely not need to be PKCS#12 given 
the custom nature of these solutions.

At the same time, these devices are often provisioned on a production line, and 
the key generation could just as easily (and probably more appropriately) 
happen there.

In my experience as a CA, the desire to do server-side key generation almost 
always stems from a desire to reduce the friction for customers acquiring 
certificates for use in regular old web servers. Seldom does this case come up 
with network appliances, as they do not normally support the PKCS#12 format. 
While the reduction of friction is a laudable goal, it seems the better way to 
achieve it would be to adopt a protocol like ACME for certificate lifecycle 
management.

As I said in an earlier response, I am not against the idea of server-side key 
generation as long as:

- There is a legitimate business need,
- This can be done in a way that the CA does not have access to the key,
- The process by which this is done is fully transparent and auditable,
- The transfer of the key is done in a way that is sufficiently secure,
- The storage of the key is done in a way that is sufficiently secure,
- We are extremely clear in how this can be done securely.

Basically, I believe that, given the varying degrees of technical background 
and skill in the CA operator ecosystem, allowing this without being extremely 
clear is probably a case where the cure is worse than the ailment.

With that background I wonder, is this even worth exploring?


Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 12, 2017 at 1:08:24 PM UTC-8, Jakob Bohm wrote:
> On 12/12/2017 21:39, Wayne Thayer wrote:
> > On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> > 
> >> On 12/12/2017 19:39, Wayne Thayer wrote:
> >>
> >>> The outcome to be avoided is a CA that holds in escrow thousands of
> >>> private keys used for TLS. I don’t think that a policy permitting a CA to
> >>> generate the key pair is bad as long as the CA doesn’t hold on to the key
> >>> (unless  the certificate was issued to the CA or the CA is hosting the
> >>> site).
> >>>
> >>> What if the policy were to allow CA key generation but require the CA to
> >>> deliver the private key to the Subscriber and destroy the CA’s copy prior
> >>> to issuing a certificate? Would that make key generation easier? Tim, some
> >>> examples describing how this might be used would be helpful here.
> >>>
> >>>
> >> That would conflict with delivery in PKCS#12 format or any other format
> >> that delivers the key and certificate together, as users of such
> >> services commonly expect.
> >>
> >> Yes, it would. But it's a clear policy. If the requirement is to deliver
> > the key at the same time as the certificate, then how long can the CA hold
> > the private key?
> > 
> > 
> 
> Point is that many end systems (including Windows IIS) are designed to
> either import certificates from PKCS#12 or use a specific CSR generation
> procedure.  If the CA delivered the key and cert separately, then the
> user (who is apparently not sophisticated enough to generate their own
> CSR) will have a hard time importing the key+cert into their system.
> 
> > 
> >> It would also conflict with keeping the issuing CA key far removed from
> >> public web interfaces, such as the interface used by users to pick up
> >> their key and certificate, even if separate, as it would not be fun to
> >> have to log in twice with 1 hour in between (once to pick up key, then
> >> once again to pick up certificate).
> >>
> >> I don't think I understand this use case, or how the proposed policy
> > relates to the issuing CA.
> > 
> 
> If the issuing CA HSM is kept away from online systems and processes
> vetted issuance requests only in a batched offline manner, then a user
> responding to a message saying "your application has been accepted,
> please log in with your temporary password to retrieve your key and
> certificate" would have to download the key, after which the CA can
> delete key and queue the actual issuance to the offline CA system, and
> only after that can the user actually download their certificate.
> 
> Another thing with similar effect is the BR requirement that all the
> OCSP responders must know about issued certificates, which means that
> both the serial number and a hash of the signed certificate must be
> replicated to all the OCSP machines before the certificate is delivered.
> (One of the good OCSP extensions is to include a hash of the valid
> certificate in the OCSP response, thus allowing the relying party
> software to check that a "valid" response is actually for the
> certificate at hand).
> 
> 
> 
> 
> > 
> >> It would only really work with a CSR+key generation service where the
> >> user receives the key at application time, then the cert after vetting.
> >> And many end systems cannot easily import that.
> >>
> >> Many commercial CAs could accommodate a workflow where they deliver the
> > private key at application time. Maybe you are thinking of IOT scenarios?
> > Again, some use cases describing the problem would be helpful.
> > 
> 
> One major such use case is IIS or Exchange at the subscriber end.
> Importing the key and cert at different times is just not a feature of
> Windows server.
> 
> > 
> >> A policy allowing CAs to generate key pairs should also include provisions
> >>> for:
> >>> - The CA must generate the key in accordance with technical best practices
> >>> - While in possession of the private key, the CA must store it securely
> >>>
> >>> Wayne
> >>>
> >>>
> >>
> 
> 
> 
> Enjoy
> 
> Jakob
> -- 
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded

I agree that the "right way(tm)" is to have the keys generated in an HSM, the 
keys exported in ciphertext, and for this to be done in a way that the CA 
cannot decrypt the keys.

Technically the PKCS#12 format would allow for such a model, as you can encrypt 
the keybag to a public key (in a certificate). You could, for example, generate 
a key in an HSM, export it encrypted to a public key, and the CA would never 
see the key.

This has several issues; the first is, of course, that you must trust the CA 
not to use a different key. This could be addressed by requiring the code 
performing this logic to be made public, 

Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 12, 2017 at 11:31:18 AM UTC-8, Tim Hollebeek wrote:
> > A policy allowing CAs to generate key pairs should also include provisions
> > for:
> > - The CA must generate the key in accordance with technical best practices
> > - While in possession of the private key, the CA must store it securely
> 
> Don't forget appropriate protection for the key while it is in transit.  I'll 
> look a bit closer at the use cases and see if I can come up with some 
> reasonable suggestions.
> 
> -Tim

Unfortunately, the PKCS#12 format, as supported by UAs and operating systems, is 
not a great candidate for the role of carrying keys anymore. You can see my 
blog post on this topic here: http://unmitigatedrisk.com/?p=543

The core issue is the use of old cryptographic primitives that barely live up 
to the equivalent cryptographic strengths of keys in use today. The offline 
nature of the protection involved also enables an attacker to grind any value 
used as the password.
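The offline-grinding concern above can be illustrated with a toy: once an attacker holds the protected file, derivation cost is the only brake on guessing, and legacy PKCS#12-era parameters (weak primitives, low iteration counts) make each guess cheap. This is a hedged sketch, not the actual PKCS#12 KDF; the salt, password, and iteration count are made up for illustration:

```python
import hashlib
import itertools
import os
import string

salt = os.urandom(8)  # hypothetical salt stored alongside the keybag

def derive(password, iterations=1):  # a low count stands in for legacy defaults
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations)

# Pretend this value protects the exported keybag (weak 3-letter password).
target = derive("key")

# Offline brute force: nothing rate-limits the attacker.
recovered = None
for candidate in itertools.product(string.ascii_lowercase, repeat=3):
    guess = "".join(candidate)
    if derive(guess) == target:
        recovered = guess
        break

print(recovered)  # -> key
```

With a modern, slow KDF and a high iteration count the same search would be many orders of magnitude more expensive, which is exactly the gap the message describes.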

Any plan to allow a CA to generate keys on behalf of users, which I am not 
against as long as there are strict and auditable practices associated with it, 
needs to take into consideration the protection of those keys in transit and 
storage.

I also believe any language that would be adopted here would need to clearly 
address cases where an organization that happens to operate a CA is also a 
relying party. For example, Amazon, Google, and Apple all operate 
WebTrust-audited CAs, but they also operate cloud services where they are the 
subscriber of that CA. Any language used would need to make clear the relative 
scopes and responsibilities in such a case.


Re: On the value of EV

2017-12-11 Thread Ryan Hurst via dev-security-policy
On Monday, December 11, 2017 at 12:41:02 PM UTC-8, Paul Wouters wrote:
> On Mon, 11 Dec 2017, James Burton via dev-security-policy wrote:
> 
> > EV is on borrowed time
> 
> You don't explain why?
> 
> I mean domain names can be confusing or malicious too. Are domain names
> on borrowed time?
> 
> If you remove EV, how will the users react when paypal or their bank is
> suddenly no longer "green" ? Are we going to teach them again that
> padlocks and green security come and go and to ignore it?
> 
> Why is your cure (remove EV) better than fixing the UI parts of EV?
> 
> Paul

The issues with EV are much larger than the UI. It needs to be revisited, an 
honest and achievable set of goals needs to be established, and the processes 
and procedures used pre-issuance and post-issuance need to be defined in 
support of those goals. Until that's been done, I cannot imagine any browser 
would invest in new UI and education of users for this capability.


Re: On the value of EV

2017-12-11 Thread Ryan Hurst via dev-security-policy
Stripe, Inc. could very well be a road-striping company.

This may have situationally been the equivalent of a misleading certificate, 
but the scenario of name collisions is real.

Ryan Hurst
On Monday, December 11, 2017 at 11:39:57 AM UTC-8, Tim Hollebeek wrote:
> Nobody is disputing the fact that these certificates were legitimate given 
> the rules that exist today.
> 
> However, I don't believe "technically correct, but intentionally misleading" 
> information should be included in certificates.  The question is how best to 
> accomplish that.
> 
> -Tim
> 
> -Original Message-
> From: Jonathan Rudenberg [mailto:jonat...@titanous.com] 
> Sent: Monday, December 11, 2017 12:34 PM
> To: Tim Hollebeek <tim.holleb...@digicert.com>
> Cc: Ryan Sleevi <r...@sleevi.com>; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: On the value of EV
> 
> 
> > On Dec 11, 2017, at 14:14, Tim Hollebeek via dev-security-policy 
> > <dev-security-policy@lists.mozilla.org> wrote:
> > 
> > 
> > It turns out that the CA/Browser Validation working group is currently 
> > looking into how to address these issues, in order to tighten up 
> > validation in these cases.
> 
> This isn’t a validation issue. Both certificates were properly validated and 
> have correct (but very misleading information) in them. Business entity names 
> are not unique, so it’s not clear how validation changes could address this.
> 
> I think it makes a lot of sense to get rid of the EV UI, as it can be 
> trivially used to present misleading information to users in the most 
> security-critical browser UI area. My understanding is that the research done 
> to date shows that EV does not help users defend against phishing attacks, it 
> does not influence decision making, and users don’t understand or are 
> confused by EV.
> 
> Jonathan



Re: Welcome Wayne Thayer to Mozilla!

2017-11-27 Thread Ryan Hurst via dev-security-policy
That is great!

On Monday, November 27, 2017 at 4:04:09 PM UTC-8, Kathleen Wilson wrote:
> All,
> 
> I am pleased to announce that Wayne Thayer is now a Mozilla employee, 
> and will be working with me on our CA Program!
> 
> Many of you know Wayne from his involvement in this discussion forum and 
> in the CA/Browser Forum, as a representative for the Go Daddy CA. Wayne 
> was involved in Go Daddy's CA program from the beginning, so he has a 
> deep understanding of CA policies, audits, and standards.
> 
> Some of the things Wayne will be working on in his new role include:
> + Review of root inclusion/update requests in discussion.
> + Investigate more complex root inclusion/update requests.
> + Help with CA mis-issuance investigations, bugs, and discussions.
> + Lead prioritization, effort, and discussions to update Mozilla Root 
> Store Policy and CCADB Policy. (transition from Gerv over time)
> + Represent Mozilla in the CA/Browser Forum, along with Gerv.
> 
> I have added Wayne to the Policy_Participants wiki page:
> https://wiki.mozilla.org/CA/Policy_Participants
> 
> Welcome, Wayne!
> 
> Thanks,
> Kathleen



RE: CAs not compliant with CAA CP/CPS requirement

2017-09-08 Thread Ryan Hurst via dev-security-policy
Responding from my personal account, but I can confirm that Google Trust 
Services does check CAA, and our policy was updated earlier today to reflect 
that.
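For context on what "checking CAA" entails, RFC 6844 defines a "relevant record set" search: starting at the FQDN, climb toward the DNS root and honor the first CAA record set found. A minimal sketch of that tree-climb (the zone contents here are hypothetical, and real implementations must also handle CNAMEs and lookup failures):

```python
def relevant_caa(domain, caa_records):
    """Return (owner name, CAA RRset) per the RFC 6844 climbing search."""
    labels = domain.rstrip(".").lower().split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        rrset = caa_records.get(name)
        if rrset:  # first non-empty RRset on the path wins
            return name, rrset
    return None, []

# Hypothetical zone data: CAA published at the registered domain only.
zone = {"example.com": ['0 issue "pki.goog"']}

hit = relevant_caa("www.api.example.com", zone)
print(hit)  # -> ('example.com', ['0 issue "pki.goog"'])
```

A CA then compares the `issue`/`issuewild` property values in the returned RRset against its own recognized issuer domain names before issuing.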


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-08-29 Thread Ryan Hurst via dev-security-policy
On Monday, August 28, 2017 at 1:15:55 AM UTC-7, Nick Lamb wrote:
> I think that instead Ryan H is suggesting that (some) CAs are taking 
> advantage of multiple geographically distinct nodes to run the tests from one 
> of the Blessed Methods against an applicant's systems from several places on 
> the Internet at once. This mitigates against attacks that are able to disturb 
> routing only for the CA or some small corner of the Internet containing the 
> CA. For example my hypothetical 17 year-old at the ISP earlier in the thread 
> can't plausibly also be working at four other ISPs around the globe.
> 
> This is a mitigation not a fix because a truly sophisticated attacker can 
> obtain other certificates legitimately to build up intelligence about the 
> CA's other perspective points on the Internet and then attack all of them 
> simultaneously. It doesn't involve knowing much about Internet routing, 
> beyond the highest level knowledge that connections from very distant 
> locations will travel by different routes to reach the "same" destination.

Thanks, Nick, that is exactly what I was saying.
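The multi-perspective validation Nick describes can be reduced to a quorum rule: the CA fetches the validation token from several network vantage points and accepts only if enough of them agree. A hedged sketch (vantage-point names and quorum policy are hypothetical; the network fetch itself is omitted):

```python
# Sketch of quorum-based domain validation: observations maps each vantage
# point to the token it retrieved (or None on failure). Validation passes
# only if at least `quorum` vantage points saw the expected token.

def consensus(observations, expected_token, quorum):
    agreeing = [vp for vp, token in observations.items()
                if token == expected_token]
    return len(agreeing) >= quorum

# A routing attack that fools only one vantage point fails a 3-of-3 policy.
obs = {"us-east": "abc123", "eu-west": "abc123", "ap-south": "EVIL"}
consensus(obs, "abc123", quorum=3)   # validation refused
consensus(obs, "abc123", quorum=2)   # passes under a weaker 2-of-3 policy
```

This is why the mitigation degrades gracefully: the attacker must disturb routing at every vantage point simultaneously, not just near the CA.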


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-08-25 Thread Ryan Hurst via dev-security-policy
Dimitris,

I think it is not accurate to characterize this as being outside of the CAs'
control. Several CAs utilize multiple network perspectives and consensus to
mitigate these risks. While this is not a total solution, it is fairly
effective if the consensus pool is well thought out.

Ryan
On Thursday, August 24, 2017 at 5:45:11 AM UTC-7, Dimitris Zacharopoulos wrote:
> On 26/7/2017 3:38 πμ, Matthew Hardeman via dev-security-policy wrote:
> > On Tuesday, July 25, 2017 at 1:00:39 PM UTC-5, birg...@princeton.edu wrote:
> >> We have been considering research in this direction. PEERING controls 
> >> several ASNs and may let us use them more liberally with some convincing. 
> >> We also have the ASN from Princeton that could be used with cooperation 
> >> from Princeton OIT (the Office of Information Technology) where we have 
> >> several contracts. The problem is not the source of the ASNs but the 
> >> network anomaly the announcement would cause. If we were to hijack the 
> >> prefix of a cooperating organization, the PEERING ASes might have their 
> >> announcements filtered because they are seemingly launching BGP attacks. 
> >> This could be fixed with some communication with ISPs, but regardless 
> >> there is a cost to launching such realistic attacks. Matthew Hardeman 
> >> would probably know more detail about how this would be received by the 
> >> community, but this is the general impression I have got from engaging 
> >> with the people who run the PEERING framework.
> > I have some thoughts on how to perform such experiments while mitigating 
> > the likelihood of significant lasting consequence to the party helping 
> > ingress the hijack to the routing table, but you correctly point out that 
> > the attack surface is large, and the one consistent feature of all
> > discussion up to this point on the topic of BGP hijacks for the purpose of
> > countering CA domain validation is that none of those discussions have, up
> > to this point, expressed doubt as to the risks or the feasibility of
> > carrying out these attacks.  To that end, I think the first case that would
> > need to be made to further that research is whether anything of
> > significance is gained in making the attack more tangible.
> >
> >> So far we have not been working on such an attack very much because we are 
> >> focusing our research more on countermeasures. We believe that the attack 
> >> surface is large and there are countless BGP tricks an adversary could use 
> >> to get the desired properties in an attack. We are focusing our research 
> >> on simple countermeasures CAs can implement to reduce this attack
> >> space. We also aim to use industry contacts to accurately assess the false
> >> positive rates of our countermeasures and develop example implementations.
> >>
> >> If it appears that actually launching such a realistic attack would be 
> >> If it appears that actually launching such a realistic attack would be 
> >> valuable to the community, we certainly could look into it further.
> > This is the question to answer before performing such an attack.  In 
> > effect, who is the audience that needs to be impressed?  What criteria must 
> > be met to impress that audience?  What benefits in furtherance of the work 
> > arise from impressing that audience?
> >
> > Thanks,
> >
> > Matt Hardeman
> 
> That was a very interesting topic to read. Unfortunately, CAs can't do 
> much to protect against network hijacking because most of the 
> counter-measures lie in the ISPs' side. However, the CAs could request 
> some counter-measures from their ISPs.
> 
> Best practices for ISPs state that for each connected peer, the ISP needs
> to apply a prefix filter that will allow announcements for only
> legitimate prefixes that the peer controls/owns. We can easily imagine 
> that this is not performed by all ISPs. Another solution that has been 
> around for some time, is RPKI along with BGP Origin Validation.
> Of course, we can't expect all ISPs to check for Route Origin 
> Authorizations (ROAs) but if the major ISPs checked for ROAs, it would 
> improve things a lot in terms of securing the Internet.
> 
> So, in order to minimize the risk for a CA or a site owner network from 
> being hijacked, if a CA/site owner has an address space that is Provider 
> Aggregatable (PA) (this means the ISP "owns" the IP space), they should 
> check that their upstream network provider has properly created the ROAs 
> for the CA/site operator's network prefix(es) in the RIR authorized 
> list, and that they have configured their routers to validate ROAs for 
> each prefix. If the CA/site operator has a Provider Independent (PI) 
> 
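The Route Origin Validation check Dimitris describes can be sketched in a few lines: given the published ROAs, an announcement is classified as valid, invalid, or not-found per RFC 6811. Real validators (e.g. Routinator) handle the full RPKI data; this toy IPv4-only version, with illustrative prefixes and AS numbers, only shows the classification logic:

```python
# Toy RFC 6811 origin-validation classifier. A ROA is (prefix, maxLength,
# authorized origin ASN). An announcement covered by a ROA is "valid" only if
# its prefix length and origin AS match; covered but mismatched is "invalid"
# (a likely hijack); uncovered is "not-found".

import ipaddress

def rov_state(announced_prefix, origin_as, roas):
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):          # this ROA covers the announcement
            covered = True
            if net.prefixlen <= max_len and origin_as == asn:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]
rov_state("192.0.2.0/24", 64500, roas)      # the legitimate origin: "valid"
rov_state("192.0.2.0/24", 64666, roas)      # wrong origin AS: "invalid"
rov_state("198.51.100.0/24", 64500, roas)   # no covering ROA: "not-found"
```

As the message notes, this only helps if the ISPs on the path actually drop "invalid" announcements.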

Re: Criticism of Mozilla Re: Google Trust Services roots

2017-03-10 Thread Ryan Hurst via dev-security-policy

Most are not directed at me so I won’t respond to each item, but for several
I think I can provide some additional context, see below:

> * Manner of transfer:  As we learned from Ryan H., a second HSM was 
> introduced for the transfer of the private key meaning that for a period of 
> time 2 copies of the private key were in existence.  Presumably one copy 
> was destroyed at some point, but I'm not familiar with any relevant 
> standards or requirements to know when/how that takes place.  Whatever the 
> case may be, this situation seems to fall outside of the Root Transfer 
> Policy as I now read it.  Also, did GlobalSign ever confirm to Mozilla that 
> they are no longer in possession of or otherwise have access to the private 
> key for those 2 roots? 

A few things are relevant to this comment. First, when designing a key
management program for keys that may live ten or twenty years, it is extremely
important that one builds a disaster recovery plan. Such plans require that
duplicate copies of keys exist; no responsible CA operates without backups of
its keys.

Additionally, given the reliability and performance requirements, issuing CAs
are almost always deployed with a cluster of HSMs.

The point of mentioning the above is that having multiple copies of keys is a 
standard practice.

Regarding who has control over the associated keys, you are correct: as is
standard practice (this is the 8th transfer of my professional career), the
transfer involved reviewing the history and associated artifacts of the keys
and ensuring, in the presence of our auditors, that all copies not belonging
to Google were destroyed.

While I cannot speak for GlobalSign, I do know they notified all relevant root
programs that they no longer control the associated keys.


> * Conduct of the transfer:  I think an expectation should be set that the 
> "current holder of the trust" must be the one to drive the transfer.  Trust 
> can be handed to someone else; reaching in and taking trust doesn't sound 
> very...trustworthy?  To that end, I think the policy should state that the 
> current root holder must do more than merely notify Mozilla about the 
> change in ownership; the holder (and their auditor) must provide the 
> audits, attestations, and answers to questions that come up.  Only after 
> the transfer is complete would the "new holder" step in to perform those 
> duties. 

It is the expectation of the Mozilla Program, as well as the Microsoft Program
(and others), that the current holder of the trust drives the transfer. That
is what happened in this case as well.

As was noted in the original thread, Mozilla does not publicly require that
permission be secured but does so privately, and in this case that permission
was secured, at least implicitly, since we discussed our purchase with Mozilla
numerous times before terms were reached. Other programs, such as Microsoft's,
make this requirement public, so we explicitly secured their permission before
finalizing terms as well.

While securing such permission complicates the process, I think the value to
the ecosystem warrants the complication, and I think it makes sense for
Mozilla to formalize their requirement to secure permission before a transfer.


> * Public notification:  I appreciate that confidentiality is required when 
> business transactions are being discussed but at some point, in the 
> interest of transparency, the public must be notified of the transfer.  I 
> think this is implied (or assumed) in the current policy, but it might be 
> good to state explicitly that a public announcement must be made.  I would 
> add that making an announcement at a CABF meeting is all well and good, but 
> considering that most people on the Internet are not able to attend those 
> meetings it would be good if an announcement could be made in other forums 
> as well. 

This misrepresents what notification has taken place; for others, I suggest
reading the other thread for a more accurate picture.

To the specific policy suggestion: the fact that changes in the Mozilla
program are all tracked via public channels like the bug database and this
forum means that public notice is already mandated today.

There may be value in requiring something “larger” than that, but defining
that in a concrete way is hard. In our case, when we published our blog post
it was picked up by many technical publications, but that is because we are
Google. In historic transfers of keys, the actors in the transfer were not as
visible as Google, and as such their public notices were, well... not noticed.

One thing that could be a reasonable step is to require that, for some period
of time after a transfer, the new owner maintain a notice in their document
repository. I am not sure this materially moves the bar forward; I can say I
have seen the web traffic for many repository pages for some of the larger

Re: Google Trust Services roots

2017-03-09 Thread Ryan Hurst via dev-security-policy

> Of all these, Starfield seems to be the only case where a single CA
> name now refers to two different current CA operators (GoDaddy and
> Amazon).  All the others are cases of complete takeover.  None are
> cases where the name in the certificate is a still operating CA
> operator, but the root is actually operated by a different entity
> entirely.

That is true, but my point is that one cannot rely on the name in root
certificates; when certs are made to be good for well over a decade, the
concept of name continuity just doesn't hold.

> Also, I don't see Google on that list.

I noticed that too; I'll be reaching out to Microsoft to make sure it's updated.


Re: Google Trust Services roots

2017-03-09 Thread Ryan Hurst via dev-security-policy
On Thursday, March 9, 2017 at 9:00:21 PM UTC-8, Peter Kurrasch wrote:
> By definition, a CPS is the authoritative document on what root
> certificates a CA operates and how they go about that operation.  If the
> GlobalSign CPS has been updated to reflect the loss of their 2 roots,
> that's fine.  Nobody is questioning that.
> 
> What is being questioned is whether updating the GlobalSign CPS is
> sufficient to address the needs, concerns, questions, or myriad other
> issues that are likely to come up in the minds of GlobalSign subscribers
> and relying parties--and, for that matter, Google's own subscribers and
> relying parties.  To that, I think the answer must be: "no, it's not
> enough".  Most people on the internet have never heard of a CPS and of
> those who have, few will have ever read one and fewer still will have read
> the GlobalSign CPS.

Again, while I cannot speak for GlobalSign, I can say that there has been far
more public notice than a simple CP/CPS update.

In addition to the Google Blog post about the acquisition 
(https://security.googleblog.com/2017/01/the-foundation-of-more-secure-web.html),
 the purchase was picked up by many high profile technology news sources, some 
of which included:
-  https://www.theregister.co.uk/2017/01/27/google_root_ca/
-  
http://www.infoworld.com/article/3162102/security/google-moves-into-root-certificate-authority-business.html
- http://www.securityweek.com/google-launches-its-own-root-certificate-authority

Also this topic has been discussed at great length in numerous forums around 
the web. 

This is above and beyond the public notification that is built into the
various root programs, such as:
- The Google Trust Services CP/CPS lists GlobalSign as subordinates.
- The Google Trust Services website links to the GlobalSign CP/CPS as well as
  their audit reports.
- The Mozilla bug on this topic discusses the change in ownership.
- The Mozilla CA registry will also reference the change in ownership.
- The Microsoft CA registry will also reference the change in ownership.
- The Mozilla Salesforce instance will reference the change in ownership.
- This public thread discusses the change in ownership.

I am not sure there are many meaningful notification options left.

Additionally as stated, EV badges will still correctly reflect that it is 
GlobalSign who issues the associated certificates, and not Google.

The only opportunity for confusion comes from those who look at the 
certificates themselves and missed all of the above notifications.

It is also important to note that this is a very common situation, to see how 
common it is visit the page Microsoft maintains for Root Program members - 
https://social.technet.microsoft.com/wiki/contents/articles/37425.microsoft-trusted-root-certificate-program-participants-as-of-march-9-2017.aspx

You will notice the first column is the name of the current owner and the 
second column is the name in the certificate.

A few you will notice are:

Amazon,   Starfield Services Root Certificate Authority - G2
Asseco Data Systems S.A. (previously Unizeto Certum), Certum CA
Entrust, Trend Micro 1
Entrust, Trend Micro 2
Entrust, Trend Micro 3
Entrust, Trend Micro 4  
Comodo, The USERTrust Network™
Comodo, USERTrust (Client Authentication / Secure Email)
Comodo, USERTrust (Code Signing)
Comodo, USERTrust RSA Certification Authority
Comodo, UTN-USERFirst-Hardware
Symantec, GeoTrust
Symantec, Thawte
Symantec, VeriSign
Trustwave, XRamp Global Certification Authority

And more...

While I sincerely want to make sure there are no surprises, given how common
it is for names in root certificates not to match the current owner, those who
are looking at certificate chains should not be relying on the name in the
root certificate in the first place; doing so would be wrong in very
significant situations.

Ryan


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> Jakob: An open question is how revocation and OCSP status for the 
> existing intermediaries issued by the acquired roots is handled. 

Google is responsible for producing CRLs for these roots. We are also
currently relying on the OCSP responder infrastructure of GlobalSign for this
root, but are in the process of migrating that in-house.

> Jakob: Does GTS sign regularly updated CRLs published at the (GlobalSign) 
> URLs 
> listed in the CRL URL extensions in the GlobalSign operated non-expired 
> intermediaries? 

At this time Google produces CRLs and works with GlobalSign to publish those 
CRLs.

> Jakob: Hopefully these things are answered somewhere in the GTS CP/CPS for 
> the 
> acquired roots. 

This level of detail is not typically included in a CPS; for example, a
service may change which internet service provider or CDN service it uses
without needing to update its CP/CPS.


> Jakob: Any relying party seeing the existing root in a chain would see the 
> name GlobalSign in the Issuer DN and naturally look to GlobalSign's 
> website and CP/CPS for additional information in trying to decide if 
> the chain should be trusted. 

The GlobalSign CPS indicates that the R2 and R4 are no longer under their 
control.

Additionally, given the long-term nature of CA keys, it is common for the DN
not to accurately represent the organization that controls the key. As I
mentioned in an earlier response, in the 90s I created roots for a company
called Valicert that has changed hands several times; additionally, Verisign
(now Symantec in this context) has a long history of acquiring CAs, and as
such they have CA certificates with many different names within them.

> Jakob: A relying party might assume, without detailed checks, that these 
> roots 
> are operated exclusively by GlobalSign in accordance with GlobalSign's 
> good reputation. 

As the former CTO of GlobalSign I love hearing about their good reputation ;)

However, I would say the CP/CPS is the authoritative document here, and since
the GMO GlobalSign CP/CPS clearly states the keys are no longer in their
control, I believe this should not be an issue.

> Jakob: Thus a clear notice that these "GlobalSign roots" are no longer 
> operated by GlobalSign at any entrypoint where a casual relying party 
> might go to check who "GlobalSign R?" is would be appropriate. 

I would argue the CAs' CP/CPSs are the authoritative documents here and would
satisfy this requirement.

> Jakob: If possible, making Mozilla products present these as "Google", not 
> "GlobalSign" in short-form UIs (such as the certificate chain tree-like 
> display element).  Similarly for other root programs (for example, the 
> Microsoft root program could change the "friendly name" of these). 

I agree with Jakob here; given the frequency with which roots change hands, it
would make sense to have the ability to do this. Microsoft maintains this
capability, which is made available to the owner.

There are some limitations relative to where this name information is used.
For example, in the case of an EV certificate, if Google were to request that
Microsoft use this capability, the EV badge would say verified by Google,
because the root name is displayed for the EV badge. However, it is the
subordinate CA, in accordance with its CP/CPS, that is responsible for
vetting; as such, the name displayed in this case should be GlobalSign.

Despite these limitations, it may make sense in the case of Firefox to maintain 
a similar capability.


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> pzb: Policy Suggestion A) When transferring a root that is EV enabled, it 
> should be clearly stated whether the recipient of the root is also 
> receiving the EV policy OID(s). 

> Gerv: I agree with this suggestion; we should update 
> https://wiki.mozilla.org/CA:RootTransferPolicy , and eventually 
> incorporate it into the main policy when we fix 
> https://github.com/mozilla/pkipolicy/issues/57 . 

I think this is good.


> Gerv: https://wiki.mozilla.org/CA:RootTransferPolicy says that "The 
> organization who is transferring ownership of the root certificate’s 
> private key must ensure that the transfer recipient is able to fully 
> comply with Mozilla’s CA Certificate Policy. The original organization 
> will continue to be responsible for the root certificate's private key 
> until the transfer recipient has provided Mozilla with their Primary 
> Point of Contact, CP/CPS documentation, and audit statement (or opinion 
> letter) confirming successful transfer of the root certificate and key." 

> Gerv: I would say that an organization which has acquired a root certificate 
> in the program and which has provided Mozilla with the above-mentioned 
> information is thereby a member of the program. As the policy says that 
> the transferring entity continues to be responsible until the 
> information is provided, that seems OK to me. 

This seems reasonable to me also.

> Gerv: This position would logically lead to the position that a root 
> inclusion 
> request from an organization which does not have any roots is also, 
> implicitly, an application to become a member of the program but the two 
> things are distinct. One can become a member of the program in other 
> ways. Membership is sort of something that happens to one automatically 
> when one successfully achieves ownership of an included root. 

This seems reasonable to me also.


> pzb: Policy Suggestion B) Require that any organization wishing to become 
> a member of the program submit a bug with links to content 
> demonstrating compliance with the Mozilla policy.  Require that this 
> be public prior to taking control of any root in the program. 

> Gerv: We do require this, but not publicly. I note and recognise Ryan's 
> concern about requiring advance disclosure of private deals. I could see 
> a requirement that a transferred root was not allowed to issue anything 
> until the appropriate paperwork was publicly in place. Would that be 
> suitable? 

Could you clarify what you mean by appropriate paperwork?


> pzb: Policy Suggestion C) Recognize that root transfers are distinct from 
> the acquisition of a program member.  Acquisition of a program 
> member (meaning purchase of the company) is a distinctly different 
> activity from moving only a private key, as the prior business 
> controls no longer apply in the latter case. 

> Gerv: https://wiki.mozilla.org/CA:RootTransferPolicy does make this
> distinction, I feel - how could it be better made?

After re-reading this text I personally think this is clear.


> pzb: Policy Suggestion D) Moving from being a RA to a CA or moving from 
> being a single tier/online (i.e. Subordinate-only) CA to being a 
> multi-tier/root CA requires a PITRA 

> Gerv: Again, would this be covered by a requirement that no issuance was 
> permitted from a transferred root until all the paperwork was in place, 
> including appropriately-scoped audits? This might lead to a PITRA, but 
> would not have to. 

This seems reasonable to me also.


Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> jacob: Could a reasonably condition be that decision authority, actual and 
> physical control for a root are not moved until proper root program 
> coordination has been done (an action which may occur after/before the 
> commercial conclusion of a transaction).  From a business perspective 
> this could be comparable to similar requirements imposed on some 
> physical objects that can have public interest implications. 

Microsoft has a similar requirement in their program; we had to get permission
from them before we could finalize commercial terms for this acquisition. I
personally think this is a good policy and one Mozilla should adopt as well.

It adds more complexity to these acquisitions, in that one needs to get
approvals from multiple parties, but I think the value to the ecosystem
warrants this complexity.


> Jacob: For clarity could Google and/or GTS issue a dedicated CP/CPS pair for 
> the brief period where Google (not GTS) had control of the former 
> GlobalSign root (such a CP/CPS would be particularly simple given that 
> no certificates were issued).  Such as CP/CPS should also clarify any 
> practices and procedures for signing revocation related data (CRLs, 
> OCSP responses, OCSP responder certificates) from that root during the 
> transition.  The CP/CPS would also need to somehow state that the 
> former GlobalSign issued certificates remain valid, though no further 
> such certificates were issued in this interim period. 

> Similarly could Google and/or GTS issue a dedicated CP/CPS pair for the 
> new roots during the brief period where Google (not GTS) had control of 
> those new roots. 

While we want to work with the community to provide assurances that we
followed best practices and the required policies in this transfer, I do not
think this would provide any further insights.

Before the transfer, we and our auditors reviewed the CP/CPS, as well as the
policies and procedures associated with the management of these keys, and
found them to be compliant with both the requirements and best practices. In
other words, both we and our auditors are stating, as supported by the opinion
letter, that we believe the Google CP/CPS covered these keys during this
period.

If we created a new CP/CPS for that period it would, at best, be a subset of 
the 
Google CP/CPS and offer no new information other than the omission of a few 
details.

Could you maybe clarify what your goals are with this request? With that, we
can potentially propose an alternate approach to address those concerns.



Re: Google Trust Services roots

2017-03-08 Thread Ryan Hurst via dev-security-policy
> pzb: According to the opinion letter:
> "followed the CA key generation and security requirements in its:
> Google Internet Authority G2 CPS v1.4" (hyperlink omitted)

> According to that CPS, "Key Pairs for the Google Internet Authority
> are generated and installed in accordance with the contract between
> Google and GeoTrust, Inc., the Root CA."

> Are you asserting that the authority for the key generation process
> the new Google roots is "the contract between Google and GeoTrust,
> Inc."?

No, that is not the intent of that statement; good catch. It is simply a
poorly worded statement.

To clarify, our acquisition of these keys and certificates is independent of
our agreement with GeoTrust, Inc.

The intent of that statement is to say that the technical requirements of that
contract, which in essence refer to meeting the WebTrust requirements, were
followed.


Re: Google Trust Services roots

2017-03-07 Thread Ryan Hurst via dev-security-policy
> pzb: I appreciate you finally sending responses.  I hope you appreciate
> that they are clearly not adequate, in my opinion.  Please see the
> comments inline.

Again, sorry for the delay in responding, I will be more prompt moving
forward.

> pzb: This does not resolve the concern.  The BRs require "an unbroken
> sequence of audit periods".  Given that GlobalSign clearly cannot make
> any assertion about the roots after 11 August 2016, you would have a
> gap from 11 August 2016 to 30 September 2016 in your sequence of audit
> periods if your next report runs 1 October 2016 to 30 September 2017.


I understand your point but this is not entirely accurate. Our strategy, to
ensure a smooth transition, which was reviewed with the auditors and root
program administrators was that we take possession of the root key material
and manage it offline, in accordance with our existing WebTrust audit and
the “Key Storage, Backup and Recovery Criterion”.  It was our, and EY's
opinion that the existing controls and ongoing WebTrust audits were
sufficient given this plan and scope.

As such, during the period in question, the existing audits provide an
un-broken sequence of audit periods.

That said, we will follow-up with our auditors to see if it is possible to
extend the scope of our 2017 audit to also cover this interval to ensure
the community has further assurances of continuity.
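The "unbroken sequence of audit periods" requirement at issue here can be stated mechanically: each audit period must begin no later than the day after the previous one ends. A small sketch using the dates discussed in this exchange (the exact period boundaries are illustrative, not taken from any audit report):

```python
# Check that a sorted list of (start, end) audit periods has no gaps:
# each period must start no later than the day after the previous one ends.

from datetime import date, timedelta

def unbroken(periods):
    return all(nxt_start <= prev_end + timedelta(days=1)
               for (_, prev_end), (nxt_start, _) in zip(periods, periods[1:]))

# GlobalSign's assertion ends 11 Aug 2016; a following period starting
# 1 Oct 2016 leaves a gap, which is pzb's point.
gap = [(date(2015, 10, 1), date(2016, 8, 11)),
       (date(2016, 10, 1), date(2017, 9, 30))]
ok = [(date(2015, 10, 1), date(2016, 8, 11)),
      (date(2016, 8, 12), date(2017, 9, 30))]
unbroken(gap)  # False: 12 Aug - 30 Sep 2016 is uncovered
unbroken(ok)   # True: the second period picks up the next day
```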

> pzb: Based on my personal experience, it is possible to negotiate a deal
> and set a closing date in the future.  This is standard for many
> acquisitions; you frequently see purchases announced with a closing
> date in the future for all kinds of deals.  The gap between signing
> the deal and closing gives acquirers the opportunity to complete the
> steps in B.

As I stated, I think that moving forward this could be a good policy
change, I am hesitant to see any user agent adopt policies that are overly
prescriptive of commercial terms between two independent parties.


> pzb: You appear to be confusing things here.  "Subordinate CA Certificate
> Life Cycle Management" is the portion of the WebTrust criteria that
> covers the controls around issuing certificates with the cA component
> of the basicConstraints extension set to true.  It has nothing to do
> with operating a subordinate CA.

I am familiar with the "Subordinate CA Certificate Life Cycle Management"
controls; I just should have been more explicit in my earlier response.

These keys were generated and stored in accordance with the Asset
Classification and Management Criterion and the Key Storage, Backup and
Recovery Criterion.

Before utilizing the associated keys in any activity covered by the
"Subordinate CA Certificate Life Cycle Management" criterion, all associated
policies and procedures were created, tested, and then reviewed by our
auditors. Additionally, those auditors were present during the associated
ceremony. All such activities will be covered under our 2017 audit.

This is similar to how a CA can, and does, revise and extend their policies
between audits to cover new products and services.

This is consistent with the approach we discussed, and had approved, with the
various root program administrators.


> pzb: You have stated that the Google CPS (not the GTS CP/CPS) was the
> applicable CPS for your _root CAs_ between 11 August 2016 and 8
> December 2016.  The Google CPS makes these statements.  Therefore, you
> are stating that the roots (not just GIA G2) were only permitted to
> issue Certificates to Google and Google Affiliates.

Correct, these roots were not used to issue certificates at all until last
week and when one was used, it was used to issue a subordinate CA
certificate to Google.

Though we do not have a product or service to announce currently, we can say
we will expand the use of GTS beyond GIAG2, at which time policies,
procedures, CP and CPS will be updated accordingly. This progression makes
sense as we're moving from a constrained intermediate to a root.

> Mozilla has consistently taken the position that roots that exclusively
> issue to a single company are not acceptable in the root program.

Google and its affiliate companies are more than a single company.

Additionally, the intent of this rule is clearly to prevent the root store
from being polluted by thousands of organizations each issuing a handful of
certificates.

In the case of Google and its Affiliate companies, we operate products and
services for our customers. This is similar to how Amazon and a number of
other root operators operate products and services for their customers, the
core difference being the breadth of user-facing products we have.

> This does not address the question.  The Google CPS clearly states
> that it only covers the GIA G2 CA.  You have stated that the Google
> CPS (not the GTS CP/CPS) was the applicable CPS for your _root CAs_
> between 11 August 2016 and 8 December 2016.  This puts your statement
> at odds with what is written in 

Re: Google Trust Services roots

2017-03-06 Thread Ryan Hurst via dev-security-policy
> Gerv: Which EV OID are you referring to, precisely? 

I was referring to the GlobalSign EV Certificate Policy OID 
(1.3.6.1.4.1.4146.1.1) but more concretely I meant any and all EV related OIDs, 
including the CAB Forum OID of 2.23.140.1.1.
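For readers who want to check which policy OIDs a certificate asserts, here is a minimal sketch. The two EV OIDs are the ones named above; the function name and calling convention are my own, and the policy OIDs are assumed to have already been extracted from the certificate's certificatePolicies extension (e.g. via `openssl x509 -text`):

```python
# The GlobalSign EV policy OID and the CA/Browser Forum EV OID,
# as referenced in the message above.
EV_POLICY_OIDS = {
    "1.3.6.1.4.1.4146.1.1",  # GlobalSign EV Certificate Policy
    "2.23.140.1.1",          # CA/Browser Forum EV
}

def is_ev_by_policy(cert_policy_oids):
    """Return True if any of the certificate's policy OIDs is a known EV OID."""
    return any(oid in EV_POLICY_OIDS for oid in cert_policy_oids)

# Example:
# is_ev_by_policy(["2.23.140.1.2.2"])        -> False (an OV policy OID)
# is_ev_by_policy(["1.3.6.1.4.1.4146.1.1"])  -> True
```

A real implementation would of course need the full, current list of root-program-recognized EV OIDs rather than just these two.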

> Gerv: Just to be clear: GlobalSign continues to operate at least one subCA 
> under a root which Google has purchased, and that root is EV-enabled, 
> and the sub-CA continues to do EV issuance (and is audited as such) but 
> the root is no longer EV audited, and nor is the rest of the hierarchy? 

Yes, that is correct.

> Gerv: Can you tell us what the planned start/end dates for the audit period of 
> that annual audit are/will be? 

Our audit period is October 1st to the end of September. The associated report 
will be released between October and November, depending on our auditors' 
schedules. 

> Gerv: Are the Google roots and/or the GlobalSign-acquired roots currently 
> issuing EE certificates? Were they issuing certificates between 11th 
> August 2016 and 8th December 2016? 

No, they were not issuing certificates between 11th August 2016 and 8th
December 2016.

We generated our first certificate, a subordinate CA, last week; that CA is
not yet in use.

Ryan Hurst
Product Manager 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-06 Thread Ryan Hurst via dev-security-policy
appropriate in this
case. Google and Google Affiliates operate some of the most popular and
frequented sites on the web; as part of that, Google often hosts customer
applications and content.

As I understand it, the goal of the Mozilla root program is to enable sites
just like these to offer their services over SSL. Enabling Google to do so
for its own properties and its customers seems well within the intent of
the program.


> pzb: The Google CPS says it only covers Google Internet Authority G2.
> 5) Is there a version of the CPS that covers the GS roots?

After a review of the GTS CPS, it became apparent that we were not
sufficiently clear about when the transition from the GIAG2 CPS to the GTS
CPS happened; as per above, we have since made a text clarification we hope
addresses this question:

“Prior to 11 August 2016, the Roots R2, R4, GTS Root R1, GTS Root R2, GTS
Root R3 and GTS Root R4 were operated by GMO GlobalSign, Inc. according to
GMO GlobalSign, Inc.’s Certificate Policy and Certification Practice
Statement. Between 11 August 2016 and 8 December 2016, Google Inc. operated
these Roots according to Google Inc.’s Certification Practice Statement. As
of 9 December 2016, Google Trust Services LLC operates these Roots under
Google Trust Services LLC’s Certificate Policy and Certification Practice
Statement.”


Ryan Hurst
Product Manager



Re: Google Trust Services roots

2017-02-09 Thread Ryan Hurst via dev-security-policy
Google has
operated a WebTrust audited subordinate CA under Symantec for quite a long
time. As part of this they have maintained audited facilities, and
procedures appropriate for offline key management, CRL/OCSP generation, and
other related activities. Based on this, and the timing of both our audit
and key transfer, all parties concluded it would be sufficient to have the
auditors provide an opinion letter about the transfer of the keys and have
those keys covered by the subsequent annual audit.

We have provided these letters directly to the root programs and have
recently secured permission from the auditors to release them publicly (I
will add them to the bug).

For those not familiar with the process: if Google had never been a WebPKI
CA, this situation would have been addressed with a pre-issuance audit and
a subsequent full audit 90 days later.

Since Google is a long-time WebTrust audited CA and our audits and
acquisition were going to happen approximately at the same time, this would
have provided no new evidence to the root programs or the community.

The purpose of the audit is to provide assurances to the root program and
community that best practices are being followed and the relying parties'
best interests are being met. In this case, following the procedure defined
for a new CA would not have aided that goal.

As an example, consider the case of a WebPKI trusted root with one key
already trusted: such a CA can generate a new key and include it in its
next audit without the need to do the pre-issuance audit.

I think this is an appropriate position to take and an opportunity to
clarify the Mozilla root program to better inform similar cases in the
future.


It's my hope these answers sufficiently address your concerns; if not, let
me know what clarifications I can make.


Thanks again,

Ryan Hurst

Google, Inc.


On Thu, Feb 9, 2017 at 2:06 AM, Gervase Markham <g...@mozilla.org> wrote:

> On 09/02/17 05:31, Peter Bowen wrote:
> > Third, the Google CPS says Google took control of these roots on
> > August 11, 2016.  The Mozilla CA policy explicitly says that a bug
> > report must be filed to request to be included in the Mozilla CA
> > program.
>
> But the Mozilla CA policy does not require that the organization on the
> receiving end of a root transfer must re-apply for inclusion for
> already-included certificates.
>
> > It was not until December 22, 2016 that Google requested
> > inclusion as a CA in Mozilla's CA program
> > (https://bugzilla.mozilla.org/show_bug.cgi?id=1325532).  This does not
> > appear to align with Mozilla requirements for public disclosure.
>
> We require disclosure of root ownership transfer, but not _public_
> disclosure. Kathleen would need to speak regarding dates, but I know
> Mozilla was made aware of these transfers significantly before the
> inclusion request was filed.
>
> Apart from this, however, it seems at first glance that the other
> assertions made in Peter's post here in mozilla.dev.security.policy are
> correct. So CCing Ryan Hurst of GTS for a response.
>
> Gerv
>


Re: Remediation Plan for WoSign and StartCom

2016-10-19 Thread Ryan Hurst
On Wednesday, October 19, 2016 at 12:58:49 AM UTC-7, Kurt Roeckx wrote:
> I at least have some concerns about the current gossip draft and talked 
> a little to dkg about this. I should probably bring this up on the trans 
> list.
> 

Please do; we would like to see this brought to closure soon, and we want to 
make sure all feedback is considered.


Re: Remediation Plan for WoSign and StartCom

2016-10-19 Thread Ryan Hurst
It is true that, without gossip, CT is dependent on browsers monitoring the log 
ecosystem; this is one reason the Chrome policy requires at least one Google log.

I would argue that, with the monitoring Google does and the one-Google-log 
policy, this risk is sufficiently mitigated, even without gossip.
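As a toy illustration of the quorum idea being discussed here (at least one SCT from a Google-operated log plus at least one from a different operator), a sketch follows. The log IDs and the operator map are made-up placeholders, not the real log list, and the actual Chrome policy has additional requirements (e.g. SCT counts depending on certificate lifetime):

```python
def satisfies_ct_quorum(sct_log_ids, log_operators):
    """Return True if the SCTs cover at least one Google-operated log
    and at least one log run by a different operator.

    log_operators maps a log ID to its operator's name."""
    operators = {log_operators.get(log_id) for log_id in sct_log_ids}
    operators.discard(None)  # ignore SCTs from logs we don't recognize
    return "Google" in operators and any(op != "Google" for op in operators)

# Example with made-up log IDs:
# logs = {"log-a": "Google", "log-b": "WoSign", "log-c": "Google"}
# satisfies_ct_quorum(["log-a", "log-b"], logs)  -> True
# satisfies_ct_quorum(["log-a", "log-c"], logs)  -> False (all Google-operated)
```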

Gossip is needed, as is Firefox's own implementation of CT verification, which 
is actively in the works, but given the above mitigations I still believe this 
extra requirement is not necessary.

Ryan


Re: Remediation Plan for WoSign and StartCom

2016-10-18 Thread Ryan Hurst
All,

I do not understand the desire to require StartCom / WoSign to not utilize 
their own logs as part of the associated quorum policy. 

Certificate Transparency's integrity is not dependent on the practices of 
the operator. By requiring the use of a third-party log (in this case Google's) 
and requiring that the logs are public, CT "works" as expected.

There appears to be an argument that this restriction follows from the fact 
that Firefox does not yet have CT support; I would argue this is not material. 
My justification is that today Firefox depends on SafeBrowsing, a 
Google-provided service, to protect users from malicious sites.

This is not significantly different from the way Chrome (and others) rely on 
the wonderful Mozilla Trusted Root Program.

Based on this it seems reasonable to allow them to use the same logs they use 
for EV.

Ryan


Re: Mozilla Root Store Elsewhere (Was Re: StartCom & Qihoo Incidents)

2016-10-18 Thread Ryan Hurst
Tom,

On the topic of tooling I have a console tool, and library, that can be used to 
parse and filter various certificate stores, you can find it here: 
https://github.com/PeculiarVentures/tl-create

Ryan


Re: WoSign: updated report and discussion

2016-10-11 Thread Ryan Hurst
On Tuesday, October 11, 2016 at 1:28:42 AM UTC-7, Gervase Markham wrote:

> I presume you mean "WoSign" here? I'm not aware of significant failures
> at StartCom prior to the acquisition. But then you go on to talk about
> due diligence in acquisition, so I'm confused. What failures at StartCom
> pre-acquisition are you thinking of?

No, I meant StartCom. I was not referring to a specific issue. I was simply 
stating that when you buy something, you get the good and the bad, and that 
includes tainting your purchase with your own issues. This is not unique to 
the WebPKI ecosystem; this is the way acquisitions work.
 
>> Or that they used a different codebase for the CMS. But saying "it's
>> just luck" is an un-refutable statement. StartCom was not involved in
>> most of the issues; many of the ones on the list happened even before
>> the acquisition. We can only work with the issues we have, not ones that
>> might have hypothetically happened if the "luck" had been different.

Given how manual the process was, and that the keys were under both the 
logical and physical control of WoSign, the different code base is somewhat 
immaterial. Control is control.

My statement about “luck” is an attempt to speak to that. 


>> I think this is a matter for the CAB Forum.

I think that is a bad position to take. Surely the trustworthiness of an 
operator is of paramount importance to Mozilla when considering whether to 
make an accommodation on behalf of the issuer?



Re: WoSign: updated report and discussion

2016-10-10 Thread Ryan Hurst
Gerv,

Again, this mail represents my own personal beliefs and does not necessarily 
represent the beliefs of my employer, Google, or Let’s Encrypt where I am an 
advisor.

I agree an appropriate response depends on the facts, so as you say, it depends.

I also believe there are a few core questions that are relevant to “what it 
depends on”, including:
- Is it reasonable for the operational and technical failures StartCom made 
prior to the acquisition to be handled as a separate incident?
- Did the operational changes that occurred after the acquisition impact the 
trustworthiness of StartCom as an independent entity?
- How severe is the failure of both WoSign and StartCom to notify the root 
programs of the change of ownership?
- Should the misrepresentation of facts regarding the acquisition and other 
issues, by both parties, have an impact on the faith in any claims made by the 
two organizations?

On the first question, I can see arguments in both directions. 

When a company is purchased, you inherit both the assets and liabilities of 
that organization. This is why due diligence is such an important part of 
acquisitions. In short, under this line of reasoning, if Qihoo/Wosign failed to 
do sufficient due diligence as part of the acquisition, this is their problem 
and not the problem of the WebPKI. In other words, with this line of thinking 
treating both sets of issues as one “incident” could be seen as reasonable and 
expected.

The alternative view would be to say that the most severe issues were a 
function of WoSign’s leadership and technical practices. This, combined with 
StartCom’s past good practices, might carry sufficient weight to justify 
special casing the StartCom issues.

I struggle with this second view. To understand why, let's look at DigiCert's 
acquisition of the Verizon PKI business. We all know how poorly Verizon managed 
that infrastructure; it was a liability to the WebPKI. 

I am confident that if DigiCert had not taken on the burden to repair their 
dysfunction Verizon would have been distrusted. In this respect my view is that 
DigiCert spent the trust and goodwill they had earned in the past for a grace 
period to clean up the Verizon mess.

In the case of Qihoo/WoSign/StartCom, the prior goodwill resides in what is, 
for all intents and purposes, a non-existent organization (StartCom). I say 
this because it is now under new ownership and new management. In other words, 
the new management has no equivalent goodwill to spend.


On the second question, based on Xiaosheng’s email, it seems the CA and OCSP 
services have been under the administrative and operational control of WoSign 
since December 2015. It also seems the RA (the CMS) system has been in a shared 
control situation for what we can only assume is the same period. 

These are the material systems covered by Webtrust audits, the others while 
potentially relevant are arguably not material to the issuance of SSL 
certificates. 

Since the most severe issues boil down to the operational and technical 
practices of WoSign, and the systems were under the control of WoSign since 
last year, it seems it was only luck of the draw that spared StartCom 
involvement in the other issues.


On the third question, I would argue that this is the smallest of the 
identified issues since both organizations were members of the root program, 
had active WebTrust audits, and contracts in place with various root stores. I 
say this because I believe that given these facts it is likely Mozilla and 
Microsoft would have raised no concerns and as a result this would have been a 
non-issue.

This is not to say their total failure to notify is acceptable, just that the 
larger issue in my mind is the repeated misrepresentation about this 
transaction.


On the fourth question, while this is not the technical or contractual 
requirement of the Mozilla root program, being truthful is the foundation of 
any good relationship. The “goodwill” one gets in a relationship is always a 
function of the quality of that relationship. 

It is also my understanding that Qihoo, WoSign, and StartCom were all voting 
in the CAB Forum during this period, in essence giving one organization three 
votes. This may have been an oversight, but it also puts into question the 
integrity of these organizations.

As such, it seems to me that offering goodwill to organizations that have a 
history of acting in bad faith (on purpose or otherwise), without other 
mitigating factors, sets a bad precedent.


In summary, I am still inclined to say the right response is to treat the two 
sets of incidents as one.  The gestures being made by Qihoo are the right ones 
to be made, but they do not nullify the past actions.

Instead, I believe, the good faith steps being made by Qihoo to address this 
situation should be given heavy weight in any resubmission process.

I would add that Mozilla should update its policies to make it clear how 
important the ownership notification 

Re: WoSign: updated report and discussion

2016-10-07 Thread Ryan Hurst
All,

What I am about to say represents my own personal beliefs and are not 
necessarily the same beliefs of my employer, Google, or Let’s Encrypt where I 
am an advisor.

I have been involved in the WebPKI since its inception. In the WebPKI, a 
Certificate Authority has a conflicted role: it is responsible for acting in 
the best interest of the Relying Party, but its existence is dependent on that 
of the Subscriber.

This is true of all CAs, even Let’s Encrypt, where its existence is dependent 
on donations from large organizations who, in many cases, utilize their service.

This model only works when there is a consequence for CAs that violate the 
interests of the Relying Party. 

That begs the question of what are the interests of a Relying Party in the 
context of the WebPKI? I would say the relying party expects CAs:
- To understand the deeply technical nature of X.509 and the web,
- To deploy products and services that support secure communications on the 
web,
- To operate their services in such a way that they are verifiable by 
third-parties,
- To act in a trustworthy and transparent way.

As I look at what has happened in this particular case, despite recent 
gestures, it is clear to me that WoSign has not lived up to these expectations.

While I have a ton of admiration for Eddy and the way that the independent 
StartCom operated, StartCom is a corporate entity and not an individual. 
Moreover, given that for the last year it has had numerous technical lapses, 
and its leadership misrepresented the material facts about its operation, it 
also has largely failed on these points.

This begs the question of what should be done in this case. I believe the 
answer there is buried in the role of the browsers in the WebPKI ecosystem 
where they represent the interests of relying parties.

To this end, when I managed the Microsoft Root Program I did my best to guide 
my decisions by the following tenets:
- I fight for the relying party,
- I fight for the WebPKI ecosystem,
- I must be predictable and fair, 
- I must encourage the ecosystem to evolve to meet changing needs,
- I must comply with all legal and regulatory obligations.

In this case, it seems to me that WoSign’s purchase of StartCom, short of the 
lies and subterfuge (which I do not mean to trivialize), is not materially 
different than Symantec’s ownership of Thawte, RapidSSL, or GeoTrust brands.

In past actions against Symantec, there were no carve-outs for the different 
brands. As such, it would seem that to do so for WoSign and StartCom would not 
be an action consistent with the principles I tried to live by as a manager of 
a root program.

That then takes us to the structural changes proposed by Qihoo. I should say 
that I personally have faith in Inigo as a leader who would do the right thing 
for the WebPKI and believe that overall these changes seem like the right 
gestures to be making. They do not, however, negate the facts in question.

It seems to me based on this thread that Mozilla, or more specifically Gerv, is 
inclined to treat StartCom differently, I can assume this is because:
- StartCom prior to its acquisition had a positive brand reputation,
- He agrees that the new leadership would likely act in the right interest of 
the WebPKI.

The problem is that this sets a dangerous precedent. Let's assume a similar 
situation happens in the future with another CA that owns multiple brands. 
Would you ignore the violations of the rules and allow them to carve off one 
brand because you liked who they would let manage it?

I would hope the answer is no.

I would say that holding them equally accountable is the right thing to do, 
since for the time in question, they were equivalently managed and operated. 

To offer much more than that would not be fair or in the best interest of the 
WebPKI ecosystem.

Ryan


Re: Sanctions short of distrust

2016-09-06 Thread Ryan Hurst
On Tuesday, September 6, 2016 at 7:54:14 AM UTC-7, Jakob Bohm wrote:
> On 06/09/2016 16:43, Martin Rublik wrote:
> > On Tue, Sep 6, 2016 at 2:16 PM, Jakob Bohm  wrote:
> >
> >> Here are a list of software where I have personally observed bad OCSP
> >> stapling support:
> >>
> >> IIS for Windows Server 2008 (latest IIS supporting pure 32 bit
> >> configurations): No obvious (if any) OCSP stapling support.
> >
> >
> > AFAIK IIS 7.0 supports OCSP stapling and it is enabled by default, for more
> > information see https://unmitigatedrisk.com/?p=95 or
> > https://www.digicert.com/ssl-support/windows-enable-ocsp-stapling-on-server.htm
> >
> 
> 
> Nice surprise (if true), this was unreasonably well hidden, for example
> there is no indication of this in any relevant parts of the
> administration user interface.  I'll have to devise a test to check if
> it actually does staple OCSP on our servers.
> 
> Enjoy
> 
> Jakob
> -- 
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded

It is true. Windows (and IIS as a result) was the first to support OCSP 
stapling and has the most robust support for it. Sleevi has a nice summary of 
OCSP stapling issues here: https://gist.github.com/sleevi/5efe9ef98961ecfb4da8
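For anyone wanting to check a server by hand (as Jakob mentions above), one way is `openssl s_client` with the `-status` flag; the hostname below is a placeholder:

```shell
# Ask the server to staple an OCSP response and print the relevant output.
# "OCSP Response Status: successful" => the server stapled a response;
# "OCSP response: no response sent"  => the server did not staple.
openssl s_client -connect example.com:443 -servername example.com \
    -status </dev/null 2>/dev/null | grep -i "OCSP"
```

Note that some servers fetch the OCSP response lazily, so the first connection after a restart may not include a stapled response; it is worth retrying once before concluding stapling is off.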

Let's start a new thread to discuss OCSP stapling vs. re-using this one.