Re: Terms and Conditions that use technical measures to make it difficult to change CAs

2020-03-16 Thread Burton via dev-security-policy
A customer should have the choice to change their CA provider without
threats of revocation by the CA. It’s definitely an abuse of the revocation
function.

I do understand that terms and conditions are, in normal circumstances,
legally binding once signed by a customer, but this practice is an abuse of
trust between the customer and the CA. The CA is acting in bad faith.

I suggest Mozilla send a strongly worded, signed letter to every CA
highlighting the abuse of the revocation function, stating that whoever is
doing this must stop immediately or face consequences.

On Mon, 16 Mar 2020 at 23:51, Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Mon, Mar 16, 2020 at 09:06:17PM +0000, Tim Hollebeek via
> dev-security-policy wrote:
> > I'd like to start a discussion about some practices among other
> > commercial CAs that have recently come to my attention, which I
> > personally find disturbing.  While it's perfectly appropriate to have
> > Terms and Conditions associated with digital certificates, in some
> > circumstances, those Terms and Conditions seem explicitly designed to
> > prevent or hinder customers who wish to switch to a different
> > certificate authority.  Some of the most disturbing practices include
> > the revocation of existing certificates if a customer does not renew an
> > agreement, which can really hinder a smooth transition to a new
> > provider of digital certificates, especially since the customer may not
> > have anticipated the potential impact of such a clause when they first
> > signed the agreement.  I'm particularly concerned about this behavior
> > because it seems to be an abuse of the revocation system, and imposes
> > costs on everyone who is trying to generate accurate and efficient
> > lists of revoked certificates (e.g. Firefox).
> >
> > I'm wondering what the Mozilla community thinks about such practices.
>
> Utterly reprehensible, and should be called out loudly whenever it's found.
>
> However, it might be tricky for Mozilla itself to create and enforce such a
> prohibition, since it gets deep into the relationship between a CA and its
> customer.  I know there are already several requirements around what must
> go into a Subscriber Agreement in the BRs, etc, but they're a lot narrower
> than a blanket "thou shalt not put anything in there that restricts a
> customer's ability to move to a competitor", and a narrow ban on individual
> practices would be easily gotten around by a CA that was out to lock in
> their customers.
>
> I recognise that it can be tricky for a CA to (be seen to) criticise their
> competitors' business practices, but this really is a case where public
> awareness of these kinds of shady practices are probably the best defence
> against them.  Get enough people up in arms, hopefully hit the shonkster in
> the hip pocket, and it'll encourage them to rethink the wisdom of this kind
> of thing.
>
> - Matt
>
> --
> A polar bear is a rectangular bear after a coordinate transform.
>


Re: ssl.com: Certificate with Debian weak key

2020-03-16 Thread Matt Palmer via dev-security-policy
On Mon, Mar 16, 2020 at 12:11:57PM -0700, Chris Kemmerer via 
dev-security-policy wrote:
> On Wednesday, March 11, 2020 at 5:41:00 PM UTC-5, Matt Palmer wrote:
> > On Wed, Mar 11, 2020 at 10:46:05AM -0700, Chris Kemmerer via 
> > dev-security-policy wrote:
> > > On Tuesday, March 10, 2020 at 8:44:49 PM UTC-5, Matt Palmer wrote:
> > > > On Tue, Mar 10, 2020 at 01:48:49PM -0700, Chris Kemmerer via 
> > > > dev-security-policy wrote:
> > > For what it's worth, we believe that the current language in the BRs could
> > > be less ambiguous as far as the Debian weak keys are concerned.  For
> > > example, it seems that the community's expectations are for CAs to detect
> > > and block weak Debian keys generated by vulnerable RNG using OpenSSL in
> > > popular architectures.
> > 
> > The problem with using the argument that "the BRs are ambiguous" to try and
> > defend a breach of them is that there are always potential ambiguities in
> > all language -- in many ways, "ambiguity is in the eye of the beholder"
> > ("ambiguer"?).  My understanding of the consensus from past discussions on
> > this list is that if a CA believes there is an ambiguity in the BRs, the
> > correct action is to raise that in the CA/B Forum *before* they fall foul of
> > it.
> > 
> > CAs should be reading the BRs, as I understand it, in a "defensive" mode,
> > looking for requirements that could be read multiple ways, and when they are
> > found, the CA needs to ensure either that they are complying with the
> > strictest possible reading, or else bringing the ambiguity to the attention
> > of the CA/B Forum and suggesting less ambiguous wording.
> > 
> > At any rate, it would be helpful to know what, precisely, SSL.com's
> > understanding of this requirement of the BRs was prior to the
> > commencement of this incident.  Can you share this with us?  Presumably
> > SSL.com did a
> > careful analysis of all aspects of the BRs, and came to a conclusion as to
> > precisely what was considered "compliant".  With regards to this
> > requirement, what was SSL.com's position as to what was necessary to be
> > compliant with this aspect of the BRs?
> 
> We have already described our understanding of the expectations expressed
> in BR 6.1.1.3 and the steps we took to comply with it.

Sorry, I must have missed the description of SSL.com's understanding.  Could
you quote or reference it here, for clarity?

> Our implementation did not meet these expectations, as it was missing
> direct checks of keys matching the "openssl-blacklist" package. 
> Immediately upon our coming to this understanding,

This is SSL.com's post-incident understanding; what I believe is important
to also know is SSL.com's *pre*-incident understanding of the BR
requirements.
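
As an aside, for anyone following along: the check at issue here is
mechanically tiny.  Here is a minimal sketch in Python, assuming the
fingerprint format that, as I understand it, the openssl-blacklist package
uses (each blacklist line is the last 20 hex characters of the SHA-1 of
OpenSSL's "Modulus=<HEX>" output line); the function names are
illustrative, not any CA's actual pipeline:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    def debian_weak_fingerprint(modulus: int) -> str:
        # Hash the same bytes "openssl rsa -noout -modulus" prints, then
        # keep the last 20 hex chars, per the openssl-blacklist format.
        line = ("Modulus=%X\n" % modulus).encode("ascii")
        return hashlib.sha1(line).hexdigest()[20:]

    def is_debian_weak(cert_pem: bytes, blacklist_path: str) -> bool:
        cert = x509.load_pem_x509_certificate(cert_pem)
        key = cert.public_key()
        if not isinstance(key, rsa.RSAPublicKey):
            return False  # the RSA lists; DSA/SSH keys have separate lists
        with open(blacklist_path) as fh:
            bad = {ln.strip() for ln in fh if not ln.startswith("#")}
        return debian_weak_fingerprint(key.public_numbers().n) in bad

The point being: once the right lists are loaded, rejection is a set
lookup.  The hard part of this incident is sourcing and maintaining the
lists, not the code.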

> > > That could be added in the BRs:
> > > 
> > > Change:
> > > "The CA SHALL reject a certificate request if the requested Public Key
> > > does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if
> > > it has a known weak Private Key (such as a Debian weak key, see
> > > http://wiki.debian.org/SSLkeys)"
> > > 
> > > to something like:
> > > "The CA SHALL reject a certificate request if the requested Public Key
> > > does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if
> > > it has a known weak Private Key using, as a minimum set, the Debian weak
> > > keys produced by OpenSSL in i386 and x64 architectures (see
> > > http://wiki.debian.org/SSLkeys)"
> > 
> > It would appear that SSL.com is a member in good standing of the CA/B 
> > Forum. 
> > Is there any intention on the part of SSL.com to propose this change as a
> > ballot?  While you're at it, if you could include a fix for the issue
> > described in https://github.com/cabforum/documents/issues/164, that would be
> > appreciated, since it is the same sentences that need modification, and for
> > much the same reasons.
> 
> Yes, this is reasonable, and we treated such a key as compromised, revoking
> it within 24 hours.
> 
> We would support a ballot that makes this clear.  We also monitor the
> discussion in https://github.com/cabforum/documents/issues/164.

As Ryan mentioned, "we would support a ballot" is not the same as "we intend
to propose a ballot".  Thank you for clarifying, in your reply to Ryan, that
SSL.com intends to propose one.

> > > Then, it would be clear that all CAs would need to block "at least" the
> > > vulnerable keys produced by OpenSSL and could add other keys produced by
> > > OpenSSH or other applications if they wanted a "more complete" list.
> > 
> > Well, you're still missing the rnd/nornd/noreadrnd variations, and there's
> > no specification as to what key sizes are considered the bare minimum.
> 
> We mention these variations in bug 1620772, but thank you for repeating them
> here for completeness.
>
> For the record, this fact (“there's no specification as to what key sizes
> are considered the bare minimum”) is exactly our point too.

I think you misunderstood my point here.  I was 

Re: Terms and Conditions that use technical measures to make it difficult to change CAs

2020-03-16 Thread Matt Palmer via dev-security-policy
On Mon, Mar 16, 2020 at 09:06:17PM +0000, Tim Hollebeek via dev-security-policy 
wrote:
> I'd like to start a discussion about some practices among other commercial
> CAs that have recently come to my attention, which I personally find
> disturbing.  While it's perfectly appropriate to have Terms and Conditions
> associated with digital certificates, in some circumstances, those Terms and
> Conditions seem explicitly designed to prevent or hinder customers who wish
> to switch to a different certificate authority.  Some of the most disturbing
> practices include the revocation of existing certificates if a customer does
> not renew an agreement, which can really hinder a smooth transition to a new
> provider of digital certificates, especially since the customer may not have
> anticipated the potential impact of such a clause when they first signed the
> agreement.  I'm particularly concerned about this behavior because it seems
> to be an abuse of the revocation system, and imposes costs on everyone who
> is trying to generate accurate and efficient lists of revoked certificates
> (e.g. Firefox).
> 
> I'm wondering what the Mozilla community thinks about such practices.

Utterly reprehensible, and should be called out loudly whenever it's found.

However, it might be tricky for Mozilla itself to create and enforce such a
prohibition, since it gets deep into the relationship between a CA and its
customer.  I know there are already several requirements around what must go
into a Subscriber Agreement in the BRs, etc, but they're a lot narrower than
a blanket "thou shalt not put anything in there that restricts a customer's
ability to move to a competitor", and a narrow ban on individual practices
would be easily gotten around by a CA that was out to lock in their
customers.

I recognise that it can be tricky for a CA to (be seen to) criticise their
competitors' business practices, but this really is a case where public
awareness of these kinds of shady practices are probably the best defence
against them.  Get enough people up in arms, hopefully hit the shonkster in
the hip pocket, and it'll encourage them to rethink the wisdom of this kind
of thing.

- Matt

-- 
A polar bear is a rectangular bear after a coordinate transform.



Terms and Conditions that use technical measures to make it difficult to change CAs

2020-03-16 Thread Tim Hollebeek via dev-security-policy
 

Hello,

 

I'd like to start a discussion about some practices among other commercial
CAs that have recently come to my attention, which I personally find
disturbing.  While it's perfectly appropriate to have Terms and Conditions
associated with digital certificates, in some circumstances, those Terms and
Conditions seem explicitly designed to prevent or hinder customers who wish
to switch to a different certificate authority.  Some of the most disturbing
practices include the revocation of existing certificates if a customer does
not renew an agreement, which can really hinder a smooth transition to a new
provider of digital certificates, especially since the customer may not have
anticipated the potential impact of such a clause when they first signed the
agreement.  I'm particularly concerned about this behavior because it seems
to be an abuse of the revocation system, and imposes costs on everyone who
is trying to generate accurate and efficient lists of revoked certificates
(e.g. Firefox).

 

I'm wondering what the Mozilla community thinks about such practices.

 

-Tim

 





Re: ssl.com: Certificate with Debian weak key

2020-03-16 Thread Chris Kemmerer via dev-security-policy
On Monday, March 16, 2020 at 2:46:46 PM UTC-5, Ryan Sleevi wrote:
> On Mon, Mar 16, 2020 at 3:12 PM Chris Kemmerer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > > It would appear that SSL.com is a member in good standing of the CA/B
> > Forum.
> > > Is there any intention on the part of SSL.com to propose this change as a
> > > ballot?  While you're at it, if you could include a fix for the issue
> > > described in https://github.com/cabforum/documents/issues/164, that
> > would be
> > > appreciated, since it is the same sentences that need modification, and
> > for
> > > much the same reasons.
> >
> > Yes, this is reasonable, and we treated such a key as compromised, revoking
> > it within 24 hours.
> >
> > We would support a ballot that makes this clear. We also monitor the
> > discussion in https://github.com/cabforum/documents/issues/164.
> >
> 
> While you answered "Yes", you followed with a different clarification. "We
> would support" seems different from "Is there any intention on SSL.com to
> propose"

Yes, we are happy to propose a ballot change to the BR language pertaining to 
this issue.

> 
> > We have responded as to how we interpreted and implemented this
> > requirement. Our blacklist did not include the entire set of Debian weak
> > keys. We have been completely transparent about this issue and we have
> > improved our weak key detection mechanism to include the openssl-blacklist
> > package.
> >
> > We examined similar failures/incidents, such as:
> >
> > -   https://bugzilla.mozilla.org/show_bug.cgi?id=1472052
> > -   https://bugzilla.mozilla.org/show_bug.cgi?id=1435770
> > -
> > https://community.letsencrypt.org/t/2017-09-09-late-weak-key-revocation/42519
> >
> > The last one shows that at least one more CA had a similar interpretation
> > of the BRs.
> >
> 
> Having read these, while I appreciate SSL.com highlighting them, I fail to
> see them supporting the claim being made here. Could you please elaborate
> why/how you believe this statement to be true? They seem rather remarkably
> different, and certainly, the last one highlights a CA actively working to
> be comprehensive, while much of SSL.com's reply seems to be "We shouldn't
> have to be comprehensive"

We were merely pointing out previous discussions on this topic. I can see why 
you may have such an impression, but "we shouldn't have to be comprehensive" is 
not what we are thinking or believe. On the contrary, we've increased our 
efforts to be as comprehensive as possible, and we will continue to expand our 
weak keys list even after this issue is closed.

We believe in a robustly secure ecosystem, not in half measures.


Re: ssl.com: Certificate with Debian weak key

2020-03-16 Thread Ryan Sleevi via dev-security-policy
On Mon, Mar 16, 2020 at 3:12 PM Chris Kemmerer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > It would appear that SSL.com is a member in good standing of the CA/B
> Forum.
> > Is there any intention on the part of SSL.com to propose this change as a
> > ballot?  While you're at it, if you could include a fix for the issue
> > described in https://github.com/cabforum/documents/issues/164, that
> would be
> > appreciated, since it is the same sentences that need modification, and
> for
> > much the same reasons.
>
> Yes, this is reasonable, and we treated such a key as compromised, revoking
> it within 24 hours.
>
> We would support a ballot that makes this clear. We also monitor the
> discussion in https://github.com/cabforum/documents/issues/164.
>

While you answered "Yes", you followed with a different clarification. "We
would support" seems different from "Is there any intention on SSL.com to
propose"


> We have responded as to how we interpreted and implemented this
> requirement. Our blacklist did not include the entire set of Debian weak
> keys. We have been completely transparent about this issue and we have
> improved our weak key detection mechanism to include the openssl-blacklist
> package.
>
> We examined similar failures/incidents, such as:
>
> -   https://bugzilla.mozilla.org/show_bug.cgi?id=1472052
> -   https://bugzilla.mozilla.org/show_bug.cgi?id=1435770
> -
> https://community.letsencrypt.org/t/2017-09-09-late-weak-key-revocation/42519
>
> The last one shows that at least one more CA had a similar interpretation
> of the BRs.
>

Having read these, while I appreciate SSL.com highlighting them, I fail to
see them supporting the claim being made here. Could you please elaborate
why/how you believe this statement to be true? They seem rather remarkably
different, and certainly, the last one highlights a CA actively working to
be comprehensive, while much of SSL.com's reply seems to be "We shouldn't
have to be comprehensive"


Re: ssl.com: Certificate with Debian weak key

2020-03-16 Thread Chris Kemmerer via dev-security-policy
On Wednesday, March 11, 2020 at 5:41:00 PM UTC-5, Matt Palmer wrote:
> On Wed, Mar 11, 2020 at 10:46:05AM -0700, Chris Kemmerer via 
> dev-security-policy wrote:
> > On Tuesday, March 10, 2020 at 8:44:49 PM UTC-5, Matt Palmer wrote:
> > > On Tue, Mar 10, 2020 at 01:48:49PM -0700, Chris Kemmerer via 
> > > dev-security-policy wrote:
> > > > For the purpose of identifying whether a Private Key is weak, SSL.com 
> > > > uses
> > > > a set of Debian weak keys that was provided by our CA software vendor as
> > > > the basis for our blacklist.
> > > 
> > > I think it's worth getting additional, *very* detailed, information from
> > > your CA software vendor as to where *they* got their Debian weak key list
> > > from.  That appears to be the fundamental breakdown here -- you relied on 
> > > a
> > > third-party to give you good service, and they didn't.  So I think that
> > > digging into your vendor's practices is an important line of enquiry to go
> > > down.
> > 
> > As mentioned in our report, we used that list as a basis, and took care
> > to augment it with other weak keys from available blacklists,
> 
> So presumably if there are other Mozilla-trusted CAs using the same CA
> vendor, who *are* doing the bare minimum and just using the CA vendor's key
> list, they're even more vulnerable to a potential misissuance.  As you
> mentioned that your CA software vendor does read this list, I *really* hope
> they speak up soon so we can figure out how they got their key list.
> 
> > weak keys from available blacklists, even for the ROCA vulnerability.
> 
> Sidenote: my understanding of ROCA is that it is of a different form to the
> Debian weak key problem, in that you can't a priori enumerate ROCA-impacted
> keys, but can only identify them as you find them.  As such, my
> understanding is that there isn't, and in fact *cannot*, be a comprehensive
> "blacklist", as such, of keys affected by ROCA.
> 
> Is your understanding of the ROCA vulnerability different to my description
> above, and if not, can you explain how a "blacklist"-based approach is a
> suitable mitigation for avoiding issuance of certificates using
> ROCA-impacted private keys?
> 
> (Conversely, if it *is* possible to get a comprehensive list of ROCA-impacted
> keys, I know what I'm doing this weekend...)

ROCA vulnerability detection is part of our “weak keys detection mechanism”, 
not part of a blacklist. Our original language “we do have a weak keys 
detection mechanism in place, it does detect Debian weak keys (although it's 
not perfect) and it also detects ROCA vulnerable keys” makes that clear.
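
To illustrate why ROCA detection has to be a computed fingerprint rather
than a list lookup, here is a minimal sketch of the published test (Nemec
et al., CVE-2017-15361).  The prime set below is deliberately short and
purely illustrative; the actual roca-detect tool uses a larger fixed set:

    # RSALib moduli satisfy n mod p in <65537> (mod p) for the small
    # primes of the generator's primorial; random moduli fail this fast.
    SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107]

    def subgroup_of_65537(p):
        # Enumerate the multiplicative subgroup generated by 65537 mod p.
        seen, x = set(), 1
        while x not in seen:
            seen.add(x)
            x = (x * 65537) % p
        return seen

    SUBGROUPS = {p: subgroup_of_65537(p) for p in SMALL_PRIMES}

    def looks_roca_fingerprinted(n: int) -> bool:
        return all(n % p in SUBGROUPS[p] for p in SMALL_PRIMES)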

> 
> > For what it's worth, we believe that the current language in the BRs could
> > be less ambiguous as far as the Debian weak keys are concerned.  For
> > example, it seems that the community's expectations are for CAs to detect
> > and block weak Debian keys generated by vulnerable RNG using OpenSSL in
> > popular architectures.
> 
> The problem with using the argument that "the BRs are ambiguous" to try and
> defend a breach of them is that there are always potential ambiguities in
> all language -- in many ways, "ambiguity is in the eye of the beholder"
> ("ambiguer"?).  My understanding of the consensus from past discussions on
> this list is that if a CA believes there is an ambiguity in the BRs, the
> correct action is to raise that in the CA/B Forum *before* they fall foul of
> it.
> 
> CAs should be reading the BRs, as I understand it, in a "defensive" mode,
> looking for requirements that could be read multiple ways, and when they are
> found, the CA needs to ensure either that they are complying with the
> strictest possible reading, or else bringing the ambiguity to the attention
> of the CA/B Forum and suggesting less ambiguous wording.
> 
> At any rate, it would be helpful to know what, precisely, SSL.com's
> understanding of this requirement of the BRs was prior to the commencement
> of this incident.  Can you share this with us?  Presumably SSL.com did a
> careful analysis of all aspects of the BRs, and came to a conclusion as to
> precisely what was considered "compliant".  With regards to this
> requirement, what was SSL.com's position as to what was necessary to be
> compliant with this aspect of the BRs?

We have already described our understanding of the expectations expressed in BR 
6.1.1.3 and the steps we took to comply with it. We provide details in bug 
1620772, which remains the primary channel for this issue. As always, we 
attempt to be as transparent as possible, because we strongly feel that this 
approach best serves the ecosystem.
Our implementation did not meet these expectations, as it was missing direct 
checks of keys matching the "openssl-blacklist" package. Immediately upon our 
coming to this understanding, we initiated development of a fix to meet this 
expectation. This fix was tested and pushed to production last Friday. In 
parallel, we are conducting an analysis of which 

Re: About upcoming limits on trusted certificates

2020-03-16 Thread Ryan Sleevi via dev-security-policy
On Mon, Mar 16, 2020 at 11:13 AM Doug Beattie 
wrote:

> For clarity, I think we need to discuss all the knobs along with proposed
> effective dates and usage periods so we get the whole picture.
>

I disagree with this framing; as I have pointed out, it's been repeatedly
used disingenuously by some CAs in the past as a means to suggest no change
can be made sooner than a given date. I refuse to engage in such a game that
suggests taking options off the table, or that changes will not be made
sooner than X. While I appreciate the importance of setting milestones, I
don't think we need to set a hard milestone for lifetime in order to have a
productive and useful conversation about data reuse.


> The max validity period of the certificate has been the one receiving the
> most discussion recently, yet that’s missing from your counter proposal.
> Don’t you view that as a critical data item to put on the table, even if
> less important (in your opinion) than domain validation re-use?
>

I don't view that as being necessary to nail down a timetable for, no. As I
previously explained, the ability to regularly revalidate the domain and
organization information makes it much easier to alter lifetime with no
impact. While it's been receiving the most attention, that's largely because
of the assumption that domain validation reuse is, or should be, static,
when we know that's quite the opposite from a security perspective. It's
true that reduced reuse does not guarantee that reducing lifetimes is
easier, but aligning with the previously provided timeline will address the
most vocal objections.


> Did you add Public key as a new knob, meaning that Applicants must change
> their public key according to some rule?
>

No, this is not a new knob. As I mentioned in my previous e-mail, which
addressed this, it's one of the pieces provided by the subscriber. This is
the knob that reducing lifetimes affects. Lifetime, in the absence of
touching data reuse, only affords certainty with changes to the certificate
profile and the ability to replace public keys (e.g. in the event of
compromise / misissuance). You can see this most easily by examining TLS
Delegated Credentials, which intentionally creates a minimal-lifetime
"certificate-like" thing to ensure /the ability/ to timely rotate public
keys, even though it does not /require/ that rotation. That is, the reduced
lifetime of delegated credentials makes it possible, and it's otherwise not
possible without a reduced lifetime.


RE: About upcoming limits on trusted certificates

2020-03-16 Thread Doug Beattie via dev-security-policy
For clarity, I think we need to discuss all the knobs along with proposed 
effective dates and usage periods so we get the whole picture.  The max 
validity period of the certificate has been the one receiving the most 
discussion recently, yet that’s missing from your counter proposal.  Don’t you 
view that as a critical data item to put on the table, even if less important 
(in your opinion) than domain validation re-use?  

 

Did you add Public key as a new knob, meaning that Applicants must change their 
public key according to some rule?

 

 

From: Ryan Sleevi  
Sent: Monday, March 16, 2020 10:27 AM
To: Doug Beattie 
Cc: r...@sleevi.com; Kathleen Wilson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

No, I don't think we should assume anything, since it doesn't say anything 
about lifetime :)

 

The value of reduced certificate lifetimes is only fully realized with a 
corresponding reduction in data reuse.

 

If you think about a certificate, there are three main pieces of information 
that come from a subscriber:

- The public key

- The domain name

- (Optionally) The organization information

 

In addition, there are rules about how a CA validates this information (e.g. 
the BR validation requirements)

This information is then collected, into a certificate, using a certificate 
profile (e.g. what the BRs capture in Section 7).

 

Reducing the lifetime of certificates, in isolation, helps with the agility of 
the public key and the agility of the profile, but does not necessarily help 
with the agility of the validation requirements nor the accuracy of the domain 
name or organization information. BygoneSSL is an example of the former being 
an issue, while issuing certificates for organizations that no longer exist/are 
legally recognized is an example of the latter being an issue.

 

These knobs - lifetime, domain validation, org validation - can be tweaked 
independently, but tweaking one without the others limits the value. For 
example, reducing domain validation reuse, without reducing lifetime, still 
allows for long-lived certs to be issued for otherwise-invalid domain names, 
which means you're not getting the security benefits of the validation reuse 
reduction. Introducing improved domain validation methods, for example, isn't 
helped by reducing lifetime or organization data reuse, because you can still 
reuse the 'old' validations using the less secure means. So all three are 
linked, even if all three can be independently adjusted.

 

I outlined a timetable on how to reduce the latter two (domain and organization 
validation). Reducing the latter two helps meaningfully reduce lifetimes, to 
take advantage of those reductions, but that can be independently adjusted. In 
particular, reducing lifetimes makes the most sense when folks are accustomed 
to regular validations, which is why it's important to reduce domain validation 
frequency. That effort complements reductions in lifetimes, and helps remove 
the concerns being raised.

 

On Mon, Mar 16, 2020 at 10:04 AM Doug Beattie <doug.beat...@globalsign.com> wrote:

Are we to assume that the maximum certificate validity remains at 398 days?

 

From: Ryan Sleevi <r...@sleevi.com>
Sent: Monday, March 16, 2020 10:02 AM
To: Doug Beattie
Cc: r...@sleevi.com; Kathleen Wilson <kwil...@mozilla.com>;
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

Hi Doug,

 

Perhaps it got mangled by your mail client, but I think I had that covered?

 

I've pasted it again, below.

 

Counter proposal:

April 2021: 395 day domain validation max

April 2021: 366 day organization validation max 

April 2022: 92 day domain validation max

September 2022: 31 day domain validation max

April 2023: 3 day domain validation max

April 2023: 31 day organization validation max

September 2023: 6 hour domain validation max

 

As mentioned in the prior mail (and again, perhaps it was eaten by a grueful 
mail-client)

This sets out an entire timeline that encourages automation of domain validation, 
reduces the risk of stale organization data (which many jurisdictions require 
annual review), and eventually moves to a system where request-based 
authentication is the norm, and automated systems for organization data is 
used. If there are jurisdictions that don't provide their data in a machine 
readable format, yes, they're precluded. If there are organizations that don't 
freshly authenticate their domains, yes, they're precluded.
Now, it's always possible to consider shifting from an account-authenticated 
model to a key-authenticated-model (i.e. has this key been used with this 
domain), since that's a core objective of domain revalidation, but I can't see 
wanting to get to an end state where that duration is greater than 

Re: About upcoming limits on trusted certificates

2020-03-16 Thread Ryan Sleevi via dev-security-policy
No, I don't think we should assume anything, since it doesn't say anything
about lifetime :)

The value of reduced certificate lifetimes is only fully realized with a
corresponding reduction in data reuse.

If you think about a certificate, there are three main pieces of
information that come from a subscriber:
- The public key
- The domain name
- (Optionally) The organization information

In addition, there are rules about how a CA validates this information
(e.g. the BR validation requirements)
This information is then collected, into a certificate, using a certificate
profile (e.g. what the BRs capture in Section 7).

Reducing the lifetime of certificates, in isolation, helps with the agility
of the public key and the agility of the profile, but does not necessarily
help with the agility of the validation requirements nor the accuracy of
the domain name or organization information. BygoneSSL is an example of the
former being an issue, while issuing certificates for organizations that no
longer exist/are legally recognized is an example of the latter being an
issue.

These knobs - lifetime, domain validation, org validation - can be tweaked
independently, but tweaking one without the others limits the value. For
example, reducing domain validation reuse, without reducing lifetime, still
allows for long-lived certs to be issued for otherwise-invalid domain
names, which means you're not getting the security benefits of the
validation reuse reduction. Introducing improved domain validation methods,
for example, isn't helped by reducing lifetime or organization data reuse,
because you can still reuse the 'old' validations using the less secure
means. So all three are linked, even if all three can be independently
adjusted.

I outlined a timetable on how to reduce the latter two (domain and
organization validation). Reducing the latter two helps meaningfully reduce
lifetimes, to take advantage of those reductions, but that can be
independently adjusted. In particular, reducing lifetimes makes the most
sense when folks are accustomed to regular validations, which is why it's
important to reduce domain validation frequency. That effort complements
reductions in lifetimes, and helps remove the concerns being raised.

On Mon, Mar 16, 2020 at 10:04 AM Doug Beattie 
wrote:

> Are we to assume that the maximum certificate validity remains at 398 days?
>
>
>
> *From:* Ryan Sleevi 
> *Sent:* Monday, March 16, 2020 10:02 AM
> *To:* Doug Beattie 
> *Cc:* r...@sleevi.com; Kathleen Wilson ;
> mozilla-dev-security-pol...@lists.mozilla.org
> *Subject:* Re: About upcoming limits on trusted certificates
>
>
>
> Hi Doug,
>
>
>
> Perhaps it got mangled by your mail client, but I think I had that covered?
>
>
>
> I've pasted it again, below.
>
>
>
> Counter proposal:
>
> April 2021: 395 day domain validation max
>
> April 2021: 366 day organization validation max
>
> April 2022: 92 day domain validation max
>
> September 2022: 31 day domain validation max
>
> April 2023: 3 day domain validation max
>
> April 2023: 31 day organization validation max
>
> September 2023: 6 hour domain validation max
>
>
>
> As mentioned in the prior mail (and again, perhaps it was eaten by a
> grueful mail-client)
>
> This sets out an entire timeline that encourages automation of domain
> validation, reduces the risk of stale organization data (which many
> jurisdictions require annual review), and eventually moves to a system
> where request-based authentication is the norm, and automated systems for
> organization data is used. If there are jurisdictions that don't provide
> their data in a machine readable format, yes, they're precluded. If there
> are organizations that don't freshly authenticate their domains, yes,
> they're precluded.
> Now, it's always possible to consider shifting from an
> account-authenticated model to a key-authenticated-model (i.e. has this key
> been used with this domain), since that's a core objective of domain
> revalidation, but I can't see wanting to get to an end state where that
> duration is greater than 30 days, at most, of reuse, because of the
> practical risks and realities of key compromises. Indeed, if you look at
> the IETF efforts, such as Delegated Credentials or STAR, the industry
> evaluation of risk suggests 7 days is likely a more realistic upper bound
> for authorization of binding a key to a domain before requiring a fresh
> challenge.
>
>
>
> Hopefully that helps!
>
>
>
> On Mon, Mar 16, 2020 at 9:53 AM Doug Beattie 
> wrote:
>
> Ryan,
>
>
>
> In your counter proposal, could you list your proposed milestone dates
> and then, for each one, specify the max validity period, domain re-use
> period, and Org validation period associated with those dates?  As it
> stands, Org validation requires the CA to verify that the address is the
> Applicant’s address, and that typically involves a direct exchange with a
> person at the organization via a Reliable Method of Communication.  It’s
> not clear how we address that if

RE: About upcoming limits on trusted certificates

2020-03-16 Thread Doug Beattie via dev-security-policy
Are we to assume that the maximum certificate validity remains at 398 days?

 

From: Ryan Sleevi  
Sent: Monday, March 16, 2020 10:02 AM
To: Doug Beattie 
Cc: r...@sleevi.com; Kathleen Wilson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

Hi Doug,

 

Perhaps it got mangled by your mail client, but I think I had that covered?

 

I've pasted it again, below.

 

Counter proposal:

April 2021: 395 day domain validation max

April 2021: 366 day organization validation max 

April 2022: 92 day domain validation max

September 2022: 31 day domain validation max

April 2023: 3 day domain validation max

April 2023: 31 day organization validation max

September 2023: 6 hour domain validation max

 

As mentioned in the prior mail (and again, perhaps it was eaten by a grueful 
mail-client)

This sets out an entire timeline that encourages automation of domain validation, 
reduces the risk of stale organization data (which many jurisdictions require 
annual review), and eventually moves to a system where request-based 
authentication is the norm, and automated systems for organization data is 
used. If there are jurisdictions that don't provide their data in a machine 
readable format, yes, they're precluded. If there are organizations that don't 
freshly authenticate their domains, yes, they're precluded.
Now, it's always possible to consider shifting from an account-authenticated 
model to a key-authenticated-model (i.e. has this key been used with this 
domain), since that's a core objective of domain revalidation, but I can't see 
wanting to get to an end state where that duration is greater than 30 days, at 
most, of reuse, because of the practical risks and realities of key 
compromises. Indeed, if you look at the IETF efforts, such as Delegated 
Credentials or STAR, the industry evaluation of risk suggests 7 days is likely 
a more realistic upper bound for authorization of binding a key to a domain 
before requiring a fresh challenge.

 

Hopefully that helps!

 

On Mon, Mar 16, 2020 at 9:53 AM Doug Beattie <doug.beat...@globalsign.com> wrote:

Ryan,

 

In your counter proposal, could you list your proposed milestone dates and 
then, for each one, specify the max validity period, domain re-use period, and 
Org validation period associated with those dates?  As it stands, Org 
validation requires the CA to verify that the address is the Applicant’s 
address, and that typically involves a direct exchange with a person at the 
organization via a Reliable Method of Communication.  It’s not clear how we 
address that if we move to anything below a year.





Re: About upcoming limits on trusted certificates

2020-03-16 Thread Ryan Sleevi via dev-security-policy
Hi Doug,

Perhaps it got mangled by your mail client, but I think I had that covered?

I've pasted it again, below.

Counter proposal:
April 2021: 395 day domain validation max
April 2021: 366 day organization validation max
April 2022: 92 day domain validation max
September 2022: 31 day domain validation max
April 2023: 3 day domain validation max
April 2023: 31 day organization validation max
September 2023: 6 hour domain validation max
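
To make concrete how an issuance pipeline would consume such a schedule,
here is a small sketch; the dates and caps are the proposal above, not
current BR requirements, and the 825-day starting point is today's BR
ceiling for validation reuse:

    from datetime import date, timedelta

    # (effective date, max domain validation reuse) from the proposal above
    DOMAIN_REUSE_SCHEDULE = [
        (date(2021, 4, 1), timedelta(days=395)),
        (date(2022, 4, 1), timedelta(days=92)),
        (date(2022, 9, 1), timedelta(days=31)),
        (date(2023, 4, 1), timedelta(days=3)),
        (date(2023, 9, 1), timedelta(hours=6)),
    ]

    def max_domain_reuse(on: date) -> timedelta:
        limit = timedelta(days=825)  # current BR ceiling, pre-schedule
        for effective, cap in DOMAIN_REUSE_SCHEDULE:
            if on >= effective:
                limit = cap
        return limit

    def validation_usable(validated_on: date, issuing_on: date) -> bool:
        # Date resolution only; the 6-hour milestone would need datetimes.
        return issuing_on - validated_on <= max_domain_reuse(issuing_on)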

As mentioned in the prior mail (and again, perhaps it was eaten by a
grueful mail-client)

> This sets out an entire timeline that encourages automation of domain
> validation, reduces the risk of stale organization data (which many
> jurisdictions require annual review), and eventually moves to a system
> where request-based authentication is the norm, and automated systems for
> organization data is used. If there are jurisdictions that don't provide
> their data in a machine readable format, yes, they're precluded. If there
> are organizations that don't freshly authenticate their domains, yes,
> they're precluded.
> Now, it's always possible to consider shifting from an
> account-authenticated model to a key-authenticated-model (i.e. has this key
> been used with this domain), since that's a core objective of domain
> revalidation, but I can't see wanting to get to an end state where that
> duration is greater than 30 days, at most, of reuse, because of the
> practical risks and realities of key compromises. Indeed, if you look at
> the IETF efforts, such as Delegated Credentials or STAR, the industry
> evaluation of risk suggests 7 days is likely a more realistic upper bound
> for authorization of binding a key to a domain before requiring a fresh
> challenge.


Hopefully that helps!

On Mon, Mar 16, 2020 at 9:53 AM Doug Beattie 
wrote:

> Ryan,
>
>
>
> In your counter proposal, could you list your proposed milestone dates
> and then, for each one, specify the max validity period, domain re-use
> period, and Org validation period associated with those dates?  As it
> stands, Org validation requires the CA to verify that the address is the
> Applicant’s address, and that typically involves a direct exchange with a
> person at the organization via a Reliable Method of Communication.  It’s
> not clear how we address that if we move to anything below a year.
>


RE: About upcoming limits on trusted certificates

2020-03-16 Thread Doug Beattie via dev-security-policy
Ryan,

 

In your counter proposal, could you list your proposed milestone dates and 
then, for each one, specify the max validity period, domain re-use period, and 
Org validation period associated with those dates?  As it stands, Org 
validation requires the CA to verify that the address is the Applicant’s 
address, and that typically involves a direct exchange with a person at the 
organization via a Reliable Method of Communication.  It’s not clear how we 
address that if we move to anything below a year.

 

 

 

From: Ryan Sleevi  
Sent: Friday, March 13, 2020 9:23 PM
To: Doug Beattie 
Cc: Kathleen Wilson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

On Fri, Mar 13, 2020 at 2:38 PM Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

When we moved to SHA2, we knew of the security risks, so the timeline could be 
justified; however, I don’t see the same pressing need to move to annual domain 
revalidation, or to 1-year max validity for that matter. 

 

I can understand that, and despite several years of effort, it appears that we 
will be just as unlikely to make forward progress. 

 

When we think about the issuance models, we need to keep the Enterprise 
approach in mind where domains are validated against a specific account or 
profile within an account and then issuance can happen using any valid domain 
or subdomain of those registered with the account.  Splitting the domain 
validation from issuance permits different teams to handle this and to manage 
the overall policy.  Domains can be validated at any time by anyone and not 
tied to the issuance of a specific certificate which makes issuance less prone 
to errors.  

 

This is a security risk, not a benefit. It creates significant risk that the CA 
systems, rather than strongly authenticating a request, move to a model of 
weakly authenticating a user or account. I can understand why CAs would prefer 
this, and potentially why Subscribers would too: it's convenient for them, and 
they're willing to accept the risk individually. However, we need to keep in 
mind the User approach in mind when thinking about whether these are good. For 
users, this introduces yet more risk into the system.

 

For example, if an account on a CA system is compromised, the attacker can 
issue any certificate for any of the authorized domains. Compare this to a 
model of fresh authentication for requests, in which the only certificates that 
can be issued are the ones that can be technically verified. Similarly, users 
must accept that when a CA deploys a weak authentication system, any domains 
which use that CA are now at risk.
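
To make the contrast concrete, fresh authentication means each request
re-proves control of the name, roughly like the http-01-flavoured sketch
below; publish_challenge stands in for the subscriber's side and is purely
illustrative, not any CA's real API:

    import secrets
    import urllib.request

    def fresh_domain_check(domain: str, publish_challenge) -> bool:
        # A new token per request: a compromised CA account is useless
        # unless the attacker can also place the token on the host itself.
        token = secrets.token_urlsafe(32)
        publish_challenge(domain, token)  # subscriber-side step (illustrative)
        url = "http://%s/.well-known/ca-challenge/%s" % (domain, token)
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("ascii", "replace").strip() == token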

 

When we put the user first, we can see that those Enterprise needs simply shift 
the risk/complexity from the Enterprise to the User. It's understandable 
why the Enterprise might prefer that, but we must not fool ourselves into 
thinking the risk is not there. Root Stores exist to balance the risk 
(collectively) to users, and to reject such attempts to shift the cost or 
burden onto them.

 

If your driving requirement to reduce the domain validation reuse is 
BygoneSSL, then the security analysis is flawed.  There are so many things 
that have to align to exploit a domain ownership change that it's impactable, 
imo. Has this ever been exploited? 

 

Yes, this was covered in BygoneSSL. If you meant to say impractical, then your 
risk analysis is flawed, but I suspect we'll disagree. This sort of concern has 
been the forefront of a number of new technologies, such as HTTP/2, Signed 
Exchanges, and the ORIGIN frame. Heck, even APNIC has seen these as _highly 
practical_ concerns: 
https://blog.apnic.net/2019/01/09/be-careful-where-you-point-to-the-dangers-of-stale-dns-records/
 . Search PowerDNS.

 

Would it make sense (if even possible) to track the level of automation and set 
a threshold for when the periods are changed?  Mozilla and Google are tracking 
HTTPS adoption and plan to hard block HTTP when it reaches a certain threshold. 
 Is there a way we can track issuance automation?  I'm guessing not, but that 
would be a good way to reduce validity based on web site administrators' 
embrace of automation tools.

 

I'm glad you appreciated the efforts of Google and Mozilla, as well as others, 
here. However, to suggest that transparency alone is likely to have much 
impact is to ignore what actually happened. That transparency was 
accompanied by meaningful change to promote HTTPS and discourage HTTP, and 
included making the necessary, sometimes controversial, changes to prioritize 
user security over enterprise needs. For example, browsers would not launch new 
features over insecure HTTP, insecure traffic was flagged and increasingly 
alerted on, and ultimately blocked.

 

I think if that's the suggestion, then the quickest solution is to have the CA 
indicate, within the certificate, 

Re: About upcoming limits on trusted certificates

2020-03-16 Thread Gijs Kruitbosch via dev-security-policy

On 14/03/2020 18:53, Nick Lamb wrote:
> my assumption is that at best such a patch would be in the big pile of
> volunteer stuff maybe nobody has time to look at.


Tangential: perhaps there's an aspect of phrasing here that is confusing 
me, but this reads to me as suggesting we don't review/work with 
volunteer code contributions, and I'd like to be explicit and say that 
we do our best to do so and I am unaware of big piles of un-looked-at 
volunteer-contributed patches (having been such a volunteer myself in 
the past).


I can't speak for the crypto team (though it looks like Kathleen has 
relayed an answer for the concrete bug you asked about), but if you know 
of Firefox patches that are sitting without due attention, please feel 
free to nudge me. And no, that approach might in theory not scale, which 
is why other folks are building better tooling to ensure we don't end up 
with trees falling in forests unheard, as it were. But in the meantime, 
feel free to ping me (off-list).


~ Gijs