Re: Symantec Update on SubCA Proposal

2017-07-20 Thread Rick Andrews via dev-security-policy
On Thursday, July 20, 2017 at 12:31:56 PM UTC-7, Gervase Markham wrote:
> Hi Steve,
> 
> Thanks for posting this. I appreciate the level of detail provided,
> which is useful in giving us a basis for discussion. It's a little
> regrettable, though, that it was published a couple of weeks after we
> were led to expect it...

In our June 1 post, we stated that we would update the community after the end 
of the month. Considering the community’s request for detail in our response, 
we wanted our update to reflect our latest discussions with RFP respondents, 
which took place during the first two weeks of July.  These discussions have 
directly informed our proposed dates as described in our post.  We also felt it 
was important to collect feedback from both Google and Mozilla (which we have 
done) on our draft timing proposal before submitting it to the community for 
consideration given that Google and Mozilla authored / endorsed the SubCA 
proposal.

> One note before we start: Symantec's business dealings regarding its CA
> business are not of concern to Mozilla other than relating to the
> "change of ownership or control" provisions in Mozilla policy (policy
> 2.5 section 8). However, if dates are proposed or agreed for
> implementation of the consensus plan, we would not expect those dates to
> be renegotiated because of a change of ownership or control.
> 
> Am I right in saying that, in order to hit these dates you are
> proposing, you would strongly desire to get consensus on them by August 1st?

Symantec would like to reach consensus on the totality of the SubCA proposal, 
including final dates, as soon as possible.  This is in the best interest of 
all.  Our proposed dates assume we are able to finalize negotiation of 
contracts with the selected Managed CA partner(s), incorporating the final 
dates agreed upon by the community, no later than July 31, 2017.

> On 18/07/17 19:22, Steve Medin wrote:
> > New Certificate Issuance: We believe the dates for transition of validation 
> > and issuance to the Managed CA that are both aggressive and achievable are 
> > as follows:
> > 
> > - Implement the Managed CA by December 1, 2017 (changed from August 8, 
> > 2017);
> > 
> > - Managed CA performs domain validation for all new certificates by 
> > December 1, 2017 (changed from November 1, 2017); and
> > 
> > - Managed CA performs full validation for all certificates by February 1, 
> > 2018. Prior to this date, reuse of Symantec authenticated organization 
> > information would be allowable for certificates of <13 months in validity.
> 
> To summarise for those reading along: this represents a change of a
> little less than 4 months for the first date, 1 month for the second
> date, and the third date is as originally proposed.

This is correct. We have worked with our RFP respondents to put together an 
aggressive but achievable plan that delivers on the spirit of the original 
proposal.

> Steve: to be clear, this means that browsers could implement a block on
> certificates from Symantec's existing PKI as follows: after December
> 1st, 2017, they could dis-trust all certificates with a notBefore
> greater than December 1st 2017?

Correct. However, as we indicated in our update, with a change of this 
magnitude we believe that there will likely be material compatibility and 
interoperability issues that will only come to light once server operators 
begin the transition to the Managed CA issued certificates. Recognizing this, 
we recommend that we establish a clear process to evaluate exception requests 
that includes consultations with the browsers to handle such corner cases.

> Given the explanations Symantec has given as to why these dates are
> reasonable, and the effort required to stand up the new PKI, I am minded
> to accept them, particularly as they have managed to hit the third
> originally-proposed date on the nose. However, I am still open to
> community input.
> 
> > Replacement of Unexpired Certificates Issued Before June 1, 2016: There are 
> > two major milestones that must be achieved after implementation of the 
> > Managed CA in order to replace unexpired certificates issued before June 1, 
> > 2016 that do not naturally expire before the distrust date(s) in the SubCA 
> > proposal. Those include the full revalidation of certificate information 
> > and then the customer replacement of those certificates. 
> 
> That is not necessarily so. The customers could replace their
> certificates using new, CT-logged certificates from Symantec's old
> infrastructure. This doesn't require any revalidation or any change in
> the certificate chain, so should have excellent compatibility
> properties, and it's something that could begin today.

While this is true under the terms of the SubCA proposal, we do not believe 
this is consistent with the spirit of Google’s and Mozilla’s prior commentary 
on their intent regarding the SubCA proposal, which is to limit the issuance of 
Symantec certificates.

Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Matthew Hardeman via dev-security-policy
On Thursday, July 20, 2017 at 3:32:29 PM UTC-5, Ryan Sleevi wrote:

> Broadly, yes, but there's unfortunately a shade of IP issues that make it
> more difficult to contribute as directly as Gerv proposed. Gerv may accept
> any changes to the Mozilla side, but if the goal is to modify the Baseline
> Requirements, you'd need to sign the IPR policy of the CA/B Forum and join
> as an Interested Party before changes.

I think at this phase it makes sense to better flesh out, within the Mozilla 
dev-security community, which mitigations might be economically feasible, 
which would provide measurable benefit of sufficient counterbalancing value, 
and what the practical details of implementation would look like, before 
trying to bring a ballot before the CA/B Forum.  Having said that, I did read 
over the IPR policy agreement and at first blush saw nothing that would 
prevent me or my company from becoming signatories should the idea take off.

> 
> And realize that the changes have to be comprehensible by those with
> limited to no background in technology :)

Certainly.  I would expect that the target audience is more compliance / audit 
/ accounting than it is network engineering.  Even so, I have to believe it 
is possible to describe the specific modes of risk and the counterbalancing 
mitigations in a framework that people with that skill set can evaluate.

> The question about the validity/reuse of this information is near and dear
> to Google's heart (hence Ballots 185 and 186) and the desire to reduce this
> time substantially exists. That said, the Forum as a whole has mixed
> feelings on this, and so it's still an active - and separate - point of
> discussion.

I mentioned it mostly because I was curious whether the issuance-blocking DNS 
queries already required for CAA at (or around) time of issuance had yet been 
used as an argument that more frequent revalidation is now appropriate, on the 
grounds that revalidation adds only a minor extra burden on top of the queries 
already being run for CAA.  (There's already a blocking DNS query on records 
of the subject domain holding up issuance, so what's a few more records on the 
same domain?)
 
> That said, I think it's worthwhile to make sure the threat model, more than
> anything, is defined and articulated. If the threat model results in us
> introducing substantive process, but without objective security gain, then
> it may not be as worthwhile. Enumerating the threats both addressed and
> unaddressable is thus useful in that scope.

Can you provide a good reference, or point to an example, of what you would 
regard as an exceptionally well-described definition and articulation of a 
threat model within the certificate issuance space generally?  I feel I have a 
quite solid grasp of the various weak points in the network infrastructure and 
network operations aspects that underpin the technological measures involved in 
domain validation.  I am interested in taking that knowledge and molding it 
into a form that best permits this community to assess my thoughts on the 
matter, weigh pros and cons, and help guide proposals for mitigation.


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Matthew Hardeman via dev-security-policy
On Thursday, July 20, 2017 at 8:13:23 PM UTC-5, Nick Lamb wrote:

> On Friday, 21 July 2017 01:13:15 UTC+1, Matthew Hardeman  wrote:
> > As easily as that, one could definitely get a certificate issued without 
> > breaking most of the internet, without leaving much of a trace, and without 
> > failing domain validation.

> One trace this would leave, if done using Let's Encrypt or several other 
> popular CAs, is a CT log record. Google has pushed back its implementation 
> date, but it seems inevitable at this point that certificates for ordinary 
> web sites (as opposed to HTTPS APIs, SMTP, IRC, and so on) will need to be 
> submitted for CT if you expect them to work much beyond this year. The most 
> obvious way to achieve this is for the CA to submit automatically during or 
> immediately after issuance.

Indeed, I should have better qualified my "without leaving much of a trace".  
CT logging would absolutely catch this issuance from many CAs today (and 
presumably from all publicly trusted CAs sometime in 2018).  I think the 
public at large, the security community, and even the CA community owe a debt 
of gratitude to the Googlers who brought CT to bear and the (primarily) 
Googlers who are doing / have done so much to drive its adoption.  In the 
period since around 2014, as the CT logs' depth of coverage has increased, 
entire categories of misissuance and significant misdeeds have come to light 
and proper responses have been made.

Having said that, what I meant was that, if carefully orchestrated, the attack 
would likely leave no forensic trace from which the party who acquired the 
certificate could be reconstructed.  Indeed, aside from the record of issuance 
itself, a skilled and brief hijack would be unlikely even to get noticed, so 
nobody would begin to investigate -- even a domain owner who knows the 
certificate was not properly requested.  Even so, in many instances a skilled 
attacker could scope the advertisement carefully enough that a single 
validation point is tricked while the route never propagates to any of the 
systems that might meaningfully notice and log the anomalous route 
advertisement.

What I meant by "without a trace" is that, in the case of an artful and 
limited-in-scope IP hijack, a CA exploited as in my hypothetical might well be 
called to present evidence in a courtroom.  Asked to attest that this 
certificate represented the party in control of eff.org on the date of 
issuance, said CA would likely present an evidence file which perfectly 
supports that it was properly issued to the party in control of eff.org.  
Nothing in the CA's logs -- at least presently -- would be expected to hint at 
a routing anomaly.

Further, on the network side of the equation, I strongly doubt that a CA 
questioning whether there was an anomalous route for the few minutes 
surrounding request and issuance of the suspect certificate would, even mere 
days after the issuance, be able to get any conclusive log data from their ISP 
as to 1) whether a route change encompassing the IPs in question occurred at 
that specific time, and (even less likely) 2) which of their peers introduced 
which prefix, and when.  To the extent that service providers log individual 
announcements and withdrawals of prefixes at all (many don't persist this data 
for any period), such logs are terribly ephemeral.  The data is voluminous and 
generally of low value except in the moment.  It also falls into a category of 
records that an ISP has every incentive to keep for its own diagnostics and 
debugging for a very brief period, and every incentive NOT to keep for longer 
than strictly needed: once knowledge that you have it gets out, it leads to 
burdensome requests to produce it.

> Now, most likely the EFF (if your example) does not routinely check CT logs, 
> and doesn't subscribe to any service which monitors the logs and reports new 
> issuances. But a high value target certainly _should_ be doing this, and it 
> significantly closes the window.

Insofar as they're a major sponsor of and participant in the Let's Encrypt 
community (they author/maintain the certbot client, right?), I would be shocked 
if they didn't monitor such things, at least periodically.

> DNSSEC is probably the wiser precaution if you're technically capable of 
> deploying it, but paying somebody to watch CT and tell you about all new 
> issuances for domains you control doesn't require any technical steps, which 
> makes it the attractive option if you're protective of your name but not 
> capable of bold technical changes.

Sadly, I did some quick checks on several top domain names.  Out of a quick 
search of Twitter, Amazon, Google, eBay, and PayPal, only paypal.com has 
implemented DNSSEC.  I presume there must still be too many badly configured or 

Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Nick Lamb via dev-security-policy
On Friday, 21 July 2017 01:13:15 UTC+1, Matthew Hardeman  wrote:
> As easily as that, one could definitely get a certificate issued without 
> breaking most of the internet, without leaving much of a trace, and without 
> failing domain validation.

One trace this would leave, if done using Let's Encrypt or several other 
popular CAs, is a CT log record. Google has pushed back its implementation 
date, but it seems inevitable at this point that certificates for ordinary web 
sites (as opposed to HTTPS APIs, SMTP, IRC, and so on) will need to be 
submitted for CT if you expect them to work much beyond this year. The most 
obvious way to achieve this is for the CA to submit automatically during or 
immediately after issuance.

Now, most likely the EFF (if your example) does not routinely check CT logs, 
and doesn't subscribe to any service which monitors the logs and reports new 
issuances. But a high value target certainly _should_ be doing this, and it 
significantly closes the window.

DNSSEC is probably the wiser precaution if you're technically capable of 
deploying it, but paying somebody to watch CT and tell you about all new 
issuances for domains you control doesn't require any technical steps, which 
makes it the attractive option if you're protective of your name but not 
capable of bold technical changes.
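
For a name owner in that position, even a small script polling a public CT
search frontend covers most of what a paid monitoring service would do.  A
minimal sketch in Python, assuming the crt.sh JSON endpoint and its field
names (name_value, not_before, issuer_name) behave as they did around this
time; a production monitor would use a proper CT log client or a commercial
service:

import requests

def new_issuances(domain, seen_ids):
    """Query crt.sh for certificates covering `domain` and return entries we
    have not seen before.  The JSON field names are assumptions based on
    crt.sh's output and should be verified against the live service."""
    resp = requests.get("https://crt.sh/",
                        params={"q": "%." + domain, "output": "json"},
                        timeout=30)
    resp.raise_for_status()
    fresh = []
    for entry in resp.json():
        cert_id = entry.get("id") or entry.get("min_cert_id")
        if cert_id in seen_ids:
            continue
        seen_ids.add(cert_id)
        fresh.append((cert_id, entry.get("issuer_name"),
                      entry.get("name_value"), entry.get("not_before")))
    return fresh

if __name__ == "__main__":
    seen = set()
    # Alert on anything unexpected: an unknown issuer or an unknown name.
    for cert in new_issuances("eff.org", seen):
        print(cert)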


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 20, 2017 at 8:13 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> My purpose in writing this was to illustrate just how easily someone with
> quite modest resources and the right skill set can presently overcome the
> technical checks of DNS based domain validation (which includes things such
> as HTTP validation).
>

Sure, and this was an excellent post for that. But I note that you
discounted, for example, registry attacks (which are, sadly, all too
common).

And I think that's an important thing to consider. The use of BGP attacks
against certificate issuance is well-known and long-documented, and I also
agree that it's not something we've mitigated by policy. I also appreciate
the desire to improve issuance practices - after all, it's telling that it
has taken until 2017 for us to be likely to finally do away with "CA invents
whatever method to validate it wants" - but I think we should also look
holistically at the threat scenario.

I mention these not because I wouldn't want to see mitigations for these,
but to make sure that the mitigations proposed are both practical and
realistic, and that they look holistically at the threat model. In a
holistic look, one which accepts that the registry can easily be
compromised (and/or other forms of DNSSEC shenanigans), it may be that the
effort is better invested in detection than in prevention.

> I'll write separately in a less sensationalized post to describe each risk
> factor and appropriate mitigations.
>
> In closing I wish to emphasize that Let's Encrypt was only chosen for this
> example because it was convenient as I already had a client installed and
> also literally free for me to perform multiple validations and certificate
> issuances.  (Though I could do that with Comodo's domain validation 3 month
> trial product too, couldn't I?)  A couple of extra checks strongly suggest
> that quite several other CAs which issue domain validation products could
> be just as easily subverted.  As yet, I have not identified a CA which I
> believe is well prepared for this level of network manipulation.  To their
> credit, it is clear to me that the people behind Let's Encrypt actually
> recognize this risk (on the basis of comments I've seen in their discussion
> forums as well as commentary in some of their recent GitHub commits.)
> Furthermore, there is evidence that they are working toward a plan which
> would help mitigate the risks of this kind of attack.  I reiterate again
> that nothing in this article highlights a risk surfaced by Let's Encrypt
> that isn't also exposed by every other DV issuing CA I've scrutinized.


Agreed. However, we're still figuring out with CAs how not to follow
redirects when validating requests, so we've got some very, very
low-hanging fruit in the security space to improve on. And this improvement
effort has, to some extent, a limited budget, so we want to go for the big
returns.


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Matthew Hardeman via dev-security-policy
One (Hypothetical) Concrete Example of a Practical DNS Validation Attack:

(Author's note:  I've chosen for this example to utilize the Let's Encrypt CA 
as the Certificate Authority involved and I have chosen as a target for 
improper validation the domain eff.org.  Neither of these is in any way 
endorsing what I have documented here.  Neither is aware of the scenario I am 
painting here.  I have NOT actually carried out a route hijack attack in order 
to get a certificate for eff.org.  I DO NOT intend to do so.  I have laid out 
the research methodology and data points of interest that one who would seek to 
get a certificate for eff.org illegitimately would need.)

The target:  eff.org

In order to validate as eff.org, one needs to -- at a minimum -- be positioned 
to temporarily answer DNS queries on behalf of eff.org.  Assuming that the DNS 
root servers, the .org TLD servers and the registrar for eff.org are not to be 
compromised, the best mechanism to accomplish answering for eff.org in the DNS 
will be to hijack the IP space for the authoritative name servers for the 
eff.org zone.

First, we must find them.

appleprov1:~ mhardeman$ dig +trace -t NS eff.org

; <<>> DiG 9.8.3-P1 <<>> +trace -t NS eff.org
;; global options: +cmd
.   161925  IN  NS  a.root-servers.net.
.   161925  IN  NS  b.root-servers.net.
.   161925  IN  NS  c.root-servers.net.
.   161925  IN  NS  d.root-servers.net.
.   161925  IN  NS  e.root-servers.net.
.   161925  IN  NS  f.root-servers.net.
.   161925  IN  NS  g.root-servers.net.
.   161925  IN  NS  h.root-servers.net.
.   161925  IN  NS  i.root-servers.net.
.   161925  IN  NS  j.root-servers.net.
.   161925  IN  NS  k.root-servers.net.
.   161925  IN  NS  l.root-servers.net.
.   161925  IN  NS  m.root-servers.net.
;; Received 228 bytes from 10.47.52.1#53(10.47.52.1) in 330 ms

org.        172800  IN  NS  a0.org.afilias-nst.info.
org.        172800  IN  NS  a2.org.afilias-nst.info.
org.        172800  IN  NS  b0.org.afilias-nst.org.
org.        172800  IN  NS  b2.org.afilias-nst.org.
org.        172800  IN  NS  c0.org.afilias-nst.info.
org.        172800  IN  NS  d0.org.afilias-nst.org.
;; Received 427 bytes from 198.97.190.53#53(198.97.190.53) in 154 ms

eff.org.    86400   IN  NS  ns1.eff.org.
eff.org.    86400   IN  NS  ns2.eff.org.
;; Received 93 bytes from 2001:500:b::1#53(2001:500:b::1) in 205 ms

eff.org.    7200    IN  NS  ns1.eff.org.
eff.org.    7200    IN  NS  ns6.eff.org.
eff.org.    7200    IN  NS  ns2.eff.org.
;; Received 127 bytes from 69.50.225.156#53(69.50.225.156) in 79 ms


(Further research suggests ns6.eff.org is presently non-responsive or is in 
some special role - I would guess a hidden master, considering that the .org 
delegation servers only refer out to ns1.eff.org and ns2.eff.org.)

Is eff.org DNSSEC protected?  Asking "dig +trace -t DNSKEY eff.org" will reveal 
no DNSKEY records returned.  No DNSSEC for this zone.  See also dnsviz.net for 
such lookups.
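
The same check can be scripted: a DNSSEC-signed delegation shows up as a DS
record in the parent zone.  A minimal sketch, assuming the dnspython package;
"dig +trace -t DNSKEY" or DNSViz gives the same answer:

import dns.resolver  # dnspython, assumed available

def has_signed_delegation(zone):
    """Return True if the parent zone publishes a DS record for `zone`,
    i.e. the delegation is DNSSEC-signed.  NoAnswer/NXDOMAIN means there is
    no secure delegation, which is the eff.org case described above."""
    try:
        answer = dns.resolver.resolve(zone, "DS")
        return answer.rrset is not None and len(answer.rrset) > 0
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

print(has_signed_delegation("eff.org"))     # expected False, matching the dig result above
print(has_signed_delegation("paypal.com"))  # expected True, per the spot checks elsewhere in the thread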

So, all I need to do is hijack the IP space for ns1.eff.org and ns2.eff.org -- 
and very temporarily -- to get a certificate issued for eff.org.

(Author's further note:  I'll grant that eff.org is probably on various 
people's high-value domain lists and thus would likely get policy-blocked for 
other reasons regardless of successful domain validation.  This is, after all, 
only an example.  I also wish to set out that I give IPv4 examples below, 
knowing one would also need to work the IPv6 angle as well.  I do not explore 
this here; the principles are the same.)

Now, we need to know what IP space to hijack:

dig -t A ns1.eff.org yields:
;; ANSWER SECTION:
ns1.eff.org.    3269    IN  A   173.239.79.201

dig -t A ns2.eff.org yields:
;; ANSWER SECTION:
ns2.eff.org.    6385    IN  A   69.50.225.156


Ultimately, to succeed in getting a DNS TXT record domain validation from Let's 
Encrypt for eff.org, we will need to _very briefly_ take over the IP space and 
be able to receive and answer DNS queries for 173.239.79.201 and 69.50.225.156. 
 This is probably far easier for a great number of people than many would 
believe.
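
To make concrete what the hijacked nameservers must serve during that brief
window: for Let's Encrypt's DNS-based (dns-01) challenge, the CA queries a TXT
record at _acme-challenge.eff.org whose value is derived from the challenge
token and the ACME account key.  A minimal sketch of that derivation as
specified by ACME (RFC 8555); the token and thumbprint below are placeholders,
not real values:

import base64
import hashlib

def dns01_txt_value(token, account_key_thumbprint):
    """Return the TXT value the CA expects at _acme-challenge.<domain>:
    the unpadded base64url encoding of SHA-256(token || '.' || thumbprint)."""
    key_authorization = token + "." + account_key_thumbprint
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")

# Placeholder inputs, for illustration only.
print(dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"))

An attacker who controls the answers for the two nameserver IPs above simply
returns this value when the CA's resolver asks, and the validation succeeds.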

Let's understand more about those two IP addresses and how the network space 
containing those two IP addresses is advertised to the broader internet.  I 
will utilize the University of Oregon's Route Views project for this:

route-views>show ip bgp 173.239.79.201 
BG
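
For readers who want to reproduce this step without a Route Views looking
glass, Team Cymru publishes the same origin information over DNS.  A minimal
sketch, assuming the dnspython package; it prints the covering announced
prefix and origin ASN for each nameserver address:

import dns.resolver  # dnspython, assumed available

def origin_info(ipv4):
    """Return 'ASN | prefix | CC | registry | allocated' for an IPv4 address,
    as published in Team Cymru's origin.asn.cymru.com TXT zone."""
    reversed_ip = ".".join(reversed(ipv4.split(".")))
    answer = dns.resolver.resolve(reversed_ip + ".origin.asn.cymru.com", "TXT")
    return next(iter(answer)).to_text().strip('"')

for ns_ip in ("173.239.79.201", "69.50.225.156"):
    print(ns_ip, "->", origin_info(ns_ip))

The size and origin of the covering prefix is exactly what an attacker studies
when deciding whether a more-specific or equal-length announcement will win at
the CA's vantage point.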

Re: Guang Dong Certificate Authority (GDCA) root inclusion request

2017-07-20 Thread Kathleen Wilson via dev-security-policy
Thanks to all of you who reviewed and commented on this request from Guangdong 
Certificate Authority (GDCA) to include the GDCA TrustAUTH R5 ROOT certificate, 
turn on the Websites trust bit, and enable EV treatment. 

I believe that all of the concerns that were raised in this discussion have 
been properly addressed, and I will state my intent to approve this request in 
the bug.

https://bugzilla.mozilla.org/show_bug.cgi?id=1128392

I am now closing this discussion. Any further follow-up should be added 
directly to the bug.

Thanks,
Kathleen



Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 20, 2017 at 4:23 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I would be willing to take a stab at this if the subject matter is of
> interest and would be willing to commit some time to work on it providing
> that it would appear a convenient time to discuss and contemplate the
> matter.  Can anyone give me a sense of whether the matter of the potential
> vulnerabilities that I see here -- and of the potential mitigations I might
> suggest -- are of interest to the community?
>

Broadly, yes, but there's unfortunately a shade of IP issues that make it
more difficult to contribute as directly as Gerv proposed. Gerv may accept
any changes to the Mozilla side, but if the goal is to modify the Baseline
Requirements, you'd need to sign the IPR policy of the CA/B Forum and join
as an Interested Party before changes.

And realize that the changes have to be comprehensible by those with
limited to no background in technology :)


> Quite separately, it appears that 3.2.2.8's "As part of the issuance
> process..." text would strongly suggest that CAA record checking be
> performed upon each instance of certificate issuance.  I presume that
> applies even in the face of a CA which might be relying upon previous DNS /
> HTTP domain validation.  I grant that the text goes on to say that issuance
> must occur within the greater of 8 hours or the CAA TTL, but it does appear
> that the intent is that CAA records be queried for each instance of
> issuance and for each SAN dnsName.  If this is the intent and ultimately
> the practice and we are already requiring blocking reliance on DNS query
> within the process of certificate issuance, should the validity of domain
> validation itself be similarly curtailed?  My argument is that if we are
> placing a blocking reliance upon both the CA's DNS validation
> infrastructure AS WELL AS the target domain's authoritative DNS
> infrastructure during the course of the certificate issuance process, then
> there is precious little extra point of failure in just requiring that
> domain validation occur with a similarly reduced validity period.
>

This is indeed a separate issue. Like patches, it's best to keep them as
small as you can.

The question about the validity/reuse of this information is near and dear
to Google's heart (hence Ballots 185 and 186) and the desire to reduce this
time substantially exists. That said, the Forum as a whole has mixed
feelings on this, and so it's still an active - and separate - point of
discussion.


> > > I believe there would be a massive improvement in the security of DNS
> query and HTTP client fetch type validations if the CA were required to
> execute multiple queries (ideally at least 3 or 4), sourced from different
> physical locations (said locations having substantial network and
> geographic distance between them) and each location utilizing significantly
> different internet interconnection providers.
> >
> > How could such a requirement be concretely specced in an auditable way?
>
> I can certainly propose a series of concrete specifications / requirements
> as to a more resilient validation infrastructure.  I can further propose a
> list of procedures for validating point-in-time compliance of each of the
> requirements in the aforementioned list.  Further, I can propose a list of
> data points / measurements / audit data that might be recorded as part of
> the validation record data set by the CA at the time of validation which
> could be used to provide strong support that the specifications /
> requirements are being followed through the course of operations.  If those
> were written up and presented does that begin to address your question?


I think it's worth exploring.

Note that there's a whole host of process involved:

- Change the CA/B documents (done through the Validation WG, at present -
need to minimally execute an IPR agreement before even members can launder
ballots for you)
- Change to the WebTrust TF audit criteria (which would involve
collaboration with them, and in general, they're not a big fan of precise
auditable controls)
- Change to the ETSI audit criteria (similar collaboration)

Alternatively, if exploring the Mozilla side, it's fairly easy to make it
up as you go along - which is not a criticism of the root store policy, but
praise :) You just may not get as much feedback.

That said, I think it's worthwhile to make sure the threat model, more than
anything, is defined and articulated. If the threat model results in us
introducing substantive process, but without objective security gain, then
it may not be as worthwhile. Enumerating the threats both addressed and
unaddressable is thus useful in that scope.


RE: dNSName containing '/' / low serial number entropy

2017-07-20 Thread Stephen Davidson via dev-security-policy
Hello:

Siemens Issuing CA Internet Server 2016 was taken offline upon this report
while Siemens and QuoVadis investigate.  It will not issue certificates
until the problem is resolved.

Kind regards, Stephen Davidson
QuoVadis




-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+s.davidson=quovadisglobal@lists.mozilla.org] On Behalf Of Charles Reiss via dev-security-policy
Sent: Tuesday, July 18, 2017 7:26 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: dNSName containing '/' / low serial number entropy

https://crt.sh/?id=174827359 is a certificate issued by D-TRUST SSL Class 3
CA 1 2009 containing the DNS SAN 'www.lbv-gis.brandenburg.de/lbvagszit'
(containing a '/') with a notBefore in April 2017.

The certificate also seems to have a short certificate serial number, which
cannot include 64 bits of entropy. Many certificates issued by this CA
appear to use large serial numbers (e.g. [1]). But there are certificates
with much shorter sequential-looking serial numbers with notBefores shortly
before [2] and after [3] this certificate's and as recent as 4 July 2017
[4].

[1] https://crt.sh/?id=137090990 , https://crt.sh/?id=124715040 [2]
https://censys.io/certificates/4445455caca3e9cf2ab2b673304487cb220871aa6d5ac1bf03827f74609c3646
[3]
https://censys.io/certificates/8d08033efe732e8fb6c2f3257c52b500af991bd1f363ffd6e29ec1812a943cd9
[4] https://crt.sh/?id=173758922


I did a cursory check on censys.io to see if there were other cases of short
serial numbers in certificates with recent notBefores that are trusted by
Mozilla:

- Digidentity Services CA - G2 (https://crt.sh/?caid=868 ; chains to Staat
der Nederlanden Root CA - G2) has issued certificates whose serial numbers
appear to be of the form 0x1000 + a sequential counter, with
notBefores as recent as 8 June 2017.

- Siemens Issuing CA Internet Server 2016 (https://crt.sh/?caid=26087 ;
chains to QuoVadis Root CA 2 G3) has issued certificates with 4-byte serial
numbers with notBefores as recent as 11 July 2017, though they do not appear
to be assigned sequentially.

D-Trust and QuoVadis both indicated no problems complying with version
2.4.1 of Mozilla's certificate policies (which requires, among other things,
64 bits of serial number entropy) by 1 June 2017 when they replied to
Mozilla's April CA communication. The Government of the Netherlands
indicated they needed a delay for CPS translation only.
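
For reference, the rule these short serials fall short of is easy to satisfy
mechanically: the serial must carry at least 64 bits of CSPRNG output while
remaining a positive integer of at most 20 octets (RFC 5280).  A minimal
illustration, not any particular CA's implementation:

import secrets

def new_serial_number(entropy_bits=64):
    """Generate a certificate serial carrying `entropy_bits` of CSPRNG output.
    ORing in a fixed top bit keeps the encoded length constant and guarantees
    the value is positive and non-zero; the 158 limit keeps the DER INTEGER
    within RFC 5280's 20-octet bound."""
    assert 64 <= entropy_bits <= 158
    return secrets.randbits(entropy_bits) | (1 << entropy_bits)

print(hex(new_serial_number()))  # a 65-bit value: 64 random bits plus the fixed top bit

(The Python cryptography library ships x509.random_serial_number(), which
takes essentially the same approach with a larger random field.)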






Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Matthew Hardeman via dev-security-policy
On Thursday, July 20, 2017 at 9:39:40 AM UTC-5, Gervase Markham wrote:

> Your point, in the abstract, is a reasonable one, but so is your further
> point about trade-offs. The only way we can really make progress is for
> you to propose specific changes to the language, and we can then discuss
> the trade-offs of each.

I would be willing to take a stab at this if the subject matter is of interest, 
and would be willing to commit some time to work on it, provided this appears 
to be a convenient time to discuss and contemplate the matter.  Can anyone 
give me a sense of whether the potential vulnerabilities that I see here -- 
and the potential mitigations I might suggest -- are of interest to the 
community?

> Certainly for CAA, we don't allow broken DNSSEC to fail open. I hope
> that will be true of DNS-based validation methods - either after 190
> passes, or soon after that.

A requirement that, if the zone is configured for DNSSEC, any domain 
validation technology that relies in any part upon DNS lookups (i.e. direct 
DNS validations as well as HTTP validations) must succeed only if DNSSEC 
validation of those lookups succeeds would strengthen the requirements at very 
little cost or negative consequence.  This would, of course, only improve the 
security posture of domain validations for those domains configured for 
DNSSEC, but that is still a significant benefit.  Much like CAA, it gives 
those holding and/or managing high-value domains significant power to restrict 
domain hijacking for purposes of acquiring certificates.

Quite separately, it appears that 3.2.2.8's "As part of the issuance 
process..." text would strongly suggest that CAA record checking be performed 
upon each instance of certificate issuance.  I presume that applies even in the 
face of a CA which might be relying upon previous DNS / HTTP domain validation. 
 I grant that the text goes on to say that issuance must occur within the 
greater of 8 hours or the CAA TTL, but it does appear that the intent is that 
CAA records be queried for each instance of issuance and for each SAN dnsName.  
If this is the intent and ultimately the practice and we are already requiring 
blocking reliance on DNS query within the process of certificate issuance, 
should the validity of domain validation itself be similarly curtailed?  My 
argument is that if we are placing a blocking reliance upon both the CA's DNS 
validation infrastructure AS WELL AS the target domain's authoritative DNS 
infrastructure during the course of the certificate issuance process, then 
there is precious little extra point of failure in just requiring that domain 
validation occur with a similarly reduced validity period.
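
For concreteness, the blocking CAA check referenced in 3.2.2.8 amounts to
roughly the following at issuance time.  A minimal sketch, assuming the
dnspython package and ignoring DNSSEC, CNAME chasing, issuewild, and the other
corner cases RFC 6844 and the BRs spell out:

import dns.resolver  # dnspython, assumed available

def caa_permits(fqdn, ca_domain):
    """Climb from the FQDN toward the root; the first name with any CAA
    records decides.  Finding no CAA record set anywhere permits issuance."""
    labels = fqdn.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        try:
            answer = dns.resolver.resolve(name, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue
        issuers = [r.value.decode().split(";")[0].strip()
                   for r in answer if r.tag == b"issue"]
        return ca_domain in issuers if issuers else True
    return True

print(caa_permits("www.example.com", "letsencrypt.org"))

The "greater of 8 hours or the CAA TTL" language then bounds how long that
answer can be relied upon, which is the narrow reuse window being contrasted
here with the far longer reuse currently allowed for the underlying domain
validation.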

> > I believe there would be a massive improvement in the security of DNS query 
> > and HTTP client fetch type validations if the CA were required to execute 
> > multiple queries (ideally at least 3 or 4), sourced from different physical 
> > locations (said locations having substantial network and geographic 
> > distance between them) and each location utilizing significantly different 
> > internet interconnection providers.
> 
> How could such a requirement be concretely specced in an auditable way?

I can certainly propose a series of concrete specifications / requirements as 
to a more resilient validation infrastructure.  I can further propose a list of 
procedures for validating point-in-time compliance of each of the requirements 
in the aforementioned list.  Further, I can propose a list of data points / 
measurements / audit data that might be recorded as part of the validation 
record data set by the CA at the time of validation which could be used to 
provide strong support that the specifications / requirements are being 
followed through the course of operations.  If those were written up and 
presented, would that begin to address your question?
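
As one illustration of the audit data being proposed, a CA could persist a
structured evidence record for every validation attempt.  The field names here
are hypothetical, intended only to show the shape such a record might take:

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ProbeResult:
    """One network vantage point's observation during a domain validation."""
    probe_id: str           # opaque identifier; the physical location stays confidential
    source_asn: int         # ASN of the probe's upstream at query time
    dns_answers: List[str]  # raw answers (A/TXT/CAA) seen from this vantage point
    rtt_ms: float

@dataclass
class ValidationEvidence:
    """Evidence bundle recorded at time of validation (hypothetical schema)."""
    fqdn: str
    br_method: str                 # e.g. "3.2.2.4.7 DNS Change"
    performed_at: datetime
    caa_records: List[str]
    probes: List[ProbeResult] = field(default_factory=list)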

Thanks,

Matt Hardeman


RE: Certificate with invalid dnsName

2017-07-20 Thread Stephen Davidson via dev-security-policy
Hello:

Thanks for pointing these out.  Regarding the two problematic certificates
noted below chained to QuoVadis:

Changes were made to our systems last year dealing with these very issues, and
it appears that these remaining certificates were not revoked.  They will now
be revoked.
Leading hyphens and "really wild" wildcards (*.*) are now rejected by our
systems.

Regards, Stephen
QuoVadis


-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+s.davidson=quovadisglobal@lists.mozilla.org] On Behalf Of Charles Reiss via dev-security-policy
Sent: Wednesday, July 19, 2017 10:30 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Certificate with invalid dnsName

On 07/19/2017 06:03 PM, Tom wrote:
> Following that discovery, I've searched for odd (invalid?) DNS names.
> Here is the list of certificates I've found; it may overlap some
> discoveries already reported.
> If I'm correct, these certificates are not revoked, not expired, and
> probably trusted by Mozilla (the crt.sh issuers are marked as trusted by
> Mozilla, but not all of them).

Annotating these certs:

> Starting with *:

I believe this cert is presently untrusted by Mozilla due to revocation of
all paths to the Federal PKI:
> https://crt.sh/?id=7211484  *eis.aetc.af.mil

chains to StartCom (and all of these from StartCom are minor compared to 
StartCom's other problems):
> https://crt.sh/?id=10714112  *g10.net-lab.net

chains to Baltimore CyberTrust Root (DigiCert):
> https://crt.sh/?id=48682944  *nuvolaitaliana.it

chains to StartCom:
> https://crt.sh/?id=15736178  *assets.blog.cn.net.ru
> https://crt.sh/?id=17295812  *dev02.calendar42.com
> https://crt.sh/?id=15881220  *dev.1septem.ru
> https://crt.sh/?id=15655700  *assets.blog.cn.net.ru
> https://crt.sh/?id=17792808  *quickbuild.raptorengineering.io


> 
> Starting with -:

chains to QuoVadis:
> https://crt.sh/?id=54285413
> -d1-datacentre-12g-console-2.its.deakin.edu.au

chains to StartCom:
> https://crt.sh/?id=78248795  -1ccenter.777chao.com


> 
> Multiple *.:

chains to QuoVadis:
> https://crt.sh/?id=13299376  *.*.victoria.ac.nz

I believe this cert is presently trusted by Mozilla only via a 
technically constrained subCA:
> https://crt.sh/?id=44997156  *.*.rnd.unicredit.it

chains to Swisscom:
> https://crt.sh/?id=5982951  *.*.int.swisscom.ch


> 
> Internals TLD:

chains to Baltimore CyberTrust Root (DigiCert):
> https://crt.sh/?id=33626750  a1.verizon.test

I believe this cert is presently untrusted by Mozilla due to revocation 
of the relevant subCA:
> https://crt.sh/?id=33123653  DAC38997VPN2001A.trmk.corp

chains to Certplus (DocuSign):
> https://crt.sh/?id=42475510  naccez.us.areva.corp

I believe these presently lack an unrevoked, unexpired trust path in 
Mozilla:
> https://crt.sh/?id=10621703  collaboration.intra.airbusds.corp
> https://crt.sh/?id=48726306  zdeasaotn01.dsmain.ds.corp





Re: Symantec Update on SubCA Proposal

2017-07-20 Thread Gervase Markham via dev-security-policy
Hi Steve,

Thanks for posting this. I appreciate the level of detail provided,
which is useful in giving us a basis for discussion. It's a little
regrettable, though, that it was published a couple of weeks after we
were led to expect it...

One note before we start: Symantec's business dealings regarding its CA
business are not of concern to Mozilla other than relating to the
"change of ownership or control" provisions in Mozilla policy (policy
2.5 section 8). However, if dates are proposed or agreed for
implementation of the consensus plan, we would not expect those dates to
be renegotiated because of a change of ownership or control.

Am I right in saying that, in order to hit these dates you are
proposing, you would strongly desire to get consensus on them by August 1st?

On 18/07/17 19:22, Steve Medin wrote:
> New Certificate Issuance: We believe the dates for transition of validation 
> and issuance to the Managed CA that are both aggressive and achievable are as 
> follows:
> 
> - Implement the Managed CA by December 1, 2017 (changed from August 8, 2017);
> 
> - Managed CA performs domain validation for all new certificates by December 
> 1, 2017 (changed from November 1, 2017); and
> 
> - Managed CA performs full validation for all certificates by February 1, 
> 2018. Prior to this date, reuse of Symantec authenticated organization 
> information would be allowable for certificates of <13 months in validity.

To summarise for those reading along: this represents a change of a
little less than 4 months for the first date, 1 month for the second
date, and the third date is as originally proposed.

Steve: to be clear, this means that browsers could implement a block on
certificates from Symantec's existing PKI as follows: after December
1st, 2017, they could dis-trust all certificates with a notBefore
greater than December 1st 2017?
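
Mechanically, that client-side rule is a date comparison gated on whether the
chain terminates in the legacy Symantec PKI.  A minimal illustration only;
real browsers implement this inside their certificate verifiers, and the root
set below is a placeholder:

from datetime import datetime
from cryptography import x509  # Python cryptography package, assumed available

DISTRUST_CUTOFF = datetime(2017, 12, 1)
LEGACY_SYMANTEC_ROOTS = {"<placeholder: legacy Symantec root fingerprints>"}

def distrusted_under_proposal(cert_pem, root_fingerprint):
    """True if the chain ends in the legacy Symantec PKI and the certificate's
    notBefore is after the proposed December 1, 2017 cut-off."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    return (root_fingerprint in LEGACY_SYMANTEC_ROOTS
            and cert.not_valid_before > DISTRUST_CUTOFF)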

Given the explanations Symantec has given as to why these dates are
reasonable, and the effort required to stand up the new PKI, I am minded
to accept them, particularly as they have managed to hit the third
originally-proposed date on the nose. However, I am still open to
community input.

> Replacement of Unexpired Certificates Issued Before June 1, 2016: There are 
> two major milestones that must be achieved after implementation of the 
> Managed CA in order to replace unexpired certificates issued before June 1, 
> 2016 that do not naturally expire before the distrust date(s) in the SubCA 
> proposal. Those include the full revalidation of certificate information and 
> then the customer replacement of those certificates. 

That is not necessarily so. The customers could replace their
certificates using new, CT-logged certificates from Symantec's old
infrastructure. This doesn't require any revalidation or any change in
the certificate chain, so should have excellent compatibility
properties, and it's something that could begin today. In fact, as I
understand it, Symantec has already been encouraging their customers to
do exactly this.

This would, of course, mean that those certificates would need
replacing again at some point before the final total dis-trust of the
current Symantec PKI.

> This activity would need to start during the December holiday season
> when many organizations impose infrastructure blackout periods.  As
> such, we believe that the only achievable timing for this transition is
> after the holiday season. We understand that browsers may want to
> technically enforce this transition and that multiple milestones may be
> undesirable from a coding perspective. In order to accommodate a
> simplified and cost efficient transition schedule (especially for
> organizations that currently have certificates with notBefore dates of
> both June 1, 2015 and June 1, 2016) and to allow impacted organizations
> the time, as they will likely need to replace, test and operationalize
> these replacement certificates in their infrastructure, we recommend
> consolidating Chrome's distrust dates to a single date of May 1, 2018.
> This would mean that Chrome's distrust of Symantec certificates issued
> before June 1, 2015 would change from August 31, 2017 to May 1, 2018,
> and that Chrome's distrust of Symantec certificates issued before June
> 1, 2016 would change from January 18, 2018 to May 1, 2018.

A key date for Mozilla is when we can tell our software to dis-trust any
certificate issued by the Symantec current PKI which was issued before
June 1st 2016, because certificates issued after that are guaranteed
(pretty much) to be in CT, and therefore are a bounded and known set.
Therefore pushing that date out to May 1st 2018 seems like a negative
from our perspective.

A two-stage strategy such as the one outlined above seems to us to be
worth investigating, as it would allow us to give Symantec more time to
transition its customers from the current to the new PKI (something
which might come with compatibility risk, as you have correctly noted)
without having to bear the risk of continuing to trust the current PKI.

Faking a key compromise event with franken-keys

2017-07-20 Thread J.C. Jones via dev-security-policy
All,

Today Hanno Böck blogged about performing surgery on ASN.1-encoded RSA
private keys to make them appear to correspond to a target certificate's
public key, and using the franken-key file to appear to legitimately hold
the private key of that target certificate.

https://blog.hboeck.de/archives/888-How-I-tricked-Symantec-with-a-Fake-Private-Key.html

The franken-key is quite convincing on casual inspection. Always verify that a
claimed private key actually corresponds to the certificate before making
trust decisions.
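
One check that defeats this particular trick is to demand an actual proof of
possession instead of eyeballing the key file: have the reporter sign a fresh
nonce and verify it against the certificate's public key.  A minimal sketch,
assuming the Python cryptography package and an RSA key; any library with RSA
sign/verify works:

import os
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def proves_possession(key_pem, cert_pem):
    """True only if the supplied private key produces a signature that the
    certificate's public key verifies.  A franken-key with a transplanted
    modulus will either fail to load or fail verification."""
    try:
        key = serialization.load_pem_private_key(key_pem, password=None)
        cert = x509.load_pem_x509_certificate(cert_pem)
        nonce = os.urandom(32)
        signature = key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())
        cert.public_key().verify(signature, nonce,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        # Any failure (parse error, inconsistent key, bad signature) means no proof.
        return False

The same check is useful to a CA triaging key-compromise revocation reports: a
report that cannot produce a verifying signature deserves extra scrutiny.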

J.C.


Re: Certificate with invalid dnsName issued from Baltimore

2017-07-20 Thread Myers, Kenneth (10421) via dev-security-policy
I've contacted the DHS PKI PMO and informed the DoD PKI PMO of the mis-issued 
certificates.


Kenneth Myers
Supporting the GSA Federal PKI Management Authority
Manager
Protiviti | 1640 King Street | Suite #400 | Alexandria | VA 22314 US | 
Protiviti.com


RE: Validation of Domains for secure email certificates

2017-07-20 Thread Doug Beattie via dev-security-policy
Hi Gerv,

OK, I see your point.  We'll come up with what we think are reasonable methods 
and document that in the CPS.  That should work better than Gerv's vacation 
thoughts!

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Thursday, July 20, 2017 10:58 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Validation of Domains for secure email certificates
> 
> Hi Doug,
> 
> On 20/07/17 13:04, Doug Beattie wrote:
> > Since there is no BR equivalent for issuance of S/MIME certificates (yet),
> this is all CAs have to go on.  I was curious if you agree that all of these
> methods meet the above requirement:
> 
> As you might imagine, this question puts me in a difficult position. If I say
> that a certain method does meet the requirement, I am making Mozilla policy
> up on the fly (and while on holiday ;-). If I say it does not, I would perhaps
> panic a load of CAs into having to update their issuance systems for fear of
> being dinged for misissuance.
> 
> It is unfortunate that there is no BR equivalent for email. However, I'm not
> convinced that the best way forward is for Mozilla to attempt to write one by
> degrees in response to questioning from CAs :-) I think the best thing for you
> to do is to look at your issuance processes and ask yourself whether you
> would be willing to stand up in a court of law and assert that they were
> "reasonable measures". When thinking about that, you could perhaps ask
> yourself whether you were doing any things which had been specifically
> outlawed or deprecated in an SSL context by the recent improvements in
> domain validation on that side of the house.
> 
> Gerv


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Jakob Bohm via dev-security-policy

On 20/07/2017 16:39, Gervase Markham wrote:
> On 18/07/17 17:51, Matthew Hardeman wrote:
> > The broader point I wish to make is that much can be done to improve the
> > strength of the various subset of the 10 methods which rely solely on
> > network-reliant automated validation methodologies.  The upside would be a
> > significant, demonstrable increase in difficulty for even well-placed ISP
> > admins to compromise a compliant CA's validation processes.  The downside
> > would be increases in cost and complexity borne by the compliant CA.
> 
> Your point, in the abstract, is a reasonable one, but so is your further
> point about trade-offs. The only way we can really make progress is for
> you to propose specific changes to the language, and we can then discuss
> the trade-offs of each.
> 
> > I noticed that too.  I assume it is still tied up in IPR hell?
> 
> No. IPR issues are solved. We are currently in arguments about what, if
> any, additional necessary fixes to the text should go into the "restore
> the text" ballot and what should go into a subsequent ballot, along with
> the question of whether and which existing domain validations to
> grandfather in and which to require that they be redone.
> 
> > I would advocate a level playing field here.  This would have the bonus
> > upside of helping to fix bad DNSSEC deployments.  If broken DNSSEC broke
> > the ability to get a certificate anywhere, the incorrect deployment would
> > likely be rolled back in the worst case or fixed in the best.
> 
> Certainly for CAA, we don't allow broken DNSSEC to fail open. I hope
> that will be true of DNS-based validation methods - either after 190
> passes, or soon after that.
> 
> > I believe there would be a massive improvement in the security of DNS
> > query and HTTP client fetch type validations if the CA were required to
> > execute multiple queries (ideally at least 3 or 4), sourced from different
> > physical locations (said locations having substantial network and
> > geographic distance between them) and each location utilizing
> > significantly different internet interconnection providers.
> 
> How could such a requirement be concretely specced in an auditable way?



This could be audited as part of general security/implementation
auditing.  Also, the CA could/should log the list of deployed probes
that checked/softfailed each domain as part of the usual evidence
logging.

As this would probably require most CAs to set up additional "probe
servers" at diverse locations, while still maintaining the high
auditable level of network security, a longer than usual phase in for
such a requirement would be in order.  (I am thinking mostly of smaller
CAs here, whose security may have been previously based on keeping
everything except off-line backups in one or two secure buildings).

A new requirement would be that as part of the 10 approved methods:
 - All DNS lookups should be done from at least 5 separate locations
  with Internet connectivity from different ISPs.  4 out of 5 must
  return the same result before that result is used either directly
  or as part of a second step.
 - All repeatable network connections (such as HTTP probes and whois
  lookups) must be done from 5 separate locations with Internet
  connectivity from different ISPs using DNS results checked as above,
  again 4 out of 5 must agree.
 - All difficult to repeat network connections (such as sending mails),
  must be done from randomly selected locations chosen out of at least
  4 that are simultaneously available (not down) and have Internet
  connection from different ISPs.  And still using DNS results checked
  as above.

The exact number of and details of the separate locations should be kept
secret, except for the auditors and a small number of CA employees, so
that attackers will not know when and where to set up man-in-the-middle
network attacks such that 80% of the probes are fooled.
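
A minimal sketch of the 4-out-of-5 agreement rule above, in Python with
dnspython (both assumptions).  Public resolvers stand in here for the
geographically and topologically separate probe servers a real deployment
would use; only the quorum logic is the point:

from collections import Counter
import dns.resolver  # dnspython, assumed available

# Stand-ins for five probes in separate locations on separate ISPs.
PROBE_RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9", "208.67.222.222", "64.6.64.6"]

def quorum_lookup(name, rdtype="TXT", required=4):
    """Query the same record through each probe and return the answer set that
    at least `required` probes agree on; refuse to validate otherwise."""
    observations = []
    for ip in PROBE_RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        try:
            answer = resolver.resolve(name, rdtype)
            observations.append(tuple(sorted(r.to_text() for r in answer)))
        except Exception:
            observations.append(None)  # a failed probe counts against the quorum
    counts = Counter(obs for obs in observations if obs is not None)
    if counts and counts.most_common(1)[0][1] >= required:
        return counts.most_common(1)[0][0]
    raise RuntimeError("validation quorum not met; abort issuance")

print(quorum_lookup("eff.org", "NS"))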

Implementation examples (not requirements):

In practice, a CA would typically set up 5 "probe" servers around the
geographic area served (which may be a country, continent or the world),
each capable of relaying the relevant network traffic from the central
validation system.  If one "probe" goes off line, validation can
continue, but with 0 failures allowed, while if two out of 5 go down,
validation cannot be done (thus some CAs may want to use 10 or more
locations for added redundancy).

The "probe" servers could be relatively simple VPN boxes, carefully
hardened and audited and then encased in welded shut steel boxes before
being transported to 3rd party data centers.  Central software
continuously verifies that it is talking to a box with a known
private/public key and that various network tests confirm that the box
is still connected to the expected remote network as seen both from
inside and outside.  A CA employee should also be dispatched to
physically check after any power or connectivity failure, but this may
be delayed by a few days.

Keeping extra probes and not always using all of them can also help hide the 
complete set of probe locations.

Re: Validation of Domains for secure email certificates

2017-07-20 Thread Jakob Bohm via dev-security-policy

On 20/07/2017 14:04, Doug Beattie wrote:

> Gerv,




In general, it is common to have an S/MIME certificate for an e-mail
account that does *not* belong to the domain owner.  This is especially
true if the domain is a public/shared/ISP e-mail domain and is set up to
allow some way for the e-mail user to access the raw RFCxx22 messages
(e.g. IMAP, POP3, SMTP, darkmail, proprietary protocols).

In such cases, issuing the S/MIME cert to the domain owner would be
*inappropriate*, possibly even misissuance.




> Mozilla Policy 2.5 states this:
> 
> For a certificate capable of being used for digitally signing or encrypting
> email messages, the CA takes reasonable measures to verify that the entity
> submitting the request controls the email account associated with the email
> address referenced in the certificate or has been authorized by the email
> account holder to act on the account holder's behalf.




Notice how the above language refers exclusively to the e-mail account,
not the domain.



> Since there is no BR equivalent for issuance of S/MIME certificates (yet),
> this is all CAs have to go on.  I was curious if you agree that all of these
> methods meet the above requirement:



> 1.  On a per-request basis (noting that some of these are overkill for
>     issuance of a single certificate):
>     a.  3.2.2.4.1 Validating the Applicant as a Domain Contact
>     b.  3.2.2.4.2 Email, Fax, SMS, or Postal Mail to Domain Contact
>     c.  3.2.2.4.3 Phone Contact with Domain Contact
>     d.  3.2.2.4.4 Email to Constructed Address
>     e.  3.2.2.4.5 Domain Authorization Document
>     f.  3.2.2.4.6 Agreed-Upon Change to Website
>     g.  3.2.2.4.7 DNS Change



None of the above validate ownership of the e-mail account; instead, they
validate control of the (middlebox) e-mail server.


> 2.  On a per-domain basis.  One approval is sufficient to approve issuance
>     for certificates in this domain space, since these represent
>     administrator actions, provided subsequent requests are all performed
>     via an authenticated channel to the CA.  This approval would last until
>     the customer notified the CA otherwise:
>     a.  3.2.2.4.1 Validating the Applicant as a Domain Contact
>     b.  3.2.2.4.2 Email, Fax, SMS, or Postal Mail to Domain Contact
>     c.  3.2.2.4.3 Phone Contact with Domain Contact
>     d.  3.2.2.4.4 Email to Constructed Address
>     e.  3.2.2.4.5 Domain Authorization Document
>     f.  3.2.2.4.6 Agreed-Upon Change to Website
>     g.  3.2.2.4.7 DNS Change



These would only be appropriate if there is some evidence that the domain
owner is actually authorized to act on behalf of their users.

At a minimum, the domain must not contain words such as "mail" in its
second-level name (e.g. hotmail.com would be out, mail.example.com would
not).  There are probably other automated tests that detect likely
e-mail hosters, and these could be mandated so that matching domains
require individual account validation even if a given domain happens not
to be an e-mail hoster (e.g. envelopesandmail.example, a company dealing
only with other people's snail mail and handling only its own e-mail,
would still be required by policy to validate each of its e-mail
accounts separately because its domain name matches a rule that has been
made rigid for the common good).
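
As a rough illustration of such an automated test (the word list and the
second-level rule below are only a sketch, not proposed policy text):

    # Flag domains whose second-level label suggests a public e-mail hoster,
    # so that per-account validation would be required for them.
    HOSTER_WORDS = ("mail", "post", "inbox")   # hypothetical rule set

    def requires_per_account_validation(domain):
        labels = domain.lower().rstrip(".").split(".")
        if len(labels) < 2:
            return True                        # be conservative on odd input
        second_level = labels[-2]              # e.g. "hotmail" in hotmail.com
        return any(word in second_level for word in HOSTER_WORDS)

    assert requires_per_account_validation("hotmail.com")
    assert not requires_per_account_validation("mail.example.com")
    assert requires_per_account_validation("envelopesandmail.example")  # accepted false positive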





3.   Assuming issuance to a service provider (an email hosting entity like
Microsoft, Yahoo or Google) that hosts email for many domains, the CA verifies
that the email domain's DNS MX record points to the hosting company, which
indicates the company has delegated email control to the hosting company.



In contrast, issuance to such hosting providers must be *rejected*, as it
enables man-in-the-middle attacks.


4.   A DNS TXT record for the domain indicating approval to issue email
certificates, or perhaps a CAA record with a new tag like issuesmime which permits
the CA to issue certificates for this domain.  Details in the CA's CPS.



That could make sense, also in the rare cases where the account holders
at a private (non-hoster) domain want to (unwisely) outsource mail
signing and encryption to someone.  However, delegating e-mail signing
is essentially a wide-ranging power of attorney, not something you would
grant to a random technology provider (unlike an *actual* attorney).



5.   A DNS TXT record for the domain indicating approval to issue email
certificates, or perhaps a CAA record with a new tag like issuesmime which permits
the email hosting company to issue certificates for this domain.  Details in the CA's CPS.



Definitely bad.




Are there any other methods that you had in mind when writing this requirement?  Since 
issuance needs to be WT audited, there should be some level of "agreement" on 
acceptable validation methods.




May I suggest:

A. E-mail with an activation code to the *actual* account named in the
  certificate (similar to mailing list confirmed signup).

B. Evidence from a trusted (by the CA) e-mail hoster that a physical
  person or legal entity is paying for that e-m
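
To make suggestion A concrete, here is a minimal sketch of the
confirmed-signup style challenge; the relay host, sender address and
24-hour token lifetime are hypothetical choices, and token storage is
reduced to a dict:

    # Send a random activation code to the exact address that will appear in
    # the certificate, and only issue once that code is returned to the CA.
    import secrets
    import smtplib
    import time
    from email.message import EmailMessage

    PENDING = {}                    # token -> (email address, expiry timestamp)
    TOKEN_LIFETIME = 24 * 3600      # hypothetical 24-hour validity

    def send_challenge(address, relay="smtp.ca.example"):
        token = secrets.token_urlsafe(32)
        PENDING[token] = (address, time.time() + TOKEN_LIFETIME)
        msg = EmailMessage()
        msg["From"] = "validation@ca.example"
        msg["To"] = address                      # the *actual* account in the cert
        msg["Subject"] = "S/MIME certificate request confirmation"
        msg.set_content("To approve issuance, enter this code: " + token)
        with smtplib.SMTP(relay) as smtp:
            smtp.send_message(msg)

    def confirm(token, address):
        entry = PENDING.pop(token, None)
        return entry is not None and entry[0] == address and time.time() < entry[1]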

RE: [EXT] Symantec Update on SubCA Proposal

2017-07-20 Thread Steve Medin via dev-security-policy
1)  December 1, 2017 is the earliest credible date by which any RFP respondent 
can provide the Managed CA solution proposed by Google, assuming a start date 
of August 1, 2017. Only one RFP respondent initially proposed a schedule 
targeting August 8, 2017 (assuming a start date of June 12, 2017). We did not 
deem this proposal credible, however, given its lack of specificity against our 
RFP evaluation criteria compared to all other RFP responses, which provided 
detailed responses to all aspects of the RFP; we have received no subsequent 
information from this bidder to increase our confidence.

2)  We are using several selection criteria to evaluate RFP responses, 
including: the depth of the plan to address key technical integration and 
operational requirements; the timeframe to execute; the ability to handle the 
scope, volume, language, and customer support requirements, both for ongoing 
issuance and for the one-time replacement of certificates issued prior to June 
1, 2016; compliance program and posture; and the ability to meet uptime, 
interface performance, and other SLAs. Certain RFP respondents have 
distinguished themselves through the quality and depth of their integration 
planning assumptions, requirements and activities, which have directly 
influenced the dates we have proposed for the SubCA proposal.

3)  The RFP was first released on May 26, 2017. The first round of bidder 
responses was received on June 12, 2017.

4)  It is our longstanding policy not to comment on rumors or market 
speculation.





From: Alex Gaynor [mailto:agay...@mozilla.com]
Sent: Wednesday, July 19, 2017 10:25 AM
To: Steve Medin 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: [EXT] Symantec Update on SubCA Proposal



Hi Steve,

Thank you for this update on Symantec's progress. I have a few follow-up
questions:

1) Did any of the RFP respondents indicate that they could provide the Managed
   CA solution in the timeframe originally proposed by Google? (August 8th)
   Alternatively, is December 1st, 2017 the earliest date that any RFP
   respondents can achieve?

2) What selection criteria is Symantec using in considering RFP responses?

3) On June 1st, Symantec wrote that "we are in the midst of a rigorous RFP
   process"
   
(https://www.symantec.com/connect/blogs/symantec-s-response-google-s-subca-proposal).
   In this mail you wrote that "Last month, we released a Request for Proposal
   (RFP)". How do you reconcile those?

4) There are currently rumors that Symantec is considering a sale of its CA
   business
   (https://www.reuters.com/article/us-symantec-divestiture-idUSKBN19W2WI). Do
   these timelines reflect that possibility, or should we expect requests to
   amend this timeline in the event of a change of ownership?

Thank you,
Alex



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Validation of Domains for secure email certificates

2017-07-20 Thread Gervase Markham via dev-security-policy
Hi Doug,

On 20/07/17 13:04, Doug Beattie wrote:
> Since there is no BR equivalent for issuance of S/MIME certificates (yet), 
> this is all CAs have to go on.  I was curious if you agree that all of these 
> methods meet the above requirement:

As you might imagine, this question puts me in a difficult position. If
I say that a certain method does meet the requirement, I am making
Mozilla policy up on the fly (and while on holiday ;-). If I say it does
not, I would perhaps panic a load of CAs into having to update their
issuance systems for fear of being dinged for misissuance.

It is unfortunate that there is no BR equivalent for email. However, I'm
not convinced that the best way forward is for Mozilla to attempt to
write one by degrees in response to questioning from CAs :-) I think the
best thing for you to do is to look at your issuance processes and ask
yourself whether you would be willing to stand up in a court of law and
assert that they were "reasonable measures". When thinking about that,
you could perhaps ask yourself whether you were doing any things which
had been specifically outlawed or deprecated in an SSL context by the
recent improvements in domain validation on that side of the house.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: [EXT] Symantec Update on SubCA Proposal

2017-07-20 Thread Steve Medin via dev-security-policy
We believe our proposed dates reflect an aggressive but achievable period of 
time to implement the SubCA proposal while allowing impacted organizations the 
time needed to replace, test and operationalize replacement certificates in 
their infrastructure, mitigating the interoperability and compatibility risk 
associated with this premature replacement of certificates. This is consistent 
with the intent of the SubCA proposal. Our proposed dates are informed by the 
RFP responses and follow-up discussions we have had with our prospective 
Managed CA partners.





From: Eric Mill [mailto:e...@konklone.com]
Sent: Wednesday, July 19, 2017 3:43 PM
To: Steve Medin 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: [EXT] Symantec Update on SubCA Proposal







On Wed, Jul 19, 2017 at 11:31 AM, Steve Medin via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:

   > -Original Message-
   > From: dev-security-policy [mailto:dev-security-policy-
   > bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
   > Jakob Bohm via dev-security-policy
   > Sent: Tuesday, July 18, 2017 4:39 PM
   > To: mozilla-dev-security-pol...@lists.mozilla.org
   > Subject: Re: [EXT] Symantec Update on SubCA Proposal
   >
   >
   > Just for clarity:
   >
   > (Note: Using ISO date format instead of ambiguous local date format)
   >
   > How many Symantec certs issued prior to 2015-06-01 expire after 2018-
   > 06-01, and how does that mesh with the alternative date proposed
   > below:
   >
   > On 18/07/2017 21:37, Steve Medin wrote:
   > > Correction: Summary item #3 should read:
   > >
   > > 3. May 1, 2018
   > > a. Single date of distrust of certificates issued prior to 6/1/2016.
   > (changed from August 31,2017 for certificates issued prior to 6/1/2015 and
   > from January 18, 2018 for certificates issued prior to 6/1/2016).
   > >

   Over 34,000 certificates were issued prior to 2015-06-01 and expire after 
2018-06-01. This is in addition to almost 200,000 certificates that would also 
need to be replaced under the current SubCA proposal assuming a May 1, 2018 
distrust date. We believe that nine months (from August 1, 2017 to May 1, 2018) 
is aggressive but achievable for this transition — a period minimally necessary 
to allow for site operators to plan and execute an orderly transition and to 
reduce the potential risk of widespread ecosystem disruption. Nevertheless, we 
urge the community to consider moving the proposed May 1, 2018 distrust date 
out even further to February 1, 2019 in order to minimize the risk of end user 
disruption by ensuring that website operators have a reasonable timeframe to 
plan and deploy replacement certificates.



   That's pretty close to saying that nothing should happen, since almost all 
the certificates will have expired by then. That certainly is the least 
disruptive, but it seems contrary to the intent of the proposal.



   -- Eric



   ___
   dev-security-policy mailing list
   dev-security-policy@lists.mozilla.org
   https://lists.mozilla.org/listinfo/dev-security-policy







   --
   konklone.com | @konklone

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: [EXT] Symantec Update on SubCA Proposal

2017-07-20 Thread Steve Medin via dev-security-policy
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
> David E. Ross via dev-security-policy
> Sent: Wednesday, July 19, 2017 12:48 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: [EXT] Symantec Update on SubCA Proposal
>
> On 7/19/2017 8:31 AM, Steve Medin wrote:
> >> -Original Message-
> >> From: dev-security-policy [mailto:dev-security-policy-
> >> bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
> >> Jakob Bohm via dev-security-policy
> >> Sent: Tuesday, July 18, 2017 4:39 PM
> >> To: mozilla-dev-security-pol...@lists.mozilla.org
> >> Subject: Re: [EXT] Symantec Update on SubCA Proposal
> >>
> >>
> >> Just for clarity:
> >>
> >> (Note: Using ISO date format instead of ambiguous local date format)
> >>
> >> How many Symantec certs issued prior to 2015-06-01 expire after
> 2018-
> >> 06-01, and how does that mesh with the alternative date proposed
> >> below:
> >>
> >> On 18/07/2017 21:37, Steve Medin wrote:
> >>> Correction: Summary item #3 should read:
> >>>
> >>> 3. May 1, 2018
> >>> a. Single date of distrust of certificates issued prior to 6/1/2016.
> >> (changed from August 31,2017 for certificates issued prior to
> >> 6/1/2015 and from January 18, 2018 for certificates issued prior to
> 6/1/2016).
> >>>
> >
> > Over 34,000 certificates were issued prior to 2015-06-01 and expire after
> 2018-06-01. This is in addition to almost 200,000 certificates that would
> also need to be replaced under the current SubCA proposal assuming a May
> 1, 2018 distrust date. We believe that nine months (from August 1, 2017 to
> May 1, 2018) is aggressive but achievable for this transition - a period
> minimally necessary to allow for site operators to plan and execute an
> orderly transition and to reduce the potential risk of widespread ecosystem
> disruption. Nevertheless, we urge the community to consider moving the
> proposed May 1, 2018 distrust date out even further to February 1, 2019
> in order to minimize the risk of end user disruption by ensuring that website
> operators have a reasonable timeframe to plan and deploy replacement
> certificates.
> >
>
> It appears that Symantec wants to delay distrusting certificates until all
> existing subscriber certificates reach their inherent expiration dates.
>

Our proposed distrust date (May 1, 2018) is based on an aggressive but 
achievable period of time to allow impacted organizations the time needed to 
replace, test and operationalize replacement certificates in their 
infrastructure. More than 234,000 certificates would need to be replaced 
before their natural expiration dates, assuming a distrust date of May 1, 2018. 
In fact, we urge the community to consider moving this distrust date out even 
further, to February 1, 2019, in order to minimize the risk of end user 
disruption by ensuring that website operators have a reasonable timeframe to 
plan and deploy replacement certificates. This recommendation is echoed by our 
prospective Managed CA partners.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: [EXT] Symantec Update on SubCA Proposal

2017-07-20 Thread Steve Medin via dev-security-policy
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
> Jakob Bohm via dev-security-policy
> Sent: Wednesday, July 19, 2017 12:22 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: [EXT] Symantec Update on SubCA Proposal
> 
> On 19/07/2017 17:31, Steve Medin wrote:
> >> -Original Message-
> >> From: dev-security-policy [mailto:dev-security-policy-
> >> bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
> >> Jakob Bohm via dev-security-policy
> >> Sent: Tuesday, July 18, 2017 4:39 PM
> >> To: mozilla-dev-security-pol...@lists.mozilla.org
> >> Subject: Re: [EXT] Symantec Update on SubCA Proposal
> >>
> >>
> >> Just for clarity:
> >>
> >> (Note: Using ISO date format instead of ambiguous local date format)
> >>
> >> How many Symantec certs issued prior to 2015-06-01 expire after
> 2018-
> >> 06-01, and how does that mesh with the alternative date proposed
> >> below:
> >>
> >> On 18/07/2017 21:37, Steve Medin wrote:
> >>> Correction: Summary item #3 should read:
> >>>
> >>> 3. May 1, 2018
> >>>  a. Single date of distrust of certificates issued prior to 6/1/2016.
> >> (changed from August 31,2017 for certificates issued prior to
> >> 6/1/2015 and from January 18, 2018 for certificates issued prior to
> 6/1/2016).
> >>>
> >
> > Over 34,000 certificates were issued prior to 2015-06-01 and expire after
> 2018-06-01. This is in addition to almost 200,000 certificates that would
> also need to be replaced under the current SubCA proposal assuming a May
> 1, 2018 distrust date. We believe that nine months (from August 1, 2017 to
> May 1, 2018) is aggressive but achievable for this transition — a period
> minimally necessary to allow for site operators to plan and execute an
> orderly transition and to reduce the potential risk of widespread ecosystem
> disruption. Nevertheless, we urge the community to consider moving the
> proposed May 1, 2018 distrust date out even further to February 1, 2019
> in order to minimize the risk of end user disruption by ensuring that website
> operators have a reasonable timeframe to plan and deploy replacement
> certificates.
> >
> 
> So when and why did Symantec issue 34,000 WebPKI certificates valid
> longer than 3 years, that would expire after 2018-06-01 ?
> 
> Are these certificates issued before 2015-04-01 with validity periods longer
> than 39 months?
> 
> Are they certificates issued under "special circumstances" ?
> 
> Are they certificates with validity periods between 36 and 39 months?
> 
> 

The vast majority of these certificates were issued prior to April 1, 2015 and 
were subject to the 60-month maximum validity rule in effect at the time of 
issuance. This population also includes several thousand certificates with 
less than 39 months of validity.
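
For anyone checking the arithmetic: a certificate issued shortly before the
April 1, 2015 cutoff under the then-permitted 60-month maximum is indeed still
unexpired well after 2018-06-01. A minimal worked example (dates chosen for
illustration only):

    from datetime import date

    issued = date(2015, 3, 31)                    # just before the 2015-04-01 cutoff
    not_after = date(issued.year + 5, issued.month, issued.day)   # 60-month validity
    print(not_after)                              # 2020-03-31
    print(not_after > date(2018, 6, 1))           # True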



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Miss-issuance: URI in dNSName SAN

2017-07-20 Thread Gervase Markham via dev-security-policy
On 19/07/17 14:53, Alex Gaynor wrote:
> I'd like to report the following instance of miss-issuance:

Thank you. Again, I have drawn this message to the attention of the CAs
concerned (Government of Venezuela and Camerfirma).

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: dNSName containing '/' / low serial number entropy

2017-07-20 Thread Gervase Markham via dev-security-policy
On 18/07/17 23:25, Charles Reiss wrote:
> https://crt.sh/?id=174827359 is a certificate issued by D-TRUST SSL



I'm supposed to be on holiday :-), but I have emailed the 3 CAs
concerned drawing these issues to their attention, and asking them to
comment here when they have discovered the cause.

Perhaps we need a wiki page on "how to best respond to an incident
report from Mozilla"? :-)

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Gervase Markham via dev-security-policy
On 18/07/17 17:51, Matthew Hardeman wrote:
> The broader point I wish to make is that much can be done do improve the 
> strength of the various subset of the 10 methods which do rely solely on 
> network reliant automated validation methodologies.  The upside would be a 
> significant, demonstrable increase in difficulty for even well placed ISP 
> admins to compromise a compliant CAs validation processes.  The downside 
> would be increases in cost and complexity borne by the compliant CA.

Your point, in the abstract, is a reasonable one, but so is your further
point about trade-offs. The only way we can really make progress is for
you to propose specific changes to the language, and we can then discuss
the trade-offs of each.

> I noticed that too.  I assume it is still tied up in IPR hell?

No. The IPR issues are solved. We are currently in arguments about which
additional fixes to the text, if any, should go into the "restore the
text" ballot and which should go into a subsequent ballot, along with
the question of whether and which existing domain validations to
grandfather in and which to require to be redone.

> I would advocate a level playing field here.  This would have the bonus 
> upside of helping to fix bad DNSSEC deployments.  If broken DNSSEC broke 
> ability to get a certificate anywhere, either the incorrect deployment would 
> likely be rolled back in the worst case or fixed in the best.

Certainly for CAA, we don't allow broken DNSSEC to fail open. I hope
that will also be true of DNS-based validation methods - either once
Ballot 190 passes, or soon after that.

> I believe there would be a massive improvement in the security of DNS query 
> and HTTP client fetch type validations if the CA were required to execute 
> multiple queries (ideally at least 3 or 4), sourced from different physical 
> locations (said locations having substantial network and geographic distance 
> between them) and each location utilizing significantly different internet 
> interconnection providers.

How could such a requirement be concretely specced in an auditable way?
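
Purely as an illustration of one shape such a spec could take, here is a
minimal sketch assuming dnspython, with each vantage point reduced to a
distinct recursive resolver; the site names, resolver addresses and quorum
are placeholders, and a real deployment would issue the queries from
genuinely separate networks:

    import dns.resolver   # dnspython, third-party

    VANTAGE_RESOLVERS = {
        "us-east":  "192.0.2.10",   # documentation addresses as placeholders
        "eu-west":  "192.0.2.20",
        "ap-south": "192.0.2.30",
    }
    QUORUM = 3   # hypothetical: require all three perspectives to agree

    def txt_values(name, resolver_ip):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        try:
            answer = r.resolve(name, "TXT", lifetime=10)
        except Exception:
            return None
        return [b"".join(rr.strings).decode() for rr in answer]

    def validated_from_multiple_perspectives(name, expected_token):
        agreeing = 0
        for site, ip in VANTAGE_RESOLVERS.items():
            values = txt_values(name, ip)
            if values is not None and any(expected_token in v for v in values):
                agreeing += 1
        return agreeing >= QUORUM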

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: How long to resolve unaudited unconstrained intermediates?

2017-07-20 Thread Gervase Markham via dev-security-policy
On 12/07/17 21:18, Ben Wilson wrote:
> For CAs with emailProtection and proper name constraints, where would such
> CAs appear in https://crt.sh/mozilla-disclosures?
> https://crt.sh/mozilla-disclosures#constrainedother? Or a new section of the
> list, yet to be determined?

I believe Rob has now split the list into two.

> And for CAs where EKU contains emailProtection, what are the programmatic 
> criteria that determine whether the CA will be in such list as properly name 
> constrained, since the Baseline Requirements don’t cover email certificates?  
> (Presumably, a properly name-constrained email CA would not require any 
> audit.)

Rob would be able to say. But the criteria for whether an email
intermediate is properly name constrained are in Mozilla policy 2.5.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Validation of Domains for secure email certificates

2017-07-20 Thread Doug Beattie via dev-security-policy
Gerv,



Mozilla Policy 2.5 states this:



For a certificate capable of being used for digitally signing or encrypting 
email messages, the CA takes reasonable measures to verify that the entity 
submitting the request controls the email account associated with the email 
address referenced in the certificate or has been authorized by the email 
account holder to act on the account holder's behalf.



Since there is no BR equivalent for issuance of S/MIME certificates (yet), this 
is all CAs have to go on.  I was curious if you agree that all of these methods 
meet the above requirement:



1.   On a per request basis (noting that some of these are overkill for 
issuance of a single certificate):

a.   3.2.2.4.1 Validating the Applicant as a Domain Contact

b.  3.2.2.4.2 Email, Fax, SMS, or Postal Mail to Domain Contact

c.   3.2.2.4.3 Phone Contact with Domain Contact

d.  3.2.2.4.4 Email to Constructed Address

e.  3.2.2.4.5 Domain Authorization Document

f.   3.2.2.4.6 Agreed-Upon Change to Website

g.   3.2.2.4.7 DNS Change

2.   On a per Domain basis.  One approval is sufficient to approve issuance 
for certificates in this domain space, since these represent administrator 
actions, provided subsequent requests are all performed via an authenticated 
channel to the CA.  This approval would last until the customer notified the 
CA otherwise:

a.   3.2.2.4.1 Validating the Applicant as a Domain Contact

b.  3.2.2.4.2 Email, Fax, SMS, or Postal Mail to Domain Contact

c.   3.2.2.4.3 Phone Contact with Domain Contact

d.  3.2.2.4.4 Email to Constructed Address

e.  3.2.2.4.5 Domain Authorization Document

f.   3.2.2.4.6 Agreed-Upon Change to Website

g.   3.2.2.4.7 DNS Change

3.   Assuming issuance to a service provider (an email hosting entity like 
Microsoft, Yahoo or Google) that hosts email for many domains, the CA verifies 
that the email domain's DNS MX record points to the hosting company, which 
indicates the company has delegated email control to the hosting company (see 
the sketch after this list).

4.   A DNS TXT record for the domain indicating approval to issue email 
certificates, or perhaps a CAA record with a new tag like issuesmime which 
permits the CA to issue certificates for this domain.  Details in the CA's CPS.

5.   A DNS TXT record for the domain indicating approval to issue email 
certificates, or perhaps a CAA record with a new tag like issuesmime which 
permits the email hosting company to issue certificates for this domain.  
Details in the CA's CPS.
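
To make items 3-5 above concrete, a minimal sketch of the DNS lookups
involved, assuming dnspython; the hosting-provider MX suffixes and the
"issuesmime" CAA tag are hypothetical examples, not existing requirements:

    import dns.resolver   # dnspython, third-party

    HOSTER_MX_SUFFIXES = (".mail.protection.outlook.com.", ".googlemail.com.")  # examples

    def mx_delegated_to_hoster(domain, suffixes=HOSTER_MX_SUFFIXES):
        """Item 3: do all of the domain's MX hosts sit under the hosting company?"""
        try:
            answer = dns.resolver.resolve(domain, "MX", lifetime=10)
        except Exception:
            return False
        targets = [rr.exchange.to_text().lower() for rr in answer]
        return bool(targets) and all(
            any(t.endswith(s) for s in suffixes) for t in targets)

    def caa_permits_smime(domain, ca_identifier="ca.example"):
        """Items 4/5: look for a (hypothetical) issuesmime CAA property naming
        the party allowed to issue S/MIME certificates for this domain."""
        try:
            answer = dns.resolver.resolve(domain, "CAA", lifetime=10)
        except Exception:
            return False
        for rr in answer:
            if rr.tag.decode().lower() == "issuesmime" and \
               rr.value.decode().strip() == ca_identifier:
                return True
        return False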



Are there any other methods that you had in mind when writing this requirement? 
 Since issuance needs to be WT audited, there should be some level of 
"agreement" on acceptable validation methods.



Doug


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Certificate with invalid dnsName

2017-07-20 Thread Inigo Barreira via dev-security-policy
Thanks for this info. These StartCom certs were issued from the old system.
We'll contact the users and act accordingly.

Best regards

Iñigo Barreira
CEO
StartCom CA Limited


-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+inigo=startcomca@lists.mozilla.org]
On Behalf Of Charles Reiss via dev-security-policy
Sent: jueves, 20 de julio de 2017 3:30
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Certificate with invalid dnsName

On 07/19/2017 06:03 PM, Tom wrote:
> Following that discovery, I've searched for odd (invalid?) DNS names.
> Here is the list of certificates I've found; it may overlap some
> discoveries already reported.
> If I'm correct, these certificates are not revoked, not expired, and
> probably trusted by Mozilla (the crt.sh issuers are marked trusted by
> Mozilla, but not all of them).

Annotating these certs:

> Starting with *:

I believe this cert is presently untrusted by Mozilla due to revocation of
all paths to the Federal PKI:
> https://crt.sh/?id=7211484  *eis.aetc.af.mil

chains to StartCom (and all of these from StartCom are minor compared to 
StartCom's other problems):
> https://crt.sh/?id=10714112  *g10.net-lab.net

chains to Baltimore CyberTrust Root (DigiCert):
> https://crt.sh/?id=48682944  *nuvolaitaliana.it

chains to StartCom:
> https://crt.sh/?id=15736178  *assets.blog.cn.net.ru
> https://crt.sh/?id=17295812  *dev02.calendar42.com
> https://crt.sh/?id=15881220  *dev.1septem.ru
> https://crt.sh/?id=15655700  *assets.blog.cn.net.ru
> https://crt.sh/?id=17792808  *quickbuild.raptorengineering.io


> 
> Starting with -:

chains to QuoVadis:
> https://crt.sh/?id=54285413
> -d1-datacentre-12g-console-2.its.deakin.edu.au

chains to StartCom:
> https://crt.sh/?id=78248795  -1ccenter.777chao.com


> 
> Multiple *.:

chains to QuoVadis:
> https://crt.sh/?id=13299376  *.*.victoria.ac.nz

I believe this cert is presently trusted by Mozilla only via a 
technically constrained subCA:
> https://crt.sh/?id=44997156  *.*.rnd.unicredit.it

chains to Swisscom:
> https://crt.sh/?id=5982951  *.*.int.swisscom.ch


> 
> Internals TLD:

chains to Baltimore CyberTrust Root (DigiCert):
> https://crt.sh/?id=33626750a1.verizon.test

I believe this cert is presently untrusted by Mozilla due to revocation 
of the relevant subCA:
> https://crt.sh/?id=33123653DAC38997VPN2001A.trmk.corp

chains to Certplus (DocuSign):
> https://crt.sh/?id=42475510naccez.us.areva.corp

I believe these presently lack an unrevoked, unexpired trust path in 
Mozilla:
> https://crt.sh/?id=10621703collaboration.intra.airbusds.corp
> https://crt.sh/?id=48726306zdeasaotn01.dsmain.ds.corp
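
For reference, a minimal sketch of the kind of pre-issuance syntax checks
that would have caught the names above (illustrative only; real linters
such as certlint and x509lint go much further):

    import re

    RESERVED_TLDS = {"test", "corp", "local", "internal", "example", "invalid", "localhost"}
    LABEL_RE = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.IGNORECASE)

    def dnsname_problems(name):
        problems = []
        labels = name.rstrip(".").split(".")
        if labels and labels[-1].lower() in RESERVED_TLDS:
            problems.append("internal/reserved TLD")
        if labels.count("*") > 1:
            problems.append("multiple wildcard labels")
        for i, label in enumerate(labels):
            if label == "*":
                if i != 0:
                    problems.append("wildcard not in leftmost label")
            elif not LABEL_RE.match(label):
                problems.append("invalid label: %r" % label)
        return problems

    for bad in ("*eis.aetc.af.mil", "-1ccenter.777chao.com",
                "*.*.victoria.ac.nz", "a1.verizon.test"):
        print(bad, "->", dnsname_problems(bad))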
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy