Re: Summary of Camerfirma's Compliance Issues

2021-03-31 Thread Ryan Sleevi via dev-security-policy
(Writing in a Google capacity)

In [1], we removed support for Camerfirma certificates, as previously
announced [2]. This included removing support for any subordinate CAs. As
announced, this was planned to roll out as part of the Chrome 90 release
schedule, scheduled to hit stable on 2021-04-06.

As with any CA removal, we’ve continued to examine ecosystem progress. When
appropriate, we've also reached out to specific organizations to understand
any challenges that might significantly impact our users. While we actively
discourage CAs and site operators from directly contacting us to request
exceptions, we do reach out to organizations that may significantly affect
a non-trivial number of users. This situation is particularly unusual due to
the global pandemic, as several Portuguese and Spanish government websites
related to COVID-19 are affected.

We've received confirmation from these organizations that they are on track
to migrate. These organizations have requested 1-2 additional weeks to
replace their certificates, beyond the three months since the original
announcement. Normally, we would not honor such requests, given the
industry standard expectations around certificate replacement being doable
in five days or less. However, the global pandemic has brought unique and
unprecedented challenges. Given the importance of these websites in helping
resolve this global health crisis, we’re inclined to provide that
additional migration support under these circumstances.

We plan to temporarily suspend the Camerfirma removal for at least the
first Chrome 90 release, to provide that additional time to migrate these
pandemic-related websites. Although we considered other technical
approaches, we believe this approach to be the least risky for this
situation. We’ll continue to work directly with these COVID-19 related
websites on their migration. We plan to remove trust no later than Chrome
91, but may remove it sooner, such as part of a Chrome 90 security update,
based on these migration efforts. This will not affect Chrome Beta, Dev,
and Canary channels, as they will continue to block certificates from the
Camerfirma hierarchy.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=1173547
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/dSeD3dgnpzk/m/iAUwcFioAQAJ

On Thu, Jan 28, 2021 at 10:39 PM Eric Mill via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Just to build on what Ryan said, and to clarify any confusion around the
> scope of Chrome’s action here - Chrome is no longer accepting Camerfirma
> certificates that are specifically used for *TLS server authentication* for
> websites.
>
> Our planned action is related to the certificates Chrome uses and
> verifies, which are only those used for TLS server authentication. This
> does include any type of certificate used in Chrome for TLS server
> authentication, including Qualified Website Authentication Certificates
> (QWACs) and certificates used to comply with the Revised Payment Services
> Directive (PSD2). However, it does not cover other use cases, such as TLS
> client certificates or the use of Qualified Certificates for digital
> signatures.
>
> In order to ensure Chrome’s response is comprehensive, the list of
> affected roots includes all of those operated by Camerfirma that have the
> technical capability to issue TLS server authentication certificates, even
> if those roots are not currently being used to issue TLS server
> authentication certificates. But please note that the changes we announced
> for Chrome will not impact the validity of these roots for other types of
> authentication, only current and future use of those roots for TLS server
> authentication in Chrome.
>
>
> On Monday, January 25, 2021 at 12:01:42 AM UTC-8, Ryan Sleevi wrote:
> > (Writing in a Google capacity)
> >
> > I personally want to say thanks to everyone who has contributed to this
> > discussion, who have reviewed or reported past incidents, and who have
> > continued to provide valuable feedback on current incidents. When
> > considering CAs and incidents, we really want to ensure we’re
> considering
> > all relevant information, as well as making sure we’ve got a correct
> > understanding of the details.
> >
> > After full consideration of the information available, and in order to
> > protect and safeguard Chrome users, certificates issued by AC Camerfirma
> SA
> > will no longer be accepted in Chrome, beginning with Chrome 90.
> >
> > This will be implemented via our existing mechanisms for responding to CA
> > incidents, using an integrated blocklist. Beginning with Chrome 90, users
> > that attempt to navigate to a website that uses a certificate that
> chains
> > to one of the roots detailed below will find that it is not

Job Posting: Chrome Root Program

2021-03-30 Thread Ryan Sleevi via dev-security-policy
[Posting in a Google hat]

Several years ago [1], Kathleen opened a discussion about whether it would
be OK to post job opportunities here. While the discussion didn't come to a
firm conclusion, and there were those both for and against such postings, I
reached out to Ben and Kathleen to check that it would be OK before posting
this, and they shared that they were OK with it.

Emily (CC'd) was originally going to post this, but in light of [1], we
thought it might be better if I did. If you have any questions, you can
follow up with Emily directly, and I'm always happy to connect you with
her, in case your mail/newsgroup client hides the CCs on this message.

In case it's not obvious, this is the team I'm on here at Google. Given the
Chrome Root Program's goal [2] of continuing public collaboration here on
m.d.s.p., hopefully folks see this as directly relevant to this list :)

---

Chrome is hiring software engineers [3] and a TPM/PgM [4] interested in
security, PKI, applied crypto, and related topics in the Washington D.C.
area. These new hires will help build out Chrome's root program: managing
trust decisions in CAs, building and maintaining Chrome's certificate
verification and TLS stack, and building tooling and measurement software
for guiding policy decisions. This work is part of Chrome's Trusty
Transport team, with the full stack of HTTPS in scope, from BoringSSL to
TLS to the UI/UX of connection security. More broadly, the team is part of
the Chrome Trust and Safety org, which is growing the Washington D.C.
office rapidly this year, so there will be plenty of like-minded Chromies
around.

NOTE: The minimum qualifications listed in the software engineer posting are
overstated. We're hiring both senior and junior SWEs, so please apply if
you're interested, even if you don't meet the minimum qualifications listed.

---

[1]
https://groups.google.com/g/mozilla.dev.security.policy/c/dn0qEZrxbQA/m/h9ojtox6AgAJ
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/3Q36J4flnQs/m/VyWFiVwrBQAJ
[3] https://careers.google.com/jobs/results/109182492218401478/
[4] https://careers.google.com/jobs/results/121728729358443206/


Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-03-19 Thread Ryan Sleevi via dev-security-policy
The S/MIME BRs are not yet a thing, while the current language covers such
CAs (as a condition of Mozilla inclusion).

On Fri, Mar 19, 2021 at 6:45 AM Doug Beattie via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thanks Ben.
>
>
>
> What’s the purpose of this statement:
>
> 5. verify that all of the information that is included in server
> certificates remains current and correct at intervals of 825 days or less;
>
>
>
> The BRs have limited data reuse to 825 days since March 2018, so I don't
> think this adds anything.  If it does mean something more than that, can
> you update to make it clearer?
>
>
>
>
>
> From: Ben Wilson 
> Sent: Thursday, March 18, 2021 2:53 PM
> To: Doug Beattie 
> Cc: mozilla-dev-security-policy <
> mozilla-dev-security-pol...@lists.mozilla.org>
> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name
> verification to 398 days
>
>
>
> I've edited the proposed subsection 5.1 and have left section 5 in for
> now.  See
>
>
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/d37d7a3865035c958c1cb139b949107665fee232
>
>
>
> On Tue, Mar 16, 2021 at 9:10 AM Ben Wilson <bwil...@mozilla.com> wrote:
>
> That works, too.  Thoughts?
>
>
>
> On Tue, Mar 16, 2021 at 5:21 AM Doug Beattie <doug.beat...@globalsign.com> wrote:
>
> Hi Ben,
>
> Regarding the redlined spec:
> https://github.com/mozilla/pkipolicy/compare/master...BenWilson-Mozilla:2.7.1?short_path=73f95f7#diff-73f95f7d2475645ef6fc93f65ddd9679d66efa9834e4ce415a2bf79a16a7cdb6
>
> Is this a meaningful statement given max validity is 398 days now?
>5. verify that all of the information that is included in server
> certificates remains current and correct at intervals of 825 days or less;
> I think we can remove that and then move 5.1 to item 5.
>
> I find the words for this requirement 5.1 unclear.
>
>   " 5.1. for server certificates issued on or after October 1, 2021,
> verify each dNSName or IPAddress in a SAN or commonName at an interval of
> 398 days or less;"
>
> Can we say:
> "5.1. for server certificates issued on or after October 1, 2021, each
> dNSName or IPAddress in a SAN or commonName MUST have been validated <in
> accordance with the CABF Baseline Requirements?> within the prior 398 days."
>
>
>
> -----Original Message-----
> From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org>
> On Behalf Of Ben Wilson via dev-security-policy
> Sent: Monday, March 8, 2021 6:38 PM
> To: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name
> verification to 398 days
>
> All,
>
> Here is the currently proposed wording for subsection 5.1 of MRSP section
> 2.1:
>
> " 5.1. for server certificates issued on or after October 1, 2021, verify
> each dNSName or IPAddress in a SAN or commonName at an interval of 398 days
> or less;"
>
> Ben
>
> On Fri, Feb 26, 2021 at 9:48 AM Ryan Sleevi <r...@sleevi.com> wrote:
>
> >
> >
> > On Thu, Feb 25, 2021 at 7:55 PM Clint Wilson via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> I think it makes sense to separate out the date for domain validation
> >> expiration from the issuance of server certificates with previously
> >> validated domain names, but agree with Ben that the timeline doesn’t
> >> seem to need to be prolonged. What about something like this:
> >>
> >> 1. Domain name or IP address verifications performed on or after July
> >> 1,
> >> 2021 may be reused for a maximum of 398 days.
> >> 2. Server certificates issued on or after September 1, 2021 must have
> >> completed domain name or IP address verification within the preceding
> >> 398 days.
> >>
> >> This effectively stretches the “cliff” out across ~6 months (now
> >> through the end of August), which seems reasonable.
> >>
> >
> > Yeah, that does sound reasonable.
> >


Re: Public Discussion re: Inclusion of the ANF Secure Server Root CA

2021-03-16 Thread Ryan Sleevi via dev-security-policy
On Tue, Mar 16, 2021 at 6:02 PM Pablo Díaz via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Said "additional" confirmation email, addressed to the domain
> administrator, informs them that [Applicant Data] has requested an SSL
> certificate for their domain [Domain] by [Request ID] request, and
> instructs them to access a link (valid for 29 days, the email indicates the
> expiration date) where they can manually confirm or reject the request by
> clicking Confirm/Reject buttons.


Can you describe more here?

There are gaps here which seem to allow any number of insecure
implementations.

For example:
- Sending a link that must be accessed to approve is known to be insecure, as
automated mail scanning software may automatically dereference links in
e-mail (in order to do content inspection). Confirm/Reject buttons alone
shouldn't be seen as sufficient to mitigate this, as that may vary based on
how the crawler works. Requiring explicit entry of the random value is a
necessary "human" measure (akin to a CAPTCHA); see the sketch after this list.
- You haven't described how the email address is constructed for these
cases. The BRs only permit you to reuse the Random Value for 3.2.2.4.2, and
if and only if the domain contact is the same for all of the requested
domains. If, for example, you're using 3.2.2.4.4 (your #1 example), it
would be inappropriate if hostmaster@apex1.example could approve a request
for apex2.example, yet, as currently described, that seems to be possible,
in that both hostmaster@apex1.example and hostmaster@apex2.example will
receive the same Request ID to approve/reject the request.
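
To illustrate the difference, here's a minimal sketch of the kind of
endpoint I mean: a GET only renders the form, and approval requires a POST
that carries the re-typed Random Value. This is purely illustrative (the
route, request store, and values are hypothetical), not a description of
how ANF's system works:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"net/http"
)

// pendingRandomValues is a hypothetical stand-in for the CA's request
// store, mapping a Request ID to the Random Value sent in the email.
var pendingRandomValues = map[string]string{
	"REQ-123": "9f3k-x7q2-m8w1",
}

func confirmHandler(w http.ResponseWriter, r *http.Request) {
	// A GET (e.g. a mail scanner dereferencing the link) only renders the
	// form; it can never approve or reject the request.
	if r.Method != http.MethodPost {
		fmt.Fprint(w, `<form method="POST">
  Random Value from the email: <input name="rv">
  <button name="action" value="confirm">Confirm</button>
  <button name="action" value="reject">Reject</button>
</form>`)
		return
	}
	want, ok := pendingRandomValues[r.URL.Query().Get("req")]
	// Approval requires a human to re-type the Random Value, so a crawler
	// that "clicks" the buttons still cannot complete the validation.
	if !ok || subtle.ConstantTimeCompare([]byte(r.FormValue("rv")), []byte(want)) != 1 {
		http.Error(w, "random value mismatch", http.StatusForbidden)
		return
	}
	fmt.Fprintf(w, "request %s: %s recorded\n",
		r.URL.Query().Get("req"), r.FormValue("action"))
}

func main() {
	http.HandleFunc("/confirm", confirmHandler)
	http.ListenAndServe(":8080", nil)
}
```

The point is simply that dereferencing the link, whether by a human or a
crawler, is never sufficient on its own to complete the validation.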

The reliance on 3.2.2.4.2 is also known to be dangerous (in that it relies
on human inspection or OCR of otherwise unstructured text), which is why
it's discouraged compared to other methods (like .7, .19, or .20, and if
using e-mail, .13, .14). It's unclear how you extract the emails from the
WHOIS results that you provide, and whether that's something the IRM
performs or whether that's somehow programmatically driven.


Re: Synopsis of Proposed Changes to MRSP v. 2.7.1

2021-03-10 Thread Ryan Sleevi via dev-security-policy
On Mon, Mar 8, 2021 at 7:08 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> #139 resolved - Audits are required even if no longer issuing, until CA
> certificate is revoked, expired, or removed.
>
> See
>
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/888dc139d196b02707d228583ac20564ddb27b35
>

I'm assuming you're OK with this, but just wanting to make sure it's
explicit:

In the scenario where the CA destroys the private key, they may still have
outstanding certificates that work in Mozilla products. If Mozilla is
relying on infrastructure provided by that CA (e.g. OCSP, CRL), the CA no
longer has an obligation to audit that infrastructure.

I suspect that the idea is that if/when a CA destroys the private key, that
the expectation is Mozilla will promptly remove/revoke the root, but just
wanted to call out that there is a gap between the operational life cycle
of a CA (e.g. providing revocation services) and the private key lifecycle.

If this isn't intended, then removing the "or until all copies" should
resolve this, but of course, open up new discussion.


> #221 resolved - Wrong hyperlink for “material change” in MRSP section 8
>
> Replace hyperlink with “there is a change in the CA's operations that could
> significantly affect a CA's ability to comply with the requirements of this
> Policy.”
>
>
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/fbe04aa64f931940af967ed90ab98aa95789f057


Since "significantly" is highly subjective, and can lead the CA to not
disclosing, what would be the impact of just dropping the word? That is,
"that could affect a CA's ability to comply". There's still an element of
subjectivity here, for sure, but it encourages CAs to over-disclose, rather
than under-disclose, as the current language does.


Re: Clarification request: ECC subCAs under RSA Root

2021-03-10 Thread Ryan Sleevi via dev-security-policy
I agree with Corey that this is problematic, and wouldn't even call it a
best practice/good practice.

I appreciate the goal in the abstract - which is to say, don't do more work
than necessary (e.g. having an RSA-4096 signed by RSA-2048 is wasting
cycles *if* there's no other reason for it), but as Corey points out, there
are times where it's both necessary and good to have such chains.

On Wed, Mar 10, 2021 at 9:46 AM pfuen...--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > My understanding is that neither the BRs nor any Root Program requires
> > that the subordinate CA key be weaker or equal in strength to the issuing
> > CA's key.
> >
> > Additionally, such a requirement would prohibit cross-signs where a
> > "legacy" root with a smaller key size would certify a new root CA with a
> > stronger key. For that reason, this illustrative control seems problematic.
> >
>
> Thanks, Corey.
> I also see it as problematic, but I've been seeing other root programs
> (e.g. the Spanish Government) enforcing this rule, so I'd like to understand
> if it's a "best practice" or a rule, and, in particular, if it's a rule to
> be respected for TLS-oriented hierarchies.
> P


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-26 Thread Ryan Sleevi via dev-security-policy
On Fri, Feb 26, 2021 at 6:01 PM Aaron Gable  wrote:

> On Fri, Feb 26, 2021 at 12:05 PM Ryan Sleevi  wrote:
>
>> You can still do parallel signing. I was trying to account for that
>> explicitly with the notion of the “pre-reserved” set of URLs. However, that
>> also makes an assumption I should have been more explicit about: whether
>> the expectation is “you declare, then fill, CRLs”, or whether it’s
>> acceptable to “fill, then declare, CRLs”. I was trying to cover the former,
>> but I don’t think there is any innate prohibition on the latter, and it was
>> what I was trying to call out in the previous mail.
>>
>> I do take your point about deterministically, because the process I’m
>> describing is implicitly assuming you have a work queue (e.g. pub/sub, go
>> channel, etc), in which certs to revoke go in, and one or more CRL signers
>> consume the queue and produce CRLs. The order of that consumption would be
>> non-deterministic, but it very much would be parallelizable, and you’d be
>> in full control over what the work unit chunks were sized at.
>>
>> Right, neither of these are required if you can “produce, then declare”.
>> From the client perspective, a consuming party cannot observe any
>> meaningful difference from the “declare, then produce” or the “produce,
>> then declare”, since in both cases, they have to wait for the CRL to be
>> published on the server before they can consume. The fact that they know
>> the URL, but the content is stale/not yet updated (i.e. the declare then
>> produce scenario) doesn’t provide any advantages. Ostensibly, the “produce,
>> then declare” gives greater advantage to the client/root program, because
>> then they can say “All URLs must be correct at time of declaration” and use
>> that to be able to quantify whether or not the CA met their timeline
>> obligations for the mass revocation event.
>>
>
> I think we managed to talk slightly past each other, but we're well into
> the weeds of implementation details so it probably doesn't matter much :)
> The question in my mind was not "can there be multiple CRL signers
> consuming revocations from the queue?"; but rather "assuming there are
> multiple CRL signers consuming revocations from the queue, what
> synchronization do they have to do to ensure that multiple signers don't
> decide the old CRL is full and allocate new ones at the same time?". In the
> world where every certificate is pre-allocated to a CRL shard, no such
> synchronization is necessary at all.
>

Oh, I meant they could be signing independent CRLs (e.g. each has an IDP
with a prefix indicating which shard-generator is running), and at the end
of the queue-draining ceremony, you see what CRLs each worker created, and
add those to the JSON. So you could have multiple "small" CRLs (one or more
for each worker, depending on how you manage things), allowing them to
process revocations wholly independently. This, of course, relies again on
the assumption that the cRLDP is not baked into the certificate, which
enables you to have maximum flexibility in how CRL URLs are allocated and
sharded, provided the sum union of all of their contents reflects the CA's
state.
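
To make that unioning concrete, a minimal consumer-side sketch (Go; the
shard URLs are assumed inputs, and, as noted in the comment, a real consumer
must also verify each CRL's signature before trusting it):

```go
package main

import (
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
)

// unionRevokedSerials fetches every CRL shard listed in the CA's JSON
// array and unions the revoked serial numbers into a single set.
func unionRevokedSerials(shardURLs []string) (map[string]bool, error) {
	revoked := make(map[string]bool)
	for _, u := range shardURLs {
		resp, err := http.Get(u)
		if err != nil {
			return nil, err
		}
		der, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		crl, err := x509.ParseRevocationList(der) // Go 1.19+
		if err != nil {
			return nil, err
		}
		// NOTE: a real consumer must verify crl's signature against the
		// issuing CA before trusting its contents.
		for _, entry := range crl.RevokedCertificateEntries { // Go 1.21+
			revoked[entry.SerialNumber.String()] = true
		}
	}
	return revoked, nil
}

func main() {
	// Hypothetical shard URLs, as they would appear in the JSON array.
	shards := []string{"https://crl.example/worker-1/shard-1.crl"}
	revoked, err := unionRevokedSerials(shards)
	if err != nil {
		panic(err)
	}
	fmt.Println("revoked serials:", len(revoked))
}
```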


> This conversation does raise a different question in my mind. The Baseline
> Requirements do not have a provision that requires that a CRL be re-issued
> within 24 hours of the revocation of any certificate which falls within its
> scope. CRLs and OCSP responses for Intermediate CAs are clearly required to
> receive updates within 24 hours of the revocation of a relevant certificate
> (sections 4.9.7 and 4.9.10 respectively), but no such requirement appears
> to exist for end-entity CRLs. The closest is the requirement that
> subscriber certificates be revoked within 24 hours after certain conditions
> are met, but the same structure exists for the conditions under which
> Intermediate CAs must be revoked, suggesting that the BRs believe there is
> a difference between revoking a certificate and *publishing* that
> revocation via OCSP or CRLs. Is this distinction intended by the root
> programs, and does anyone intend to change this status quo as more emphasis
> is placed on end-entity CRLs?
>
> Or more bluntly: in the presence of OCSP and CRLs being published side by
> side, is it expected that the CA MUST re-issue a sharded end-entity CRL
> within 24 hours of revoking a certificate in its scope, or may the CA wait
> to re-issue the CRL until its next 7-day re-issuance time comes up as
> normal?
>

I recall this came up in the past (with DigiCert, [1]), in which
"revocation" was enacted by setting a flag in a database (or perhaps that
was an *extra* incident, with a different C

Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-26 Thread Ryan Sleevi via dev-security-policy
On Fri, Feb 26, 2021 at 1:46 PM Aaron Gable  wrote:

> If we leave out the "new url for each re-issuance of a given CRL" portion
> of the design (or offer both url-per-thisUpdate and
> static-url-always-pointing-at-the-latest), then we could in fact include
> CRLDP urls in the certificates using the rolling time-based shards model.
> And frankly we may want to do that in the near future: maintaining both CRL
> *and* OCSP infrastructure when the BRs require only one or the other is an
> unnecessary expense, and turning down our OCSP infrastructure would
> constitute a significant savings, both in tangible bills and in engineering
> effort.
>

This isn’t quite correct. You MUST support OCSP for EE certs. It is only
optional for intermediates. So you can’t really contemplate turning down
the OCSP side, and that’s intentional, because clients use OCSP, rather
than CRLs, as the fallback mechanism for when the aggregated-CRLs fail.

I think it would be several years off before we could practically talk
about removing the OCSP requirement, once much more reliable CRL profiles
are in place, which by necessity would also mean profiling the acceptable
sharding algorithms.

Further, under today’s model, while you COULD place the CRLDP within the
certificate, that seems like it would only introduce additional cost and
limitation without providing you benefit. This is because major clients
won’t fetch the CRLDP for EE certs (especially if OCSP is present, which
the BRs MUST/REQUIRE). You would end up with some clients querying (such as
Java, IIRC), so you’d be paying for bandwidth, especially in your mass
revocation scenario, that would largely be unnecessary compared to the
status quo.

> Thus, in my mind, the dynamic sharding idea you outlined has two major
> downsides:
> 1) It requires us to maintain our parallel OCSP infrastructure
> indefinitely, and
>

To the above, I think this should be treated as a foregone conclusion in
today’s requirements. So I think mostly the discussion here focuses on #2,
which is really useful.

> 2) It is much less resilient in the face of a mass revocation event.
>
> Fundamentally, we need our infrastructure to be able to handle the
> revocation of 200M certificates in 24 hours without any difference from how
> it handles the revocation of one certificate in the same period. Already
> having certificates pre-allocated into CRL shards means that we can
> deterministically sign many CRLs in parallel.
>

You can still do parallel signing. I was trying to account for that
explicitly with the notion of the “pre-reserved” set of URLs. However, that
also makes an assumption I should have been more explicit about: whether
the expectation is “you declare, then fill, CRLs”, or whether it’s
acceptable to “fill, then declare, CRLs”. I was trying to cover the former,
but I don’t think there is any innate prohibition on the latter, and it was
what I was trying to call out in the previous mail.

I do take your point about deterministically, because the process I’m
describing is implicitly assuming you have a work queue (e.g. pub/sub, go
channel, etc), in which certs to revoke go in, and one or more CRL signers
consume the queue and produce CRLs. The order of that consumption would be
non-deterministic, but it very much would be parallelizable, and you’d be
in full control over what the work unit chunks were sized at.
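
As a sketch of that shape (Go, since that's the analogy; the names, the
shard size, and the URL scheme are all illustrative, and the actual
signing/publishing is stubbed out):

```go
package main

import (
	"fmt"
	"math/big"
	"sync"
)

const shardSize = 10000 // revoked serials per CRL shard (arbitrary)

// crlSigner drains the shared revocation queue, cutting a new CRL shard
// whenever its batch fills, and reports each shard URL it produced.
func crlSigner(id int, queue <-chan *big.Int, declared chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	batch := make([]*big.Int, 0, shardSize)
	shard := 0
	flush := func() {
		if len(batch) == 0 {
			return
		}
		shard++
		// A real signer would build, sign, and upload the CRL here; we
		// just emit the per-worker URL that gets declared afterwards.
		declared <- fmt.Sprintf("https://crl.example/worker-%d/shard-%d.crl", id, shard)
		batch = batch[:0]
	}
	for serial := range queue {
		batch = append(batch, serial)
		if len(batch) == shardSize {
			flush()
		}
	}
	flush() // final partial shard
}

func main() {
	queue := make(chan *big.Int, 1024)
	declared := make(chan string, 64)
	var wg sync.WaitGroup
	for i := 1; i <= 4; i++ {
		wg.Add(1)
		go crlSigner(i, queue, declared, &wg)
	}
	for i := 0; i < 25000; i++ { // revocations arrive in any order
		queue <- big.NewInt(int64(i))
	}
	close(queue)
	go func() { wg.Wait(); close(declared) }()
	for url := range declared { // "declare" once production is done
		fmt.Println("declare:", url)
	}
}
```

Each worker produces shards wholly independently; only the final declaration
step needs to see the union of what was produced.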

>
> Dynamically assigning certificates to CRLs as they are revoked requires
> taking a lock to determine if a new CRL needs to be created or not, and
> then atomically creating a new one. Or it requires a separate,
> not-operation-as-normal process to allocate a bunch of new CRLs, assign
> certs to them, and then sign those in parallel. Neither of these --
> dramatically changing not just the quantity but the *quality* of the
> database access, nor introducing additional processes -- is acceptable in
> the face of a mass revocation event.
>

Right, neither of these are required if you can “produce, then declare”.
From the client perspective, a consuming party cannot observe any
meaningful difference from the “declare, then produce” or the “produce,
then declare”, since in both cases, they have to wait for the CRL to be
published on the server before they can consume. The fact that they know
the URL, but the content is stale/not yet updated (i.e. the declare then
produce scenario) doesn’t provide any advantages. Ostensibly, the “produce,
then declare” gives greater advantage to the client/root program, because
then they can say “All URLs must be correct at time of declaration” and use
that to be able to quantify whether or not the CA met their timeline
obligations for the mass revocation event.

> In any case, I think this conversation has served the majority of its
> purpose. This discussion has led to several ideas that would allow us to
> update our JSON document only when we create new shards (which will still
> likely be every 6 to 24 hours), as opposed to on every re-issuance of a
> 

Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-26 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 7:55 PM Clint Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think it makes sense to separate out the date for domain validation
> expiration from the issuance of server certificates with previously
> validated domain names, but agree with Ben that the timeline doesn’t seem
> to need to be prolonged. What about something like this:
>
> 1. Domain name or IP address verifications performed on or after July 1,
> 2021 may be reused for a maximum of 398 days.
> 2. Server certificates issued on or after September 1, 2021 must have
> completed domain name or IP address verification within the preceding 398
> days.
>
> This effectively stretches the “cliff” out across ~6 months (now through
> the end of August), which seems reasonable.
>

Yeah, that does sound reasonable.


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-26 Thread Ryan Sleevi via dev-security-policy
On Fri, Feb 26, 2021 at 5:49 AM Rob Stradling  wrote:

> > We already have automation for CCADB. CAs can and do use it for
> > disclosure of intermediates.
>
> Any CA representatives that are surprised by this statement might want to
> go and read the "CCADB Release Notes" (click the hyperlink when you login
> to the CCADB).  That's the only place I've seen the CCADB API "announced".
>
> > Since we're talking Let's Encrypt, the assumption here is that the CRL
> > URLs will not be present within the crlDistributionPoints of the
> > certificates, otherwise, this entire discussion is fairly moot, since
> > those crlDistributionPoints can be obtained directly from Certificate
> > Transparency.
>
> AIUI, Mozilla is moving towards requiring that the CCADB holds all CRL
> URLs, even the ones that also appear in crlDistributionPoints extensions.
> Therefore, I think that this entire discussion is not moot at all.
>

Rob,

I think you misparsed, but that's understandable, because I worded it
poorly. The discussion is mooted by whether or not the CA includes the
cRLDP within the certificate itself - i.e. that the CA has to allocate the
shard at issuance time and that it's fixed for the lifetime of the
certificate. That's not a requirement - EEs don't need cRLDPs - and so
there's no inherent need to do static assignment, nor does it sound like LE
is looking to go that route, since it would be incompatible with the design
they outlined. Because of this, the dynamic sharding discussed seems
significantly _less_ complex, both for producers and for consumers of this
data, than the static sharding-and-immutability scheme proposed.


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 8:21 PM Aaron Gable  wrote:

> If I may, I believe that the problem is less that it is a reference (which
> is true of every URL stored in CCADB), and more that it is a reference to
> an unsigned object.
>

While that's a small part, it really is as I said: the issue of being a
reference. We've already had this issue with the other URL fields, and thus
there exists logic to dereference and archive those URLs within CCADB.
Issues like audit statements, CP, and CPSes are all things that are indeed
critical to understanding the posture of a CA over time, and so actually
having those materials in something stable and maintained (without a
dependence on the CA) is important.  It's the lesson from those
various past failure modes that had Google very supportive of the non-URL
based approach, putting the JSON directly in CCADB, rather than forcing yet
another "update-and-fetch" system.. You're absolutely correct
that the "configured by CA" element has the nice property of being assured
that the change came from the CA themselves, without requiring signing, but
I wouldn't want to reduce the concern to just that.

> * I'm not aware of any other automation system with write-access to CCADB
> (I may be very wrong!), and I imagine there would need to be some sort of
> further design discussion with CCADB's maintainers about what it means to
> give write credentials to an automated system, what sorts of protections
> would be necessary around those credentials, how to scope those credentials
> as narrowly as possible, and more.
>

We already have automation for CCADB. CAs can and do use it for disclosure
of intermediates.


> * I'm not sure CCADB's maintainers want updates to it to be in the
> critical path of ongoing issuance, as opposed to just in the critical path
> for beginning issuance with a new issuer.
>

Without wanting to sound dismissive, whether or not it's in the critical path
of updating is the CA's own design choice. I understand that there are
designs that could put it there, I think the question is whether it's
reasonable for the CA to have done that in the first place, which is why
it's important to drill down into these concerns. I know you merely
qualified it as undesirable, rather than actually being a blocker, and I
appreciate that, but I do think some of these concerns are perhaps less
grounded or persuasive than others :)

Taking a step back here, I think there's a fundamental error in your
proposed design, and I think that it, combined with the (existing)
automation, may mean much of this is not actually the issue you anticipate.

Since we're talking Let's Encrypt, the assumption here is that the CRL URLs
will not be present within the crlDistributionPoints of the certificates,
otherwise, this entire discussion is fairly moot, since those
crlDistributionPoints can be obtained directly from Certificate
Transparency.

The purpose of this field is to help discover CRLs that are otherwise not
discoverable (e.g. from CT), but this also means that these CRLs do not
suffer from the same design limitations of PKI. Recall that there's nothing
intrinsic to a CRL that expresses its sharding algorithm (ignoring, for a
second, reasonCodes within the IDP extension). The only observability that
an external (not-the-CA) party has, whether the Subscriber or the RP, is
merely that "the CRL DP for this certificate is different from the CRLDP
for that certificate". It is otherwise opaque how the CA used it, even if
through a large enough corpus from CT, you can infer the algorithm from the
pattern. Further, when such shards are being used, you can determine whether
a given CRL that you have (whose provenance may be unknown) covers a given
certificate by matching the CRLDP of the cert against the IDP of the
CRL.  We're talking about a scenario in which
the certificate lacks a CRLDP, and so there's no way to know that, indeed,
a given CRL "covers" the certificate unambiguously. The only thing we have
is the CRL having an IDP, because if it didn't, it'd have to be a full CRL,
and then you'd be back to only having one URL to worry about.
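
For concreteness, that covering check looks something like the following
sketch. It's a deliberately minimal parse that assumes the common
distributionPoint/fullName/URI form of the IDP extension, not a complete
RFC 5280 implementation:

```go
package main

import (
	"crypto/x509"
	"encoding/asn1"
	"fmt"
)

var oidIssuingDistributionPoint = asn1.ObjectIdentifier{2, 5, 29, 28}

// idpURIs extracts the fullName URIs from a CRL's IssuingDistributionPoint
// extension, handling only the distributionPoint/fullName/URI branch.
func idpURIs(crl *x509.RevocationList) []string {
	var uris []string
	for _, ext := range crl.Extensions {
		if !ext.Id.Equal(oidIssuingDistributionPoint) {
			continue
		}
		var idp asn1.RawValue // IssuingDistributionPoint ::= SEQUENCE { ... }
		if _, err := asn1.Unmarshal(ext.Value, &idp); err != nil {
			return nil
		}
		var dpn asn1.RawValue // distributionPoint [0] DistributionPointName
		if _, err := asn1.Unmarshal(idp.Bytes, &dpn); err != nil || dpn.Tag != 0 {
			return nil
		}
		var fullName asn1.RawValue // fullName [0] GeneralNames
		if _, err := asn1.Unmarshal(dpn.Bytes, &fullName); err != nil || fullName.Tag != 0 {
			return nil
		}
		rest := fullName.Bytes
		for len(rest) > 0 {
			var gn asn1.RawValue
			var err error
			if rest, err = asn1.Unmarshal(rest, &gn); err != nil {
				break
			}
			if gn.Tag == 6 { // uniformResourceIdentifier [6] IA5String
				uris = append(uris, string(gn.Bytes))
			}
		}
	}
	return uris
}

// covers reports whether the CRL's IDP matches one of the certificate's
// CRLDP URLs -- only possible when the cert actually carries a CRLDP.
func covers(cert *x509.Certificate, crl *x509.RevocationList) bool {
	for _, idpURL := range idpURIs(crl) {
		for _, dp := range cert.CRLDistributionPoints {
			if idpURL == dp {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(covers(&x509.Certificate{}, &x509.RevocationList{}))
}
```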

Because of all of this, it means that the consumers of this JSON are
expected to combine all of the CRLs present, union all the revoked serials,
and be done with it. However, it's that unioning that I think you've
overlooked here in working out your math. In the "classic" PKI sense (i.e.
CRLDP present), the CA has to plan for revocation for the lifetime of the
certificate, it's fixed when the certificate is created, and it's immutable
once created. Further, changes in revocation frequency mean you need to
produce new versions of that specific CRL. However, the scenario we're
discussing, in which these CRLs are unioned, you're entirely flexible at
all points in time for how you balance your CRLs. Further, in the 'ideal'
case (no revocations), you need only produce a single empty CRL. There's no
need to produce an 

Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 2:29 PM Doug Beattie via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'd prefer that we tie this to a date related to when the domain
> validations are done, or perhaps 2 statements.  As it stands (and as others
> have commented), on July 1 all customers will immediately need to validate
> all domains that were done between 825 and 397 days ago, so a huge number
> all at once for web site owners and for CAs.
>

Isn't this only true if CAs use this discussion period to do nothing?

That is, can't CAs today (or even months ago) have started to more
frequently revalidate their customers, refreshing old validations, helping
transition customers to automated methods, etc?

That is, is the scenario you described inherently bad, or just a
consequence of CA inaction? And is the goal to have zero impact, or, which
your proposal seems to acknowledge, do we agree that some impact is both
reasonable and acceptable, and the only difference would be the degree?


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Ryan Sleevi via dev-security-policy
Hugely useful! Thanks for sharing - this is incredibly helpful.

I've snipped a good bit, just to keep the thread small, and have some
further questions inline.

On Thu, Feb 25, 2021 at 2:15 PM Aaron Gable  wrote:

> I believe that there is an argument to be made here that this plan
> increases the auditability of the CRLs, rather than decreases it. Root
> programs could require that any published JSON document be valid for a
> certain period of time, and that all CRLs within that document remain
> available for that period as well. Or even that historical versions of CRLs
> remain available until every certificate they cover has expired (which is
> what we intend to do anyway). Researchers can crawl our history of CRLs and
> examine revocation events in more detail than previously available.
>

So I think unpacking this a little: Am I understanding your proposal
correctly that "any published JSON document be valid for a certain period
of time" effectively means that each update of the JSON document also gets
a distinct URL (i.e. same as the CRLs)? I'm not sure if that's what you
meant, because it would still mean regularly updating CCADB whenever your
shard-set changes (which seems to be the concern), but at the same time, it
would seem that any validity requirement imposes on you a lower-bound for
how frequently you can change or introduce new shards, right?

The issue I see with the "URL stored in CCADB" is that it's a reference,
and the dereferencing operation (retrieving the URL) puts the onus on the
consumer (e.g. root stores) and can fail, or result in different content
for different parties, undetectably. If it was your proposal to change to
distinct URLs, that issue would still unfortunately exist.

If there is an API that allows you to modify the JSON contents directly
(e.g. a CCADB API call you could make with an OAuth token), would that
address your concern? It would allow CCADB to still canonically record the
change history and contents, facilitating that historic research. It would
also facilitate better compliance tracking - since we know policies like
"could require that any published JSON" don't really mean anything in
practice for a number of CAs, unless the requirements are also monitored
and enforced.


> Regardless, even without statically-pathed, timestamped CRLs, I believe
> that the merits of rolling time-based shards are sufficient to be a strong
> argument in favor of dynamic JSON documents.
>

Right, I don't think there's any fundamental opposition to that. I'm very
much in favor of time-sharded CRLs over hash-sharded CRLs, for exactly the
reasons you highlight. I think the question was with respect to the
frequency of change of those documents (i.e. how often you introduce new
shards, and how those shards are represented).

There is one thing you mentioned that's also non-obvious to me, because I
would expect you already have to deal with this exact issue with respect to
OCSP, which is "overwriting files is a dangerous operation prone to many
forms of failure". Could you expand more about what some of those
top-concerns are? I ask, since, say, an OCSP Responder is frequently
implemented as "Spool /ocsp/:issuerDN/:serialNumber", with the CA
overwriting :serialNumber whenever they produce new responses. It sounds
like you're saying that common design pattern may be problematic for y'all,
and I'm curious to learn more.
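
For reference, the defensive version of that spool pattern that I have in
mind writes the new response to a temporary file and renames it into place,
so a reader never observes a partial write; the paths here are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// publishResponse overwrites the spooled OCSP response for a serial via
// write-to-temp-then-rename, so readers see either the old response or
// the new one, never a truncated file.
func publishResponse(spoolDir, issuer, serial string, der []byte) error {
	dir := filepath.Join(spoolDir, issuer)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	tmp, err := os.CreateTemp(dir, serial+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename succeeds
	if _, err := tmp.Write(der); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // flush before making it visible
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// rename(2) is atomic within a filesystem.
	return os.Rename(tmp.Name(), filepath.Join(dir, serial))
}

func main() {
	if err := publishResponse("/var/spool/ocsp", "example-issuer", "0A1B2C", []byte("der-bytes")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```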


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 12:33 PM Aaron Gable via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Obviously this plan may have changed due to other off-list conversations,
> but I would like to express a strong preference for the original plan. At
> the scale at which Let's Encrypt issues, it is likely that our JSON array
> will contain on the order of 1000 CRL URLs, and will add a new one (and age
> out an entirely-expired one) every 6 hours or so. I am not aware of any
> existing automation which updates CCADB at that frequency.
>
> Further, from a resiliency perspective, we would prefer that the CRLs we
> generate live at fully static paths. Rather than overwriting CRLs with new
> versions when they are re-issued prior to their nextUpdate time, we would
> leave the old (soon-to-be-expired) CRL in place, offer its replacement at
> an adjacent path, and update the JSON to point at the replacement. This
> process would have us updating the JSON array on the order of minutes, not
> hours.


This seems like a very inefficient design choice, and runs contrary to how
CRLs are deployed by, well, literally anyone using CRLs as specified, since
the URL is fixed within the issued certificate.

Could you share more about the design of why? Both for the choice to use
sharded CRLs (since that is the essence of the first concern), and the
motivation to use fixed URLs.

> We believe that the earlier "URL to a JSON array..." approach makes room for
> significantly simpler automation on the behalf of CAs without significant
> loss of auditability. I believe it may be helpful for the CCADB field
> description (or any upcoming portion of the MRSP which references it) to
> include specific requirements around the cache lifetime of the JSON
> document and the CRLs referenced within it.


Indirectly, you’ve highlighted exactly why the approach you propose loses
auditability. Using the URL-based approach puts the onus on the consumer to
try and detect and record changes, introduces greater operational risks
that evade detection (e.g. stale caches on the CAs side for the content of
that URL), and encourages or enables designs that put greater burden on
consumers.

I don’t think this is suggested because of malice, but I do think it makes
it significantly easier for malice to go undetected, for accurate historic
information to be hidden or made too complex to maintain.

This is already a known and recently studied problem with CRLs [1].
Unquestionably, you are right to highlight and emphasize that this
constrains and limits how CAs perform certain operations. You highlight it
as a potential bug, but I’d personally been thinking about it as a
potential feature. To figure out the disconnect, I’m hoping you could
further expand on the “why” of the design factors for your proposed design.

Additionally, it’d be useful to understand how you would suggest CCADB
consumers maintain an accurate, CA attested log of changes. Understanding
such changes is an essential part of root program maintenance, and it does
seem reasonable to expect CAs to need to adjust to provide that, rather
than give up on the goal.

[1]
https://arxiv.org/abs/2102.04288



Re: Questions About DigiCert .onion Certificate SubjectPublicKey Hash

2021-02-18 Thread Ryan Sleevi via dev-security-policy
This is already tracked as https://github.com/cabforum/servercert/issues/190
and is awaiting the completion of SC41v2 in the CA/Browser Forum Server
Certificate Working Group before working on (along with a cluster of
related .onion fixes)
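
For what it's worth, one plausible source of the mismatch described in the
quoted question below: the BRs' phrase "hash of the ASN.1 SubjectPublicKey"
can be read as either the raw BIT STRING contents (the bare 32-byte key) or
the full DER-encoded SubjectPublicKeyInfo (which is what the openssl
invocation produces), and the two digests differ. A sketch computing both
readings from the v3 address in the question:

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"crypto/x509"
	"encoding/base32"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// v3 address from the quoted message; the base32 payload decodes to
	// pubkey[32] || checksum[2] || version[1].
	onion := "archivev3qli37bju4rlh27glh24lljyezwxf4pokmrdbpefjlcrp5id"
	raw, err := base32.StdEncoding.DecodeString(strings.ToUpper(onion))
	if err != nil {
		panic(err)
	}
	pub := ed25519.PublicKey(raw[:32])

	// Reading 1: hash the raw BIT STRING contents (the SubjectPublicKey
	// field by itself).
	h1 := sha256.Sum256(pub)

	// Reading 2: hash the DER-encoded SubjectPublicKeyInfo structure,
	// which is what `openssl pkey -pubin -outform DER` emits.
	spki, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		panic(err)
	}
	h2 := sha256.Sum256(spki)

	fmt.Println("raw-key hash: ", hex.EncodeToString(h1[:]))
	fmt.Println("SPKI-DER hash:", hex.EncodeToString(h2[:]))
}
```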

On Thu, Feb 18, 2021 at 12:05 PM SXIA via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello,
>
> As required by CABForum guidelines, CAs must include the hash of an ASN.1
> SubjectPublicKey of the .onion service. For example,
> https://crt.sh/?id=3526088262 shows the SHA256 of the public key of
> archivev3qli37bju4rlh27glh24lljyezwxf4pokmrdbpefjlcrp5id.onion is
> 08afa9604f4cd74a1a867f3ffcf61faacdb19785a9d4c378f72a54503f73dd65
>
> Since this a v3 address, it is not difficult to extract the public key
> from .onion domain. Below is the hexdump of hs_ed25519_public_key
>
> 3d 3d 20 65 64 32 35 35  31 39 76 31 2d 70 75 62
> 6c 69 63 3a 20 74 79 70  65 30 20 3d 3d 00 00 00
> 04 44 74 54 95 dc 16 8d  fc 29 a7 22 b3 eb e6 59
> f5 c5 ad 38 26 6d 72 f1  ee 53 22 30 bc 85 4a c5
>
> So the public key (32 bytes long) is just the last two lines of the
> hexdump, and we can generate the public_key.pem from it, which is
>
> -BEGIN PUBLIC KEY-
> MCowBQYDK2VwAyEABER0VJXcFo38Kacis+vmWfXFrTgmbXLx7lMiMLyFSsU=
> -END PUBLIC KEY-
>
> We can also convert it to DER ($ openssl pkey -pubin -in public_key.pem
> -outform DER -out public_key.der), and here comes the problem: I tried to
> hash the DER file,
> and I got 141dcca6fea50f1c9f12c7150ca157a8e6e7bf7e79a6eb6f592a6235ab57ce23,
> which is different from what I see in DigiCert's certificate. Any ideas why
> this happened?
>
> Also, since support for v2 .onion addresses will be removed from the Tor
> code base on July 15th, 2021, and v3 .onion addresses contain the full
> public key, I think it is meaningless to have the 2.23.140.1.31 extension
> after that.
>
> Best,
> Xia


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-02-15 Thread Ryan Sleevi via dev-security-policy
On Mon, Feb 15, 2021 at 6:07 PM Jeff Ward via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ryan, I hope you are not suggesting I am dodging your points.  That would
> be absurd.  Let me use different words, as "comparable world" seems to be
> tripping you up.


I'm not trying to suggest you're dodging intentionally, but as I said, I
think we were, and to a degree still are, talking past each other. I think
your reply captured that you don't think I understood your phrasing,
which I didn't, but I also don't think you understood why I was trying to
unpack that phrase more.


> You are not providing a general/public distribution example to make your
> point so it is baseless.  You are using a restricted opinion from EY and
> neither Ryan Sleevi nor Google are listed as the two intended users.


I'm not sure why you're bringing in Google with respect to
https://wiki.mozilla.org/CA/Policy_Participants . I'm also not sure how
this relates to the specific questions I asked, which were an attempt to
reset the conversation here to try and make progress on finding solutions.


> The closest I have seen to support your desire to name individual auditors
> is in public company audit reports, which are housed on the SEC website.
> To be clear, of your two examples, one is an opinion, which is restricted,
> and the other represents the guidelines.  Perhaps you have seen a
> public/general distribution report from your second opinion as I do not see
> it in this thread.  I am aware, as mentioned previously, of the Federal PKI
> program desiring to know more about team members, but that is not listed in
> a non-restricted report, in a public/general distribution format.


Sure, and the purpose of the questions was an attempt to reset the
conversation to a point where we're not talking past each other, by trying
to build up from basics. I know we both know there are many different audit
criteria that are used, and the WTTF member organizations are global
organizations. I don't want to assume that those who focus on the WTTF are
as familiar with, say, the BSI requirements or other audit criteria, much
the same way it wouldn't be reasonable to expect I know as much about AI as
PKI :)

The example report I provided wasn't about calling out an individual
member, but rather to highlight an area that, objectively, seems
comparable, particularly to Ben's proposed language, to better understand
what you meant by that, and whether you had concerns with the new approach
still.

You've anchored here on the public disclosure part, which I can understand
and appreciate the risk perceived by having auditor names attached to the
reports, instead of just firms. I see most of your answers focus on a
specific form of disclosure, but that wasn't what my previous mail was
trying to get to yet. I was trying to work back to understand where
"comparable" criteria diverge, so we can figure out possible paths.

To be clear, I support Ben's proposal as a very useful and meaningful
interim step. Your original message was a bit ambiguous about where you
stand, in large part because your use of "comparable" left quite a bit
ambiguous, and whether you thought the changes were sufficient. I'm quite
glad to hear that it sounds workable for those performing WebTrust audits,
because that helps make forward progress.

However, I still think it'd be useful to view this as an interim step, and
it certainly feels like you might be saying this as far as things can go.
In the cloud services space, we regularly see end users expecting audits of
their cloud provider, and those transitively apply to dependencies of that
cloud provider (e.g. datacenter security, software vendors used by the
cloud provider, etc). This is similarly true with respect to software
supply chain security: working out not just first order dependencies, but
working through second and third-order dependencies as well, to build up a
holistic risk picture. Browsers rely on and delegate to CAs, so it's no
surprise that browsers expect audits for CAs. Yet browsers have their own
customers, and need to provide those customers the assurances they need,
and their customers need, etc.

My Question #5 was an attempt to unpack "comparable" further, because these
are hardly unique problems to software vendor/service provider
relationships. It's not even unique to "cloud" or "PKI", as supply chain
assurances are pretty universal, as shown by goods such as coconut milk,
coffee, chocolate, or diamonds. You answered in the context of "public
report", but I wasn't trying to go there yet. I was trying to figure out
where "comparable" split off, and what schemes were and weren't
comparable, so that we can continue to make improvements.  Now that it
seems we have a path forward for direct disclosure, I think it's reasonable
to spend time thinking about how to bring th

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-02-15 Thread Ryan Sleevi via dev-security-policy
Apologies for belaboring the point, but I think we might be talking past
each other.

You originally stated “The only place I am aware that lists the audit
partner in a comparable world is the signing audit partner on public
company audits in the US, which is available on the SEC website.” I gave
two separate examples of such, and you responded to one (FPKI) by saying
the report was not public (even when it is made available publicly), and
the other I didn’t see a response to.

This might feel like nit-picking, but I think this is a rather serious
point to work through, because I don’t think you’re fully communicating
what you judge to be a “comparable world”, as it appears you are dismissing
these examples.

I can think of several possible dimensions you might be thinking are
relevant, but rather than assume, I’m hoping you can expand with a few
simple questions. Some admittedly seem basic, but I don’t want to take
anything for granted here.

1) Are you/the WTTF familiar with these audit schemes?

2) Are you aware of schemes that require disclosing the relevant skills and
experience of the audit team to the client? (E.g. as done by BSI C5 audits
under ISAE 3000)

3) Are you aware of such reports naming multiple parties for the use of the
report (e.g. as done by FPKI audits)?

4) Are you aware of schemes in which a supplier requires a vendor to be
audited, and ensures that the customers of the supplier are able to access
such audits as part of their reliance upon the supplier? (Note, this doesn't
have to be limited to ISMS systems)

I’m trying to understand what, given the prevalence of these practices,
makes these instances *not* a comparable world, since it seems that helps
move closer to solutions.


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-02-15 Thread Ryan Sleevi via dev-security-policy
On Mon, Feb 15, 2021 at 2:03 PM Jeff Ward via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I wanted to clarify a couple of points.  Firms must be independent to do
> audit/assurance work.  If independence is impaired, for example, by one
> person in the firm performing management functions, the entire firm is no
> longer independent.  Firms have the responsibility to monitor activities of
> its professionals, which also includes personal investments, to ensure they
> remain independent.
>
> Also, WebTrust practitioners provide information on the firm and the
> professionals used on these engagements.  The information provided is
> closely aligned with the Auditor Qualifications you are describing.  As you
> know, CPA Canada provides a listing of qualified audit firms on its
> website.  Working closely with them could also help in instances where
> auditor qualifications are in question.
>
> And one last item, thank you for hearing us on the listing of auditors
> performing the engagement.  The only place I am aware that lists the audit
> partner in a comparable world is the signing audit partner on public
> company audits in the US, which is available on the SEC website.  Other
> than that, I am not aware of any other team member being listed.  We have
> seen listings of team members and related experience summarized on a
> non-publicly issued letter to management in the US Federal space.


Jeff,

https://www.oversight.gov/sites/default/files/oig-reports/18-19.pdf

Is an example, which is an audit of the U.S. Government Printing Office,
provided by a WTTF member, against the US Federal PKI CP. This doesn’t meet
the criteria you mentioned (public company, SEC), and itself was provided
several years ago.

It is directed to a set of named parties, and made publicly available by
those parties, using the WebTrust for CAs criteria. On page 4 (report)/6
(FPKI submission)/9 (PDF page), you can see an enumerated list of audit
participants and their applicable skills, summarized.

Since you mentioned “a comparable world”, the BSI C5 controls, which
provide a valuable model for improvements in transparency and thoroughness
of reporting (aka the so called “detailed controls” report), notes this
within Section 3.5.1 of the Controls [1]

“As part of the reporting, it must be specified which of the professional
examinations/certifications are held by the audit team (e. g. in the
section “Independence and quality assurance of the auditor”). Upon request,
appropriate documents (e. g. certificates etc.) must be submitted to the
client.”

Could you clarify whether you and the WTTF considered these two cases? The
former is an example of using an assurance scheme the FPKIMA has said is,
on its own, insufficient (namely WTCA), but which can be made sufficient
with additional reporting. The latter is an example of a scheme
specifically adapted for cloud/vendor security controls against an ISAE
3000 reporting scheme, which is nearly identical to WTBRs in that regard.
It was unclear whether y’all were simply not familiar with these cases, or
whether you believe there are substantive differences in the proposal here
that may require addressing.

[1]
https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/CloudComputing/ComplianceControlsCatalogue-Cloud_Computing-C5.pdf?__blob=publicationFile&v=3

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Public Discussion of GlobalSign's CA Inclusion Request for R46, E46, R45 and E45 Roots

2021-02-11 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 11, 2021 at 1:11 PM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I have a question (if I should write it in Bugzilla instead, please say
> so; it is unclear to me what the correct protocol is)
>

While Mozilla Policy permits discussion in both venues, I will say it's
significantly easier to ensure feedback is considered and promptly
responded to when the discussion is on Bugzilla. So if you're willing to
pose your questions there, that's really helpful for posterity.
If, for example, it became necessary to discuss a set of issues for a CA,
Bugzilla incident reports are going to be the primary source of the
incident report and discussion, and unless there's a clear link *on the
bug* to such mailing list discussion, it will no doubt be overlooked.

So I'd say feel free to ask your question there, which helps make sure it's
answered before the issue is closed.


> I also have noticed something that definitely isn't (just) for
> GlobalSign. It seems to me that the current Ten Blessed Methods do not
> tell issuers to prevent robots from "clicking" email links. We don't
> need a CAPTCHA, just a "Yes I want this certificate" POST form ought to
> be enough to defuse typical "anti-virus", "anti-malware" or automated
> crawling/cache-building robots. Maybe I just missed where the BRs
> tell you to prevent that, and hopefully even without prompting all
> issuers using the email-based Blessed Methods have prevented this,
>

Yes, this has been raised previously in the Forum by Peter Bowen (then at
Amazon), as part of the discussion and input with respect to the validation
methods. This is one of many outstanding items still for the Validation
Working Group of the CA/B Forum, as possible mitigations were also
discussed. In short, "capability URLs" (where the entire URL is, in effect,
the capability) are dangerous.
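
To make the mitigation concrete, here is a minimal sketch (Flask assumed;
the token store is a hypothetical stand-in for a CA's challenge database)
in which merely fetching the emailed URL changes no state, and only an
explicit POST completes the confirmation:

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Hypothetical in-memory challenge store: token -> confirmed flag.
    CHALLENGES = {"example-token": False}

    @app.route("/confirm/<token>", methods=["GET", "POST"])
    def confirm(token):
        if token not in CHALLENGES:
            abort(404)
        if request.method == "GET":
            # Mail scanners and crawlers that merely fetch the link get
            # a form back; the GET itself validates nothing.
            return ('<form method="POST"><button type="submit">'
                    'Yes, I want this certificate</button></form>')
        CHALLENGES[token] = True  # only an explicit POST confirms
        return "Confirmation recorded."

The design point is that the capability stops being the URL alone:
completing validation requires a request that typical link-prefetching
robots will not make.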

Note that there have been far more than "Ten Blessed Methods" since those
discussions, so perhaps it's clearer to just say 3.2.2.4.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: The CAA DNS Operator Exception Is Problematic

2021-02-10 Thread Ryan Sleevi via dev-security-policy
On Tue, Feb 9, 2021 at 9:22 PM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Mon, 8 Feb 2021 13:40:05 -0500
> Andrew Ayer via dev-security-policy
>  wrote:
>
> > The BRs permit CAs to bypass CAA checking for a domain if "the CA or
> > an Affiliate of the CA is the DNS Operator (as defined in RFC 7719)
> > of the domain's DNS."
>
> Hmm. Would this exemption be less dangerous for a CA which is the
> Registry for the TLD?


Potentially, but that’s not the use case for why this exists. Recall that
Registry != Registrar here, and even then, the Operator may be distinct
from either of those two. The use case argued was not limited to “just”
gTLDs.

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: The CAA DNS Operator Exception Is Problematic

2021-02-08 Thread Ryan Sleevi via dev-security-policy
On Mon, Feb 8, 2021 at 1:40 PM Andrew Ayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The BRs permit CAs to bypass CAA checking for a domain if "the CA or
> an Affiliate of the CA is the DNS Operator (as defined in RFC 7719)
> of the domain's DNS."
>
> Much like the forbidden "any other method" of domain validation, the DNS
> operator exception is perilously under-specified. It doesn't say how
> to determine who the DNS operator of a domain is, when to check, or for
> how long this information can be cached.  Since the source of truth for a
> domain's DNS operator is the NS record in the parent zone, I believe the
> correct answer is to check at issuance time by doing a recursive lookup
> from the root zone until the relevant NS record is found, and caching
> for no longer than the NS record's TTL.  Unfortunately, resolvers do
> not typically provide an implementation of this algorithm, so CAs would
> have to implement it themselves.  Considering that CAs are not generally
> DNS experts and there are several almost-correct-but-subtly-wrong ways
> to implement it, I have little faith that CAs will implement this
> check correctly.  My experience having implemented both a CAA lookup
> algorithm and an algorithm to determine a domain's DNS operator is that
> it's actually easier to implement CAA, as all the nasty DNS details can
> be handled by the resolver. This leads me to conclude that the only CAs
> who think they are saving effort by relying on the DNS operator exception
> are doing so incorrectly and insecurely.


Thanks, Andrew, for raising this.

This does seem like something that should be phased out or forbidden.

We've seen similar concerns raised related to BygoneSSL [1], with respect
to Cloud Providers (incorrectly) believing they are the domain operator
(e.g. customer account configured, but DNS now points elsewhere)

[1] https://insecure.design/
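
To make the implementation burden concrete, here is a deliberately naive
sketch (dnspython assumed) of determining a domain's DNS operator; it is
exactly the kind of almost-correct-but-subtly-wrong implementation
described above, because it trusts the local resolver's view of the zone
cut rather than the NS record in the parent zone:

    import dns.resolver

    def dns_operator_ns(domain):
        # Find the nearest enclosing zone cut, then fetch its NS RRset,
        # returning the TTL so callers can honor the caching rule above.
        zone = dns.resolver.zone_for_name(domain)
        answer = dns.resolver.resolve(zone, "NS")
        nameservers = sorted(ns.target.to_text() for ns in answer)
        return zone.to_text(), nameservers, answer.rrset.ttl

A correct implementation would instead walk down from the root, following
delegations until the relevant NS record is found in the parent zone,
which is precisely the nasty DNS detail that a CAA lookup gets to delegate
to the resolver.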
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 28, 2021 at 3:05 PM Ben Wilson  wrote:

> Thanks.  My current thinking is that we can leave the MRSP "as is" and
> that we write up what we want in
> https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications,
> which is, as you note, information about members of the audit team and how
> individual members meet #2, #3, and #6.
>

Is this intended as a temporary fix until the issue is meaningfully
addressed? Or are you seeing this as a long-term resolution of the issue?

I thought the goal was to make the policy clearer on the expectations. My
worry is that this would create more work for you and Kathleen, and the
broader community, because it puts the onus on you to chase down CAs for
the demonstration when they didn't notice the expectation in the policy.
This was the complaint previously raised about "CA Problematic Practices"
and things that are forbidden, so I'm not sure I understand the
distinction/benefit of moving it out?

I think the relevance to MRSP is trying to clarify whether Mozilla thinks
of auditors as individuals (as it originally did), or whether it thinks of
auditors as organizations. I think that if MRSP was clarified regarding
that, then the path you're proposing may work (at the risk of creating more
work for y'all to request that CAs provide the information that they're
required to provide, but didn't know that).

If the issue you're trying to solve is one about whether it's in the audit
letter vs communicated to Mozilla, then I think it should be possible to
achieve that within the MRSP and explicitly say that (i.e. not require it
in the audit letter, but still requiring it).

Just trying to make sure I'm not overlooking or misunderstanding your
concerns there :)

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7.1: MRSP Issue #187: Require disclosure of incidents in Audit Reports

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Sun, Jan 24, 2021 at 11:33 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> Based on the comments received, I am inclined to clarify the proposed
> language under Issues #154 and #187 with reference to a CA's Bugzilla
> compliance bugs rather than "incidents".  The existing language in section
> 2.4 of the MRSP already requires the CA to promptly file an Incident Report
> in Bugzilla for all incidents.
>
> My proposal for Issue #154 is to add a final sentence to MRSP section 2.4
> which would say, "If being audited according to the WebTrust criteria, the
> CA’s Management Assertion letter MUST include a complete list of the CA's
> Bugzilla compliance bugs that were unresolved at any time during the audit
> period."
>
> Under Issue #187, I propose that new item 11 in MRSP section 3.1.4
> (required
> publicly-available audit documentation) would read:  "11.  a complete list
> of the CA’s Bugzilla compliance bugs that were unresolved at any time
> during the audit period."
>

I don't think this is a good change, and it doesn't meet the intent of
addressing the problem.

This implies that if Mozilla believed an incident was resolved (or, as
we've seen certain CAs do, the CA itself marked its issue as resolved),
there would be no requirement to disclose it to the auditor beyond "hope
the CA is nice" (which, sadly, is not a reasonable control).

I explicitly think incident is the right approach, and disagree that
flagging it as compliance bugs is good or useful for the ecosystem. I
further think that even matters flagged as "Duplicate" or "Invalid" _are_
useful to ensure that the auditor is aware of the relevant discussion. For
example, if evidence emerges contrary to the facts stated on the bug (i.e.
it was *not* a duplicate), that is absolutely relevant.

So I guess I'm disagreeing with Jeff and Clemens here, by suggesting that
incident should be any known or reported violation of Mozilla policy, which
may be manifested as bugs, in order to ensure transparency and confirmation
that the auditor had the necessary information and facts available and that
it was considered as part of the statement. This still permits auditors to,
for example, consider the issue as a duplicate/remediated, but given that
the whole goal is to receive confirmation from the auditors that they were
aware of all the same things the community is, I don't think the proposed
language gets to that.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 28, 2021 at 1:43 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On second thought, I think that Mozilla can accomplish what we want without
> modifying the MRSP
> <
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#32-auditors
> >
> (which says audits MUST be performed by a Qualified Auditor, as defined in
> the Baseline Requirements section 8.2), and instead adding language to
> https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications that
> explains what additional information we need submitted to determine that an
> auditor is "qualified" under Section 8.2 of the Baseline Requirements.
>
> In other words (paraphrasing from BR 8.2), we would need evidence that the
> persons or entities:
> 1. Are independent from the subject of the audit;
> 2. Have the ability to conduct an audit that addresses the criteria;
> 3. Have proficiency in examining Public Key Infrastructure technology,
> information security tools and techniques, information technology and
> security auditing, and the third-party attestation function;
> 4. Are accredited in accordance with ISO 17065 applying the requirements
> specified in ETSI EN 319 403, *OR*
> 5. Are licensed by WebTrust;
> 6. Are bound by law, government regulation, or professional code of ethics
> (to render an honest and objective opinion); and
> 7. Maintain Professional Liability/Errors & Omissions insurance with policy
> limits of at least one million US dollars in coverage.
>
> We do some of this already when we check on an auditor's status to bring an
> auditor's record current in the CCADB.  The edits that we'll make will just
> make it easier for us to go through the list above.
>
> Thoughts?
>

I'm not sure this approach makes clear exactly what edits you're making;
pull requests or commits, as Wayne used in the past, might be clearer. If
there is a commit, I'm happy to look at it, and apologies if I missed it.

I'm not sure this addresses the issue as raised; at the least, "or
entities" seems to recreate the same issues this is trying to address, by
thinking in terms of "legal entities" rather than qualified persons.

Your discussion about "auditor's" and "auditor's status" might be misread
as "Audit firm", when I think the issue raised was thinking about "person
performing the audit". The individual persons aren't necessarily licensed
or accredited (e.g. #4/ #5), and may not be the ones that retain PL/E
insurance (#7). Further, the individuals might be independent, but the firm
not (#1)

So I think you're really just left with wanting to have a demonstration as
to the members of the audit team and how individual members meet (#2, #3,
#6). Is that right? I think Kathleen's proposal from November got close to
that, and then the remainder is clarifying the language that you've
proposed for 2.7.1, namely "Individuals have competence, partnerships and
corporations do not".

I think the expectation is that "Individually, and as an audit team, they
are independent (#1)" (e.g. you can't have a non-independent party running
the audit with a bunch of independent parties reporting to them, since
they're no longer independent), while collectively the audit team meets
#2/#3, with the burden being to demonstrate how the individuals on the
team meet that.

Is that what you were thinking? Or is my explanation a jumbled mess :)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Root Store Policy Suggestion

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 28, 2021 at 1:32 PM Burton  wrote:

> Hi Ryan,
>
> The answer to your questions.
>
> A remediation plan is only useful in cases of slight CA non-compliance
> with the rules set forth by the root store policy.
>
> A remediation plan in cases of slight CA non-compliance provides
> assurance of the CA's commitment to compliance.
>

Sure, and I think (and hopefully I'm fairly stating it) that the goal is
that these be provided in the Incident Reports themselves. That is, the
remediation should address both the immediate and systemic issues, and
future incidents of the CA will be judged against this.

The intent is certainly that anyone in the community participates and
reviews these, and I think we see a lot of fantastic activity on the bug
reports from people who do, which is a healthy sign, even though they're
often calling out concerns with the remediation or highlighting how it
fails to meet the expectations.


> A CA under investigation for serious non-compliance, with detailed
> documented evidence of non-compliance incidents, has reached the point of
> no return.
>
> A remediation plan in the cases of serious non-compliance is a reference
> document in the case of new root inclusion as documented evidence of
> commitment to compliance.
>

> The CA roots should be removed in the case of serious non-compliance and
> asked to reapply for inclusion again to the root store with new roots and
> new commitment to compliance with new audits from a different auditor and
> reformed practices and management.
>

Right, and I think this might be premature, or give false hope, at least
to CAs that assume every CA, once removed, can simply reapply with a
remediation plan. I agree with you, it's incredibly valuable to understand
how the CA plans to address the issues, and just like incident reports,
it's useful to understand how the CA views the incidents that might lead up
to distrust and how it plans to mitigate them before reapplying. Yet we've
often seen CAs believe that because a remediation plan exists for the
identified issues, it's sufficient to apply for new roots, when really,
such CAs are working from a serious trust deficit, and so not only need to
remediate the identified issues, but show how they're going above and
beyond addressing the systemic issues, in order to justify the risk of
trusting them again. Understandably, this varies on a case-by-case basis.

To your original point, historically CA actions (generally) worked in three
phases:

1) A pattern is believed to exist (of incidents), or an incident is so
severe it warrants immediate public discussion. The community is asked to
provide details - e.g. of incidents that were overlooked, of other relevant
data - to ensure that a full and comprehensive picture of relevant facts
are gathered and understood. The CA is invited to share details (e.g. how
they mitigated such issues) or to respond to the facts, if they believe
they're not accurate.

2) A discussion about the issues themselves, to evaluate the nature of the
incidents, as well as solicit proposals from the community in particular
(rather than the CA, although the CA is welcome to contribute) about how to
mitigate the risks these issues and incidents highlight.

3) At least for Mozilla, a proposed plan for Mozilla products, which is
often based on suggestions from the community (in #2) as well as Mozilla's
own product and security considerations. Mozilla may solicit further
feedback on their plan, from the community and the CA, to make sure they've
balanced the concerns and considerations raised in #2 accurately, or may
decide it warrants immediate action.

This is a rough guide, obviously there are exceptions. For example, Mozilla
and other browsers blocking MITM roots hasn't always involved all three
stages. Similarly, in CA compromise events, Step 2 and 3 may be skipped
entirely, because the only viable solution is obvious.

Other programs, whether Apple, Google, or Microsoft, don't necessarily
operate the same way. For example, Google, Apple and Microsoft don't
provide any statement at all about public engagement, although they may
closely monitor the discussions in #1 and #2.

Step #1 has, intentionally and by design, largely been replaced by the
Incident Reporting requirements incorporated into the Root Policies of both
Mozilla and Google Chrome. That is, the incident reports, and the public
discussions of the incidents, serve to contemporaneously address issues,
identify remediations, and understand and identify how well the CA
understands the risks and is able to take meaningful corrective action.
These days, Step #1 is merely summarizing the incidents based on the
information in the incidents, and thus may not need the same lengthy
discussion as in the past, prior to the incident disclosure requirements (e.g.
StartCom, WoSign).

Step #2 is still widely practiced, as we've seen throughout a number of
recent and past events. Without wanting to put words into Mozilla's mouth,
certainly 

Re: Public Discussion of GlobalSign's CA Inclusion Request for R46, E46, R45 and E45 Roots

2021-01-27 Thread Ryan Sleevi via dev-security-policy
Hey Ben,

I know discussion here has been quiet, but in light of other threads going
on, I actually want to say I'm very supportive of GlobalSign's plan here,
and surprised they didn't call more attention to it, because it's something
to be proud of.

As I understand it, and happy to be corrected if I've made a mistake here,
GlobalSign is actively working to transition their hierarchy to one that
reflects a modern, good practice implementation. That's something they
should be proud of, and something all CAs should take note of. In
particular, they're transitioning to single-purpose roots (e.g. all TLS,
all e-mail), in order to reduce the risk to relying parties from roots that
serve cross purposes.

You're absolutely correct for calling out GlobalSign's past incidents, and
I don't want to appear to be suggesting that simply spinning up new roots
wipes the old slate clean, but this is a huge step in the right direction,
and does reflect good practice. I will disclose that GlobalSign reached
out, as have several other CAs (PKIoverheid, Sectigo, DigiCert), to invite
feedback in their design and discuss some of their challenges, and I think
that's also the kind of positive, proactive behavior worth encouraging.

Within the context of the incident reports, I do want to also acknowledge
that GlobalSign has consistently improved the quality of their reports. As
the OCSP issue shows, the level of detail and clarity in their
communications, and their providing artifacts, has been a huge help in
addressing and assuaging concerns. Rather than purely looking at the number
of incidents, and instead looking at how the incident reports are handled,
this is good progress. While some bugs did feel like they dragged out (e.g.
1575880), even then, GlobalSign was proactive in communicating both the
completion of past actions as well as next steps, with clear timetables.

GlobalSign has already proactively addressed some of the other things one
might expect from a "modern" CA - for example, supporting and enabling
automation for their customers, both "retail" and "managed CA/Enterprise"
(via AEG), and so I do see a positive trend in addressing some of the
systemic issues here.

That's not to say things are perfect: for example, the mixing of TLS
profiles (DV, OV, EV) in a single hierarchy means that there are still a
number of systemic risks, which we've seen as patterns both from GlobalSign
and other CAs, in terms of audits, audit scopes, and proper validation of
information. Still, I do think this is trending closer to what would be
good for all CAs to think about.

The sooner we can get to sunsetting GlobalSign's legacy roots, the better,
and so I definitely think it would be encouraging to see them help migrate
customers sooner than waiting on natural expiration alone, just like I
think continuing to shorten lifetimes would be another very positive sign.
However, I understand and recognize there are complexities here that may
prevent that, but I didn't want folks to think this was "as good as it
gets" ;)

On Mon, Jan 11, 2021 at 7:59 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This is to announce the beginning of the public discussion phase of the
> Mozilla root CA inclusion process for GlobalSign.
>
> See https://wiki.mozilla.org/CA/Application_Process#Process_Overview,
> (Steps 4 through 9).
>
> GlobalSign has four (4) new roots to include in the root store.  Two roots,
> one RSA and another ECC, are to support server authentication (Bugzilla Bug
> # 1570724 ) while
> two
> other roots are for email authentication, RSA and ECC (Bugzilla Bug #
> 1637269 ).
>
> Mozilla is considering approving GlobalSign’s request(s). This email begins
> the 3-week comment period, after which, if no concerns are raised, we will
> close the discussion and the request may proceed to the approval phase
> (Step 10).
>
> *A Summary of Information Gathered and Verified appears here in these two
> CCADB cases:*
>
>
> https://ccadb-public.secure.force.com/mozilla/PrintViewForCase?CaseNumber=0469
>
>
> https://ccadb-public.secure.force.com/mozilla/PrintViewForCase?CaseNumber=0596
>
> *Root Certificate Information:*
>
> *GlobalSign Root R46 *
>
> crt.sh -
>
> https://crt.sh/?q=4FA3126D8D3A11D1C4855A4F807CBAD6CF919D3A5A88B03BEA2C6372D93C40C9
>
> Download - https://secure.globalsign.com/cacert/rootr46.crt
>
> *GlobalSign Root E46*
>
> crt.sh -
>
> https://crt.sh/?q=CBB9C44D84B8043E1050EA31A69F514955D7BFD2E2C6B49301019AD61D9F5058
>
> Download - https://secure.globalsign.com/cacert/roote46.crt
>
> *GlobalSign Secure Mail Root R45 *
>
> crt.sh -
>
> https://crt.sh/?q=319AF0A7729E6F89269C131EA6A3A16FCD86389FDCAB3C47A4A675C161A3F974
>
> Download - https://secure.globalsign.com/cacert/smimerootr45.crt
>
> *GlobalSign Secure Mail Root E45 *
>
> crt.sh -
>
> 

Re: Root Store Policy Suggestion

2021-01-27 Thread Ryan Sleevi via dev-security-policy
On Wed, Jan 27, 2021 at 2:45 PM Burton  wrote:

> I included the remediation plan in the proposal because a CA will almost
> always include a remediation plan when it reaches the stage of a serious
> non-compliance investigation by root store policy owners.
>

Sure, but I was more asking: are you aware of any point in the past where
the remediation plan has been valuable, useful or appropriate? I'm not.

The expectation is continuous remediation, so any remediation plan at a
later stage seems too little, too late, right? The very intentional goal of
the incident reporting was to transition to a continuous improvement
process, where the CA was evaluated based on their
contemporaneous remediation to incidents, rather than waiting until things
get so bad they pile up and a remediation plan is used.

So I'm trying to understand what a remediation plan would include, during
discussion, that wouldn't (or, more explicitly, shouldn't) have been
included in the incident reports as they happened?

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Root Store Policy Suggestion

2021-01-27 Thread Ryan Sleevi via dev-security-policy
On Wed, Jan 27, 2021 at 10:11 AM Burton  wrote:

> Hello,
>
> The Mozilla root store policy should include a section that sets out time
> limit periods, in numbered stages, for discussions of non-compliant CAs.
> That way everything is fair, can't be disputed, and everyone knows when
> the discussion of the non-compliant CA will conclude. Then the decision
> from the root store policy owners will proceed shortly afterwards: either
> accept the remediation plan from the CA or proceed with the partial or
> complete removal of the CA from the root store.
>
> The time limits below should allow ample time for the discussion to
> take place between the CA, the community and the root store policy owners.
>
> Stage 1 (Discussion Period: *1 Week*):
>
>- Official notification to all that an investigation regarding the
>non-compliance issues of the CA has started.
>- Requests for additional information, etc.
>
> Stage 2 (Discussion Period: *4 Weeks*):
>
>- The CA produces a draft remediation plan.
>- The CA answers all questions from the root store policy owners and
>the community.
>- Requests for additional information, etc.
>
> Stage 3 (Discussion Period: *2 Weeks*):
>
>- Discussion of the final remediation plan proposed by the CA.
>- Discussion of whether to partially or fully distrust the CA.
>- Requests for any additional information.
>
> The decision by the root store policy owners.
>

From a proposal standpoint, could you explain a bit more about the goals of
a "draft remediation plan"?

The point and purpose of Incident Reports is that CAs continuously
improve, adopting meaningful improvements and remediations. By the point
at which a CA is being discussed, isn't that alone already evidence that
the CA has failed to remediate not just the incidents, but the systemic
root issues?

Recent replies such as
https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg14089.html
and
https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg14090.html
have captured how, given the nature of how PKI works, the only viable
remediation plans are those that transition to new roots. This is something
that, for example, Google Chrome explicitly recommends and encourages, at
https://g.co/chrome/root-policy , as a way of minimizing the risk to their
users.

Without expressing opinions on the current, ongoing discussion for Mozilla,
it may be useful to examine past CA incidents, as it seems that there were
several remediation plans posed by CAs in the past, yet consistently, every
single one of them failed to adequately address the very core issues, and
every one of them revealed actions the CA was already expected to be
taking, either in light of incidents or as part of the bare minimum
expectation of CAs.

I mention all of this because your emphasis on remediation plan does seem
to suggest you see there being viable paths forward, other than (at a
minimum), a transition from problematic roots to new roots, so I was hoping
you could expand, using perhaps past CA discussions rather than the current
one, as they provide ample evidence of problematic patterns.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Summary of Camerfirma's Compliance Issues

2021-01-25 Thread Ryan Sleevi via dev-security-policy
(Writing in a Google capacity)

I personally want to say thanks to everyone who has contributed to this
discussion, who have reviewed or reported past incidents, and who have
continued to provide valuable feedback on current incidents. When
considering CAs and incidents, we really want to ensure we’re considering
all relevant information, as well as making sure we’ve got a correct
understanding of the details.

After full consideration of the information available, and in order to
protect and safeguard Chrome users, certificates issued by AC Camerfirma SA
will no longer be accepted in Chrome, beginning with Chrome 90.

This will be implemented via our existing mechanisms to respond to CA
incidents, via an integrated blocklist. Beginning with Chrome 90, users
that attempt to navigate to a website that uses a certificate that chains
to one of the roots detailed below will find that it is not considered
secure, with a message indicating that it has been revoked. Users and
enterprise administrators will not be able to bypass or override this
warning.

This change will be integrated into the Chromium open-source project as
part of a default build. Questions about the expected behavior in specific
Chromium-based browsers should be directed to their maintainers.

To ensure sufficient time for testing and for the replacement of affected
certificates by website operators, this change will be incorporated as part
of the regular Chrome release process. Information about timetables and
milestones is available at https://chromiumdash.appspot.com/schedule.

Beginning approximately the week of Thursday, March 11, 2021, website
operators will be able to preview these changes in Chrome 90 Beta. Website
operators will also be able to preview the change sooner, using our Dev and
Canary channels, while the majority of users will not encounter issues
until the release of Chrome 90 to the Stable channel, currently targeting
the week of Tuesday, April 13, 2021.

When responding to CA incidents in the past, there have been a variety of
approaches taken by different browser vendors, determined both by the facts
of the incident and the technical options available to the browsers. Our
particular decision to actively block all certificates, old and new alike,
is based on consideration of the details available, the present technical
implementation, and a desire to have a consistent, predictable,
cross-platform experience for Chrome users and site operators.

For the list of affected root certificates, please see below. Note that we
have included a holistic set of root certificates in order to ensure
consistency across the various platforms Chrome supports, even when they
may not be intended for TLS usage. However, the
restrictions are placed on the associated subjectPublicKeyInfo fields of
these certificates.

Affected Certificates (SHA-256 fingerprint)

   - 04F1BEC36951BC1454A904CE32890C5DA3CDE1356B7900F6E62DFA2041EBAD51
   - 063E4AFAC491DFD332F3089B8542E94617D893D7FE944E10A7937EE29D9693C0
   - 0C258A12A5674AEF25F28BA7DCFAECEEA348E541E6F5CC4EE63B71B361606AC3
   - C1D80CE474A51128B77E794A98AA2D62A0225DA3F419E5C7ED73DFBF660E7109
   - 136335439334A7698016A0D324DE72284E079D7B5220BB8FBD747816EEBEBACA
   - EF3CB417FC8EBF6F97876C9E4ECE39DE1EA5FE649141D1028B7D11C0B2298CED


On Thu, Dec 3, 2020 at 1:01 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>  All,
>
> We have prepared an issues list as a summary of Camerfirma's compliance
> issues over the past several years. The purpose of the list is to collect
> and document all issues and responses in one place so that an overall
> picture can be seen by the community. The document is on the Mozilla wiki:
> https://wiki.mozilla.org/CA:Camerfirma_Issues.
> 
>
> I will now forward the link above to Camerfirma to provide them with a
> proper opportunity to respond to the issues raised and to ask that they
> respond directly in m.d.s.p. with any comments, corrections, inputs, or
> updates that they may have.
>
> If anyone in this group believes they have an issue appropriate to add to
> the list, please send an email to certifica...@mozilla.org.
>
> Sincerely yours,
> Ben Wilson
> Mozilla Root Program
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Summary of Camerfirma's Compliance Issues

2021-01-10 Thread Ryan Sleevi via dev-security-policy
On Sat, Jan 9, 2021 at 1:44 PM Ramiro Muñoz via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > That Camerfirma does not understand or express appreciation for this
> > risk is, to the extent, of great cause for concern.
>
> Dear Ryan,
>
> We are looking at the same data but we’re reading two completely different
> stories.
>
> We are reading the story of a small CA that had its own graduation
> journey, struggled, but eventually managed to emerge stronger from that
> journey.
>
> You are reading the story of a deceitful and unreliable CA that represents
> the worst danger to the entire community (you even wrote: “Camerfirma is
> as bad or worse than WoSign and DigiNotar”!), even if you yourself
> recognised that was your subjective opinion on the matter.


I am concerned about the attempts to so significantly dismiss the concerns
as merely subjective.

I’m saddened that Camerfirma does not recognize the seriousness of these
issues, despite this thread, as evidenced by this latest response.
Camerfirma continues to suggest “risk” as if it were some absolute that
should be the guiding pole.

The analogy, in the hopes that it helps Camerfirma understand, is a bit
like saying to a bank “I know we borrowed $100, and defaulted on that loan
and never paid it back, but we were a small CA, we’ve grown, and now we
would like to borrow $1 million. We cannot demonstrate our financials, nor
can we offer collateral, but we believe we are low risk, because it was
only $100”.

More concretely, Camerfirma is viewing this through the lens of what did go
wrong, and continuing to be blind to how that signals, from a risk
perspective, what can go wrong. They are asking to be judged based on
the direct harm to users by their (many, more than any CA I can think of)
failures, while similarly asking the community to disregard the
significance of that pattern of failures, and what it says about the
overall operations of the CA.

In short, Camerfirma is asking to be trusted implicitly and explicitly for
the future, and asking that their $100 default not hold back their $1m
loan. In banking, as in trust, this is simply unreasonable.

Some have suggested that “trust” is the ability to use past actions to
predict future outcomes. If you say you do X, and as long as I’ve known
you, you’ve done X, then when I say I “trust” you to do X, it’s an
indicator I believe your future actions will be consistent with those past
actions.

Camerfirma has, undisputed, shown a continuing multi-year pattern, one
that demonstrates both a failure to correctly implement requirements and a
failure to reasonably and appropriately respond to and prevent
future incidents. The incident responses, which Camerfirma would like to
assert are signs of maturity, instead show a CA that has continued to
operate below the baseline expectations for years.

Camerfirma would like the community to believe that they now meet the bare
minimum, as if that alone should be considered, and all of these past
actions disregarded because of this.

Yet the risk is real: that Camerfirma has not met the bare minimum at
present, and that Camerfirma is not prepared to continue to meet that
minimum as the requirements are improved over time. We have exhaustive
evidence of this being the case in the past, and the only assurance we
have that things are different now is Camerfirma’s management believing
that, this time, they have finally got it right. However, the responses on
even the most recent incidents continue to show that Camerfirma is
continuing to pursue the same strategy for remediation it has for years: a
strategy that has demonstrably failed to keep up with industry
requirements, and failed to address the systemic issues.

These are objective statements, demonstrated by the evidence presented, but
Camerfirma would like to present them as subjective, because they take
into account the full picture, and not merely the rosy, but misleading,
image that Camerfirma would like to present.

That these are persistent, sustained issues, without systemic change, is
something demonstrably worse than DigiNotar. Further, when considering the
capability for harm, and the sustained pattern of failure, it would be
foolish to somehow dismiss the risk, pretending as if Chekhov’s gun of
failure is not destined to go off in the next act.

At the core, Camerfirma is treating this as if any response from the
community should be directly proportional to the *individual* failures, as
many as they are, and is asking the community to ignore both the systemic
patterns and what it says about the future. This is abundantly clear when
they speak of risk: they apparently are unable to comprehend or acknowledge
what the patterns predict, and the risk of that, and thus ask such patterns
be disregarded entirely as somehow, incorrectly, being too subjective.

If these failures were to be plotted on a time series, there is no question
that the slope of this graph is worrying, and 

Re: Summary of Camerfirma's Compliance Issues

2021-01-05 Thread Ryan Sleevi via dev-security-policy
On Tue, Jan 5, 2021 at 9:01 AM Ramiro Muñoz via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In response to Ryan’s latest post, we want to provide the community with
> Camerfirma’s due responses and we hope this clears up any doubts that might
> have arisen.
>
> Ryan argument number 1: “These statements are ones that are sort of "true
> by degree". That is, if I was to dispute 1, Camerfirma would/could
> rightfully point out that they were *much* worse before, and so yes, it's
> true that they've improved.”
> 1.  Camerfirma has made huge improvements.
>
> Camerfirma has improved its operations to avoid errors. We have procedures
> in place to incentivize improvement at every level of our operations. We
> shall continue to work in this way in the future.
> We have worked to automate internal processes, also improving the
> management and level of attention on each incident. We have involved more
> experts in the process. Our goal has always been to meet the highest
> demands, including monitoring of CA activities that Mozilla has been
> implementing over the past years.
> In addition, Camerfirma publishes SSL certificates to CT logs: EV since
> May 2016 and OV since April 2017 (even though it became mandatory only
> from April 30, 2018). We have always been in favour of increasing transparency
> and providing useful information to the community.
> We have implemented several improvements during 2019 and 2020 (as we have
> already mentioned in previous reports):
>
> •   Decreased response time and increased attention to incidents. As a
> result, we have eliminated communications failures and we have avoided
> response delays (2020).
> •   Use of a centralized lint. Our three unconstrained subCA
> (Multicert, Infocert and Intesa Sao Paolo) with codification errors in
> issued certificates were added to our centralized lint. The first one since
> March 2019 and the other two since April 2019.
> •   Contractual cover of unilateral revocations with the subCAs has
> been clarified and streamlined. (October 2019).
> •   Mass revocation processes have been implemented (June 2020).
> •   Verification of CAA has been revised and reinforced with automatic
> procedures (June 2020).
> •   Control zlint has been implemented, both at pre-issuance and
> post-issuance (January 2019).
> •   Syntax control of the domain (August 2020)
> •   Additional automatic control to verify the correctness of AKI
> extension before certificate issuance has been implemented (April 2020).
>
> In addition, Camerfirma periodically evaluates the efficacy of these
> measures and every other improvement implemented.
>

I understand why Camerfirma feels it important to exhaustively list these
things, but I think the Wiki page, and its details, provides a much more
honest look at these. The adoption of Certificate Transparency before it
was mandatory, for example, does not highlight Camerfirma's leadership in
the area: it reveals how many issues Camerfirma's own management was
letting through its quality control processes, even though tools readily
existed to prevent this. The centralized lints for sub-CAs, for example,
were not imposed proactively to prevent incidents, but reactively in
response to a failure to supervise sub-CAs. Each of the improvements you
list, while an improvement, has been reactive in nature, coming only after
Camerfirma has been shown to repeatedly fail to meet the basic
expectations of CAs, fail to detect this, and have the community highlight
it.
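
As a concrete illustration of "tools readily existed": a blocking
pre-issuance gate is only a few lines around zlint. A rough sketch (the
zlint CLI is assumed to be installed, and its JSON output shape is an
assumption here):

    import json
    import subprocess
    import sys

    def lint_ok(pem_path: str) -> bool:
        # zlint prints a JSON map of lint name -> {"result": ...}.
        out = subprocess.run(["zlint", pem_path], capture_output=True,
                             text=True, check=True).stdout
        findings = [name for name, r in json.loads(out).items()
                    if r.get("result") in ("warn", "error", "fatal")]
        for name in findings:
            print("lint finding:", name, file=sys.stderr)
        return not findings  # gate issuance on a clean run

The point is not the specific tool, but that a check of this shape, run
before issuance, catches exactly the codification errors described above.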

Perhaps no greater example of this can be found than "Syntax control of the
domain", implemented in August 2020, despite issues having been raised in
2017 highlighting the importance of this requirement. [1]

What Camerfirma has done here is list the controls they've implemented in
response to their specific incidents, which, while important, overlooks
that many of these were basic expectations that would have prevented
incidents. This is akin to a student asking for full credit for a paper
they turned in a semester late, on the principle that they at least turned
it in, even after their (failing) grade had already been received.

[1]
https://groups.google.com/g/mozilla.dev.security.policy/c/D0poUHqiYMw/m/Pf5p0kB7CAAJ


>
> Ryan argument number 2: “Similarly, to point out how laughably bad the
> old system was does show that there is a degree of truth in 2”
> 2.  Camerfirma nowadays has a much more mature management system.
>
> Subjective opinions aside, the community bug reports from the last 4 years
> show the improvement and efficacy of controls in Camerfirma's systems and
> procedures:
>
>              CAMERFIRMA  InfoCert  MULTICERT  DIGITALSIGN  CGCOM  TOTAL
> BUGS 2020    1           3         2          0            1      7
> BUGS 2019    5           1         1          0

Re: Summary of Camerfirma's Compliance Issues

2020-12-29 Thread Ryan Sleevi via dev-security-policy
On Mon, Dec 28, 2020 at 6:35 AM Ramiro Muñoz via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wednesday, December 23, 2020 at 0:01:23 UTC+1, Wayne Thayer
> wrote:
> > On Sat, Dec 19, 2020 at 1:03 AM Ramiro Muñoz via dev-security-policy <
> > dev-secur...@lists.mozilla.org> wrote:
> >
> > > Hi Ben, Ryan, Burton and all:
> > >
> > > Camerfirma will present its claims based on a description of the
> problems
> > > found by associating the references to the specific bugs.
> > > After making a complete analysis of the bugs as presented by Ben,
> always
> > > considering that bugs are the main source of truth, we see that the
> > > explanations offered by Camerfirma could generally be better
> developed. We
> > > hope to make up for these deficiencies with this report.
> > >
> > >
> > It's worth pointing out that in April 2018, the Camerfirma '2016 roots'
> > inclusion request [1] was denied [2] after a host of issues were
> > documented. At that time it was made clear that ongoing trust in the
> older
> > roots was in jeopardy [3]. While some progress was made, the number,
> > severity, and duration of new and ongoing bugs since then remains quite
> > high. In this context, I don't find these new disclosures and
> commitments
> > from Camerfirma to form a convincing case for their trustworthiness.
> >
> > - Wayne
> >
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=986854
> > [2]
> >
> https://groups.google.com/g/mozilla.dev.security.policy/c/skev4gp_bY4/m/snIuP2JLAgAJ
> > [3]
> >
> https://groups.google.com/g/mozilla.dev.security.policy/c/skev4gp_bY4/m/ZbqPhO5FBQAJ
>
> Hi Wayne
>
> I understand your concern, but Camerfirma has indeed achieved huge
> improvements in terms of Mozilla’s policy compliance during recent years.
> Camerfirma nowadays has a much more mature management system. It’s true,
> some bugs have occurred, but looking at the bugs dashboard, our situation
> cannot be considered very different from that of other CAs.


So there are three specific claims here, as to why serious consideration of
distrust isn't warranted:

1. Camerfirma has made huge improvements
2. Camerfirma nowadays has a much more mature management system.
3. Camerfirma is not very different from other CAs.

These statements are ones that are sort of "true by degree". That is, if I
was to dispute 1, Camerfirma would/could rightfully point out that they
were *much* worse before, and so yes, it's true that they've improved.
Similarly, to point out how laughably bad the old system was does show
that there is a degree of truth in 2. And, as I laid out in my own post,
Camerfirma *is* not very different from other CAs - CAs that have been
distrusted, for not very different reasons than Camerfirma. I'm sure
Camerfirma meant "not much different than other *currently trusted*
CAs", but that's equally a degree of truth - many individual incidents
affected other CAs, even though the sheer volume *and nature* of Camerfirma
bugs is troubling.

This is an issue of judgement here, about whether or not the degree of
truth to these statements adequately reflects the very risk that continued
trust in Camerfirma poses. The sheer volume of bugs does help paint a
trendline, which is what Camerfirma is arguing here, but just as there's a
big difference between y = x + x, y = x * x, and y = x ^ x, there's a big
difference in the nature of the incidents, the quality of response, and the
(lack of) a meaningful rate of improvement that don't really inspire
confidence. Similarly, the risk in removing trust is exceedingly low, which
helps show the "risk to current users" (from trusting) versus the "risk of
breaking things" (by distrusting) is not a major consideration.

I would be curious if folks believe there is evidence that is being
overlooked from the Wiki, or believe that there is a different perspective
that can be reached from that data, and if they'd like to show how and why
they reached that conclusion. I've shared my perspective, but value
learning more if others do see things differently.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Summary of Camerfirma's Compliance Issues

2020-12-14 Thread Ryan Sleevi via dev-security-policy
Thanks Ramiro for the update.

I do want to make sure we're on the same page. Responding point-by-point to
the issues would probably be the least productive path forward. If there
are specific disagreements with the facts as presented, which were taken
from the Bugzilla reports, it would be good to highlight that. However, I
believe the intent is that the Bugzilla report is the source of truth, so
if there are details that were *not* on the incident reports, I would say
that's more concerning than it is reassuring.

I'm a bit concerned to see your latest reply still highlight the "increased
the PKI team", since that's been a sort of repeated refrain (e.g. Issue T,
Issue BB, Issue PP). I don't disagree that it's important to ensure that
there are adequate resources devoted to compliance _and_ development, but I
think it's also important to make sure that the solution is not to throw
more people at the problem.

While the integrity of the CA is perhaps not as obviously cut and dry, as
was the case with WoSign/StartCom, I do believe it's reasonable to ask
whether or not the CA has the skills, expertise, and understanding of the
systemic issues to effectively manage things going forward, especially when
we have seen the regular repetition of issues. This suggests more systemic
fixes are needed, but also suggests that there are more systemic flaws in
how things are operated, designed, and administered that do call into
question the trustworthiness of the current infrastructure.

If Camerfirma were approaching with a new CA/new certificate hierarchy, it
would be reasonable to ask why, given all of these incidents, Camerfirma
should be included, since it puts a lot of risk onto the community.
Regaining trust and reputation, once lost, is incredibly difficult, and
requires going above and beyond. I hope that, in your formal response, you
provide a holistic picture of how Camerfirma has changed its operations,
implementation, infrastructure, management process, policies, and quite
frankly, the use cases/subscribers that it targets with these certificates,
so that the community can make an informed decision about risk.

Similarly, in thinking about what this would be for a "new" PKI, and how
difficult it would be, given the current evidence, to see that as good for
users, it's also worth asking what should happen with the current PKI.
Should we continue to assume that the implementation of the EV guidelines
is correct, even though we have ample evidence of basic RFC 5280 and BR
violations? Should we consider sunsetting (e.g. with a notBefore
restriction) trust? Would it be reasonable to remove trust altogether?

For example, Camerfirma currently has four roots ([1], [2], [3], [4]). [1]
appears to have issued less than 3,500 still valid certificates, [2] / [3]
are only trusted for e-mail, and [4] seems to be a super-CA of sorts (with
sub-CAs operated by Intesa Sanpaolo, Infocert, Multicert, and Govern
d'Andorra). For the sub-CAs that aren't constrained/revoked, it seems
Intesa Sanpaolo has only issued ~2200 certificates, Infocert ~650, and
Multicert ~3000.

Is that accurate? I could have made a mistake here, since this was just a
manual inspection of every Camerfirma sub-CA and I could have clicked
wrong, but that suggests that there is... very little
practical risk here to removal, compared to the rather significant risk of
having a CA that has established a pattern of compliance and supervision
issues.

[1] https://crt.sh/?caid=869
[2] https://crt.sh/?caid=140
[3] https://crt.sh/?caid=8453
[4] https://crt.sh/?caid=1114
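
For anyone who wants to double-check these counts rather than clicking
through, a rough sketch (psycopg2 assumed) against crt.sh's public
read-only PostgreSQL interface; the schema names used here (certificate,
issuer_ca_id, x509_notafter) are assumptions based on crt.sh's published
schema:

    import psycopg2

    conn = psycopg2.connect(host="crt.sh", port=5432,
                            dbname="certwatch", user="guest")
    conn.set_session(readonly=True, autocommit=True)
    cur = conn.cursor()
    for ca_id in (869, 140, 8453, 1114):  # the four roots linked above
        cur.execute("""SELECT count(*) FROM certificate
                       WHERE issuer_ca_id = %s
                         AND x509_notafter(certificate) > now()""",
                    (ca_id,))
        print(ca_id, cur.fetchone()[0])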
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intermediate common name ambiguous naming

2020-12-11 Thread Ryan Sleevi via dev-security-policy
Sure, is there a more specific question I could answer? I'm not really sure
how to rephrase that, and CAs seem to understand it. [1]

[1] https://www.abetterinternet.org/documents/2020-ISRG-Annual-Report.pdf

On Fri, Dec 11, 2020 at 1:43 PM Burton  wrote:

> Ryan,
>
> Please could you expand a little more on this?
>
> "*Ideally, users would most benefit from simply having a random value in
> the DN (no details, period) for both roots *and* intermediates, as this
> metadata both can and should be addressed by CCADB"*
>
> Burton
>
> On Fri, 11 Dec 2020, 16:49 Ryan Sleevi,  wrote:
>
>> On Fri, Dec 11, 2020 at 11:34 AM Burton  wrote:
>>
>>> The bits of information included in the CN field (company name, version,
>>> etc.) set an intermediate apart from the rest, and an additional benefit
>>> of including these bits of information in the CN field of an intermediate
>>> was that a person could locate, with some accuracy at first glance, the
>>> CA the intermediate belongs to, and that can help in many different
>>> ways.
>>>
>>
>> I don't think the benefits here are meaningful, which is where I think we
>> disagree, nor do I think the use case you're describing is a good trade-off.
>>
>> Expecting the company name to be in the CN is, frankly, both unreasonable
>> and not required. The version is already there ("R3"), so I'm not sure why
>> you mention it, although it'd be totally valid for a CA to use a random
>> value there.
>>
>> I don't think "in many different ways" is very useful to articulate the
>> harm you see.
>>
>>
>>> The problem with the Let's Encrypt R3 intermediate is that the CN field
>>> is both unique and not unique. It's unique CN in the sense of the name "R3"
>>> and also only Let's Encrypt uses that name in an intermediate. It's not a
>>> unique CN because it's not random enough and doesn't have any other
>>> information to separate the intermediate if another CA decided to issue an
>>> intermediate with the same CN name "R3". Also from a first glance
>>> perspective it's hard to determine the issuer in this format without
>>> looking inside the intermediate subject.
>>>
>>
>> It would seem you're expecting the CN to be a DN. If that's the case, I
>> regret to inform you, you're wrong for expecting that and unreasonable for
>> asking for it :)
>>
>> If another CA wants to use the name R3, I see no issues with that: it is
>> the Distinguished Name in totality that addresses it, and we've already got
>> the issue you're concerned about (two CAs use the name "R3") identified by
>> the O.
>>
>>
>>> If the Let's Encrypt R3 intermediate CN were hypothetically "LE R3",
>>> that would be enough uniqueness relative to other intermediates, still
>>> very short naming, and from a first-glance perspective easier to
>>> determine the issuer.
>>>
>>
>>> The biggest problem for me is the lack of uniqueness of the
>>> intermediate with the "R3" naming on its own.
>>>
>>
>> Right, there's absolutely nothing wrong with two CAs using the same CN
>> for an intermediate, in the same way there's nothing wrong with two CAs
>> using the same CN for a leaf certificate (i.e. two CAs issuing certificates
>> to the same server). For your use case, it's important to use the
>> technology correctly: which is to use the DistinguishedName here, at a
>> minimum. However, I'm also trying to highlight that using the Distinguished
>> Name here is itself problematic, and has been for years, so if you're
>> changing from "use CN", you should really be "use CCADB" rather than "use
>> CN".
>>
>> Arguments such as "easy, at a glance, in the certificate viewer", which I
>> can certainly understand is how some have historically looked at things,
>> reflect in practice an antiquated idea that doesn't line up with operational
>> practice. I don't disagree that, say, in the 80s, this is how the ITU
>> thought things would be, or that Netscape originally thought this is how it
>> would be, but in the 25 years since SSL, we know it's not how things are.
>> For example, in order to satisfy your use case, with the Web PKI, we would
>> need to require that every CA revoke every Root/Intermediate every time
>> they rebrand, or transfer/sell roots, to replace them with "proper" O
>> values. As we see from this list itself, it's not uncommon for CAs to do
>> that, I think we'd both agree it's not desirable to require security
>> updates every time a CA rebrands (nor would it be operationally viable),
>> and so the assumption of "look at the DN" is, logically, flawed.
>>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intermediate common name ambiguous naming

2020-12-11 Thread Ryan Sleevi via dev-security-policy
On Fri, Dec 11, 2020 at 11:34 AM Burton  wrote:

> The bits of information included in the CN field (company name, version,
> etc.) set an intermediate apart from the rest, and an additional benefit
> of including these bits of information in the CN field of an intermediate
> was that a person could locate, with some accuracy at first glance, the
> CA the intermediate belongs to, and that can help in many different
> ways.
>

I don't think the benefits here are meaningful, which is where I think we
disagree, nor do I think the use case you're describing is a good trade-off.

Expecting the company name to be in the CN is, frankly, both unreasonable
and not required. The version is already there ("R3"), so I'm not sure why
you mention it, although it'd be totally valid for a CA to use a random
value there.

I don't think "in many different ways" is very useful to articulate the
harm you see.


> The problem with the Let's Encrypt R3 intermediate is that the CN field is
> both unique and not unique. It's a unique CN in the sense of the name "R3",
> and only Let's Encrypt currently uses that name in an intermediate. It's
> not a unique CN because it's not random enough and doesn't carry any other
> information to distinguish the intermediate if another CA decided to issue
> an intermediate with the same CN "R3". Also, from a first-glance
> perspective, it's hard to determine the issuer in this format without
> looking inside the intermediate subject.
>

It would seem you're expecting the CN to be a DN. If that's the case, I
regret to inform you, you're wrong for expecting that and unreasonable for
asking for it :)

If another CA wants to use the name R3, I see no issues with that: it is
the Distinguished Name in totality that addresses it, and we've already got
the issue you're concerned about (two CAs use the name "R3") identified by
the O.
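
To make that concrete: a minimal sketch, assuming the Python cryptography
library and a hypothetical local copy of the intermediate at "r3.pem",
showing why tooling should compare the full Distinguished Name rather than
the CN alone:

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    with open("r3.pem", "rb") as f:  # hypothetical path
        cert = x509.load_pem_x509_certificate(f.read())

    # The CN alone is ambiguous across CAs...
    print(cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value)

    # ...while the full DN carries the O (and any other attributes) that
    # disambiguate it, e.g. "CN=R3,O=Let's Encrypt,C=US"
    print(cert.subject.rfc4514_string())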


> If the Let's Encrypt R3 intermediate CN were hypothetically "LE R3", that
> naming would provide enough uniqueness from other intermediates, remain
> very short, and, from a first-glance perspective, make it easier to
> determine the issuer.
>

> The biggest problem for me is the lack of uniqueness of the intermediate
> with the "R3" naming on its own.
>

Right, there's absolutely nothing wrong with two CAs using the same CN for
an intermediate, in the same way there's nothing wrong with two CAs using
the same CN for a leaf certificate (i.e. two CAs issuing certificates to
the same server). For your use case, it's important to use the technology
correctly, which is to use the DistinguishedName here, at a minimum.
However, I'm also trying to highlight that using the Distinguished Name
here is itself problematic, and has been for years, so if you're changing
from "use CN", you should really be "use CCADB" rather than "use CN".

Arguments such as "easy, at a glance, in the certificate viewer", which I
can certainly understand is how some have historically looked at things,
are in practice an antiquated idea that doesn't line up with operational
practice. I don't disagree that, say, in the 80s, this is how the ITU
thought things would be, or that Netscape originally thought this is how it
would be, but in the 25 years since SSL, we know it's not how things are.
For example, in order to satisfy your use case, with the Web PKI, we would
need to require that every CA revoke every Root/Intermediate every time
they rebrand, or transfer/sell roots, to replace them with "proper" O
values. As we see from this list itself, it's not uncommon for CAs to do
that; I think we'd both agree it's not desirable to require security
updates every time a CA rebrands (nor would it be operationally viable),
and so the assumption of "look at the DN" is, logically, flawed.


Re: Intermediate common name ambiguous naming

2020-12-11 Thread Ryan Sleevi via dev-security-policy
On Fri, Dec 11, 2020 at 5:51 AM Burton via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The common name of the Let's Encrypt R3 intermediate certificate (
> https://crt.sh/?id=3479778542) is, in my opinion, short and ambiguous. It
> doesn't have any information in the common name that identifies the
> operator of the CA, "Let's Encrypt", which can cause confusion about who
> is running the CA.
>
> The intermediate certificate common name "R3" shouldn't be allowed. It's
> like the past root store naming that had ambiguous names such as "Root
> CA".
>
> If such common name naming were adopted by other CAs, it would be terrible
> to manage certificate stores and would cause chaos and confusion.
>
>  Burton


I see nothing wrong here, and believe it would benefit users if more CAs
adopted this.

I’m honestly shocked that the benefits aren’t self-evident, although I
appreciate Let’s Encrypt having documented them as well. What possible
reason could you have for raising concern? I cannot see any fathomable
reason for wanting to, say, repeat the organization name within the CN, nor
is there any need whatsoever to have any “human readable” strings there in
the CN, as long as there is a unique string.

Ideally, users would most benefit from simply having a random value in the
DN (no details, period) for both roots *and* intermediates, as this
metadata both can and should be addressed by CCADB. After all, we have
plenty of examples over the past 25 years where, for both roots and
intermediates, every bit of the DN becomes inaccurate over time (e.g. roots
are sold, companies rebrand, etc).
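
A minimal sketch, assuming the Python cryptography library, of what such an
opaque subject could look like; the single random CN attribute is purely
illustrative, not a requirement of any root program:

    import secrets

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    # An opaque, collision-resistant subject: no brand, no country, nothing
    # that goes stale when the CA rebrands or the root is sold. The
    # human-meaningful metadata would live in CCADB instead.
    subject = x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, secrets.token_hex(10)),
    ])
    print(subject.rfc4514_string())  # e.g. "CN=9f1c2b7e44a0d83b51fa"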

So, while I understand the "I can't look at the cert and tell who owns it"
argument, which I'm not sure you're making, but which is demonstrably
nonsense already, could you perhaps clarify more about why you don't like
it?


Re: Summary of Camerfirma's Compliance Issues

2020-12-10 Thread Ryan Sleevi via dev-security-policy
Hi Ben,

This is clearly a portrait of a CA that, like those that came before
[1][2][3][4], consistently and regularly fails to meet program
requirements, in a way that clearly demonstrates these are systemic and
architectural issues.

As with Symantec, we see a systematic failure to appropriately supervise
RAs and Sub-CAs.
As with Procert, we see systemic technical failures continuing to occur. We
also see problematic practices here of revocations happening without a
systematic examination of why, which leads them to overlook incident
reports.
As with Visa, we see significant issues with their audits that are
fundamentally irreconcilable. As called out in [5] (Issue JJ), short of
distrusting their certificates, there isn't a path forward here.

I'm concerned that there's been no response from Camerfirma, not even a
public acknowledgement, as is the norm. I wanted to give a week, even if
only to allow for a simple acknowledgement, since Mozilla Policy requires
that CAs MUST follow and be aware of discussions here, above and beyond
your direct communication with them pointing this out.

Given that there haven't been corrections proposed yet, is it appropriate
to begin discussing what next steps should be to protect users?

[1] https://wiki.mozilla.org/CA:PROCERT_Issues
[2] https://wiki.mozilla.org/CA:Visa_Issues
[3] https://wiki.mozilla.org/CA:Symantec_Issues
[4] https://wiki.mozilla.org/CA:WoSign_Issues
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=1583470#c3

On Thu, Dec 3, 2020 at 1:01 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>  All,
>
> We have prepared an issues list as a summary of Camerfirma's compliance
> issues over the past several years. The purpose of the list is to collect
> and document all issues and responses in one place so that an overall
> picture can be seen by the community. The document is on the Mozilla wiki:
> https://wiki.mozilla.org/CA:Camerfirma_Issues.
> 
>
> I will now forward the link above to Camerfirma to provide them with a
> proper opportunity to respond to the issues raised and to ask that they
> respond directly in m.d.s.p. with any comments, corrections, inputs, or
> updates that they may have.
>
> If anyone in this group believes they have an issue appropriate to add to
> the list, please send an email to certifica...@mozilla.org.
>
> Sincerely yours,
> Ben Wilson
> Mozilla Root Program


Announcing the Chrome Root Program

2020-12-02 Thread Ryan Sleevi via dev-security-policy
Writing in a Google capacity (See
https://wiki.mozilla.org/CA/Policy_Participants )

Recently, at the CA/Browser Forum 51 “virtual F2F” [1], the Chrome team
shared an announcement about a revamp to the Chrome Root Program, including
an updated policy available at https://g.co/chrome/root-policy, as well as
announcing a proposed initial Chrome Root Store,
https://g.co/chrome/root-store

These have not yet launched in Chrome, but as we begin landing code to
support these efforts, we wanted to make sure that the information was
available, especially for CAs that may be impacted by the change. However,
we also realize that members of this community may have questions about
this, about the Chrome team’s relationship with Mozilla in this community,
and what things will look like going forward. We wanted to provide answers
for some of these questions, as well as open a dialog for questions that
members of the community may have. CAs with questions are still asked to
e-mail us at chrome-root-authority-prog...@google.com.

[1]
https://cabforum.org/2020/10/21/minutes-of-the-ca-browser-forum-f2f-meeting-51-virtual-21-22-october-2020/#Google-Root-Program-Update


## What’s changing with Chrome?

For several months now, Chrome has been transitioning from using
OS-provided functions for verifying certificates to a cross-platform
certificate verifier built in Chrome. This has already launched for
ChromeOS and Linux, and has been rolling out to our stable channel for
macOS, with no unexpected incompatibility issues. On these platforms, we’ve
continued to maintain compatibility with OS-configuration of certificates,
which has been key to ensuring a seamless transition for users.


## Why is Chrome launching a Root Program and Store?

As we begin to look at bringing this new verifier to Chrome on other
platforms, the strategy of using the OS-provided root store has a number of
unfortunate tradeoffs that can impact the security for our users. Just as
Mozilla does with NSS [1] in order to keep Mozilla users safe, both Apple
and Microsoft also impose limits on trust and how it’s applied to CAs in
their Root Program in a way that is tied to their certificate verification
code and infrastructure. As captured by Mozilla’s FAQ regarding their Root
Program [2], the selection and management of CAs in a root store is closely
tied to the certificate verifier, its behaviors, and the ability to reason
about deploying new updates. Likewise, we’ll be rolling out the Chrome Root
Store to go with the new verifier on Chrome platforms. Our goal is still to
ensure this is a relatively seamless experience for users, although we
recognize that on some platforms, there will be some expected differences.

[1]
https://blog.mozilla.org/security/2019/02/14/why-does-mozilla-maintain-our-own-root-certificate-store/
[2]
https://wiki.mozilla.org/CA/FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F


## Is this a fork of the Mozilla Root Program?

No. The Mozilla Root Program has been and is the gold standard for Root
Programs for nearly two decades, reflecting Mozilla’s commitment to open
source, transparency, and community governance. The Module system of
governance, used throughout Mozilla products, has been hugely successful
for both Mozilla and the community. However, its decisions are also product
decisions that affect the security of Mozilla’s products and users, and in
particular Mozilla Firefox, and so it’s unsurprising that in the event of
conflict or escalations, these are resolved by the Firefox Technical
Leadership Module.

The Chrome Root Program and Policy similarly reflects a commitment to the
security of Google Chrome and its users, and involves leadership for Google
Chrome in settling conflicts and escalations. These processes and policies
do share similarities, but it’s best to think of them as separate, much
like the decision making for the Blink rendering engine is different from
the decision making for the Gecko rendering engine, even if they result in
many similar conclusions about what Web Platform features to support and
implement within their relative products.


## Is this a fork of the Mozilla Root Store?

No. Even though there’s substantial overlap between the Mozilla Root Store
and the Chrome Root Store, it would be incorrect to call these a fork. This
is similar to how there is significant overlap between the root stores of
Apple and Microsoft.

This substantial overlap is intentional. As called out in our policy, our
goal is about ensuring the security of our users, as well as minimizing
compatibility differences across our different Chrome platforms and between
different browsers, including those of Apple, Microsoft, and Mozilla.
Mozilla’s leadership here, with the establishment of the CCADB and the
public and transparent process and review of CAs, has been essential in
achieving those goals, and this would not be possible without Mozilla’s
leadership in this space.

We anticipate that there may be times when we reach 

Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2020-12-01 Thread Ryan Sleevi via dev-security-policy
On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> See responses inline below:
>
> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie 
> wrote:
>
> > Hi Ben,
> >
> > For now I won’t comment on the 398 day limit or the date which you
> propose
> > this to take effect (July 1, 2021), but on the ability of CAs to re-use
> > domain validations completed prior to 1 July for their full 825 re-use
> > period.  I'm assuming that the 398 day limit is only for those domain
> > validated on or after 1 July, 2021.  Maybe that is your intent, but the
> > wording is not clear (it's never been all that clear)
> >
>
> Yes. (I agree that the wording is currently unclear and can be improved,
> which I'll work on as discussion progresses.)  That is the intent - for
> certificates issued beginning next July--new validations would be valid for
> 398 days, but existing, reused validations would be sunsetted and could be
> used for up to 825 days (let's say, until Oct. 1, 2023, which I'd advise
> against, given the benefits of freshness provided by re-performing methods
> in BR 3.2.2.4 and BR 3.2.2.5).
>

Why? I have yet to see a compelling explanation from a CA about why
"grandfathering" old validations is good, and as you note, it undermines
virtually every benefit of the reduction until 2023.

Ben, could you explain the rationale why this is better than the simpler,
clearer, and immediately beneficial for Mozilla users of requiring new
validations be conducted on-or-after 1 July 2021? Any CA that had concerns
would have ample time to ensure there is a fresh domain validation before
then, if they were concerned.

Doug, could you explain more about why it's undesirable to do that?


> >
> > Could you consider changing it to read more like this (feel free to edit
> > as needed):
> >
> > CAs may re-use domain validation for subjectAltName verifications of
> > dNSNames and IPAddresses done prior to July 1, 2021 for up to 825 days
> > <in accordance with domain validation re-use in the BRs, section 4.2.1>.
> > CAs MUST limit domain re-use for subjectAltName verifications of dNSNames
> > and IPAddresses to 398 days for domains validated on or after July 1, 2021.
> >
>
> Thanks. I am open to all suggestions and improvements to the language. I'll
> see how this can be phrased appropriately to reduce the chance for
> misinterpretation.
>

As noted above, I think adopting this wording would prevent much (any?)
benefit from being achieved until 2023. I can understand that 2023 is
better than "never", but I'm surprised to see an agreement so quickly to
that being desirable over 2021. I suspect there's ample context here, but I
think it would benefit from discussion.


> > From a CA perspective, I don't have any major concerns with shortening
> the
> > domain re-use periods, but customers do/will.  Will there be a Mozilla
> blog
> > that outlines the security improvements with cutting the re-use period in
> > half and why July 2021 is the right time?

>
>
> Yes.  I'll prepare a blog post that outlines the security improvements to
> be obtained by shortening the reuse period. (E.g., certificates should not
> contain stale information.) July 2021 was chosen because I figured April
> 2021 would be too soon. It also allows CAs to schedule any needed system
> work during Q2/2021. In my opinion, Oct. 2023 is a considerably long tail
> for this change, and existing domains/customers should not be affected
> until then.


Right, my concern is that it's "too long" a tail. There's no real benefit
in July 2021 - the benefits aren't really met until Oct 2023.
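
A quick sanity check of that tail, assuming a validation performed on the
last day before the proposed cutoff and reused for the full 825 days the
BRs currently permit:

    from datetime import date, timedelta

    # A domain validated just before the 2021-07-01 cutoff could, under the
    # "grandfathering" wording, be reused for the full 825-day BR period.
    last_grandfathered_validation = date(2021, 6, 30)
    print(last_grandfathered_validation + timedelta(days=825))  # 2023-10-03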

As we saw with the last change to domain validation, there were a
significant number of CA incidents that resulted from this "grandfather"
approach. Happy to dig them up if it'd be helpful for review/discussion,
but it does seem like the public/Mozilla community has evidence of both it
not being in users' best interest, as well as not leading to good compliance
adherence.
still useful, I'm hoping folks can share why.

Equally, we know that once this date is set, there will likely be
substantial opposition from CAs to any further improvements/changes from
July 2021 through October 2023, since they're "in the midst" of
transitioning. That equally seems a good reason to set an expectation now:
with ample lead-time (e.g. 6 months), CAs can work with their customers to
obtain the fresh manual validations that are needed, or, if they're
security- and customer-focused, work on automating validation such that
such transitions have zero effective impact on their Subscribers going
forward.

I just can't see this approach working in 2020, knowing what we know now,
as well as knowing what the security goals are.

Re: CCADB Proposal: Add field called Full CRL Issued By This CA

2020-11-18 Thread Ryan Sleevi via dev-security-policy
On Wed, Nov 18, 2020 at 7:57 PM Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Kathleen,
>
> This introduces an interesting question: how might Mozilla want to see
> partial CRLs be discoverable? Of course, they are pointed to by the
> associated CRLdp but is there a need for a manifest of these CRL shards
> that can be picked up by CCADB?
>

What's the use case for sharding a CRL when there's no CDP in the issued
certificates and the primary downloader is root stores?
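
For what it's worth, a hedged sketch of what root-store consumption of
shards could look like (the manifest of shard URLs is an invented input;
CCADB defines no such format today):

    import urllib.request

    from cryptography import x509

    def revoked_serials(shard_urls):
        # Union the revoked serial numbers across a set of CRL shards.
        serials = set()
        for url in shard_urls:
            der = urllib.request.urlopen(url).read()
            crl = x509.load_der_x509_crl(der)
            serials.update(entry.serial_number for entry in crl)
        return serials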


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-18 Thread Ryan Sleevi via dev-security-policy
On Wed, Nov 18, 2020 at 8:19 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> We have carefully read your email, and believe we’ve identified the
> following
> important points:
>
> 1. Potential feasibility issue due to lack of path building support
>
> 2. Non-robust CRL implementations may lead to security issues
>
> 3. Avoiding financial strain is incompatible with user security.
>

I wasn't trying to suggest that they are fundamentally incompatible (i.e.
user security benefits by imposing financial strain), but rather, the goal
is user security, and avoiding financial strain is a "nice to have", but
not an essential, along that path.


> #1 Potential feasibility issue due to lack of path building support
>
> If a relying party does not implement proper path building, the EE
> certificate
> may not be validated up to the root certificate. It is true that the new
> solution adds some complexity to the certificate hierarchy. More
> specifically
> it would turn a linear hierarchy into a non-linear one. However, we
> consider
> that complexity as still being manageable, especially since there are
> usually
> few levels in the hierarchy in practice. If a CA hierarchy has utilized the
> Authority Information Access extension the chance that any PKI library can
> build the path seems to be very high.
>

I'm not sure which PKI libraries you're using, but as with the presumption
of "robust CRL" implementations, "robust AIA" implementations are
unquestionably the minority. While it's true that Chrome, macOS, and
Windows implement AIA checking, a wide variety of popular browsers
(Firefox), operating systems (Android), and libraries (OpenSSL, cURL,
wolf/yassl, etc etc) do not.

> Even if the path could not be built, this would lead to a fail-secure
> situation
> where the EE certificate is considered invalid and it would not raise a
> security issue. That would be different if the EE certificate would
> erroneously
> be considered valid.
>

This perspective too narrowly focuses on the outcome, without considering
the practical deployment expectations. Failing secure is a desirable
property, surely, but there's no question that a desirable property for CAs
is "Does not widely cause outages" and "Does not widely cause our support
costs to grow".

Consider, for example, that despite RFC 5280 having a "fail closed" approach
for nameConstraints (by MUST'ing the extension be critical), the CA/B Forum
chose to deviate and make it a MAY. The reason for this MAY was that the
lack of support for nameConstraints on legacy macOS (and OpenSSL) versions
meant that if a CA used nameConstraints, it would fail closed on those
systems, and such a failure made it impractical for the CA to deploy. The
choice to make such an extension non-critical was precisely because
"Everyone who implements the extension is secure, and everyone who doesn't
implement the extension works, but is not secure".
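
As a concrete illustration of that compromise, a minimal sketch (assuming
the Python cryptography library; the names and validity period are
invented) of a constrained CA certificate carrying nameConstraints as
non-critical, per the CA/B Forum MAY, rather than critical, per the RFC
5280 MUST:

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "Example Constrained CA")]
    )
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed purely for the sketch
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # Non-critical: legacy clients that don't implement nameConstraints
        # keep working (unconstrained) instead of failing closed.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("example.com")],
                excluded_subtrees=None,
            ),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )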

It is unfortunate-but-seemingly-necessary to take this mindset into
consideration: the choice between hard failure for a number of clients (but
guaranteed security) and less-than-ideal security but practical usability
is often made in favor of the latter, due to the socioeconomic
considerations of CAs.

Now, as it relates to the above point, the path building complexity would
inevitably cause a serious spike in support overhead, because it requires
that in order to avoid issues, every server operator would need to take
action to reconfigure their certificate chains to ensure the "correct"
paths were built to support such non-robust clients (for example, of the
OpenSSL forks, only LibreSSL supports path discovery, and only within the
past several weeks). As a practical matter, that is the same "cost" as
replacing a certificate, but with less determinism of behaviour: instead of
being able to test "am I using the wrong cert or the right one", a server
operator would need to consider the entire certification path, and we know
they'll get that wrong.

This is part of why I'm dismissive of the solution; not because it isn't
technically workable, but because it's practically non-viable when
considering the overall set of ecosystem concerns. These sorts of
considerations all factor in to the decision making and recommendation for
potential remediation for CAs: trying to consider all of the interests, but
also all of the limitations.


> #2 Non-robust CRL implementations may lead to security issues
>
> Relying parties using applications that don’t do a proper revocation check
> do
> have a security risk. This security risk is not introduced by our proposal
> but
> inherent to not implementing core functionality of public key
> infrastructures.
> The new solution rewards relying parties that properly follow the
> standards.
> Indeed, a relying party that has a robust CRL (or OCSP check)
> implementation
> already would not have to bear any additional security risk. They also
> would
> benefit from the point 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-16 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 16, 2020 at 8:40 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > Hi Nils,
> >
> > This is interesting, but unfortunately, doesn’t work. The 4-certificate
> > hierarchy makes the very basic, but understandable, mistake of assuming
> the
> > Root CA revoking the SICA is sufficient, but this thread already
> captures
> > why it isn’t.
> >
> > Unfortunately, as this is key to the proposal, it all falls apart from
> > there, and rather than improving security, leads to a false sense of it.
> To
> > put more explicitly: this is not a workable or secure solution.
>
> Hello Ryan,
>
> We agree that revoking SICA would not be sufficient and we mentioned that
> in section 2.3 in the above message.
>
> The new solution described in section 2.3, not only proposes to revoke
> SICA (point (iv)) but also to revoke ICA (point (ii)) in the 4-level
> hierarchy (RCA -> ICA -> SICA -> EE).
>
> We believe this makes a substantial difference.
>

Sorry, I should have been clearer that even the assumptions involved in the
four-certificate design were flawed.

While I do want to acknowledge this has some interesting edges, I don't
think it works as a general solution. There are two critical assumptions
here that don't actually hold in practice, and are amply demonstrated as
causing operational issues.

The first assumption here is the existence of robust path building, as
opposed to path validation. In order for the proposed model to work, every
server needs to know about SICA2 and ICA2, so that it can properly inform
the client, so that the client can build a correct path and avoid ICA /
SICA1. Yet, as highlighted by
https://medium.com/@sleevi_/path-building-vs-path-verifying-the-chain-of-pain-9fbab861d7d6
, very few libraries actually implement path building. Assuming they
support revocation, but don't support path building, it's entirely
reasonable for them to build a path between ICA and SICA and treat the
entire chain as revoked. Major operating systems had this issue as recently
as a few years ago, and server-side, it's even more widespread and rampant.
The only way to try to minimize (though not prevent) this issue is to
require servers to reconfigure the intermediates they supply to clients, in
order to try to serve a preferred path, but that has the same operational
impact as revoke-and-replace, under the existing, dominant approach.
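
A toy model of that distinction (illustrative only; real implementations
operate on X.509 objects, signatures, and policies, none of which is
modeled here):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Cert:
        subject: str
        issuer: str
        revoked: bool = False

    def verify_supplied_chain(chain, anchors):
        # What most stacks do: walk the server-supplied list, in order.
        for child, parent in zip(chain, chain[1:]):
            if child.issuer != parent.subject or parent.revoked:
                return False
        return chain[-1].issuer in anchors

    def build_path(leaf, pool, anchors, path=()):
        # Path building: depth-first search over every candidate issuer.
        # (A real builder must also handle loops, signatures, and policies.)
        path = path + (leaf,)
        if leaf.issuer in anchors:
            return path
        for candidate in pool:
            if candidate.subject == leaf.issuer and not candidate.revoked:
                found = build_path(candidate, pool, anchors, path)
                if found:
                    return found
        return None

    # ICA/SICA revoked; ICA2/SICA2 reissued with the same subject names.
    ee = Cert("EE", "SICA")
    sica = Cert("SICA", "ICA", revoked=True)
    sica2 = Cert("SICA", "ICA2")
    ica = Cert("ICA", "RCA", revoked=True)
    ica2 = Cert("ICA2", "RCA")
    anchors = {"RCA"}

    # The non-building client is stuck with the revoked pair it was handed:
    print(verify_supplied_chain([ee, sica, ica], anchors))  # False
    # A path-building client finds EE -> SICA2 -> ICA2 -> RCA instead:
    print(build_path(ee, [sica, sica2, ica, ica2], anchors) is not None)  # True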

The second assumption here is robust CRL support, which I also hope we can
agree is hardly the case in practice. In order
for the security model to be realized by your proposed 4-tier plan, the
clients MUST know about the revocation event of SICA/ICA, in order to
ensure that SICA cannot spoof messages from ICA. In the absence of this,
the risk is entirely borne by the client. Now, you might think it somehow
reasonable to blame the implementer here, but as the recent outage of Apple
shows, there are ample reasons for being cautious regarding revocation. As
already discussed elsewhere on this thread, although Mozilla was not
directly affected due to a secondary check (prevent CAs from signing OCSP
responses), it does not subject responder certificates to OneCRL checks,
and thus there's still complexity lurking.
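
In the same spirit as that secondary check, a small sketch (assuming the
Python cryptography library) of detecting the condition at the heart of
this thread, a CA certificate that is simultaneously a delegated OCSP
responder:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def is_delegated_responder_ca(cert: x509.Certificate) -> bool:
        # True if this is a CA certificate that also asserts id-kp-OCSPSigning.
        try:
            bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return False
        return bc.ca and ExtendedKeyUsageOID.OCSP_SIGNING in eku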

As it relates to the conclusions, I think the risk analysis is quite
misguided, because it overlooks these assumptions with handwaves such as
"standard PKI procedures", when in fact, they don't, and have never, worked
that way in practice. I don't think this is intentional dismissiveness, but
I think it might be an unsupported rosy outlook on how things work. The
paper dismisses the concern that "key destruction ceremonies can't be
guaranteed", but somehow assumes complete and total ubiquity for deployment
of CRLs and path verification; I think that is, at best, misguided.
Although Section 3.4 includes the past remarks about the danger of
solutions that put all the onus on application developers, this effectively
proposes a solution that continues more of the same.

As a practical matter, I think we may disagree on some of the potential
positive outcomes from the path currently adopted by a number of CAs, such
as the separation of certificate hierarchies, a better awareness and
documentation during CA ceremonies, and a plan for agility for all
certificates beneath those hierarchies. In effect, it proposes an
alternative that specifically seeks to avoid those benefits, but
unfortunately, does so without achieving the same security properties. It's
interesting that "financial strain" would be highlighted as a goal to
avoid, because that framing also seems to argue that "compliance should be
cheap" which... isn't necessarily aligned with users' security needs.

I realize this is almost entirely critical, and I hope it's taken as
critical of the proposal, not of the investment or interest in this space.
I think this sort of analysis is exactly the kind of analysis we can and
should hope that any and all CAs can and 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-16 Thread Ryan Sleevi via dev-security-policy
> be used to sign a valid OCSP response for ICA
> itself or
> for any sibling in the certificate hierarchy. Thus, allowing ICA to
> unrevoke
> itself. That is also true even if a new RCA2 is created and used to sign
> the
> new ICA2 because that would still make ICA valid because RCA is a trust
> anchor
> and the only way to “revoke” it is to remove it from trust stores.
>
> Contrarily to the 3-level hierarchy were the above concern applies, some
> affected CAs have a 4-level hierarchy such as the one shown in Figure 3
> (RCA ->
> ICA -> SICA -> EE), where the affected certificate with the
> id-kp-OCSPSigning
> EKU is named SICA.
>
> Since this is a 4-level hierarchy, the RCA in the 3-level hierarchy is now
> our
> ICA in this 4-level-hierarchy. Therefore, it is now possible to revoke the
> parent of the affected certificate SICA (its parent is ICA). The resulting
> operation does not require removing a trust anchor, which may take decades
> until everyone updates their systems, but truly mitigates the security
> risks
> immediately. Indeed, even if a malicious person used SICA2’s private key to
> sign an OCSP response to unrevoke SICA, since its parent (ICA) is also
> revoked
> and is not a trust anchor, the attack would not work.
>
> 3.3. Private keys may not have been properly destroyed
>
> An audited ceremony is not an absolute proof that no other copy of the
> private
> key exists.
>
> The community would benefit from revoking the parent (ICA) since some
> non-TLS
> issuing CAs may be unaudited and may have no other way to mitigate the
> security
> risk.
>
> 3.4. Offloading the pain to the community
>
> Anything other than a key destruction can be perceived as unfair since it
> may
> be asking everyone (web browsers, applications, etc.) to change and adapt
> because someone failed to follow the expectations. It would put more work
> on
> the community to fix the problem by modifying how applications perform
> certificate validation.
>
> However, the new solution only requires changes to a small subset of the
> hierarchy and does not offload the burden to the community. The new
> solution
> uses traditional PKI features such as CRLs and does not require the
> community
> to make any changes to certification validation checks.
>
> 3.5. Feasibility – Proof of Concept
>
> We provide a proof-of-concept script written in Python that generates the
> hierarchy in Figure 3 and test that the unmodified EE certificate is still
> considered valid under the modified hierarchy using OpenSSL. The code and
> documentation are available on GitHub [2].
>
> 3.6. Practicability, adherence, and workload for the parties
>
> The original solution requires that all Sub-CAs with problematic
> certificates
> at the SICA level engage in a very costly re-securization campaign.
> Non-concerned Sub-CAs have no work to do. In terms of security, it is
> important
> to stress that all Sub-CAs (the ones with problematic certificates and the
> ones
> without) remain at risk if a single Sub-CA does not do the proper work, or
> does
> it badly in terms of security.
>
> The new solution requires all Sub-CAs to act, but at a reduced cost, since
> they
> must roll only intermediate certificates, which are quite few usually. It
> has
> impacts and cost, but clearly lower. And there is no risk for any Sub-CA at
> all, as the re-securization is managed at the ICA level by the CA.
>
> 4. Overall conclusion
>
> In this document, a new solution was proposed and compared to the original
> solution. The feasibility of the new solution was verified by a
> proof-of-concept that people can use to test for themselves. Both the
> compliance issue and security risks were analyzed, without leading to any
> negative impacts while also reducing financial strain for some
> organizations.
> Depending on the exact and detailed nature of the hierarchy, the new
> solution
> has proven to bring operational advantages and is equivalent or improved
> with
> regard to security. We propose the community validate this new solution as
> well
> as our conclusions.
>
> This analysis reflects the authors' knowledge at the time of writing. In
> case
> concerns would be missing, the authors are willing to investigate them and
> complete the analysis. For that purpose, members of the community are
> encouraged to provide feedback in any form.
>
>
> References
>
> [1] "SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated
> Responder Cert", post by Ryan Sleevi on the Mozilla Dev Security Policy
> mailing
> list,
>
> https://groups.google.com/g/mozilla.dev.security.policy/c/EzjIkNGfVEE/m/XSfw4tZPBwAJ
&

Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-15 Thread Ryan Sleevi via dev-security-policy
On Sun, Nov 15, 2020 at 6:02 AM Dimitris Zacharopoulos 
wrote:

>
>
> > On 2020-11-15 1:04 a.m., Peter Bowen via dev-security-policy wrote:
> > On Sat, Nov 14, 2020 at 2:05 PM Ryan Sleevi via dev-security-policy
> >  wrote:
> >> So, perhaps now that we've had this conversation, and you've learned
> about
> >> potentially illegitimate revocations are a thing, but that they were not
> >> the thing I was worrying about and that you'd misunderstood, perhaps we
> can
> >> move back to the substantive discussion?
> > Going back to earlier posts, it seems like CAs could include a
> > statement in their CPS separate from key compromise that they may
> > revoke a certificate at any time for any reason or no reason at their
> > sole discretion.  That would allow the CA to choose to accept proofs
> > of key compromise that are not listed in the CPS based on their
> > internal risk methodologies, correct?  It does still have the "secret
> > document" issue, but moves it away from key compromise and makes it
> > clear and transparent to subscribers and relying parties.  This means
> > the CA could revoke the subscriber certificate because they have an
> > internal procedure that they roll a bunch of D16 and revoke any
> > certificate with a serial number that matches the result.   Or the CA
> > could revoke the certificate because they got a claim that the key in
> > the certificate is compromised but it came in a form not explicitly
> > called out, so they had to use their own judgement on whether to
> > revoke.
> >
> > Thanks,
> > Peter
>
> Thanks for chiming-in Peter,
>
> I have always considered this revocation reason as the absolutely "last
> resort" for CAs when it comes to revocation of Certificates. Especially
> for the revocation of end-entity Certificates for which there is a
> Subscriber Agreement attached, if the CA cannot properly justify the
> revocation based on other documented reasons that Subscribers can
> understand and be prepared for, it seems like a process failure and a
> possible reason for Subscribers to move against the CA. Much like the
> invocation of 9.16.3 should be avoided as much as possible, I believe
> the same applies for relying on such very wide unbound CPS statements.
>

Isn't this the "different problem" that Nick was raising concerns about?

That is, your reply here, talking about revocation for reasons outside of
the Subscriber Agreement, sounds even further afield than Issue 205. Am I
missing something on how it connects here?


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-15 Thread Ryan Sleevi via dev-security-policy
On Sat, Nov 14, 2020 at 11:52 PM Nick Lamb  wrote:

> On Sat, 14 Nov 2020 17:05:26 -0500
> Ryan Sleevi  wrote:
>
> > I don't entirely appreciate being told that I don't know what I'm
> > talking about, which is how this reply comes across, but as I've
> > stated several times, the _original_ language is sufficient here,
> > it's the modified language that's problematic.
>
> That part of my statement was erroneous - of the actual texts I've seen
> proposed so far I prefer this amended proposal from Ben:
>
> "Section 4.9.12 of a CA's CP/CPS MUST clearly specify its accepted
> methods that Subscribers, Relying Parties, Application Software
> Suppliers, and other third parties may use to demonstrate private key
> compromise. A CA MAY allow additional, alternative methods that do not
> appear in section 4.9.12 of its CP/CPS."
>
> I can't tell from here whether you know what you're talking about, only
> whether I know what you're talking about, and I confess after some
> effort I don't believe I was getting any closer.
>
> Still, I believe this language can be further improved to achieve the
> goals of #205. How about:
>
> "Section 4.9.12 of a CA's CP/CPS MUST clearly specify one or more
> accepted methods that Subscribers, Relying Parties, Application Software
> Suppliers, and other third parties may use to demonstrate private key
> compromise. A CA MAY allow additional, alternative methods that do not
> appear in section 4.9.12 of its CP/CPS."
>
>
> This makes clear that the CA must have at least one of these "clearly
> specified" accepted methods which ought to actually help Matt get some
> traction.
>

I can tell we're not making any progress here, as this doesn't really
address the concerns I've highlighted twice. This would be a regression
from the proposed language, and would be a further regression from the
original language here.


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-14 Thread Ryan Sleevi via dev-security-policy
On Sat, Nov 14, 2020 at 6:05 PM Peter Bowen  wrote:

> On Sat, Nov 14, 2020 at 2:05 PM Ryan Sleevi via dev-security-policy
>  wrote:
> >
> > So, perhaps now that we've had this conversation, and you've learned
> about
> > potentially illegitimate revocations are a thing, but that they were not
> > the thing I was worrying about and that you'd misunderstood, perhaps we
> can
> > move back to the substantive discussion?
>
> Going back to earlier posts, it seems like CAs could include a
> statement in their CPS separate from key compromise that they may
> revoke a certificate at any time for any reason or no reason at their
> sole discretion.  That would allow the CA to choose to accept proofs
> of key compromise that are not listed in the CPS based on their
> internal risk methodologies, correct?  It does still have the "secret
> document" issue, but moves it away from key compromise and makes it
> clear and transparent to subscribers and relying parties.  This means
> the CA could revoke the subscriber certificate because they have an
> internal procedure that they roll a bunch of D16 and revoke any
> certificate with a serial number that matches the result.   Or the CA
> could revoke the certificate because they got a claim that the key in
> the certificate is compromised but it came in a form not explicitly
> called out, so they had to use their own judgement on whether to
> revoke.
>

The reply was to me, but it sounds like your proposal was to address
Dimitris' concerns, not mine. Is that correct?

As I mentioned when trying to clarify my reply to Nick, the reason I think
the originally proposed language works, in a way that the modified proposal
does not, is that what you describe conflates the "we decide to revoke"
(the D16, in your example) with the "You gave us some details and we should
have revoked on that basis". That's a valuable property to have, by making
sure it's unambiguous whether they're explicitly limiting the methods of
key compromise, or not.

The goal is, and has been, the same, to make sure that CAs are not setting
an onerous burden on the community to report key compromise, while also
providing guidance to the community on how the CA will reasonably take
action on reports of key compromise, including those that aren't "perfect"
matches to one of the methods.

If I understand Dimitris' concern correctly, it was about ensuring CAs had
flexibility for discretion, and your language supports that discretion.
However, as you note, it still has the issue with non-public documents, in
the best case, or it leaves it ambiguous whether the CA is affirmatively
stating they will *reject* any report of key compromise that doesn't
exactly follow their methodology.

I believe Ben's original language (
https://github.com/BenWilson-Mozilla/pkipolicy/commit/719b834689949e869a0bd94f7bacb8dde0ccc9e4
) supports that, by both expecting CAs to define some set of explicit
methods (which was, I believe, Nick's original concern, if I've
understood), while also allowing them to state that they won't reject
reports outright, by disclosing they'll accept additional methods (which is
my concern). This disclosure also addresses Dimitris' concern, because it
allows the CA to accept other methods as needed, but sets the expectation
that the CP/CPS is the canonical place to document the methods you do
accept.


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-14 Thread Ryan Sleevi via dev-security-policy
On Sat, Nov 14, 2020 at 4:42 PM Nick Lamb  wrote:

> To the extent your preferred policy is actually even about issue #205
> (see later) it's not really addressing the actual problem we have,
> whereas the original proposed language does that.
>

I don't entirely appreciate being told that I don't know what I'm talking
about, which is how this reply comes across, but as I've stated several
times, the _original_ language is sufficient here, it's the modified
language that's problematic.


> > Yes, you're absolutely correct that I want to make sure a CA says "any
> > method at our discretion", but it's not about humiliation, nor is it
> > redundant. It serves a valuable purpose for reducing the risk of
> > arbitrary, undesirable rejections, while effectively improving
> > transparency and auditability.
>
> This boilerplate does not actually achieve any of those things, and
> you've offered no evidence that it could do so.


Perhaps you and I see evidence as different things. Could you help share
your knowledge of the audit reports and the auditing profession, to help
explain what evidence would be helpful here?

I think this reply is trying to capture that either you don't understand
how it's relevant, or you disagree, but I think it's more useful if you
explain why or how you believe it doesn't achieve these things. I've tried
to do you the same courtesy, by demonstrating how it does, but I can't tell
by such a dismissive reply if you're disagreeing (and if so, why), or if
you simply don't understand and would like to learn more.


> If anything it encourages CAs *not* to actually offer what we wanted: a
> clearly
> documented but secure way to submit acceptable proof of key compromise.
>

Again, this isn't the case, and I very much demonstrated why it wasn't in
my previous message with respect to CP/CPS reviews. You have to do more
than just saying "No", otherwise that just comes across as petulance.


> Why not? It will be easier to write only "Any method at our discretion"
> to fulfil this requirement and nothing more, boilerplate which
> apparently makes you happy but doesn't help the ecosystem.
>

The problem with this statement is that it directly contradicts what I
wrote in the message you're replying to. You're making a strawman here, and
I already demonstrated to you precisely why, and how, it allows us to go
from nothing to X, Y, Z and then further on to A, B. Your description of
what "makes me happy" is literally the opposite of what I said, which is
why I'm concerned that you may not understand my previous message, or how
this works. I'm happy to help clarify further, but I think you need to
elaborate more here so we can clear up the confusion. I'm sure it's equally
unfulfilling to hear me keep telling you you're wrong, so if you're finding
you're unsure of what I meant, I'm happy to answer questions, rather than
have to respond to incorrect statements.

Again, so that it's (hopefully) clear for you, my expression is to support
the original language, and that the modified language introduces weaknesses
that the original language doesn't.


> > But equally, I made the mistake for referring to PACER/RECAP without
> > clarifying more. My reply was to address that yes, there is the
> > existence of "potentially illegitimate revocations", but that it's
> > not tied to "secret" documents (which you misunderstood). And my
> > mistake in not clarifying was that the cases weren't addressed to the
> > CA, but related to a CA subscriber. You can read more about this at
> >
> https://www.techdirt.com/articles/20180506/13013839781/more-copyright-holders-move-up-stack-more-they-put-everyone-risk.shtml
>
> > Here, this is about revocations that harm security, more than help,
> > and you can read from that post more about why that's undesirable at
> >
> https://medium.com/@mattholt/will-certificate-authorities-become-targets-for-dmca-takedowns-7d111275897a
>
> We're in the discussion about issue #205 which is about proving _key
> compromise_


*sigh* So you've moved the goalposts in your reply, and when I make a good
faith engagement, you try to chide me about moving goalposts. You
introduced the notion of "potentially illegitimate revocations" as a
misunderstanding of what I said. In my reply, I clarified for you that they
are a thing, which you were dismissive of, and also clarified that you were
misunderstanding my point. You were further confused by the reply, and now
you want to chide me for clarifying for you why what you said was wrong.
Yes, you're absolutely correct that you introduced something offtopic, and
by replying to you, I let the conversation continue to be offtopic.

So, perhaps now that we've had this conversation, and you've learned about
potentially illegitimate revocations are a thing, but that they were not
the thing I was worrying about and that you'd misunderstood, perhaps we can
move back to the substantive discussion?

Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-13 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 13, 2020 at 6:11 PM Dimitris Zacharopoulos 
wrote:

>
>
> On 2020-11-13 7:17 μ.μ., Ryan Sleevi wrote:
>
>
>
> On Fri, Nov 13, 2020 at 2:55 AM Dimitris Zacharopoulos 
> wrote:
>
>> There is transparency that the CA has evaluated some reporting
>> mechanisms and these will be documented in the CPS. However, on an issue
>> like compromised key reporting, there is no single recipe that covers
>> all possible and secure ways for a third-party to report a key
>> compromise.
>
>
> Sure, and the original proposed language doesn't restrict this.
>
> The CA can still disclose "Email us, and we'll work it out with you", and
> that's still better than the status quo today, and doesn't require the
> CP/CPS update you speculate about.
>
>
> I understand the proposal to describe a different thing. Your proposal to
> accept an email is a different requirement, which is "how to communicate".
> I think the policy change proposal is to describe the details about what is
> expected from third-parties to submit in order to report a key compromise.
> Whether via email, web form or other means, I think this policy update
> covers *what* is expected to be submitted (e.g. via CSR, signed plain text,
> the actual private key).
>

Right, and that is what I meant as well. My point was the "we'll work it
out with you" being about explicitly stating an *open-ended what*, while
discouraging CAs, by making it detectable during CP/CPS reviews, from
intentionally choosing to *reject* any "what" they don't like. If a CA
doesn't state an open-ended what, it could mean that their plan is to do
exactly that, which would be bad, or it could just mean they forgot, which
is easily fixed, but also a warning sign.
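
As one concrete example of a commonly accepted "what" (a sketch only; the
marker string and file path are illustrative, not any CA's documented
requirement): demonstrating possession of a compromised key by signing a
CSR with it:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509.oid import NameOID

    # Load the key being reported as compromised (path is hypothetical).
    with open("compromised_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # Signing a CSR whose CN carries an agreed marker string proves
    # possession of the private key without disclosing the key itself.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "Proof of Key Compromise"),
        ]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())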


>
> I addressed this in my reply to Nick, but for your benefit, the "bad"
> thing here is a CA that lists, in their CP/CPS, "We will only accept using
> our convoluted API that requires you to submit on the lunar equinox", and
> then states "Well, that's just the minimum, we have this doc over here
> saying we'll also consider X, Y, Z".
>
> The modified language fully allows that. The original language would see
> methods X, Y, Z also added to the CP/CPS.
>
>
> I think one of us has misunderstood the policy update proposal. I believe
> that what you describe is already covered by the existing policy which
> states that the CA must disclose *how* they accept requests (via email,
> API, web form, etc), disclosed in CPS section 1.5.2.
>

No, I think like above, I may have chosen an unclear example, but I think
we understand it the same. "Convoluted API ... at the lunar equinox" was
trying to cover both the how it's delivered and the method of proving it,
but I can totally see now that it might be perceived as just the how it's
delivered, which wasn't the intent.

I'm talking about the "what method of demonstrating proof is accepted".
That covers both the how it's delivered and how it's demonstrated, and as I
went into further with Nick, my goal is to make sure "X, Y, Z" are listed
in the CPS, and not some side-doc that may or may not be audited and may or
may not be disclosed. The problem with the "minimum" approach is that it
doesn't adequately make clear that the "listed in a side doc" is one of the
several problems to be solved, with the goal being to bring it into the
CP/CPS. Having just the minimum required would still encourage CAs that go
above and beyond the minimum to put it off in a side doc, when the goal is
to get it into the CP/CPS, for reasons I expanded on to Nick.

I think you took it as discouraging a CA to go "above and beyond the bare
minimum", which I agree with you, that would be bad to discourage. But I
want CAs to be clear when they do go above and beyond the minimum, and to
be clear that they also recognize it's important to do so, since the
minimum is just that: the minimum. The minimum isn't meant to be "This is
how you should operate daily", but the "You should be better than this
regularly, but you absolutely cannot regress this threshold", and having
CAs explicitly state that they expect to constantly be improving (by
accepting other methods), helps show they understand that, as I think
you're arguing for here.


> This makes no sense to me. We're not discussing secrets here. Say a
>> third party reports a key compromise by sending a signed plaintext
>> message, and the CA has only indicated as accepting a CSR with a
>> specific string in the CN field. The updated proposal would allow the CA
>> to evaluate this "undocumented" (in the CPS) reporting method, check for
>> its credibility/accuracy and proceed with accepting it or not.
>>
>
> The original pr

Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-13 Thread Ryan Sleevi via dev-security-policy
Right, I can see that, by my failing to explicitly state that you were
misunderstanding my position in both parts of your previous mail, you may
have believed you correctly understood it, and so didn't pick up on all of
my reply.

To be very clear: "secret" document is not what you described, as a way for
a CA to hide nefariousness. In the context of this issue, I'm talking about
"secret" as in not part of the public commitments that can be examined via
the CP/CPS, and that are visibility, transparently part of the audit. These
are the "internal" documents that may or may not be audited (depending on
the audit scheme and the auditor's interpretation; ETSI is less transparent
here, but I'm sure they'll insist it's somehow the same). Think your
training manuals, your internal policies for incident management, your
policies on escalation or interpretations of the BRs (such as jurisdiction
requirements), etc. They are things that are conceptually and logically
part of a CA, and part of how a CA operates, but they aren't part of the
externally documented set. Or, they might not be explicitly linked to some
audit criterion that requires examining then; for example, change control
management would be, but policies around lunch and break times are unlikely
to be.

My point is to bring them "into the CP/CPS". As I mentioned in my previous
e-mail, although I can understand you might be confused because I did not
clearly state I disagreed with you, the point is to make it explicit
whether or not a CA is stating that they will *reject* anything they don't
like. If a CA says "I accept X, Y, Z", and you send them "A", and they say
"Sorry, we said we don't accept A, you decided to trust us anyway, boo hoo
for you", they're not entirely wrong. That's exactly what audits and
CP/CPSes are designed to make explicit. If the CA says "We accept X, Y, Z,
and any other method", then if someone sends them "A", there's a
presumption here of acceptance. If the CA decides to reject it then, they
bear the burden and presumption of explaining their discretion, especially
if it turns out to be a valid issue. Here, the presumption is on the CA to
demonstrate why their judgement and discretion is trustworthy, knowing that
if they rejected A without good cause, they will be viewed negatively.
Equally, we can decide whether the new policy should ACTUALLY be "We accept
X, Y, Z, A, and any other method" - making it clear the new minimum set.

However, while you think it redundant, it actually provides a number of
important details for CP/CPS reviews. For example, were a CA to omit "any
method", it easily raises a red flag to ask a follow-up. We see these sorts
of omissions all the time in CP/CPS reviews, particularly around domain
validation methods (3.2.2.4). So we'll ask during the review, and the CA
will say, "well, we go into more detail in this other document. Our
auditors looked at it, and agreed it met the requirements, but we didn't
put it in our CP/CPS". Now, the problem with accepting that answer is that
you don't know whether or not they actually did go into that detail in the
other document - we don't have it, and would have to ask for it. You can
see this playing out in Incident Reports today, where we'll ask a CA to
attach the old and new document (such as a training document, an
architectural layout, a testing matrix), where we try to understand what's
changing. However, we also don't know whether or not the auditor looked at
that document as well; the CA's version of the audit report would show
this, but the one we see does not.

The benefit here is that by getting it into the CP/CPS, it clearly becomes
part of the audit. It also becomes part of the CA's public commitments, and
then we can see if they're changing operational behavior midstream. Either
they updated their CP/CPS, and we see that they did in fact go through a
change, or they changed behavior and didn't update their CP/CPS, which will
get flagged in the audit. Both ways, we, the community, end up learning
that something changed, and can review that change to make sure it
is aligned with what we want.

Yes, you're absolutely correct that I want to make sure a CA says "any
method at our discretion", but it's not about humiliation, nor is it
redundant. It serves a valuable purpose for reducing the risk of arbitrary,
undesirable rejections, while effectively improving transparency and
auditability.

On Fri, Nov 13, 2020 at 8:12 PM Nick Lamb  wrote:

> On Fri, 13 Nov 2020 12:11:57 -0500
> Ryan Sleevi via dev-security-policy
>  wrote:
>
> > I want it to be explicit whether or not a CA is making a restrictive
> > set or not. That is, it should be clear if a CA is saying "We will
> > only accept these specific methods" or if the CA is saying "We will
> > accept these methods, plus any method at our discretion".

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-13 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 12, 2020 at 7:27 PM Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I am very much in favor of increasing transparency about the
> qualifications of the auditors providing audit statements for CAs in our
> program. However, I think that we need to spend more time figuring out a
> few things before adding such a requirement to our policy. Therefore, I
> think we should add this to our list of things to spend some focused
> time to figure out in early 2021, and move this item to the next version
> of Mozilla’s root store policy.
>

I think that's a reasonable place, but I do worry that there's a risk that
progress won't be made until then, and we'll find ourselves in a similar
situation of needing more time.

Given that this relates to audits and audit statements, does it make sense
to actually go forward with trying to find a narrowly tailored approach, with
a forward dated requirement, such as 12 months after 2.7.1?

The benefit to this approach is that it allows the expectation to be
factored into contractual agreements with auditors, which may be done
several years in advance, as well as provides the necessary signals to
audit firms, that this is a thing of real and concrete value that Mozilla
is committed to. It also allows Mozilla to assess progress, throughout the
year, on the full set of requirements. If we think about other changes that
have required some degree of details to be worked out (e.g. the SHA-1
deprecation, RSA-1024 deprecation, certificate lifetimes), the benefit of
setting a clear expectation ahead of time helps move the conversation from
debating "if" to discussing "how", which can result in more productive
engagement.

Equally, I think there's a lot that can be borrowed from how approaches to
transparency have been done in other regards. For example, with the
introduction of CAA, there was "merely" a requirement to disclose, which
later turned into more concrete criteria and objectives. Similarly, with
respect to organizational validations, an approach being taken right now
for the EV Guidelines is to disclose, with the intent of using such
disclosures to better inform, analyze, and develop concrete requirements,
based on those collective disclosures and interpretations.

In this regard, the principles from Mozilla's 1.0 Certificate Policy
provide a small minimum, along with some of the language from, say, the
FPKI, regarding technical competencies. The basis here is simply for the
auditor to *disclose* why they believe they meet the criteria or objectives
set. This avoids the need to address part of your questions (e.g. "How do
auditors apply to be considered qualified auditors"), because it leaves the
current policies and presumptions in place, but introduces the disclosure
requirement for why the auditor is relevant and reliable for the report.

This is why I took exception to ETSI's approach, which I worry your
questions jump to as well. The first step to answering those questions is
understanding the existing set of qualifications and how those relate to
Mozilla's goals and principles, without seeking to establish a full
"qualification regime" out of the gate. By focusing on disclosure, it
allows us to evaluate both "is this what we expect/want", as well as better
address some of the issues (e.g. "We see auditors with skill X provide much
better reports than skill Y; we should consider making X a required skill").

As it relates to Ben's proposal, I think the language he's proposed largely
works, and we can avoid the set of questions you're worried about, by
removing the language "Mozilla will review each individual auditor's
credentials and ensure that any Qualified Auditor has the collective set of
skills required by Section 8.2 of the Baseline Requirements". Additionally,
introducing a forward-dated requirement (e.g. 12 months) helps work through
any of the delivery issues Jeff highlighted, in a way that's mutually
workable. This ensures transparency, without adding a hurdle here. Issues
related to auditors that Mozilla may lose trust in are already covered in
Section 2.3 of the policy, with
https://github.com/mozilla/pkipolicy/issues/146 existing to provide greater
clarity, but I think this can be seen as a contingency.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-13 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 13, 2020 at 2:55 AM Dimitris Zacharopoulos 
wrote:

> There is transparency that the CA has evaluated some reporting
> mechanisms and these will be documented in the CPS. However, on an issue
> like compromised key reporting, there is no single recipe that covers
> all possible and secure ways for a third-party to report a key
> compromise.


Sure, and the original proposed language doesn't restrict this.

The CA can still disclose "Email us, and we'll work it out with you", and
that's still better than the status quo today, and doesn't require the
CP/CPS update you speculate about.


> For CAs that want to do the right thing with this flexibility, the
> original language Ben proposed seems to be problematic, which is why I
> highlighted it in the discussion.


As above


> The updated language keeps all the
> "good" things from the original language, and allows the CA to accept a
> reporting method that they may have not considered. Obviously, the
> logical thing is that once this new method is accepted, the CPS should
> be updated to include that additional method but that might take place
> later, after the report was accepted and certificates revoked.
>
> I can't think of a bad scenario with this added language.
>

I addressed this in my reply to Nick, but for your benefit, the "bad" thing
here is a CA that lists, in their CP/CPS, "We will only accept using our
convoluted API that requires you to submit on the lunar equinox", and then
states "Well, that's just the minimum, we have this doc over here saying
we'll also consider X, Y, Z".

The modified language fully allows that. The original language would see
methods X, Y, Z also added to the CP/CPS.


> This makes no sense to me. We're not discussing secrets here. Say a
> third party reports a key compromise by sending a signed plaintext
> message, and the CA has only indicated that it accepts a CSR with a
> specific string in the CN field. The updated proposal would allow the CA
> to evaluate this "undocumented" (in the CPS) reporting method, check for
> its credibility/accuracy and proceed with accepting it or not.
>

The original proposal also allows this, by saying exactly what you stated
here, but within the CP/CPS.
The modified proposal, however, keeps secret whether or not the CA has a
policy to unconditionally reject such messages.

You seem to be thinking the original proposal prevents any discretion; it
doesn't. I'm trying to argue that such discretion should be explicitly
documented, rather than kept secret by the CA, or allowing the CA to give
conflicting answers to different relying parties on whether or not they'll
accept such messages.
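
To make this concrete, here is a minimal sketch of what a CP/CPS-documented
proof-of-compromise method can look like, using the hypothetical
CSR-with-a-specific-string-in-the-CN method from the example above. This is
Python with the pyca/cryptography library; the file name and CN string are
illustrative assumptions, not anything any CA has actually published:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509.oid import NameOID

    # Load the allegedly compromised private key (PEM, unencrypted).
    with open("compromised_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # Sign a CSR whose CN carries the string the CA says it requires;
    # the CSR's self-signature demonstrates possession of the key.
    # (SHA-256 works for RSA/ECDSA keys; Ed25519 would pass None.)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME,
                               "Key compromise report, see CA ticket 12345"),
        ]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())

Whatever the recipe - this one, a signed plaintext message, or simply
"e-mail us and we'll work it out" - the original language just asks that it
be written down in the CP/CPS, so a reporter knows in advance whether their
demonstration can be unconditionally rejected.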


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-13 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 12, 2020 at 10:51 PM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, 12 Nov 2020 15:51:55 -0500
> Ryan Sleevi via dev-security-policy
>  wrote:
>
> > I would say the first goal is transparency, and I think that both
> > proposals try to accomplish that baseline level of providing some
> > transparency. Where I think it's different is that the concern
> > Dimitris raised about "minimums", and the proposed language here, is
> > that it discourages transparency. "We accept X or Y", and a secret
> > document suggesting "We also accept Z", makes it difficult to
> > evaluate a CA on principle.
>
> [...]
>
> > Yes, this does mean they would need to update their CP/CPS as they
> > introduce new methods, but this seems a net-positive for everyone.
>
> I think the concern about defining these other than as minimums is
> scenarios in which it's clear to us that key compromise has taken place
> but your preferred policy forbids a CA from acting on that knowledge on
> the grounds that doing so isn't "transparent" enough for your liking
> because their policy documents did not spell out the method which
> happened to be used.
>

I'm afraid this misunderstands things.

I want it to be explicit whether or not a CA is making a restrictive set or
not. That is, it should be clear if a CA is saying "We will only accept
these specific methods" or if the CA is saying "We will accept these
methods, plus any method at our discretion".

It's very easy for a CA to be transparent about the latter, and it's no
different than the so-called "any other method" that was 3.2.2.4.10 in the
BRs.

The problem with the updated language from Ben is that it makes it
impossible to distinguish whether or not the CP/CPS-disclosed methods are a
complete set (i.e. that the CA will reject a reported compromise because it
fails to meet their preferred method). The fact that there can be "secret"
documents which could say "We'll accept anything at our discretion" or that
it could be empty is a problem.

You seem to be taking it as if the CA has to provide exhaustive technical
details for each method. That's not the case. They need to disclose the
minimum (which might be "we accept anything at our discretion if e-mailed
to us"), but with the original language, they would functionally need to
disclose whether or not they will *reject* something not on that list, and
the proposed revision removes that.


>
> You seem to anticipate a quite different environment in which "secret
> documents" are used to justify revocations which you presumably see as
> potentially illegitimate. I haven't seen any evidence of anything like
> that happening, or of anybody seeking to make it happen - which surely
> makes a Mozilla policy change to try to prevent it premature.


I encourage you to make use of PACER/RECAP then.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-12 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 12, 2020 at 1:39 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Nov 12, 2020 at 2:57 AM Dimitris Zacharopoulos 
> wrote:
>
> >
> > I believe this information should be the "minimum" accepted methods of
> > proving that a Private Key is compromised. We should allow CAs to accept
> > other methods without the need to first update their CP/CPS. Do people
> > think that the currently proposed language would forbid a CA from
> > accepting methods that are not explicitly documented in the CP/CPS?
> >
> > I also think that "parties" is a bit ambiguous, so I would suggest
> > modifying that to follow the language of the BRs section 4.9.2
> > "Subscribers, Relying Parties, Application Software Suppliers, and other
> > third parties". Here is my proposed change:
> >
> > "Section 4.9.12 of a CA's CP/CPS MUST clearly specify the methods (at a
> > minimum) that Subscribers, Relying Parties, Application Software
> > Suppliers, and other third parties may use to demonstrate private key
> > compromise."
> >
> Dimitris,
> Instead, what about something like,  "Section 4.9.12 of a CA's CP/CPS MUST
> clearly specify its accepted methods that Subscribers, Relying Parties,
> Application Software Suppliers, and other third parties may use to
> demonstrate private key compromise. A CA MAY allow additional, alternative
> methods that do not appear in section 4.9.12 of its CP/CPS." ?
>

I understand and appreciate Dimitris' concern, but I think your original
language works better for Mozilla users in practice; the suggested
alternative, sadly, moves us back in a direction that the trend has been to
(carefully) move away from.

I would say the first goal is transparency, and I think that both proposals
try to accomplish that baseline level of providing some transparency. Where
I think they differ is that the "minimums" concern Dimitris raised, and the
proposed language here, discourage transparency. "We accept X or Y", plus a
secret document suggesting "We also accept Z", makes it difficult to
evaluate a CA on principle.

The second goal is auditability: whether or not the CP/CPS represents a
binding commitment to the community. This is why they exist, and they're
supposed to help relying parties not only understand how the CA operates,
but how it is audited. If a CA has a secret document Foo that discloses
secret method Z, is the failure to actually support secret method Z worth
noting within the audit? I would argue yes, but this approach would
(effectively) argue no.

I think your original language was better suited for this, and is
consistent with the view that the CP/CPS is both the primary document that
becomes part of the auditable controls, and is the primary document for
reviewing how well a CA aligns with Mozilla's practices, needs, and users.

Yes, this does mean they would need to update their CP/CPS as they
introduce new methods, but this seems a net-positive for everyone. Getting
CAs more attuned to updating their CP/CPS, to provide adequate
disclosure, seems like a net-win, even though I'm sure some CAs have
designed processes and contracts that make this difficult. However, the
past 10 years in this industry have shown the importance of being able to
update the CP/CPS, and clearly document changes, so I think we should be
increasingly skeptical of claims about the challenges.


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-09 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 9, 2020 at 11:53 AM Clemens Wanko via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ryan, hi all,
> well, isn’t the point to make here just, that there are multiple ways to
> ensure proper auditor qualification? No matter which way you like to go
> however, you must define the details of your regime: what is the criteria
> you require the auditor to fulfill, how do you organize that these criteria
> are checked, how do you ensure the quality of this process, how do you
> publish any results with regard to the auditors qualification, etc.


Clemens,

I see you doubled down on this approach, but I already responded
previously. I think this mindset to compliance does a great disservice to,
and also substantially misrepresents, the values and principles at play
here. That is, you've again anchored on the notion that compliance should
be a checklist - the exact approach and mindset that has led to countless
security issues and incidents. This is fundamentally the wrong approach,
and I think it bears calling out; it's worth emphasizing again that no, it
doesn't have to work the way you describe, and that framing
(unsurprisingly) overlooks the downsides.

If you examine the posts I referenced previously, you'll see this in
action, so I do hope you will. Your questions are a bit nonsensical here,
because they start from a conceptually bad foundation.



> Certainly, there are always alternatives. But the alternative should be
> clearly defined and described in order to allow an evaluation of all the
> options before taking a decision. In the current case I like to understand
> the alternatives you suggest for Mozilla.
>

You've turned this thread into a broader discussion of ETSI standards, and
while many criticisms could be fairly stated, I think that detracts from
this thread, and ignores the very thing it was started to discuss. I'd like
to reorient you back to the original purpose, of bringing transparency to
the auditor skills and competencies. It would seem your position is that
there should be no independent evaluation, by affected software vendors, of
the skills and competencies of the auditor or how they perform the audit.
You would like us to rely solely on the NAB and Supervisory Body for
carrying that out, even for a totally unrelated audit scheme (the eIDAS
Regulation), which can incidentally make use of largely-unrelated standards
for audits (the EN 319 403 series). You seem to argue that's superior to
any form of transparency or accountability, and seem to reject the notion
that, were auditors' skills and qualifications known and part of the report,
it can tangibly lead to improvements on the requirements.

Frankly, I think that idea is nonsense. We know, from the Supervisory
Bodies involved in the eIDAS Regulation, that there are vast differences in
quality and expectation from CABs. Browsers have shared those same concerns
to ENISA, and ACAB'c seems to recognize it as well, from its very
existence. Yet you seem to reject efforts to improve that, and suggest that
we simply "trust the process" that it'll get sorted out. We have ample
evidence, from Certificate Transparency, about how bringing transparency
reveals systemic issues and flaws. Yet, rather than embrace this for
audits, it's seemingly rejected with unsubstantiated roadblocks.

You've not responded, in substance, to my previous message, and so I'm not
sure how to interpret that, especially since this largely repeats the same
points.


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-07 Thread Ryan Sleevi via dev-security-policy
On Sat, Nov 7, 2020 at 9:21 AM Jeff Ward via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Sure Ryan, the answer is quite simple.  When I used the word "public" in
> my post, I should have been more clear as to the nuance of this concept.
> Public reports by definition are generally distributed (available to
> anyone).  When reports are restricted in distribution and only intended for
> a certain user or users as specified in the report, they are no longer
> public.  In each of the EY examples you reference, they all state in the
> last paragraph before the EY signature, "This report is intended solely for
> the information and use of GPO-CA and the Federal PKI Policy Authority and
> is not intended to be, and should not be, used by anyone other than GPO-CA
> and the Federal PKI Policy Authority."  When reports are not generally
> distributed and available to the general public, the specifics of
> individuals performing the audit will not be present.   When my firm issues
> reports for FPKI, we also provide the listing of individuals in a letter
> that is not public information.
>

Thanks Jeff,

This is useful for confirming that there is a clearly viable path forward
here. As you know, I appreciate the nuance here regarding public reporting,
as well as reports that are restricted in distribution but still made
public (e.g. as part of a regulatory function, such as the OIG in this
case). I think we agree in the substance: that this is possible, and are
merely working out the details here with regards to reporting.

For example, Mozilla could require that, in addition to the "traditional"
WebTrust reporting, Mozilla be named as part of a restricted distribution
report that contains these details. Alternatively, Mozilla could require
that, as part of participating within their root program, CAs ensure such
reports include as restricted distribution those Subscribers, Relying
Parties, and Application Vendors that would rely upon the CAs' services,
much like widely practiced in industry today with respect to cloud
providers.

Would you agree that it's fair to say that the issue is not fundamentally
that auditors cannot report such information, but rather the details of
how that report is delivered to Mozilla?


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-07 Thread Ryan Sleevi via dev-security-policy
On Sat, Nov 7, 2020 at 4:52 AM Dimitris Zacharopoulos 
wrote:

>
> I will try to further explain my thoughts on this. As we all know,
> according to Mozilla Policy "CAs MUST follow and be aware of discussions in
> the mozilla.dev.security.policy forum, where
> Mozilla's root program is coordinated". I believe Mozilla Root store
> managers' minimum expectations from CAs are to *read the messages and
> understand the content of those messages*. Right now, we have [1], [2],
> [3], [4], [5], [6], [7], [8], [9] policy-related threads opened up for
> discussion since October 15th.
>
> If every post in these threads contained as much information and
> complexity as your recent reply to Clemens,
>

This seems like a strawman argument, though I don’t think it’s intentional.

You’re arguing that “if things were like this hypothetical situation, that
would be bad”. However, they aren’t like that situation, as the evidence
you provided shows. This also goes back to the “what is your desired
outcome” question from your previous mail, and trying to work out a clear
call to action to address your concerns. Your previous message, especially
in the context of your (hypothetical) concern, reads like you’re suggesting
“Mozilla shouldn’t discuss policy changes with the community”. I think
we’re all sensitive and aware of the desire not to have too many parallel
discussions, which is exactly why Ben’s been only introducing a few points
a week, to facilitate that and make progress without overwhelming anyone.

As it relates to this thread, or any other thread, it seems the first order
evaluation for any CA is “Will the policy change”, followed by “What do I
need to do to meet the policy?”, both of which are still very early in this
discussion. You’re aware of the policy discussion, and you’re aware a
decision has not been made yet: isn’t that all you need at this point?
Unlike some of the other proposals, which require action by CAs, this is a
proposal that largely requires action by auditors, because it touches on
the audit framework and scheme. It seems like, in terms of expectations for
CAs to participate, discussing this thread with your auditor is the
reasonable step, and working with them to engage here.

Hopefully that helps. Your “but what if” is easily answered as “but we’re
not”, and the “this is a lot, what do I need to do” is simply “talk with
your auditor and make sure they’re aware of discussions here”. That seems a
very simple, digestible call to action?


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-06 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 6, 2020 at 6:08 PM Dimitris Zacharopoulos via
dev-security-policy  wrote:

> Can other people, except Ryan, follow this thread? I certainly can't. Too
> much information, too much text, too many assumptions, makes it impossible
> to meaningfully participate in the discussion.


These are complex topics, for sure, but that’s unavoidable. Participation
requires a degree of understanding, both of the goals to be achieved by
auditing and of the relevant legal and institutional frameworks for these
audits. So, admittedly, that’s not the easiest to jump into.

Could you indicate what you’re having trouble following? I don’t know that
we can do much about “too much information”, since that can be said about
literally anything unfamiliar, but perhaps if you would simply ask
questions, or highlight what you’d like to more about, it could be more
digestible?

What would you say your desired outcome from your email is? Accepting, for
a second, that this is a complex topic, discussion will inherently be
complex, and a response such as “make it simpler for me” is a bit
unreasonable.


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-06 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 6, 2020 at 12:00 PM Clemens Wanko via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ryan, hi all,
> three things to comment on that:
>
> 1.  How is the EU ETSI audit scheme conceived, and what is it intended to
> provide to Mozilla and the CA/Browser ecosystem?
> The European scheme of technical standards for CA/TSP developed by ETSI
> was made and is constantly adapted to integrate CA/Browser requirements. The
> made and is constantly adopted to integrate CA/Browser requirements. The
> main standards I am referring to are: ETSI EN 319 401, EN 319 411-1 and EN
> 319 411-2. With regard to auditor guidance specifically for the CA/Browser
> ecosystem, ETSI even developed and maintains the TS 119 403-2 “Part 2:
> Additional requirements for Conformity Assessment Bodies auditing Trust
> Service Providers that issue Publicly-Trusted Certificates”.
> For auditors using such technical standards, Europe has established a
> well-thought-through scheme based upon organizations (accredited audit
> bodies) whose job it is – amongst others – to ensure the auditors' (the
> natural persons') qualification, independence, etc. This scheme turned out
> to be of high trustworthiness, reliability, robustness, etc., and it has
> worked very well over here for years.
>

I'm sure something is lost in translation, but this does not address any of
the concerns raised. The ETSI standards here are not relevant; voluntary
standards, in and of themselves, are not binding. The fact that a standard
exists is not relevant, since what matters is how the standard is adhered
to and supervised.

Your subsequent statement is simply "Well, they're auditors, people trust
them to do this", without any form of meaningful engagement on the why.
"It's their job to ensure independence" - yes, but that says nothing about
the performance of the job, that your measure of independence is consistent
with another measure, etc. Your entire appeal here simply is "We say we're
trustworthy, so of course we're trustworthy", which would be farcical if it
wasn't presented so seriously.


> 2.  Transparency
> The transparency lies in the European scheme with independent skilled
> bodies auditing each other in order to ensure conformant implementation of
> technical standards as well as of operational requirements for audit bodies
> as well as for the single auditor (natural person).


This is, again, a restatement of "Trust us, because we declared we're
trustworthy". For something such as trust, there's no question of "but
verify". Your appeal to SDOs is not relevant here, because as you and I are
both aware, the SDO just makes (voluntary) standards, and doesn't enforce
them. And this gets to the heart of the matter: the question about how each
of these dimensions are interpreted is something you would prefer the CABs
to self-declare, and the suggestion here is "Sure, you can say that, but
you need to also help us understand why that's true".


> Not only are the requirements for the qualification/independence/etc. of
> the auditor (natural person) clearly defined within the ISO/ETSI standards,
> but so are the management requirements of the body, in order to ensure that
> they are kept upright – btw. BSI C5 controls, section 4.4.9 is meant
> similarly.
> Every accredited body is audited at least once a year by its NAB which
> checks conformant audit processing (e.g. according to ISO/IEC17065 in
> combination with ETSI EN 319 403).
>

This is the first time in this e-mail that you come remotely close to
establishing a "why". As we both know full well, the authority of the CAB
(and its assessment along the relevant axes in 319 403) is imbued by the
NAB. The authority of the NAB is imbued by EA, established by Regulation
765/2008. The root of trust is, simply stated, "The state mechanisms of the
European Union trust us, ergo, we are de facto trustworthy".

As a long-time participant in this group, I'm sure you can understand and
appreciate that the "Trust us, we're the government" argument rarely
improves trust. For CAs, we work on the basis of concrete evidence; after
all, this is why we have audits in the first place, even for government
CAs. The argument you're making here is that EA imbued sovereign member
states with the ability to establish their NABs, and the NABs are responsible for
supervising the CABs. If we have a complaint with the CAB, the argument
goes, we should complain to the NAB, consistent with the ISO 17065
framework that EN 319 403 is based on.

Yet this misguided argument overlooks a host of salient details, no doubt
because of the convenience in doing so. Unlike, say, the ISAE 3000 approach
to audits, the approach taken by this EA/NAB/CAB chain is much more
limited with respect to a host of professional duties, and only directly
applicable within the contents of 319 403, and the relevant (supervised)
reports, such as 319 411-1. Thus, there's easy opportunity for there to be
a divergence in needs, by a CAB asserting that they are qualified within
the aegis of 319 403/319 411-1 with the necessary 

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-06 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 6, 2020 at 12:31 PM Jeff Ward via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Audit reports, whether for WebTrust, financial statements, or other forms
> of engagement reports providing assurance to users of the information, do
> not include specific audit team members’ names.  Simply stated, this desire
> to include individual auditor’s qualifications in a public report is not
> consistent with any other compliance reporting methods or reporting
> requirements for CAs, or any other auditee for that matter.


Hi Jeff,

Could you help me square this statement with the practical examples? For
example, here's a report [1] from a WebTrust TF member, Ernst and Young,
demonstrating how this works in practice. You can see there hasn't been an
issue for years [2][3], even with the direct involvement of individuals on
the WebTrust TF in preparing such an audit.

So I'm having difficulty squaring your statement that they "do not", when
we've got examples from long-standing members of the WebTrust TF that
demonstrate that, in practice, they do. Could you help highlight what's
inconsistent here?

Alternatively, and as mentioned to ETSI, here's an example of an ISAE 3000
based audit scheme, similar to WebTrust, that also discloses the relevant
professional qualifications and skills to clients [4], as discussed in
4.4.8 and 4.4.9.

[1] https://www.oversight.gov/sites/default/files/oig-reports/18-19.pdf
[2]
https://www.oversight.gov/sites/default/files/oig-reports/Assessment%20Report%2019-12%20GPO%20Federal%20PKI%20Compliance.pdf
[3] https://www.oversight.gov/sites/default/files/oig-reports/17-27.pdf
[4]
https://www.bsi.bund.de/EN/Topics/CloudComputing/Compliance_Criteria_Catalogue/Compliance_Criteria_Catalogue_node.html


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-05 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 5, 2020 at 7:00 PM Wojtek Porczyk 
wrote:

> On Thu, Nov 05, 2020 at 11:48:20AM -0500, Ryan Sleevi via
> dev-security-policy wrote:
> > competency is with individuals, not organizations.
>
> [snip]
>
> > I find the appeal to redundancy and the NAB, and further, the suggestion
> of
> > GDPR, to be a bit insulting to this community. This opposition to
> > transparency fundamentally undermines the trust in ETSI provided audits,
> or
> > this appeal to the eIDAS scheme, which has limited relevance given it's a
> > fundamentally different audit scheme, beggars belief. If you'd like to
> > raise Fear/Uncertainty/Doubt about GDPR, I believe you owe this
> community a
> > precise and detailed explanation about what you believe, relevant to the
> > auditor professional experience, would be problematic.
>
> Not the original poster, but
> 1) I understand that the very general language of OP, which you dismiss as
> FUD, is because this is "consult your own lawyer" area;
> 2) contrary to what you have written, the onus is on Mozilla to demonstrate
> the compliance with GDPR and not the other way around.
>

I think this is significantly misstating things.

Without opining on the legal statements you've offered, the reality is that
fundamentally, if we cannot achieve reasonable evidence to trust the
auditor - specifically, the individuals that make up the audit team (both
the audit members and any relevant technical experts that may have
contributed) - then there's no objective reason to accept the audit. The
concerns you raised are secondary to that, and so the objection that
something *cannot* be provided simply means that it would limit the
utility, reliability, and trustworthiness of those audits and potentially
make them unacceptable. The need for objective, transparent understanding
about the skills and competencies is as paramount as the need for an
objective, transparent understanding about the CA itself, and for which
audits exist. The attestation of the CAB is demonstrably insufficient for
purpose, in the same way as a CA saying "I solemnly swear I'm up to good"
would not somehow invite trust.

I understand that the appeal is "Well, the NAB oversees the CAB, you see,
and EA oversees the NABs, and through the power of Regulation 765/2008,
trust is imbued", but that of course fails the very basic test of having
concrete, objective data for the consumer of the audit (e.g. Mozilla, but
also this broader community). It entirely outsources the supervision of a
core product security requirement, without providing any evidence that the
requirement is met. Rather than accept such risk, it becomes just
as reasonable to reject such audits.

I highlight this because to suggest something *cannot* be done is to
unquestionably invite the assertion of FUD. If there are *challenges* to
doing so, we should try to find solutions. But outright dismissal, or
suggesting somehow that transparency is not necessary because *handwave*
this other scheme *handwave* doesn't inspire confidence, nor does it
achieve the necessary transparency. This is clearly not insurmountable, as
discussed previously, and in the worst case, might mean that rather than
arbitrarily accepting audits, Mozilla would contractually negotiate with
potential auditors to ensure the necessary skills and qualifications were
met (e.g. as widely practiced in other industries).


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-05 Thread Ryan Sleevi via dev-security-policy
That sounds like a great idea, and a good complement to an
approach that several (Root) CAs have taken, of requiring all their
externally-operated subordinate CAs review their auditors and audit scope
with the root CAO, before finalizing the audit engagement.

An example of how the public review has been done in the past is available
at
https://groups.google.com/g/mozilla.dev.security.policy/c/sRJOPiePUvI/m/oRmRffaQmcYJ

On Thu, Nov 5, 2020 at 4:53 PM Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ben,
>
> We're concerned that these changes could unintentionally prevent many new
> auditors from being able to conduct audits, despite being Qualified
> Auditors.  The problem is that CAs will understandably be very hesitant to
> use a new auditor, as they cannot risk having an audit conducted, and then
> not accepted by Mozilla.  An unfortunate effect of that is that CAs will
> likely only consider auditors who have a previous history of being accepted
> as qualified.
>
> One potential solution is to allow CAs to ask Mozilla to "pre-vet" a
> potential auditor they would like to use.  Mozilla policy already has a
> process in the next paragraph to provide opinions on auditors who "do not
> fit the definition of Qualified Auditor" that could potentially be
> re-used.  Unfortunately, as the paragraph is currently worded, it can only
> be used for auditors who are *NOT* Qualified, which is really weird.
>
> It would be helpful to clarify that CAs MAY use the same procedure to work
> with Mozilla to determine whether a particular auditor is Qualified or not,
> prior to the beginning an engagement.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy 
> On
> > Behalf Of Ben Wilson via dev-security-policy
> > Sent: Tuesday, November 3, 2020 6:53 PM
> > To: Mozilla 
> > Subject: Policy 2.7.1: MRSP Issue #192: Require information about auditor
> > qualifications in the audit report
> >
> > Historically, Mozilla Policy required that CAs "provide attestation of
> their
> > conformance to the stated verification requirements and other operational
> > criteria by a competent independent party or parties with access to
> details of
> > the CA's internal operations."
> > https://wiki.mozilla.org/CA:CertificatePolicyV1.0  "Competency" was "for
> > whom there is sufficient public information available to determine that
> the
> > party is competent to judge the CA's conformance to the stated criteria.
> In the
> > latter case the 'public information' referred to should include
> information
> > regarding the party's:
> >
> >- knowledge of CA-related technical issues such as public key
> >cryptography and related standards;
> >- experience in performing security-related audits, evaluations, or
> risk
> >analyses; *and*
> >- honesty and objectivity."
> >
> > Today, section 3.2 of the MRSP
> > <https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#32-auditors>
> > states, "In normal circumstances, Mozilla requires that audits MUST be
> > performed by a Qualified Auditor, as defined in the Baseline Requirements
> > section 8.2," but under section 2.3
> > <https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#23-baseline-requirements-conformance>,
> > "Mozilla reserves the right to accept audits by auditors who do not meet
> the
> > qualifications given in section 8.2 of the Baseline Requirements, or
> refuse
> > audits from auditors who do."
> >
> > Section 8.2 of the Baseline Requirements states an auditor must have:
> > 1. Independence from the subject of the audit; 2. The ability to conduct
> an
> > audit that addresses the criteria specified in an Eligible Audit Scheme
> (see
> > Section 8.1); 3. Employs individuals who have proficiency in examining
> Public
> > Key Infrastructure technology, information security tools and techniques,
> > information technology and security auditing, and the third-party
> attestation
> > function; 4. (For audits conducted in accordance with any one of the ETSI
> > standards) accredited in accordance with ISO 17065 applying the
> requirements
> > specified in ETSI EN 319 403; 5. (For audits conducted in accordance
> with the
> > WebTrust standard) licensed by WebTrust; 6. Bound by law, government
> > regulation, or professional code of ethics; and 7. Except in the case of
> an
> > Internal Government Auditing Agency, maintains Professional
> Liability/Errors
> > & Omissions insurance with policy limits of at least one million US
> dollars in
> > coverage
> >
> > It is proposed in Issue #192
> >  that information about
> > individual auditor's qualifications be provided--identity, competence,
> > experience and independence. (For those interested as to this
> independence
> > requirement, Mozilla Policy v.1.0 required either disclosure of the
> auditor's
> > compensation or the 

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-05 Thread Ryan Sleevi via dev-security-policy
Hi Clemens,

I think this fundamentally misunderstands the proposal. As Ben mentioned,
and as countless other schemes have highlighted, competency is with
individuals, not organizations. While the eIDAS Scheme is relevant for
eIDAS qualification, I think it's important to highlight that browsers are
not, nor have they ever, relied upon the Qualified Trust List, nor on the
eIDAS Framework, as they are fundamentally separate trust frameworks. I
understand you see this redundant, but given that browsers are not relying
on the Supervisory Body function, since they are different trust
frameworks, it's absolutely vital for transparency to be able to provide
that information.

While something may be acceptable for the eIDAS Certification scheme,
audits exist to support those consuming them, and it's important to make
sure they are addressed. WebTrust equally has an approach like you
describe, but what is being suggested here is that it is not sufficient for
the need and use case, and I fully support that. I can understand that
professional accountability is scary, because it might mean that audits
that might be acceptable for eIDAS are rejected as unacceptable for
Mozilla, but again, that's a reflection of the different nature of trust
frameworks.

I find the appeal to redundancy and the NAB, and further, the suggestion of
GDPR, to be a bit insulting to this community. This opposition to
transparency fundamentally undermines the trust in ETSI-provided audits, and
this appeal to the eIDAS scheme, which has limited relevance given it's a
fundamentally different audit scheme, beggars belief. If you'd like to
raise Fear/Uncertainty/Doubt about GDPR, I believe you owe this community a
precise and detailed explanation about what you believe, relevant to the
auditor professional experience, would be problematic.

The suggestion here that this is somehow unique or untoward is deeply
concerning. I note, for example, that BSI's own C5 controls are designed
around transparency, and Section 4.4.9 on auditor qualification similarly
includes provisions for transparency.

Without making an appeal to either the NAB or the Supervisory Body, both of
which are relevant for eIDAS but not acting in a function for browser trust
frameworks (nor can they), what alternative would you propose to help
community members here have verifiable evidence and confidence in the
auditor's qualifications?

On Thu, Nov 5, 2020 at 10:54 AM Clemens Wanko via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ben,
> in order to avoid, for every single audit, the compilation work on the
> auditor's (in person) qualification, independence, etc., as well as the
> need to crosscheck the statements made, that was covered for the EU
> ETSI/eIDAS scheme by the accreditation of the body (organization; example:
> I am member/employee of TUV Austria CERT as accredited body) employing and
> using that auditor (in person) for a specific audit. The body will
> receive/keep its accreditation only if it proves, in annual audits with
> its accreditation body, that the following auditor-related aspects were
> covered in the audit projects performed throughout the past audit period:
>
> ISO/IEC 17065 - I listed the relevant chapters only:
> 6 Resource requirements
> 6.1 Certification body personnel
> 6.1.1 General
> 6.1.2 Management of competence for personnel involved in the certification
> process
> 6.1.3 Contract with the personnel
> 6.2 Resources for evaluation
> 6.2.1 Internal resources
> 6.2.2 External resources (outsourcing)
>
> …amended by guidance documentation in the following areas:
> Annex A (informative) Principles for product certification bodies and
> their certification activities
> A.1 General
> A.2 Impartiality
> A.3 Competence
> A.4 Confidentiality and openness
> A.4.1 General
> A.4.2 Confidentiality
> A.4.3 Openness
> A.4.4 Access to information
> A.5 Responsiveness to complaints and appeals
> A.6 Responsibility
>
> For ETSI/eIDAS auditors the ISO/IEC17065 is detailed by the following
> mandatory ETSI EN 319 403 (updated version: ETSI EN 319 403-1)
> requirements. I listed 

Re: Policy 2.7.1: MRSP Issue #153: Cradle-to-Grave Contiguous Audits

2020-11-04 Thread Ryan Sleevi via dev-security-policy
(Aside: Congrats on the new e-mail address)

The question here is what does "the grave" mean. A common response from CAs
is "Oh, we stopped issuing TLS certificates from that X years ago, that's
why we don't have audits this year", even though a given root (**or**
subordinate) is still included/transitively trusted.

The goal is to capture that the obligation of a CA exists until they are
fully removed, or until they destroy the key.

This also addresses another concern that we saw with the SHA-1 deprecation,
in which CA's would give notice on a Monday "Please remove our root
certificate", and then expect that, by Tuesday, they could ignore Mozilla
policy requirements and the BRs. That doesn't work, because unless/until
the CA is fully removed, Mozilla users, and the broader ecosystem, are at
risk. So this is intended to prevent that scenario, by highlighting that
merely asking for removal isn't sufficient, the removal needs to be
completed. And, if that takes too long, the CA has an option to destroy the
key (e.g. if they're going out of business).

For the situation you describe, of transference of key material
control/ownership, that responsibility still continues, plus the added
intersection of the Section 8 requirements regarding such transfer. The new
organization is responsible for providing those continuing audits, or, if
they don't like that obligation, the destruction of key material. There's
still "devious" things that can happen; e.g. old-CA notifies Mozilla on
Monday, Mozilla approves on Tuesday, new CA requests removal on Wednesday,
new CA begins shenanigans on Thursday, it's reasonable to ask whether or
not old-CA knew that new-CA was going to get up to mischief and what the
implications are. But that's stuff that would nominally get sorted out on
Tuesday.

Note that some of this discussion comes from 2 years ago in the CA/B Forum
in Shanghai, at
https://cabforum.org/2018/10/18/minutes-for-ca-browser-forum-f2f-meeting-45-shanghai-17-18-october-2018/#28-Audit-requirements-over-the-lifecycle-of-a-root

On Wed, Nov 4, 2020 at 10:14 AM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I reviewed the associated GitHub commentary on the following change:
>
> "Full-surveillance period-of-time audits MUST be conducted and updated
> audit
> information provided no less frequently than **annually** until the CA
> certificate is no longer trusted by Mozilla's root store. Successive audits
> information provided no less frequently than **annually** from the time of
> CA key pair generation until the CA certificate is no longer trusted by
> Mozilla's root store or until all copies of the CA private key have been
> completely destroyed, as evidenced by a Qualified Auditor's key destruction
> report, whichever occurs sooner."
>
> and I'm having difficulty understanding why there is a new stipulation to
> allow for key destruction reports to release a CA from the obligation of
> annual audits for its CA certificates. Is the intent to specify that if the
> key material and operations for a given CA is transferred to another
> organization, the obligation to have annual audits for the original
> organization no longer stands, or is there some other reason for the
> addition of this language?
>
> Thanks,
> Corey
>
> On Thursday, October 15, 2020 at 5:00:49 PM UTC-4, Ben Wilson wrote:
> > This issue #153, listed here:
> > https://github.com/mozilla/pkipolicy/issues/153, is proposed for
> resolution
> > with version 2.7.1 of the Mozilla Root Store Policy. It is related to
> Issue
> > 139 <https://github.com/mozilla/pkipolicy/issues/139> (audits required
> even
> > if not issuing).
> >
> > The first paragraph of section 3.1.3 of the MRSP would read:
> >
> > Full-surveillance period-of-time audits MUST be conducted and updated
> audit
> > information provided no less frequently than *annually* from the time of
> CA
> > key pair generation until the CA certificate is no longer trusted by
> > Mozilla's root store or until all copies of the CA private key have been
> > completely destroyed, as evidenced by a Qualified Auditor's key
> destruction
> > report, whichever occurs sooner. Successive period-of-time audits MUST
> be
> > contiguous (no gaps).
> > Item 5 in the fifth paragraph of section 7.1 of the MRSP (new root
> > inclusions) would read:
> >
> > 5. an auditor-witnessed root key generation ceremony report and
> contiguous
> > period-of-time audit reports performed thereafter no less frequently
> than
> > annually;
> >
> > The proposed language can be examined further in the following commits:
> >
> >
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/0d72d9be5acca17ada34cf7e380741e27ee84e55
> >
> >
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/888dc139d196b02707d228583ac20564ddb27b35
> >
> > Or here:
> >
> https://github.com/BenWilson-Mozilla/pkipolicy/blob/2.7.1/rootstore/policy.md
> >
> > Thanks in advance for your comments,
> >
> > Ben
> 

Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-10-30 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 30, 2020 at 12:38 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2020-10-30 16:29, Rob Stradling wrote:
> >> Perhaps add: "And also include any other certificates sharing the same
> >> private/public key pairs as certificates already included in the
> >> requirements."  (this covers the situation you mentioned where a
> >> self-signed certificate shares the key pair of a certificate that chains
> >> to an included root).
> >
> > Jakob,
> >
> > I agree that that would cover that situation, but your proposed language
> goes way, way too far.
> >
> > Any private CA could cross-certify a publicly-trusted root CA.  How
> would the publicly-trusted CA Operator discover such a cross-certificate?
> Why would such a cross-certificate be of interest to Mozilla anyway?  Would
> it really be fair for non-disclosure of such a cross-certificate to be
> considered a policy violation?
>

I agree with Rob that, while the intent is not inherently problematic, I
think the language proposed by Jakob is problematic, and might not be
desirable.


> How would my wording include that converse situation (a CA not subject
> to the Mozilla policy using their own private key to cross sign a CA
> subject to the Mozilla policy)?
>
> I do notice though that my wording accidentally included the case where
> the private key of an end-entity cert is used in as the key of a private
> CA, because I wrote "as certificates" instead of "as CA certificates".


Because "as certificates already included in the requirements" is ambiguous
when coupled with "any other certificates". Rob's example here, of a
privately-signed cross-certificate *is* an "any other certificate", and the
CA who was cross-signed is a CA "already included in the requirements"

I think this intent to restate existing policy falls in the normal trap of
"trying to say the same thing two different ways in policy results in two
different interpretations / two different policies"

Taking a step back, this is the general problem with "Are CA
(Organizations) subject to audits/requirements, are CA Certificates, or are
private keys", and that's seen an incredible amount of useful discussion
here on m.d.s.p. that we don't and shouldn't relitigate here. I believe
your intent is "The CA (Organization) participating in the Mozilla Root
Store shall disclose every Certificate that shares a CA Key Pair with a CA
Certificate subject to these requirements", and that lands squarely on this
complex topic.

A different way to achieve your goal, and to slightly tweak Ben's proposal
(since it appears many CAs do not understand how RFC 5280 is specified) is
to take a slight reword:

"""
These requirements include all cross-certificates that chain to a CA
certificate that is included in Mozilla’s CA Certificate Program, as well
as all certificates (e.g. including self-signed certificates and non-CA
certificates) issued by the CA that share the same CA Key Pair. CAs must
disclose such certificates in the CCADB.
"""

However, this might be easier with a GitHub PR, as I think Wayne used to
do, to try to make sure the language is precise in context. It tries to
close the loophole Rob is pointing out about "who issued", while also
trying to address what you seem to be concerned about (re: key reuse). To
Ben's original challenge, it tries to avoid "chain to", since CAs
apparently are confused at how a self-signed certificate can chain to
another variation of that self-signed certificate (it's a valid path, it's
just a path that can be eliminated as a suboptimal path during chain
building)
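
To make the "share the same CA Key Pair" test concrete, here is a minimal
sketch - Python with the pyca/cryptography library, file names illustrative
- of how one can check whether two certificates (self-signed, cross-signed,
or otherwise) are bound to the same key pair, by hashing the
SubjectPublicKeyInfo rather than comparing subjects, issuers, or serials:

    import hashlib

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def spki_sha256(cert: x509.Certificate) -> str:
        # Two certificates with the same SubjectPublicKeyInfo hash carry
        # the same key pair, regardless of subject, issuer, serial
        # number, extensions, or who signed them.
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return hashlib.sha256(spki).hexdigest()

    with open("included_ca.pem", "rb") as f:
        included = x509.load_pem_x509_certificate(f.read())
    with open("candidate.pem", "rb") as f:
        candidate = x509.load_pem_x509_certificate(f.read())

    # True => "candidate" would fall under the proposed disclosure text.
    print(spki_sha256(included) == spki_sha256(candidate))

This is also why the key pair (rather than the certificate) is the natural
unit here: the self-signed variations that confuse chain building all
collapse to the same SPKI hash.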


Re: TLS certificates for ECIES keys

2020-10-30 Thread Ryan Sleevi via dev-security-policy
On Thu, Oct 29, 2020 at 2:07 PM Jacob Hoffman-Andrews via
dev-security-policy  wrote:

> The processor sends the resulting TLS certificate to Apple. Apple signs a
> second, non-TLS certificate from a semi-private Apple root. This root is
> trusted by all Apple devices but is not in other root programs.
> Certificates chaining to this root are accepted for submission by most CT
> logs. This non-TLS certificate has a CN field containing text that is not a
> domain name (i.e. it has spaces). It has no EKUs, and has a special-purpose
> extension with an Apple OID, whose value is the hash of the public key from
> the TLS certificate (i.e. the public key that will be used by clients to
> encrypt data share packets). This certificate is submitted to CT and uses
> the precertificate flow to embed SCTs.


Jacob,

I'm hoping you could help clarify this, as the "This certificate" remark in
the last sentence leaves me a little confused.

I understand the flow is:
A) "Someone" requests ISRG generate a TLS-capable certificate from a
TLS-capable root (whether that someone is ISRG or Apple is, I think,
largely inconsequential for this question)
B) That TLS-capable certificate is disclosed via CT and has SCTs embedded
within it, as ISRG/LE already does
C) That TLS-capable certificate is then sent to Apple
D) Apple generates a second certificate from an Apple-controlled root.
E) This second certificate contains an extension with an Apple-specific OID
that contains the hash of the ISRG certificate
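
As an aside, to make step E) concrete: a relying party could verify that
binding roughly as sketched below, in Python with the pyca/cryptography
library. The OID and the exact hash construction are assumptions on my part
- the description doesn't say whether it's a hash of the SPKI or of the full
certificate - so treat this purely as an illustration:

    import hashlib

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Hypothetical OID within Apple's 1.2.840.113635 arc; the real value
    # has not been stated in this thread.
    APPLE_KEY_HASH_OID = x509.ObjectIdentifier("1.2.840.113635.100.999.1")

    with open("isrg_tls_cert.pem", "rb") as f:
        tls_cert = x509.load_pem_x509_certificate(f.read())
    with open("apple_second_cert.pem", "rb") as f:
        apple_cert = x509.load_pem_x509_certificate(f.read())

    # Assumption: the extension value is SHA-256 over the TLS
    # certificate's DER-encoded SubjectPublicKeyInfo, stored as the raw
    # digest bytes.
    spki = tls_cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # ext.value is an UnrecognizedExtension; .value is its raw contents.
    ext = apple_cert.extensions.get_extension_for_oid(APPLE_KEY_HASH_OID)
    print(ext.value.value == hashlib.sha256(spki).digest())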

Concretely:
1) Is this Apple-controlled CA TLS capable?
2) Is this Apple-controlled CA subject to the Baseline Requirements?

These first two questions are trying to understand why this root is present
within CT logs, and whether CT logs are free to remove this Apple root.
Apple has, in the past, had issues with inappropriate audits and controls,
as a result of being improperly supervised [1]. I'm trying to understand
whether Apple is proposing to issue from these BR-audited and BR-subject
certificates, or from one of its other certificates not covered by those
audits. Most importantly, however, it's necessary to determine
whether or not the Apple-controlled CA is trusted for TLS, as that impacts
whether or not Apple's use of their CA like this is permitted.

3) Is "This certificate is submitted to CT" referring to A) (ISRG cert) or
E) (Apple cert)?

It seems like you're describing E), with the expectation CT logs will
accept that certificate. However, Apple also recently shared [2] plans to
allow CT logs to reject non-TLS certificates.

If the answer to 1/2 is "Yes", then this scheme clearly violates the BRs
and E) cannot be issued.
If the answer to 1/2 is "No", then it would seem like CT logs would be free
to reject E) from being logged, and to reject this Apple-operated CA in the
first place (and would benefit the security of users and the WebPKI by
doing so).

Because it seems these are inherently contradictory, I'm hoping the answer
to 3 is that "This certificate" is "A", which makes your later questions
more relevant and understandable. But, on the whole, I suspect I'm missing
something, because it seems that you might mean "E". And if E is meant,
then I think it has significant CT implications that need to also be worked
out, separate from the BR question here.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1575125
[2]
https://groups.google.com/a/chromium.org/g/ct-policy/c/JWVVhZTL5RM/m/HVZdSH4hAQAJ


Re: EJBCA performs incorrect calculation of validities

2020-10-28 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 28, 2020 at 10:50 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This aspect of RFC5280 section 4.1.2.5 is quite unusual in computing,
> where the ends of intervals are typically encoded such that subtracting
> the interval ends (as pure numbers) yields the interval length.
>

">= notBefore, <= notAfter" is as classic as "<= size - 1" in
0-indexed for-loops (i.e. a size of 1 indicates there's one element, at
index 0), or as "last - first" needing a +1 when counting the elements of
an inclusive pointer range.
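
A minimal sketch of that inclusive-interval arithmetic (in Python; the
-1 second is the entire point of the thread):

    from datetime import datetime, timedelta, timezone

    def not_after_for(not_before: datetime, lifetime: timedelta) -> datetime:
        # RFC 5280 4.1.2.5: the certificate is valid on both notBefore and
        # notAfter, so the last valid second is one second *before*
        # notBefore + lifetime.
        return not_before + lifetime - timedelta(seconds=1)

    nb = datetime(2020, 10, 28, tzinfo=timezone.utc)
    na = not_after_for(nb, timedelta(days=398))
    assert na == datetime(2021, 11, 29, 23, 59, 59, tzinfo=timezone.utc)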


> As a data point, the reference CA code in the OpenSSL library,
> version 1.1.0
>

Generates a whole lot of invalid, non-compliant badness, and is hardly a
"reference" CA (cf. the long time it took to switch to utf8only for DNs,
as just one basic example).

> So this seems another detail where the old IETF working group made
> things unnecessarily complicated for everybody.
>

https://www.youtube.com/watch?v=HMqZ2PPOLik

https://tools.ietf.org/rfcdiff?url2=draft-ietf-pkix-new-part1-01.txt dated
2000. This is 2020.

Where does that change come from?
https://www.itu.int/rec/T-REC-X.509-23-S/en (aka ITU's X.509), which in
2000, stated "TA indicates the period of validity of the certificate, and
consists of two dates, the first and last on which the certificate is
valid."

Does that mean this was a change in 2000? Nope.  It's _always been there_,
as far back as ITU-T's X.509 (11/88) -
https://www.itu.int/rec/T-REC-X.509-198811-S/en

It helps to do research before casting aspersions or proposing to
reinterpret meanings that are older than some members here.


Re: Policy 2.7.1: MRSP Issue #187: Require disclosure of incidents in Audit Reports

2020-10-23 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 23, 2020 at 8:55 AM Matthias van de Meent via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

> The current MRSP do not bind the requirements on the reporting of
> incidents to the CA that the incident was filed on, but generally to
> CAs.
>
> Section 2.4 has the general requirement for a CA to report any
> incident (which is a failure to comply with the MRSP by any CA). So,
> if a CA is aware of an incident with another CA which is included in
> the Mozilla root store, that must be reported, and I agree with that.
>

This sounds like an overly broad reading of Mozilla Policy, and it's not
clear to me how you reached it. Could you walk me through the language and
help me understand how you reached that conclusion?

It would seem like you might be reaching that conclusion from "When a CA"
and "CAs", is that right?


> But, the requirements also extend to having to regularly update these
> incidents, and report these incidents in their audit letter, even if
> they are not related to that CA.
>

As mentioned above, this seems like an overly broad reading, and I'm
wondering if that's the source of confusion here. Understandably, it would
make no logical sense to expect a third-party reporter to provide updates
for a CA incident, whether that third-party is an individual or another CA.

By the logic being applied here, the ultimate sentence in that same
paragraph would imply that, from the moment a CA incident is filed, all CAs
in Mozilla's Root Program must stop issuance until the affected CA has
resolved the issue, which certainly makes no logical or syntactical sense,
or, similarly, that Section 4.2 of the policy obligates CAs to respond on
behalf of other CAs.


Re: NAVER: Public Discussion of Root Inclusion Request

2020-10-21 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 21, 2020 at 2:09 PM Matthias van de Meent via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

> Hi,
>
> In the CPS v1.4.3 of NAVER, section 4.9.3, I found the following:
>
> > 4.9.3 Procedure for Revocation Request
> > The NAVER BUSINESS PLATFORM processes a revocation request as follows:
> > [...]
> > 4. For requests from third parties, The NAVER BUSINESS PLATFORM
> personnel begin investigating the request within 24 hours after receipt and
> decide whether revocation is appropriate based on the following criteria:
> >   a. [...], b. [...], c. [...], d. [...]
> >   e. Relevant legislation.
>
> The wording here is concerning, as it points to potential legislation
> that could disallow NAVER from revoking problematic certificates. Also
> of note is that this 'relevant legislation' is not referenced in
> section 9.14, Governing Law, nor in 9.16.3, Severability (as required
> per BRs 9.16.3).
>

If I understand your concern, you're concerned about a decision to /not/
revoke a given certificate, correct? You're indeed accurate that a
certificate that violated the BRs, but was not revoked because of relevant
legislation, would be a BR violation, and the CA would have been required
to previously disclose this according to 9.16.3.

However, CAs are also free to *add* reasons for revocation, and to
consider, as part of their investigation, relevant legislation which might
lead to revocation even if it wasn't a violation of NAVER's CP/CPS. This is
totally
fine, and all CAs are entitled to add additional requirements, and for
relying parties/root programs to consider those reasons relevant to their
user communities.

Note that, in this case, the particular language you're concerned about is
part of the BRs themselves, in 4.9.5. However, this is about "when" to
revoke.

I think you raise an interesting point that would benefit from
clarification from NAVER, because I think you're correct that we should be
concerned that the shift from "when" to revoke has become "whether" to
revoke, and that is an important difference.


> I also noticed that the "All verification activities" type of event is
> not recorded, or at least not documented as such. This is a
> requirement from BRs 5.4.1(2)(2).
>

Thanks for the excellent attention to detail! I agree, this would be
concerning, especially given the importance this log has been in
investigating CA misissuance in the past.


Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-17 Thread Ryan Sleevi via dev-security-policy
On Sat, Oct 17, 2020 at 3:48 PM Ben Wilson wrote:

> Ryan wrote:
>
> > If we apply this concept to the proposed language, then the requirement
> for an EV audit is
> > simply about whether there is any unexpired, unrevoked path to a root CA
> which can issue
> > EV certificates. Similarly, checking the scope for an EV audit becomes
> “the entire hierarchy”.
>
> > This is accomplished by simply removing your proposed “(i.e. ...)”
> parenthetical, and allowing
> > the language from 3.1 to apply, by changing the language to “if the root
> is recognized for
> > issuing EV certificates”.
>
> I think it might need to be phased in or have some effective date.
>

That’s reasonable. Given that it’s very easy for CAs to transition new TLS
issuance to new intermediates, it’s fairly simple to work out a transition
plan that has an immediate/near term deadline for new issuance, so that
within two years (given that the transition to one year certs only happened
in September, we have to deal with the certs prior to that), the full
policy can be effective.

Would you like language to show how that practically works, along with
diagrams to show what’s expected for CAs?

> Also, I haven't mapped out how this might affect CAs that we sometimes add
> to the root store without EV enablement and with the suggestion that they
> apply later for it.
>

Well, as I mentioned to Dimitris, this is something that ballots like SC31
were trying to make easier. However, as with the above, it’s a very simple
transition: when a CA wishes to apply for EV, they cut a new certificate
hierarchy (i.e. a new root and new issuing intermediates), sign that new
root with their old root, and ensure the new root is audited for EV from
the beginning.

This is functionally no different than what Mozilla has required for CA
key pairs that existed pre-BRs, or that have certified things inconsistent
with Mozilla policy (e.g. one CA that cross-signed two government
subordinates for which it was forbidden from disclosing the certificates
or those certificates' CP/CPS). In those cases, the CA generates a new
hierarchy to apply for inclusion within Mozilla products, and operates
according to Mozilla's policies from the creation to the destruction of
the key.
key. For interoperability with products that trust their legacy old root,
they have the old root sign the new root, so that paths can also be built
to the legacy CA through the new root.
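
As a rough illustration of that old-root-signs-new-root arrangement (a
sketch only, assuming Python's cryptography package; real root keys live
in HSM-backed ceremonies, not scripts):

    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    def make_root(name: str):
        """Self-signed root: subject == issuer, signed by its own key."""
        key = ec.generate_private_key(ec.SECP384R1())
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, name)])
        now = datetime.now(timezone.utc)
        cert = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(subject)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + timedelta(days=3650))
            .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                           critical=True)
            .sign(key, hashes.SHA256())
        )
        return key, cert

    old_key, old_root = make_root("Legacy Root")
    new_key, new_root = make_root("New EV Root")

    # The cross-certificate: same subject and key as the new root, but
    # signed by the old root, so legacy clients can still build a path.
    cross = (
        x509.CertificateBuilder()
        .subject_name(new_root.subject)
        .issuer_name(old_root.subject)
        .public_key(new_root.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(new_root.not_valid_before)
        .not_valid_after(new_root.not_valid_after)
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(old_key, hashes.SHA256())
    )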

Given that anything issued “before” the audit is retroactively
grandfathered in, this hopefully simplifies the process for CAs, by
ensuring appropriate audits from cradle to grave, and for the entire given
hierarchy. BR and EV audits would have the exact same scope, which is
already true with respect to ETSI (as, largely for worse, they don’t check
this to begin with) and can be adopted easily to WebTrust, provided the CA
demonstrates appropriate controls (e.g. to provide auditable logs that they
haven’t issued EV, if they assert they haven’t issued EV from a CA).


> On Sat, Oct 17, 2020 at 12:26 AM Ryan Sleevi wrote:
>
>>
>>
>> On Thu, Oct 15, 2020 at 4:36 PM Ben Wilson via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>>  This issue is presented for resolution in the next version of the
>>> Mozilla
>>> Root Store Policy. It is related to Issue #147
>>> <https://github.com/mozilla/pkipolicy/issues/147> (previously posted for
>>> discussion on this list on 6-Oct-2020).
>>>
>>> Possible language is presented here:
>>>
>>> https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4
>>>
>>> In addition to replacing "if issuing EV certificates" with "if capable of
>>> issuing EV certificates" in two places -- for WebTrust and ETSI audits --
>>> it would be followed by "(i.e. a subordinate CA under an EV-enabled root
>>> that contains no EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage
>>> EKU, and a certificatePolicies extension that asserts the CABF EV OID of
>>> 2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)." Thus,
>>> Mozilla
>>> considers that a CA is capable of issuing EV certificates if it is (1) a
>>> subordinate CA (2) under an EV-enabled root (3) that contains no EKU or
>>> the
>>> id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
>>> certificatePolicies extension that asserts the CABF EV OID of
>>> 2.23.140.1.1,
>>> the anyPolicy OID, or the CA's EV policy OID.
>>>
>>> I look forward to your suggestions.
>>
>>
>> Given the continuing issues we see with respect to audits, it seems like
>> this notion of “technically constrain

Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-17 Thread Ryan Sleevi via dev-security-policy
On Thu, Oct 15, 2020 at 4:36 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>  This issue is presented for resolution in the next version of the Mozilla
> Root Store Policy. It is related to Issue #147
>  (previously posted for
> discussion on this list on 6-Oct-2020).
>
> Possible language is presented here:
>
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4
>
> In addition to replacing "if issuing EV certificates" with "if capable of
> issuing EV certificates" in two places -- for WebTrust and ETSI audits --
> it would be followed by "(i.e. a subordinate CA under an EV-enabled root
> that contains no EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage
> EKU, and a certificatePolicies extension that asserts the CABF EV OID of
> 2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)." Thus, Mozilla
> considers that a CA is capable of issuing EV certificates if it is (1) a
> subordinate CA (2) under an EV-enabled root (3) that contains no EKU or the
> id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
> certificatePolicies extension that asserts the CABF EV OID of 2.23.140.1.1,
> the anyPolicy OID, or the CA's EV policy OID.
>
> I look forward to your suggestions.


Given the continuing issues we see with respect to audits, it seems like
this notion of “technically constrained (from issuing EV) sub-CA” is going
to encounter many of the same fundamental issues we’ve seen with audit
scopes being incorrect and improper.

It seems the easiest, and best (for users), way is to ensure that once a
root is trusted for a given purpose, *all* CA certificates beneath that
hierarchy are, as much as possible, included within the scope of the
audit.

It’s already well-accepted that, for example, the expectations of the BR
audits still apply even if a CA has not issued, whether that “not issued”
is because the CA has never issued a cert (e.g. it was just created and
not used yet), because it is no longer issuing certs (e.g. it is being
retired), or because the key had previously issued certs but did not issue
any within the audit period (i.e. a pause, rather than simply not being
used yet).

For separate purposes (e.g. S/MIME vs TLS), we know there are practical
reasons to ensure separate hierarchies; most pragmatic of them being the
creation of certificates for one purpose that impact the trustworthiness of
certificates for another purpose (e.g. S/MIME certs, both those technically
separated and those merely “intended” for S/MIME, impacting TLS trust, such
as we saw with the recent OCSP issue).

Because of this, it seems that there is a simpler, clearer, unambiguous
path for CAs that seems useful to move to:
- If a CA is trusted for purpose X, that certificate, and all subordinate
CAs, should be audited against the criteria relevant for X

This would avoid much of the confusion CAs seemingly continue to encounter
with respect to audit scope, disclosure, and intent, by making it as simple
as “If you signed it, you audit it.”
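
As a toy illustration of that rule (a sketch; CaCert is a hypothetical
stand-in for however one models the hierarchy):

    from dataclasses import dataclass, field

    @dataclass
    class CaCert:
        # Hypothetical model: a CA certificate and the CA certs it signed.
        name: str
        subordinates: list["CaCert"] = field(default_factory=list)

    def audit_scope(ca: CaCert) -> list[str]:
        """'If you signed it, you audit it': scope is the full subtree."""
        scope = [ca.name]
        for sub in ca.subordinates:
            scope.extend(audit_scope(sub))
        return scope

    root = CaCert("Root", [CaCert("Issuing CA 1"), CaCert("Issuing CA 2")])
    assert audit_scope(root) == ["Root", "Issuing CA 1", "Issuing CA 2"]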

If we apply this concept to the proposed language, then the requirement for
an EV audit is simply about whether there is any unexpired, unrevoked path
to a root CA which can issue EV certificates. Similarly, checking the scope
for an EV audit becomes “the entire hierarchy”.

This is accomplished by simply removing your proposed “(i.e. ...)”
parenthetical, and allowing the language from 3.1 to apply, by changing the
language to “if the root is recognized for issuing EV certificates”.

From an audit ingestion standpoint, and for CAs, it becomes easier now: the
scope of the EV audit would be 1:1 the scope of a BR audit. There’s no
worry or confusion about capability, as those “not intended” and those “not
capable” can easily be audited that they have not issued such certificates.
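
(For contrast, the “capable of issuing EV” test in the proposed
parenthetical is mechanical enough to sketch. Illustrative only, assuming
Python's cryptography package and a caller-supplied, hypothetical set of
the CA's own EV policy OIDs; the ease of getting a check like this subtly
wrong is part of the argument above for making the scopes identical:)

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    CABF_EV = x509.ObjectIdentifier("2.23.140.1.1")
    ANY_POLICY = x509.ObjectIdentifier("2.5.29.32.0")

    def ev_capable(sub_ca: x509.Certificate, ca_ev_oids: set) -> bool:
        """Rough mechanical reading of the proposed "(i.e. ...)" test, for
        a subordinate CA under an EV-enabled root."""
        try:
            eku = sub_ca.extensions.get_extension_for_oid(
                ExtensionOID.EXTENDED_KEY_USAGE).value
            eku_ok = (ExtendedKeyUsageOID.SERVER_AUTH in eku
                      or ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE in eku)
        except x509.ExtensionNotFound:
            eku_ok = True  # "contains no EKU" also counts as capable
        try:
            policies = sub_ca.extensions.get_extension_for_oid(
                ExtensionOID.CERTIFICATE_POLICIES).value
            asserted = {p.policy_identifier for p in policies}
            policy_ok = bool(asserted & ({CABF_EV, ANY_POLICY} | ca_ev_oids))
        except x509.ExtensionNotFound:
            policy_ok = False
        return eku_ok and policy_ok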

In addition to the simplification of scope above, this has the further
benefit of addressing any scenario in which a subordinate CA uses Policy X
for some
time (T=0 ... T=5), and then later (T=6) decides to apply for that policy
to be recognized for EV. Under the language proposed, if Mozilla grants EV
to the root for Policy X at T=10, on the basis of an audit period for (T=6
... T=9), then all the certificates from T=0 to T=5 will be retroactively
granted EV status, and the subordinate CA will not be or have been in
violation of any Mozilla policy requirement.

It seems like this should cause no incompatibility for CAs.

Ben: what do you think of this simplified approach to the problem?


Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-16 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 16, 2020 at 9:20 AM Dimitris Zacharopoulos 
wrote:

>
>
> On 2020-10-16 3:21 μ.μ., Ryan Sleevi wrote:
>
>
>
> On Fri, Oct 16, 2020 at 7:31 AM Dimitris Zacharopoulos via
> dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>
>>
>>
>> On 2020-10-15 11:36 μ.μ., Ben Wilson via dev-security-policy wrote:
>> >   This issue is presented for resolution in the next version of the
>> Mozilla
>> > Root Store Policy. It is related to Issue #147
>> > <https://github.com/mozilla/pkipolicy/issues/147> (previously posted
>> for
>> > discussion on this list on 6-Oct-2020).
>> >
>> > Possible language is presented here:
>> >
>> https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4
>> >
>> > In addition to replacing "if issuing EV certificates" with "if capable
>> of
>> > issuing EV certificates" in two places -- for WebTrust and ETSI audits
>> --
>> > it would be followed by "(i.e. a subordinate CA under an EV-enabled root
>> > that contains no EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage
>> > EKU, and a certificatePolicies extension that asserts the CABF EV OID of
>> > 2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)." Thus,
>> Mozilla
>> > considers that a CA is capable of issuing EV certificates if it is (1) a
>> > subordinate CA (2) under an EV-enabled root (3) that contains no EKU or
>> the
>> > id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
>> > certificatePolicies extension that asserts the CABF EV OID of
>> 2.23.140.1.1,
>> > the anyPolicy OID, or the CA's EV policy OID.
>> >
>> > I look forward to your suggestions.
>>
>> Hello Ben,
>>
>> I am trying to understand the expectations from Mozilla:
>>
>> - If a CA that has an EV-capable RootCA, uses a subCA Certificate that
>> contains the id-kp-serverAuth EKU and the anyPolicy OID that does not
>> issue EV end-entity Certificates, is this considered a policy violation
>> if this subCA is not explicitly included in an EV audit scope (ETSI or
>> WebTrust)?
>
>
> Explicitly, yes, it is 100% the intent that this would be a violation.
>
> Audits that are limited based on whether or not certificates were issued
> are not aligned with the needs of relying parties and users. We need
> assurances, for example, that keys were and are protected, and that audits
> measure technical capability.
>
>
> The same exact assurances are included in the BR audit. There are no
> additional requirements in the EV Guidelines related to the CA Certificates
> except in section 17.7 for the Root CA Key Pair Generation which is the
> same in the BRs.
>
> So from a practical standpoint, unless I'm missing something, there is no
> policy differentiation in terms of CA Certificates (Root or Intermediate
> CA) explicitly for EV. In fact, that's why it was allowed (and to the best
> of my knowledge, is still allowed) for a CA to obtain an EV audit for a
> BR-compliant Root.
>

Apologies, in my attempt to provide an explanation, I failed to include a
link to where this was previously discussed, such as
https://cabforum.org/2018/10/18/minutes-for-ca-browser-forum-f2f-meeting-45-shanghai-17-18-october-2018/

While it is indeed possible to obtain an EV audit for a BR-compliant root,
I do hope you recall the lengthy discussion that highlighted the tangible
and practical issues, for relying parties, of allowing such gaps in such
audits from the moment of key creation, which is fundamentally what you're
asking.

My quick response above was meant to highlight that /just as/ it would be
unacceptable to have a CA key created, have a gap in audit coverage
following that creation, and then later produce an audit without
addressing that gap in time, it should be equally unacceptable to do so
for an EV certificate.

A gap in the BR audit means you have no assurances of key protection.
A gap in the EV audit means you have no assurances as to whether or not
they issued certificates with the "EV OID" while not subject to, and not
operating according to, the EV Guidelines.

The gap should be fairly basic and trivial to see, but it is
unfortunate that we still see CAs audited according to the ETSI standards
fail to account for it. Indeed, you will hopefully recall the discussion
in Shanghai that examined how some ETSI CAs issued certificates bearing a
Qualified certificate OID, well before they had been audited as such.
However, once they were included within the Qualified Trust List, then all
of those certificates were retroactively granted legal effect - undermining
the very objective of qualified certificate

Re: PEM of root certs in Mozilla's root store

2020-10-16 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 16, 2020 at 5:27 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> RFC4180 section 3 explicitly warns that there are other variants and
> specifications of the CSV format, and thus the full generalizations in
> RFC4180 should not be exploited to their extremes.
>

You're referring to this section, correct?

"""
   Interoperability considerations:
  Due to lack of a single specification, there are considerable
  differences among implementations.  Implementors should "be
  conservative in what you do, be liberal in what you accept from
  others" (RFC 793 [8]) when processing CSV files.  An attempt at a
  common definition can be found in Section 2.

  Implementations deciding not to use the optional "header"
  parameter must make their own decision as to whether the header is
  absent or present.
"""


> Splitting the input at newlines before parsing for quotes and commas is
> a pretty common implementation strategy as illustrated by my examples of
> common tools that actually do so.


This would appear to be at fundamental odds with "be liberal in what you
accept from others" and, more specifically, ignoring the remark that
Section 2 is an admirable effort at a "common" definition, which is so
called as it minimizes such interoperability differences.

As your original statement was that the file produced was "not CSV", I believe
that's been thoroughly dispelled by highlighting that, indeed, it does
conform to the grammar set forward in RFC 4180, and is consistent with the
IANA mime registration for CSV.

You also raised concerns about naive and ill-informed attempts at CSV
parsing, which of course fail to parse the grammar of RFC 4180;
thankfully, there are alternatives for each of those concerns. With awk,
you have FPAT. With Perl, you have Text::CSV. And as the cut command is
too primitive to handle a proper grammar, there are plenty of equally
reasonable alternatives to it, which take far less time than the concerns
and misstatements raised on this thread, which I believe we can,
thankfully, end.
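
For what it's worth, Python's standard csv module implements exactly this
RFC 4180-style quoting; a minimal sketch showing that embedded newlines in
a quoted field survive parsing:

    import csv
    import io

    # One header row, then one quoted field containing embedded newlines.
    # The base64 body is a truncated placeholder.
    data = ('PEM Info\n'
            '"-----BEGIN CERTIFICATE-----\n'
            'MIIB...\n'
            '-----END CERTIFICATE-----"\n')

    rows = list(csv.reader(io.StringIO(data)))
    assert len(rows) == 2  # the newlines stayed inside the quoted field
    assert rows[1][0].splitlines()[0] == "-----BEGIN CERTIFICATE-----"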

Enjoy

Ryan


Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-16 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 16, 2020 at 7:31 AM Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

>
>
> On 2020-10-15 11:36 μ.μ., Ben Wilson via dev-security-policy wrote:
> >   This issue is presented for resolution in the next version of the
> Mozilla
> > Root Store Policy. It is related to Issue #147
> >  (previously posted for
> > discussion on this list on 6-Oct-2020).
> >
> > Possible language is presented here:
> >
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4
> >
> > In addition to replacing "if issuing EV certificates" with "if capable of
> > issuing EV certificates" in two places -- for WebTrust and ETSI audits --
> > it would be followed by "(i.e. a subordinate CA under an EV-enabled root
> > that contains no EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage
> > EKU, and a certificatePolicies extension that asserts the CABF EV OID of
> > 2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)." Thus,
> Mozilla
> > considers that a CA is capable of issuing EV certificates if it is (1) a
> > subordinate CA (2) under an EV-enabled root (3) that contains no EKU or
> the
> > id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
> > certificatePolicies extension that asserts the CABF EV OID of
> 2.23.140.1.1,
> > the anyPolicy OID, or the CA's EV policy OID.
> >
> > I look forward to your suggestions.
>
> Hello Ben,
>
> I am trying to understand the expectations from Mozilla:
>
> - If a CA that has an EV-capable RootCA, uses a subCA Certificate that
> contains the id-kp-serverAuth EKU and the anyPolicy OID that does not
> issue EV end-entity Certificates, is this considered a policy violation
> if this subCA is not explicitly included in an EV audit scope (ETSI or
> WebTrust)?


Explicitly, yes, it is 100% the intent that this would be a violation.

Audits that are limited based on whether or not certificates were issued
are not aligned with the needs of relying parties and users. We need
assurances, for example, that keys were and are protected, and that audits
measure technical capability.

- If a subCA Certificate that contains the id-kp-serverAuth EKU and the
> anyPolicy OID was not covered by an EV-scope audit (because it did not
> issue EV Certificates) and it later decides to update the profile and
> policies/practices to comply with the EV Guidelines for everything
> related to end-entity certificates in order to start issuing EV
> Certificates and is later added to an EV-scope audit, is that an allowed
> practice? Judging from the current EV Guidelines I didn't see anything
> forbidding this practice. In fact this is supported via section 17.4.


It has been repeatedly discussed in the CABForum exactly why this is
undesirable for users, and why the set of policy updates, as a whole, seeks
to prohibit this. I would refer you to the discussions in Shanghai, in the
context of audit discussions and lifetimes, about why allowing this is
inherently an unacceptable security risk to end users.

In this scenario, there is zero reason not to issue a new intermediate. The
desired end state, for both roots AND intermediates, is that a change in
capabilities leads to issuing a NEW intermediate or root. There is no
legitimate technical reason not to issue a new intermediate, and transition
that new certificate issuance to that new intermediate.

The discussion about cradle-to-grave is intentional: you can only issue
certificates of the type that you have been audited for from the moment of
key creation. Anything less than that exposes users to security risks,
because it allows certificates issued before the audit and supervision to
appear as valid to relying parties.

Mozilla has, for several years now, reliably and consistently done this for
new root inclusions, in which pre-BR CAs have been rejected, and CAs have
been required to issue *new* roots with *new* keys, and ensure BR audits
from the moment of creation for those certificates, in order to be
included. This provides relying parties the assurance that any certificates
they encounter will be and are subject to the requirements.

This is part of the “cradle to grave” discussion more so than this one, but
it is the combination of these that ensures users are protected.


Re: PEM of root certs in Mozilla's root store

2020-10-16 Thread Ryan Sleevi via dev-security-policy
On Thu, Oct 15, 2020 at 7:44 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2020-10-15 11:57, Ryan Sleevi wrote:
> > On Thu, Oct 15, 2020 at 1:14 AM Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >>> For example, embedded new lines are discussed in 2.6 and the ABNF
> >> therein.
> >>>
> >>
> >> The one difference from RFC4180 is that CR and LF are not part of the
> >> alternatives for the inner part of "escaped".
> >
> >
> > Again, it would do a lot of benefit for everyone if you would be more
> > precise here.
> >
> > For example, it seems clear and unambiguous that what you just stated is
> > factually wrong, because:
> >
> > escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE
> >
>
> I was stating the *difference* from RFC4180 being precisely that
> "simple, traditional CSV" doesn't accept the CR and LF alternatives in
> that syntax production.


Ah, that would explain my confusion: you’re using “CSV” in a manner
different than what is widely understood and standardized. The complaint
about newlines would be as technically accurate and relevant as a complaint
that “simple, traditional certificates should parse as JSON” or that
“simple, traditional HTTP should be delivered over port 23”; which is to
say, it seems like this concern is not relevant.

As the CSVs comply with RFC 4180, which is widely recognized as what “CSV”
means, I think Jakob’s concern here can be disregarded. Any implementation
having trouble with the CSVs produced is confused about what a CSV is, and
thus not a CSV parser.


Re: PEM of root certs in Mozilla's root store

2020-10-15 Thread Ryan Sleevi via dev-security-policy
On Thu, Oct 15, 2020 at 1:14 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > For example, embedded new lines are discussed in 2.6 and the ABNF
> therein.
> >
>
> The one difference from RFC4180 is that CR and LF are not part of the
> alternatives for the inner part of "escaped".


Again, it would do a lot of benefit for everyone if you would be more
precise here.

For example, it seems clear and unambiguous that what you just stated is
factually wrong, because:

escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 14, 2020 at 7:31 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Only the CSV form now contains CSV artifacts.  And it isn't really CSV
> either (even if Microsoft Excel handles it).


Hi Jakob,

Could you be more precise here? Embedded new lines within quoted values
are well-defined for CSV.

Realizing that there are unfortunately many interpretations of what
“well-defined for CSV” means here, perhaps you can frame your concerns in
the terms set out in
https://tools.ietf.org/html/rfc4180 . This at least helps make sure we
understand each other and are working from the same definitions.

For example, embedded new lines are discussed in 2.6 and the ABNF therein.


Re: PEM of root certs in Mozilla's root store

2020-10-13 Thread Ryan Sleevi via dev-security-policy
Jakob,

I had a little trouble following your mail, despite being quite familiar
with PEM, so hopefully you'll indulge me in making sure I've got your
criticisms/complaints correct.

Your objection to the text report is that RFC 7468 requires generators to
wrap lines (except the last line) at exactly 64 characters, right? That is,
the textual report includes the base-64 data with no embedded newlines, and
this causes your PEM decoder trouble, even though you could easily inject
the line breaks programmatically after you download the file.
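
Assuming I've understood that correctly, the programmatic fix is tiny (a
sketch in Python):

    import textwrap

    def rewrap_pem_body(base64_body: str, width: int = 64) -> str:
        """Re-insert the line breaks RFC 7468 expects generators to emit:
        every line exactly `width` characters, except possibly the last."""
        collapsed = "".join(base64_body.split())
        return "\n".join(textwrap.wrap(collapsed, width))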

I'm not sure I fully understand the CSV objection, despite having inspected
the file, so perhaps you can clarify a bit more.

Perhaps the simplest approach would be that you could attach versions that
look how you'd want.

However, it might also be reasonable that, if these concerns aren't easily
addressable, not offering the feed is another option, since it seems like a
lot of work for something that should be, and is, naturally discouraged.

On Tue, Oct 13, 2020 at 3:26 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2020-10-12 20:50, Kathleen Wilson wrote:
> > On 10/7/20 1:09 PM, Jakob Bohm wrote:
> >> Please note that at least the first CSV download is not really a CSV
> >> file, as there are line feeds within each "PEM" value, and only one
> >> column.  It would probably be more useful as a simple concatenated PEM
> >> file, as used by various software packages as a root store input format.
> >>
> >
> >
> > Here's updated reports...
> >
> > Text:
> >
> >
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites
> >
> >
> >
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Email
> >
> These two are bad multi-pem files, as each certificate lacks the
> required line wrapping.
>
> The useful multi-pem format would keep the line wrapping from the
> old report, but not insert stray quotes and commas.
>
> >
> > CSV:
> >
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Websites
> >
> >
> >
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Email
> >
>
> These two are like the old bad multi-pem reports, with the stray
> internal line feeds after/before the - marker lines, but without the
> internal linefeeds in the Base64 data.
>
> Useful actual-csv reports like IncludedCACertificateWithPEMReport.csv
> would have to omit the marker lines and their linefeeds as well as the
> internal linefeeds in the Base64 data in order to keep each CSV record
> entirely on one line.
>
>
>
> >
> >
> > If the Text reports look good, I'll add them to the wiki page,
> > https://wiki.mozilla.org/CA/Included_Certificates
> >
> >
> > Thanks,
> > Kathleen
> >
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: PEM of root certs in Mozilla's root store

2020-10-06 Thread Ryan Sleevi via dev-security-policy
It seems like there should be a link to
https://wiki.mozilla.org/CA/FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F
there

I realize there’s a tension between making this easily consumable, and the
fact that “easily consumed” doesn’t and can’t relieve an organization of
having to be responsible and being aware of the issues and discussions here
about protecting their users.

I do worry this is going to encourage one of the things that can make it
more difficult for Mozilla to protect Mozilla users, which is when vendors
blindly use/build a PEM file and bake it into a device they never update.
We know from countless CA incidents that when vendors do that, and aren’t
using these for “the web”, it makes it more difficult for site
operators to replace these certificates. It also makes it harder for
Mozilla to fix bugs in implementations or policies and otherwise take
actions that minimize any disruption for users. At the same time, Mozilla
running a public and transparent root program does indeed mean it’s better
for users than these vendors doing nothing at all, which is what would
likely happen if there were too many roadblocks.

While personally, I want to believe it’s “not ideal” to make it so easy, I
realize the reality is plenty of folks already repackage the Mozilla store
for just this reason, totally ignoring the above link, and make it easy for
others to pull in. At least this way, you could reiterate that this list
doesn’t really absolve these vendors of having to keep users up to date and
protected and be able to update their root stores for their products, by
linking to
https://wiki.mozilla.org/CA/FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F

On Tue, Oct 6, 2020 at 5:47 PM Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> I've been asked to publish Mozilla's root store in a way that is easy to
> consume by downstreams, so I have added the following to
> https://wiki.mozilla.org/CA/Included_Certificates
>
> CCADB Data Usage Terms
> 
>
> PEM of Root Certificates in Mozilla's Root Store with the Websites
> (TLS/SSL) Trust Bit Enabled (CSV)
> <
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEM?TrustBitsInclude=Websites
> >
>
> PEM of Root Certificates in Mozilla's Root Store with the Email (S/MIME)
> Trust Bit Enabled (CSV)
> <
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEM?TrustBitsInclude=Email
> >
>
>
> Please let me know if you have feedback or recommendations about this.
>
> Thanks,
> Kathleen


Re: Sectigo to Be Acquired by GI Partners

2020-10-03 Thread Ryan Sleevi via dev-security-policy
In a recent incident report [1], a representative of Sectigo noted:

> The carve out from Comodo Group was a tough time for us. We had twenty
> years’ worth of completely intertwined systems that had to be disentangled
> ASAP, a vast hairball of legacy code to deal with, and a skeleton crew of
> employees that numbered well under half of what we needed to operate in any
> reasonable fashion.


This referred to the previous split [2] of the Comodo CA business from the
rest of Comodo businesses, and rebranding as Sectigo.

In addition to the questions posted by Wayne, I think it'd be useful to
confirm:

1. Is it expected that there will be similar system and/or infrastructure
migrations as part of this? Sectigo's foresight of "no effect on its
operations" leaves it a bit ambiguous whether this is meant as "practical"
effect (e.g. requiring a change of CP/CS or effective policies) or whether
this is meant as no "operational" impact (e.g. things will change, but
there's no disruption anticipated). It'd be useful to frame this response
in terms of any anticipated changes at all (from mundane, like updating the
logos on the website, to significant, such as any procedure/equipment
changes), rather than observed effects.

2. Is there a risk that such an acquisition might further reduce the crew
of employees to an even smaller number? Perhaps not immediately, but over
time, say the next two years, such as "eliminating redundancies" or
"streamlining operations"? I recognize that there's an opportunity such an
acquisition might allow for greater investment and/or scale, and so don't
want to presume the negative, but it would be good to get a clear
commitment as to that, similar to other acquisitions in the past (e.g.
Symantec CA operations by DigiCert)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1648717#c21
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/AvGlsb4BAZo/m/p_qpnU9FBQAJ

On Thu, Oct 1, 2020 at 4:55 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>  As announced previously by Rob Stradling, there is an agreement for
> private investment firm GI Partners, out of San Francisco, CA, to acquire
> Sectigo. Press release:
> https://sectigo.com/resource-library/sectigo-to-be-acquired-by-gi-partners
> .
>
>
> I am treating this as a change of legal ownership covered by section 8.1
> <
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#81-change-in-legal-ownership
> >
> of the Mozilla Root Store Policy, which states:
>
> > If the receiving or acquiring company is new to the Mozilla root program,
> > it must demonstrate compliance with the entirety of this policy and there
> > MUST be a public discussion regarding their admittance to the root
> program,
> > which Mozilla must resolve with a positive conclusion in order for the
> > affected certificate(s) to remain in the root program.
>
> In order to comply with policy, I hereby formally announce the commencement
> of a 3-week discussion period for this change in legal ownership of Sectigo
> by requesting thoughtful and constructive feedback from the community.
>
> Sectigo has already stated that it foresees no effect on its operations due
> to this ownership change, and I believe that the acquisition announced by
> Sectigo and GI Partners is compliant with Mozilla policy.
>
> Thanks,
>
> Ben


Re: Mandatory reasonCode analysis

2020-10-01 Thread Ryan Sleevi via dev-security-policy
On Thu, Oct 1, 2020 at 6:39 AM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Although RFC 5280, section 5 [2] mandates that conforming CAs MUST produce
> v2 CRLs, the CAs issuing v1 CRLs pre-date any browser root requirements
> that mandate adherence to the RFC 5280 profile.


To clarify: You mean the CA certificates were issued prior to 2012, yet are
still trusted? That seems an easy enough problem to fix, in the spirit of
removing roots and hierarchies that predate the Baseline Requirements 1.0
effective date.


Re: Mandatory reasonCode analysis

2020-09-30 Thread Ryan Sleevi via dev-security-policy
On Wed, Sep 30, 2020 at 12:56 PM Rob Stradling via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > I also read this language:
> > If a CRL entry is for a Certificate not subject to these Requirements
> and was either issued on-or-after 2020-09-30 or has a notBefore on-or-after
> 2020-09-30, the CRLReason MUST NOT be certificateHold (6).
>
> I think "was either issued on-or-after 2020-09-30 or has a notBefore
> on-or-after 2020-09-30" is talking about "a Certificate not subject to
> these Requirements", not about when the CRL was issued.
>

Yes. Yet another reason I think our approach to stating requirements in
"plain English" does more harm than good.

The correct parse tree:
If a CRL entry is for:
  * a Certificate not subject to these Requirements; and
  * either:
* was issued on-or-after 2020-09-30; or
* has a notBefore on-or-after 2020-09-30
then:
  * the CRLReason MUST NOT be certificateHold (6).

This was hoped to be "obvious", given that a "CRL entry" (a specific thing
within a CRL, c.f. https://tools.ietf.org/html/rfc5280#section-5.3 and
X.509) is neither issued nor has a notBefore.
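
In code form, the parse tree reads as follows (a sketch; the date tests
apply to the certificate the CRL entry covers, which is the point):

    from datetime import date

    CUTOFF = date(2020, 9, 30)
    CERTIFICATE_HOLD = 6

    def hold_forbidden(cert_subject_to_brs: bool,
                       cert_issued: date,
                       cert_not_before: date,
                       crl_reason: int) -> bool:
        """Mirrors the parse tree above: certificateHold (6) is forbidden
        for entries covering non-BR certificates that were issued, or have
        a notBefore, on-or-after 2020-09-30."""
        applies = (not cert_subject_to_brs) and (
            cert_issued >= CUTOFF or cert_not_before >= CUTOFF)
        return applies and crl_reason == CERTIFICATE_HOLD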


Re: EKU is required in each Subordinate CA certificate

2020-08-29 Thread Ryan Sleevi via dev-security-policy
Glad to see you paying close attention to the Baseline Requirements changes!

On Thu, Aug 27, 2020 at 1:34 PM Sándor dr. Szőke via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Yes, that date comes from the Mozilla Root Program, but this requirement
> is new for the other Root Programs and for the BR.
>

No, it's not. It's been a part of Microsoft's root program for even longer
than Mozilla's, at
https://docs.microsoft.com/en-us/security/trusted-root/program-requirements

You can see this all discussed in the CA/B Forum as part of the ballot, if
you need any assistance understanding where a change came from.


> The other thing is that, without an indicated effective date, the
> requirement can be interpreted in such a way that every valid Subordinate
> CA certificate shall comply with this requirement, even if it was issued
> years ago.
>

No, this is not correct. If you look closely at the changes that have been
made to the BRs in the past, particularly around cleanup ballots, it's to
remove effective dates that are in the past.

The BRs describe what to do at time of issuance. They have always done just
that.


> I would just like to get confirmation that this requirement does not mean
> that all subordinate CA certificates which were issued prior to the
> effective date and are currently non-compliant shall be revoked.
>

You'll need to work with your root program. Mozilla's effective date is
just as it is stated, and Mozilla's policy says you are supposed to revoke
if you violate a root program requirement, as per
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/


If you've misissued according to another program, which may have an earlier
date, you should work with that root program to figure out the expectations
for how to handle root program violations.


Re: Filtering on problem reporting e-mail addresses

2020-07-26 Thread Ryan Sleevi via dev-security-policy
Hey Matt,

Sorry this e-mail went unanswered. I was updating my mail filters today and
noticed it had been missed.

On Thu, May 7, 2020 at 8:03 AM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This has happened twice now, with two different CAs, so I'm going to raise
> it as a general issue.
>
> I've had one CA reject e-mails because the HELO name wasn't to their liking
> (which I was able to fix).  The other, which has just started happening
> now, is utterly inscrutable -- "550 Administrative prohibition - envelope
> blocked".  Given that the envelope sender and recipient hasn't changed from
> the numerous other problem reports I've sent over the past month or so, I'm
> really at a loss as to how to proceed.
>
> Questions that arise in my mind:
>
> 1. To what extent is it reasonable for a CA to reject properly-formed
> e-mails to the CPS-published e-mail address for certificate problem
> reporting?
>

I don't know if I have a good answer for this. I think from your own
reporting, we've now seen a variety of responses from CAs - e.g. "antivirus
that prevents e-mailing CSRs" is still a ... rather surprising behaviour
that I don't know we've got a good hard-and-fast rule for.

As you know, to date, Mozilla has so far declined to make e-mail reporting
mandatory for CAs to support, precisely because of the complexities of
specifying a policy that is loophole free. At the same time, we're seeing a
telling trend of CAs restricting submissions for certain problem reports to
only particular mechanisms, and I think that's equally troubling: CAs are
placing barriers for reporters that, I believe, straddle the border if not
outright cross it of unreasonable burden.

Which I guess is to say "we'll take it on a case by case basis", and look
for opportunities for improved policies based on the hurdles that problem
reporters face.


> 2. What is a reasonable response from a problem reporter to a rejected
> problem report e-mail?
>

I would say raising it here and/or filing a CA incident.

Ultimately, both Root Program policies and the Baseline Requirements
themselves are improved based on experience and challenges faced. Knowing
the challenges faced by a problem reporter help us improve the policies.


> 3. In what ways are the required timelines for revocation impacted by the
> rejection of a properly-formed certificate problem report?
>

This is part of the challenge here. If we applied the logic a number of CAs
did during the CA/Browser Forum voting discussions (which led to Bylaws
reform), we know that several CAs view the act of contacting a mail server,
even if it does not accept the message, as sufficient notification. Of
course, the Bylaws were clarified to take a less extreme position, of
confirmed receipt/acceptance.

I would say the answer is that this is case-by-case. CAs that place
burdensome barriers to receive reports of key compromise and/or
misissuance are, unquestionably, not acting in the public interest.
However, there are legitimate challenges. I think the best answer is
treating these as incident reports, and allowing the CA to provide their
explanation and rationale, and either treating it as a BR-violation
incident or closing as WontFix/Invalid, depending on the depth and quality
of the CA's report and the steps they take in response.


Re: Verifying Auditor Qualifications

2020-07-20 Thread Ryan Sleevi via dev-security-policy
On Mon, Jul 20, 2020 at 10:27 AM Arvid Vermote via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> ACAB'c is a group of a few eIDAS CABs working together for reasons; they
> do not represent all eIDAS CABs, nor do they have any recognized or
> official function within the eIDAS ecosystem.


> Can the ACAB'c member list be relied upon as being accurate and providing
> correct and latest information on the accreditation status of member CABs?
> It’s a manual list maintained based on membership applications and their
> acceptance. Isn't the only current accurate source of accredited eIDAS CAB
> the 20+ governmental NABs of participating EU countries that are designated
> to accredit and supervise eIDAS CAB?
>
> Without any visible added value or clear and transparent insights on what
> supervisory function they perform within the context of the WebPKI
> ecosystem (filtering which eIDAS CAB and reports are
> acceptable/qualitiative?), why would a specific subset of eIDAS CAB be
> promoted over other eIDAS CAB? Parties that are interested in becoming a
> WebPKI CA or maintaining that status often go look at root program
> requirements as a first source to understand what needs to be done,
> including what audit attestations that need to be obtained and which
> parties can provide these.
>
> I have difficulties understanding what current reason there is to refer to
> the ACAB'c and why the "simplified check" seems to suggest only ACAB'c
> member audit reports are accepted.


So, I think you make some great points, but I also think this highlights
some confusion that might be worth addressing.

Why would their role within the eIDAS ecosystem have any bearing? We know
that the eIDAS ecosystem is an alternative trust framework for solving a
different set of problems than browsers are trying to solve. Which is
totally OK, but like... it's a bit like saying "The ACAB'c doesn't have a
recognized or official function within the Adobe Document Signing Trust
Framework" and... so?

I think it's reasonable to imagine that if ACAB'c were to provide a similar
model as WebTrust, there might be value in deferring to them *instead of*
the eIDAS ecosystem. I do believe it's entirely a mistake to defer to the
eIDAS ecosystem at all, presently, for the same reason I think it'd be a
mistake to defer to, say, the banking industry's use of ISO 21188 or the
ASC X.9 work. They're separate PKIs with separate goals, and it's totally
OK for eIDAS to do their thing. If ACAB'c were, as you suggest, to provide
filtering of reports, provide normative guidance and enforcement in the
same way that, say, AICPA or CPA Canada provide with respect to their
practitioner standards, and similarly provide a level of assurance similar
to WebTrust licensure, it could make sense.

But I think your criticisms here of ACAB'c are equally criticisms that
could be lobbed at WebTrust, since it's "just" a brand by CPA Canada, and
in theory, "any" auditor participating under IFAC 'could' also provide
audits. That's no different than the discussion here.

However, I do think you're right, whether intentional or not, in pointing
out that the eIDAS ecosystem lacks many of the essential properties that a
_browser_ trust framework relies upon, and that's why I again suggest that
the degree to which ETSI-based audits are accepted should be strongly
curtailed, if not outright prevented. There are too many gaps in the
professional standards, both within ETSI and within the underlying ISO
standards that ETSI builds upon, to provide sufficient assurance. Pivoting
to browser-initiated and/or browser-contracted audits is perhaps the single
most impactful move that could be made with audits. Second to that is the
WebTrust TF's Detailed Control Reports, which I believe should be required
of all CAs.

I don't believe ETSI is even capable of producing a remotely comparable
equivalent. This is because ETSI is not a group of CABs with professional
expertise, but an otherwise 'neutral' (albeit pay-to-play) SDO. Browsers'
notable absence within ETSI (in part, due to the pay-for-play nature) means
that it's unlikely ETSI will produce a standard that reflects browsers'
needs, but even if browsers were to participate, it would be the
same amount of work as if they just produced such a document themselves
with the CABs. At least, in this regard, ACAB'c has a chance of success in
producing something comparable: by being CABs with a vested interest in
producing a service useful to and relied upon by browsers, they may choose
to engage with browsers and work to provide a useful audit and reporting
framework.

But, again, the eIDAS ecosystem largely has no bearing on this. Even the
accreditation, against the ETSI ESI standards used to fulfill the eIDAS
Regulation, doesn't really provide much assurance, as Supervisory Bodies
are currently seeing.

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-16 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 16, 2020 at 12:45 PM Oscar Conesa via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> We should reposition the debate again, because a false reality is being
> assumed.
>
> Some people are assuming that this is the real situation: "A security
> incident has occurred due to the negligence of certain CAs, and this
> security incident has got worse by the CAs' refusal to apply the correct
> measures to resolve the incident, putting the entire community at risk".
>
> But this description of the situation is not true at all, for many
> reasons and on many levels.
>
> By order:
>
> a) There is no security incident, since no key has been compromised and
> there is no suspicion that a key compromise has occurred.


> b) There has been no negligence, errors, or suspicious or malicious
> behavior by any CA in the issuance of these certificates. All affected
> certificates have been issued following good practices and trying to
> comply with all the applicable security standards and regulations.
>
> c) The only relevant event that occurred recently (July 1) is that a
> person, acting unilaterally and by surprise, reported 14 incidents in
> the Mozilla Root Program about 157 ICA certificates misissued
> (certificates issued with the EKU "OCSPSigning" without the extension
> "OCSP-nocheck"). The wording of the incidents assumes that "his
> interpretation of the standards is the correct one" and that "his
> proposed solution is the only valid one".
>
> d) This is not a new incident since most of the certificates affected
> were issued years ago. The existence of this ambiguity in the standards
> was made public 1 year ago, on September 4, 2019, but the debate was
> closed without conclusion 6 days later
> (
> https://groups.google.com/g/mozilla.dev.security.policy/c/XQd3rNF4yOo/m/bXYjt1mZAwAJ
> ).
>
> e) The reason why these 157 ICA certificates were issued with EKU
> "OCSPSigning" was because at least 14 CAs understood that it was a
> requirement of the standard. They interpreted that they must include
> this EKU to simultaneously implement three requirements: "CA Technical
> Restriction", "OCSP Delegate Responders" and "EKU Chaining".
> Unfortunately, on this point the standard is ambiguous and poorly written.
>
> f) Last year, a precedent was created with the management of the
> "Insufficient Serial Number Entropy" incident. Many CAs were forced to
> do a massive certificate revocation order to comply with the standards
> "literally", even if these standards are poorly written or do not
> reflect the real intention of the editors.
>
> g) None of the 157 ICA certificates affected were generated with the
> intention of being used as "Delegated OCSP Responders". None of the 157
> certificates has been used at any time to sign OCSP responses.
> Additionally, most of them do not include the KU Digital Signature, so
> according to the standard, the OCSP responses that could be generated
> would not be valid.
>
> h) As a consequence of these discussions, a Security Bug has been
> detected that affects multiple implementations of PKI clients (a CVE
> code should be assigned). The security bug occurs when a client PKI
> application accepts digitally signed OCSP responses for a certificate
> that does not have the "DigitalSignature" KeyUsage active, as required
> by the standard.
>
> i) There is no procedure in any standard within Mozilla Policy that
> specifies anything about "key destruction" or "reuse of uncompromised
> keys".
>
> j) As long as the CAs maintain sole control of the affected keys, there
> is no security problem. For a real security incident to exist, all of the
> following would have to
> happen simultaneously: (i) the keys were compromised, (ii) they were
> used to generate malicious OCSP responses, (iii) some client application
> accepted these OCSP responses as valid and (iv) the revocation process
> of the affected certificate was ineffective because the attacker was
> able to generate fraudulent OCSP responses about its own status. Still,
> in that case there is a quick and easy solution: remove the affected CA
> Root from the list of trusted CAs.
>
> k) The real risk of this situation is assumed by the affected CAs and
> not by the end users, since in the worst case, the ICA key compromise
> would imply the revocation of the CA Root.
>
>

Hi Oscar,

Unfortunately, there are a number of factual errors here that I think
greatly call into question the ability of Firmaprofesional to work with
users and relying parties to understand the risks and to take them
seriously.

I would greatly appreciate it if Firmaprofesional shared their official
response on https://bugzilla.mozilla.org/show_bug.cgi?id=1649943 within the
next 24 hours, so that we can avoid any further delays in taking the
appropriate steps to ensure users are protected and any risks are
appropriately mitigated. If this message is meant to be your official
response, please feel free to paste it there.

Unfortunately, I don't think discussing 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-15 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 15, 2020 at 12:30 PM Chema López via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> So, an ICA or SCA cert. without keyUsage set to digitalSignature is not an
> OCSP Responder. Full stop.


False. Full stop.

As I mentioned in my reply to Corey, I think it's disastrous for trust for
a CA to make this response. I realize you qualified this as being in a
personal capacity, but I want to highlight that you also seem to be making
this argument in a professional capacity, as evidenced by
https://bugzilla.mozilla.org/show_bug.cgi?id=1649943 . My hope is that, in
a professional capacity, you'll respond to that issue as has been requested.

Absent a further update, it may be necessary and appropriate to have a
discussion as to whether continued trust is warranted, because there's a
lack of urgency, awareness, transparency, and responsiveness to this issue.

I appreciate you quoting replies from the thread, but you also seem to have
cherry-picked replies that demonstrate an ignorance or lack of awareness
about the actual PKI ecosystem.

* macOS does not require the digitalSignature bit for validating OCSP
responses
* OpenSSL does not require the digitalSignature bit for validating OCSP
responses
* GnuTLS does not require the digitalSignature bit for validating OCSP
responses
* Mozilla NSS does not require the digitalSignature bit for validating OCSP
responses
* As best I can tell, Microsoft CryptoAPI does not require the
digitalSignature bit for validating OCSP responses (instead relying on
pkix-nocheck)
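
To make that concrete: the effective check in most of these implementations
boils down to "does the EKU contain id-kp-OCSPSigning", with the KU bits
never consulted. A minimal sketch (Python, using the pyca/cryptography
library; the helper name is mine, not any client's actual code):

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def accepted_as_delegated_responder(cert: x509.Certificate) -> bool:
        # Mirrors the observed client behavior described above: the
        # id-kp-OCSPSigning EKU alone qualifies the certificate as a
        # delegated responder; the digitalSignature KU bit is never checked.
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
        except x509.ExtensionNotFound:
            return False
        return ExtendedKeyUsageOID.OCSP_SIGNING in eku.value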

Mozilla code explicitly stated and referenced the fact that requiring the
digitalSignature bit was seen not only as unnecessary, but as harmful for
interoperability, due to CAs' issuance practices.

You cannot pretend these are not OCSP responders simply because you haven't
issued OCSP responses (intent). They are, for every purpose, OCSP
responders that are misissued. And you need to treat this with the urgency,
seriousness, gravity, and trustworthiness required.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-12 Thread Ryan Sleevi via dev-security-policy
On Sun, Jul 12, 2020 at 4:19 PM Oscar Conesa via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> To obtain this confidence, CAs must comply with all the requirements
> that are imposed on them in the form of Policies, Norms, Standards and
> Audits that are decided on an OBJECTIVE basis for all CAs. The
> fulfillment of all these requirements must be NECESSARY, but also
> SUFFICIENT to stay in the Root Program.
>

As Matt Palmer points out, that's not consistent with how any root program
behaves. This is not unique to Mozilla, as you find similar text at
Microsoft and Google, and I'm sure you'd find similar text with Apple were
they to say anything.

Mozilla's process is transparent, in that it seeks to weigh public
information, but it's inherently subjective: this can easily be seen by the
CP/CPS reviews, which are, by necessity, a subjective evaluation of risk
criteria based on documentation. You can see this in the CAs that have been
accepted, and the applications that have been rejected, and the CAs that
have been removed and those that have not been. Relying parties act on the
information available to them, including how well the CA handles and
responds to incidents.

Some CAs may want to assume a leadership role in the sector and
> unilaterally assume more additional strict security controls. That is
> totally legitimate. But it is also legitimate for other CAs to assume a
> secondary role and limit ourselves to complying with all the
> requirements of the Root Program. You cannot remove a CA from a Root
> Program for not meeting fully SUBJECTIVE additional requirements.
>

CAs have been, can be, and will continue to be. I think we should be
precise here: we're talking about an incident response. Were things as
objective as you present, then every CA who has misissued such a
certificate would be at immediate risk of total and complete distrust. We
know that's not a desirable outcome, nor a likely one, so we recognize that
there is, in fact, a shade of gray here for judgement.

That judgement is whether or not the CA is taking the issue seriously, and
acting to assume a leadership role. CAs that fail to do so are CAs that
pose risk, and it may be that the risk they pose is unacceptable. Key
destruction is one way to reassure relying parties that the risk is not
possible. I think a CA that asks for it to be taken on faith,
indefinitely, is a CA that fundamentally misunderstands the purpose and
goals of a root program.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-11 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 11, 2020 at 1:18 PM Oscar Conesa via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> f) For CAs that DO have sole control of the keys: There is no reason to
> doubt the CA's ability to continue to maintain the security of these
> keys, so the CA could reuse the keys by reissuing the certificate with
> the same keys. If there are doubts about the ability of a CA to protect
> its own critical keys, that CA cannot be considered "trusted" in any way.
>

While Filippo has pointed out the logical inconsistency of f and g, I do
want to establish here the problem with f, as written.

For CAs that DO have sole control of the keys: There is no reason to
*trust* the CA's ability to continue to maintain the security of the keys.

I want to be clear here: CAs are not trusted by default. The existence of a
CA, within a Root Program, is not a blanket admission of trust in the CA.
We can see this through elements such as annual audits, and we can also see
this in the fact that, for the most part, CAs have largely not been removed
on the basis of individual incident reports.

CAs seem to assume that they're trusted until they prove otherwise, when
rather, the opposite is true: we constantly view CAs through the lens of
distrusting them, and it is only by the CA's action and evidence that we
hold off on removing trust in them. Do they follow all of the requirements?
Do they disclose sufficient detail in how they operate? Do they maintain
annual audits with independent evaluation? Do they handle incident reports
thoughtfully and thoroughly, or do they dismiss or minimize them?

As it comes to this specific issue: there is zero reason to trust that a
CA's key, intended for issuing intermediates, is sufficiently protected
from being able to issue OCSP responses. As you point out in g), that's not
a thing some CAs have expected to need to do, so why would or should they?
The CA needs to provide sufficient demonstration of evidence that this has
not, can not, and will not happen. And even then, it's merely externalizing
risk: the community has to constantly be evaluating that evidence in
deciding whether to continue. That's why any failure to revoke, or any
revocation by rotating EKUs but without rolling keys, is fundamentally
insufficient.

The question is not "Do these keys need to be destroyed", but rather, "when
do these keys need to be destroyed" - and CAs need to come up with
meaningful plans to get there. I would consider it unacceptable if that
process lasted a year, and highly questionable if it lasted 9 months,
because these all rely on clients, globally, accepting the risk that a
control will fail. If a CA is going beyond the 7 days required by the BRs -
which, to be clear, it would seem the majority are - they absolutely need
to come up with a plan to remove this eventual risk, and to detail the
logic behind their timeline: when, how, and why they chose it.

As I said, there's no reason to trust the CA here: there are plenty of ways
the assumed controls are insufficient. The CA needs to demonstrate why.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-10 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 10, 2020 at 12:01 PM ccampetto--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Wouldn't it be enough to check that OCSP responses are signed with a
> certificate which presents the (mandatory, by BR) id-pkix-ocsp-nocheck?
> I've not checked, but I don't think that subordinate CA certificates have
> that extension


You're describing a behaviour change to all clients, in order to work
around the CA not following the profile.

This is a common response to many misissuance events: if the client
software does not enforce that CAs actually do what they say, then it's not
really a rule. Or, alternatively, that the only rules should be what
clients enforce. We see this come up from time to time, e.g. certificate
lifetimes, but this is a way of externalizing the costs/risks onto clients.

None of this changes what clients, in the field, today do. And if the
problem was caused by a CA, isn't it reasonable to expect the problem to be
fixed by the CA?
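
For reference, the check being proposed would look something like the
sketch below on the client side (again pyca/cryptography; the helper name
is mine). The point is precisely that deployed clients don't perform it,
so it cannot retroactively mitigate the misissuance:

    from cryptography import x509

    def responder_has_ocsp_nocheck(cert: x509.Certificate) -> bool:
        # True if the delegated responder certificate carries the
        # (BR-mandated) id-pkix-ocsp-nocheck extension; a client-side
        # guard that today's clients do not implement.
        try:
            cert.extensions.get_extension_for_class(x509.OCSPNoCheck)
            return True
        except x509.ExtensionNotFound:
            return False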
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Ryan Sleevi via dev-security-policy
>
> Now that I have proven beyond a shadow of a doubt that we are talking
> about phishing, feel free to debate the merits of my points raised in my
> original email.
>

Thanks Paul. I think you're the only person I've encountered who refers to
key compromise as phishing, but I don't think we'll make much progress when
words have no meaning and phishing is used to describe everything from
"monster in the middle" to "cache poisoning".

Since, to use the same analogy, everything from speed limits and signs to
red light cameras to speed traps to roundabouts is being called "speed
bumps", it's not worth engaging further.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Ryan Sleevi via dev-security-policy
I’m not sure how that answered my question? Nothing about the post seems to
be about phishing, which is not surprising, since certificates have nothing
to do with phishing, but your response just talks more about phishing.

It seems you may be misinterpreting “security risks” as “phishing”, since
you state they’re interchangeable. Just like Firefox’s sandbox isn’t about
phishing, nor is the same-origin policy about phishing, nor is Rust’s
memory safety about phishing, it seems like certificate security is also
completely unrelated to phishing, and the “security risks” unrelated to
phishing.

On Thu, Jul 9, 2020 at 2:48 PM Paul Walsh  wrote:

> Good question. And I can see why you might ask that question.
>

> The community lead of PhishTank mistakenly said that submissions should
> only be made for URLs that are used to steal credentials. This helps to
> demonstrate a misconception. While this might have been ok in the past,
> it’s not today.
>
> Phishing is a social engineering technique, used to trick consumers into
> trusting URLs / websites so they can do bad things - including but not
> limited to, man-in-the-middle attacks. Mozilla references this attack
> vector as one of the main reasons for wanting to reduce the life of a cert.
> They didn’t call it “phishing” but that’s precisely what it is.
>
> We can remove all of my references to “phishing” and replace it with
> “security risks” or “social engineering” if it makes this conversation a
> little easier.
>
> And, according to every single security company in the world that focuses
> on this problem, certificates are absolutely used by bad actors - if only
> to make sure they don’t see a “Not Secure” warning.
>
> I’m not talking about EV or identity related info here as it’s not
> related. I’m talking about the risk of a bad actor caring to use a cert
> that was issued to someone else when all they have to do is get a new one
> for free.
>
> I don’t see the risk that some people see. Hoping to be corrected because
> the alternative is that browsers are about to make life harder and more
> expensive for website owners with little to no upside - outside that of a
> researcher's lab.
>
> Warmest regards,
> Paul
>
>
> On Jul 9, 2020, at 11:26 AM, Ryan Sleevi  wrote:
>
>
>
> On Thu, Jul 9, 2020 at 1:04 PM Paul Walsh via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>>
>> According to Google, spear phishing
>
>
> I didn't see phishing mentioned in Mozilla's post, which is unsurprising,
> since certificates have nothing to do with phishing. Did I overlook
> something saying it was about phishing?
>
> It seems reasonable to read it as it was written, which doesn't mention
> phishing, which isn't surprising, because certificates have never been able
> to address phishing.
>
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 9, 2020 at 1:04 PM Paul Walsh via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> According to Google, spear phishing


I didn't see phishing mentioned in Mozilla's post, which is unsurprising,
since certificates have nothing to do with phishing. Did I overlook
something saying it was about phishing?

It seems reasonable to read it as it was written, which doesn't mention
phishing, which isn't surprising, because certificates have never been able
to address phishing.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-07 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 7, 2020 at 10:36 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Mon, Jul 06, 2020 at 10:53:50AM -0700, zxzxzx9--- via
> dev-security-policy wrote:
> > Can't the affected CAs decide on their own whether to destroy the
> > intermediate CA private key now, or in case the affected intermediate CA
> > private key is later compromised, revoke the root CA instead?
>
> No, because there's no reason to believe that a CA would follow through on
> their decision, and rapid removal of trust anchors (which is what "revoke
> the root CA" means in practice) has all sorts of unpleasant consequences
> anyway.
>

Er, not quite?

I mean, yes, removing the root is absolutely the final answer, even if
waiting until something "demonstrably" bad happens.

The question is simply whether or not user agents will accept the risk of
needing to remove the root suddenly, and with significant (e.g. active)
attack, or whether they would, as I suggest, take steps to remove the root
beforehand, to mitigate the risk. The cost of issuance plus the cost of
revocation are a fixed cost: it's either pay now or pay later. And it seems
like if one needs to contemplate revoking roots, it's better to do it
sooner, than wait for it to be an inconvenient or inopportune time. This is
what I meant earlier, when I said a solution that tries to wait until the
'last possible minute' is just shifting the cost of misissuance onto
RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
shift costs onto the ecosystem like that seems like it's not a CA that can
be trusted to, well, be trustworthy.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CPS URLs

2020-07-06 Thread Ryan Sleevi via dev-security-policy
On Mon, Jul 6, 2020 at 1:22 PM Matthias van de Meent via
dev-security-policy  wrote:

> Hi,
>
> As per BR v1.7.0, section 7.1.2.3, a Subscriber Certificate MAY
> include `certificatePolicies:policyQualifiers:qualifier:cPSuri`, which
> must then contain:
>
> > HTTP URL for the Subordinate CA's Certification Practice Statement,
> Relying Party Agreement or other pointer to online information provided by
> the CA
>
> (this section has existed as such since at least BR v1.3.0 as such,
> and can be traced back all the way to BR v1.0)
>
> I notice that a lot of Subscriber Certificates contain https-based
> URLs (e.g. PKIOverheid/KPN, Sectigo, DigiCert), and that other
> http-based urls redirect directly to an https-based website (e.g.
> LetsEncrypt, GoDaddy).
>
> As I am not part of the CA/B Forum, and could not find open (draft)
> ballots in the cabforum/docs repository about updating this section;
> I'll ask this:
>
> 1.) What was the reasoning behind not (also / specifically) allowing
> an HTTPS URL? Was there specific reasoning?
>

Nope, no specific reasoning. The ambiguity here is whether it's resources
dereferenced via an HTTP protocol (which would include HTTP over TLS) or
whether it's HTTP schemed resources (which would not). The meaningful
distinction was to exclude other forms of schemes/protocols, such as LDAP
(inc. LDAPS) and FTP (inc. FTPS).
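
(For anyone wanting to survey the existing corpus on this point: the
cPSuri surfaces as a plain string among the policy qualifiers, so a rough
scheme census is straightforward. A sketch, assuming pyca/cryptography;
the helper is hypothetical:)

    from urllib.parse import urlsplit
    from cryptography import x509

    def cps_uri_schemes(cert: x509.Certificate) -> list:
        # Collect the URL scheme of every cPSuri found in the
        # certificatePolicies extension. UserNotice qualifiers are
        # objects; cPSuri qualifiers are plain strings.
        schemes = []
        try:
            ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
        except x509.ExtensionNotFound:
            return schemes
        for policy in ext.value:
            for qualifier in policy.policy_qualifiers or []:
                if isinstance(qualifier, str):
                    schemes.append(urlsplit(qualifier).scheme.lower())
        return schemes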


> 2.) Should this be fixed, or should the batch of certificates with an
> http `certificatePolicies:policyQualifiers:qualifier:cPSuri` be
> revoked as misissued?
>

Yeah, this is something that was already flagged as part of the validation
WG work to clean up certificate profiles, in that there are other forms of
ambiguity here. For example, if one includes an HTTP(S) URL, can they also
include one of the undesirable schemes? How many CPS URIs can they include?
etc.


> 3.) If HTTPS is disallowed for a good reason, then should redirecting
> to HTTPS also be disallowed?
>
> Note: In other sections (e.g. 3.2.2.4.18) https is specifically called
> out as an allowed protocol.
>
> My personal answer regarding 2. is 'yes, this should be fixed in the
> BR', unless there is solid reasoning behind 1.
>
>
> With regards,
>
> Matthias
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Ryan Sleevi via dev-security-policy
On Mon, Jul 6, 2020 at 3:38 AM Dimitris Zacharopoulos 
wrote:

> On 6/7/2020 9:47 a.m., Ryan Sleevi wrote:
>
> I can understand wanting to wait to see what others do first, but that’s
> not leadership.
>
>
> This is a security community, and we are expected to see and learn from
> others, which is just as good as proposing new things. I'm not sure what
> you mean by "leadership". Leadership for whom?
>

Leadership as a CA affected by this, taking steps to follow through on
their commitments and operate beyond reproach, suspicion, or doubt.

As a CA, the business is built on trust, and that is the most essential
asset. Trust takes years to build and seconds to lose. Incidents, beyond
being an opportunity to share lessons learned and mitigations applied,
provide an opportunity for a CA to earn trust (by taking steps that are
disadvantageous for their short-term interests but which prioritize being
irreproachable) or lose trust (by taking steps that appear to minimize or
dismiss concerns or fail to take appropriate action).

Tim’s remarks on behalf of DigiCert, if followed through on, stand in stark
contrast to remarks by others. And that’s encouraging, in that it seems
that past incidents at DigiCert have given rise to a stronger focus on
security and compliance than may have existed there in the past, and about
which there were concerns during the Symantec PKI acquisition/integration.
Ostensibly, that is an example of leadership: making difficult choices to
prioritize relying parties over subscribers, and to focus on removing
any/all doubt.

You mean when I dismissed this line of argument? :)
>
>
> Yep. You have dismissed it but others may have not. If no other voices are
> raised, then your argument prevails :)
>

I mean, it’s not a popularity contest :)

It’s a question of what information is available to the folks ultimately
deciding things. If there is information being overlooked, if there are
facts worth considering, this is the time to bring it up. Conclusions will
ultimately be decided by those trusting these certificates, but that’s why
it’s important to introduce any new information that may have been
overlooked.

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Ryan Sleevi via dev-security-policy
entity Certificates*, not CA Certificates.
> Should RFC 6960 need to be read in conjunction with RFC 5280 and not on
> its own?


Even when you read them together, nothing prohibits them from appearing on
CA certificates, nor prohibits their semantic interpretation. As mentioned
above, this is no different than the hypothetical misissued “google.com” CA
cert: a TLS EKU, a KU of only CertSign/CRLSign, and a SAN of “google.com”.
Is that misissued?

The argument you’re applying here would say it’s not misissued: after all,
it’s a CA cert, the KU means it “cannot” be used for TLS, and, using your
logic, the BRs don’t explicitly prohibit it. Yet that very certificate can
be used to authenticate “google.com”, and it shouldn’t matter how many
certificates that hypothetical CA has issued, the security risk is there.
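
To spell the hypothetical out, such a certificate is trivially expressible
(a sketch with pyca/cryptography, purely illustrative; this is the
thought-experiment profile, not a real issuance):

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Hypothetical Sub CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=90))
        # A CA certificate...
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        # ...whose KU is "only" certificate/CRL signing...
        .add_extension(
            x509.KeyUsage(
                digital_signature=False, content_commitment=False,
                key_encipherment=False, data_encipherment=False,
                key_agreement=False, key_cert_sign=True, crl_sign=True,
                encipher_only=False, decipher_only=False,
            ),
            critical=True,
        )
        # ...yet carrying a TLS EKU and a SAN for "google.com".
        .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False)
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"google.com")]), critical=False)
        .sign(key, hashes.SHA256())
    )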

Similarly, it might be argued that you know it hasn’t been used to attack
Google because, hey, it’s in an HSM, at the CA, so what’s the big deal? The
big deal there, as with here, is the potential for mischief and the mere
existence of that cert, for which no control prohibits the CA from MITMing
*using* their HSM.

If you can understand that, then you can understand the risk, and why the
burden falls to the CA to demonstrate.

I support analyzing the practical implications on existing Browser
> software to check if existing Web Browsers verify and accept an OCSP
> response signed by a delegated CA Certificate (with the
> id-kp-OCSPSigning EKU) on behalf of its parent. We already know the
> answer for Firefox.


Note: NSS is vulnerable, which implies Thunderbird (which, as far as I can
tell, does not use Mozilla::pkix) is also vulnerable.

Do we know whether Apple, Chromium and Microsoft web
> browsers treat OCSP responses signed from delegated CA Certificates
> (that include the id-kp-OCSPSigning EKU) on behalf of their parent
> RootCA, as valid?


Yes

Some tests were performed by Paul van Brouwershaven
> https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d.


As mentioned, those tests weren’t correct. I’ve provided sample test cases
to several other browser vendors, and heard back or demonstrated that
they’re vulnerable. As are the majority of open-source TLS libraries with
support for OCSP.

This
> does not dismiss any of the previous statements of putting additional
> burden on clients or the security concerns, it's just to assess this
> particular incident with particular popular web browsers. This analysis
> could be taken into account, along with other parameters (like existing
> controls) when deciding timelines for mitigation, and could also be used
> to assess the practical security issues/impact. Until this analysis is
> done, we must all assume that the possible attack that was described by
> Ryan Sleevi can actually succeed with Apple, Chrome and Microsoft Web
> browsers.


You should assume this, regardless, with any incident report. That is,
non-compliance arguments on the basis of “oh, but it shouldn’t work”
aren't really a good look. The CA needs to do their work, and if they know
they don’t plan to revoke on time, make sure they publish their work for
others to check.

It’s not hard to whip up a synthetic response for testing: I know, because
I’ve done it (more aptly, constructed 72 possible permutations of EKU, KU,
and basicConstraints) and tested those against popular libraries or traced
through their handling of these.
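
(For the curious: the permutation count falls out naturally, e.g. 3 EKU
variants x 8 KU variants x 3 basicConstraints variants = 72. A sketch of
generating such a matrix; the specific axes below are my guesses, not a
record of the actual test corpus:)

    from itertools import product

    EKU_OPTIONS = ["absent", "ocspSigning", "ocspSigning+serverAuth"]
    KU_OPTIONS = ["absent", "none", "digitalSignature", "keyCertSign",
                  "crlSign", "digitalSignature+keyCertSign",
                  "digitalSignature+crlSign", "keyCertSign+crlSign"]
    BC_OPTIONS = ["absent", "ca=false", "ca=true"]

    # Enumerate every combination; each tuple describes one synthetic
    # responder certificate to construct and feed to a validator.
    permutations = list(product(EKU_OPTIONS, KU_OPTIONS, BC_OPTIONS))
    assert len(permutations) == 72
    for eku, ku, bc in permutations:
        print("EKU=%-24s KU=%-32s BC=%s" % (eku, ku, bc))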

Mozilla’s only protection seems to be the CA bit check, not the KU argument
that Corey made, and, as noted when it was introduced, it doesn’t seem that
this was a prohibited configuration or a required check in any specification.

This is why it is important for CAs to reliably demonstrate that the key
cannot be misused, whether revoking at 7 days or longer, and why any
proposal for “longer” needs to come with some objectively verifiable plan
for demonstrating it wasn’t / isn’t misused, and which does something more
than “trust us (and/or our auditor)”. I gave one such proposal to WISeKey,
and HARICA is free to make its own proposal based on the facts specific to
HARICA.

But, again, I want to stress when it comes to audits: they are not magic
pixie dust we sprinkle on and say “voilà, we have created assurance”. It’s
understandable for CAs to misunderstand audits, particularly if they’re the
ones producing them and don’t consume any. Understanding the limitations
they provide is, unfortunately, necessary for browsers, and my remarks here
and to WISeKey reflect knowing where, and how, a CA could exploit audits to
provide the semblance of assurance while causing trouble. ETSI and WebTrust
provide different assurance levels and different fundamental approaches, so
what works for WebTrust may not (almost certainly, will not) work for ETSI.
Unfortunately, when an incident relies on audits as part of the
justification for delays, this means that those limits and differences have
to be 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Ryan Sleevi via dev-security-policy
On Sun, Jul 5, 2020 at 5:30 PM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > From: dev-security-policy 
> On Behalf Of Matt Palmer via dev-security-policy
> > At the limits, I agree with you.  However, to whatever degree that there
> is complaining to be done, it should be directed at the CA(s)
> > which sold a product that, it is now clear, was not fit for whatever
> purpose it has been put to, and not at Mozilla.
>
> Let me quote from the NSS website of Mozilla (
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Overview):
>
>   If you want to add support for SSL, S/MIME, or other Internet security
> standards to your application, you can use Network Security Services (NSS)
> to implement
>   all your security features. NSS provides a complete open-source
> implementation of the crypto libraries used by AOL, Red Hat, Google, and
> other companies in a
>   variety of products, including the following:
>   * Mozilla products, including Firefox, Thunderbird, SeaMonkey, and
> Firefox OS.
>   * [and a long list of additional reference applications]
>
> Probably it would be good if someone from Mozilla team steps in here, but
> S/MIME _is_ an advertised use-case for NSS. And the Mozilla website says
> nowhere, that the demands and rules of WebPKI / CA/B-Forum overrule all
> other demands. It is simply not to be expected by a consumer of S/MIME
> certificates that they become invalid within 7 days just because the BRGs
> for TLS certificates are requiring it. This feels close to intrusive
> behavior of the WebPKI community.
>

Mozilla already places requirements on S/MIME revocation:
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md#62-smime
- The only difference is that for Subscriber certificates, a timeline is
not yet attached.

The problem is that this is no different than issuing a TLS-capable S/MIME
issuing CA; as we know from past incidents, many CAs did exactly that, and
had to revoke those CAs due to the lack of appropriate audits.
Your Root is subject to the TLS policies, because that Root was enabled for
TLS, and so everything the Root issues is, to some extent, subject to those
policies.

The solution here for CAs has long been clear: maintaining separate
hierarchies, from the root onward, for separate purposes, if they
absolutely want to avoid any cohabitation of responsibilities. They *can*
continue on the current path, but they have to plan for the most
restrictive policy applying throughout that hierarchy and design
accordingly. Continuing to support other use cases from a common root -
e.g. TLS client authentication, document signing, timestamping, etc -
unnecessarily introduces additional risk, and for limited concrete benefit,
either for users or for the CA. Having to maintain two separate "root"
certificates in a root store, one for each purpose, is no more complex than
having to maintain a single root trusted for two purposes; and
operationally, for the CA, it is vastly less complex.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 10:42 PM Peter Bowen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As several others have indicated, WebPKI today is effectively a subset
> of the more generic shared PKI. It is beyond time to fork the WebPKI
> from the general PKI and strongly consider making WebPKI-only CAs that
> are subordinate to the broader PKI; these WebPKI-only CAs can be
> carried by default in public web browsers and operating systems, while
> the broader general PKI roots can be added locally (using centrally
> managed policies or local configuration) by those users who want a
> superset of the WebPKI.
>

+1.  This is the only outcome that, long term, balances the tradeoffs
appropriately.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:41 PM Peter Gutmann 
wrote:

> Ryan Sleevi  writes:
>
> >And they are accommodated - by using something other than the Web PKI.
>
> That's the HTTP/2 "let them eat cake" response again.  For all intents and
> purposes, PKI *is* the Web PKI.  If it wasn't, people wouldn't be worrying
> about having to reissue/replace certificates that will never be used in a
> web context because of some Web PKI requirement that doesn't apply to them.
>

Thanks Peter, but I fail to see how you're making your point.

That "PKI *is* the Web PKI" is precisely the problem to be solved. That's
not a desirable outcome, and exactly the kind of thing we'd expect to see
as part of a CA transition.

PKI is a technology, much like HTTP/2 is a protocol. Unlike your example
of HTTP/2 not being considerate of SCADA devices, PKI is an abstract
technology fully capable of addressing SCADA's needs.
distinction is that, by design and rather intentionally, it doesn't mean
that the billions of devices out there, in their default configuration, can
or should expect to talk to SCADA servers. I would hope you recognize why
that's undesirable, just like it would be if your phone were to ship with a
SCADA client. At the end of the day, this is something that should require
a degree of intentionality. Whether it's HL7 or SCADA, these are limited
use cases that aren't part of a generic and interoperable Web experience,
and it's not at all unreasonable to think they may require additional,
explicit configuration to support.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:21 PM Peter Gutmann 
wrote:

> So the problem isn't "everyone should do what the Web PKI wants, no matter
> how inappropriate it is in their environment", it's "CAs (and protocol
> designers) need to acknowledge that something other than the web exists
> and accommodate it".


And they are accommodated - by using something other than the Web PKI.

Your examples of SCADA are apt: there's absolutely no reason to assume a
default phone device, for example, should be able to manage a SCADA device.
Of course we'd laugh at that and say "Oh god, who would do something that
stupid?"

Yet that's what happens when one tries to make a one-size fits-all PKI.

Of course the PKI technologies accommodate these scenarios: you use locally
trusted anchors, specific to your environment, and hope that the OS vendor
does not remove support for your use case in a subsequent update. Yet it
would be grossly negligent if we allowed SCADA, in your example, to hold
back the evolution of the Web. As you yourself note, it's something other
than the Web. And it can use its own PKI.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

