Re: Third party use of OneCRL

2017-11-07 Thread Ryan Sleevi via dev-security-policy
Apologies, my understanding is that the XML is synced from the JSON, rather
than the other way around

See https://wiki.mozilla.org/Firefox/Kinto#Blocklists

That is, the canonical source is Kinto (JSON), that is then used to drive
the generation of the blocklist.xml (so that released binaries match the
remotely-provided blocklist at the time of binary release)

You can see an example of a OneCRL modification at
https://bugzilla.mozilla.org/show_bug.cgi?id=1407559 - in which Kinto is
updated with the new set, and then that propagates to the blocklist.xml
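For third-party consumers, polling the Kinto records endpoint directly might look like the sketch below. This is a minimal, hypothetical illustration: the URL is the one cited in this thread, and the `_since` query parameter and per-record `last_modified` field follow the Kinto records API conventions, which should be verified against the live service before relying on them.

```python
import json
import urllib.request

ONECRL_URL = ("https://firefox.settings.services.mozilla.com"
              "/v1/buckets/blocklists/collections/certificates/records")

def newest_timestamp(records, fallback=0):
    """Largest server-assigned last_modified value (ms since epoch) seen."""
    return max((r.get("last_modified", 0) for r in records), default=fallback)

def fetch_changes(since=None):
    """Fetch OneCRL records; with `since`, ask the server only for newer ones."""
    url = ONECRL_URL + (f"?_since={since}" if since is not None else "")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]
```

A poller would remember `newest_timestamp()` of the last fetch and pass it as `since` on the next call, rather than re-downloading and diffing the whole collection.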

On Tue, Nov 7, 2017 at 9:58 AM, Niklas Bachmaier <
niklas.bachma...@googlemail.com> wrote:

> Thanks a lot, Ryan! Your comment on the Firefox specific selection of
> revoked certificates contained in the list is definitely a point we'll have
> to consider.
> One more question: do I see it correctly that what is being called OneCRL
> is the "certItems" part of
> https://hg.mozilla.org/mozilla-central/file/tip/browser/app/blocklist.xml?
> And the link which provides the JSON file (which I included in my message
> before) is derived from the blocklist XML?
>
> 2017-11-07 14:47 GMT+01:00 Ryan Sleevi <r...@sleevi.com>:
>
>> Note that additions and removals made in OneCRL relate to the
>> behaviour of mozilla::pkix and the trust lists expressed by the associated
>> version of NSS shipping with the supported versions of Firefox.
>>
>> For example, this includes revocation of 'email only' CAs (that are not
>> appropriately constrained), which of course would not be appropriate for an
>> e-mail consuming application, or the revocation of particular
>> cross-certificates tied to the status of trust of particular roots.
>>
>> As for the blocklist update, it's maintained in
>> https://hg.mozilla.org/mozilla-central/filelog/tip/browser/app/blocklist.xml
>>
>> On Tue, Nov 7, 2017 at 8:08 AM, niklas.bachmaier--- via
>> dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>>
>>> Hi all
>>>
>>> I'm working for a big managed security provider. We would like to
>>> benefit from OneCRL as a means of improving our certificate revocation
>>> checking.
>>>
>>> I could download OneCRL at
>>> https://firefox.settings.services.mozilla.com/v1/buckets/blocklists/collections/certificates/records.
>>> My question is whether there is a license on OneCRL, or whether we are free
>>> to use it. Further, I'm wondering whether Mozilla has already thought about
>>> third-party users and provides a better way of getting the most recent
>>> version of OneCRL than fetching the above-mentioned URL and checking
>>> whether the content has changed.
>>>
>>> Thanks a lot already for any feedback on this!
>>>
>>> Niklas
>>> ___
>>> dev-security-policy mailing list
>>> dev-security-policy@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-security-policy
>>>
>>
>>
>


Re: Third party use of OneCRL

2017-11-07 Thread Ryan Sleevi via dev-security-policy
Note that additions and removals made in OneCRL relate to the behaviour
of mozilla::pkix and the trust lists expressed by the associated version of
NSS shipping with the supported versions of Firefox.

For example, this includes revocation of 'email only' CAs (that are not
appropriately constrained), which of course would not be appropriate for an
e-mail consuming application, or the revocation of particular
cross-certificates tied to the status of trust of particular roots.

As for the blocklist update, it's maintained in
https://hg.mozilla.org/mozilla-central/filelog/tip/browser/app/blocklist.xml
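The certItems entries in that blocklist.xml can be pulled out with a standard XML parser. A minimal sketch follows; the element and attribute names (certItem, issuerName, serialNumber, all base64-encoded DER) match the historical file layout but are assumptions to check against the actual document.

```python
import xml.etree.ElementTree as ET

def parse_cert_items(xml_text):
    """Return (issuerName, serialNumber) pairs from blocklist.xml text.

    Values are left base64-encoded, as they appear in the file.
    """
    root = ET.fromstring(xml_text)
    items = []
    # The file uses a default XML namespace, so match on the local tag name.
    for el in root.iter():
        if el.tag.rsplit('}', 1)[-1] == "certItem":
            issuer = el.get("issuerName")
            for child in el:
                if child.tag.rsplit('}', 1)[-1] == "serialNumber":
                    items.append((issuer, child.text))
    return items
```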

On Tue, Nov 7, 2017 at 8:08 AM, niklas.bachmaier--- via dev-security-policy
 wrote:

> Hi all
>
> I'm working for a big managed security provider. We would like to benefit
> from OneCRL as a means of improving our certificate revocation checking.
>
> I could download OneCRL at
> https://firefox.settings.services.mozilla.com/v1/buckets/blocklists/collections/certificates/records.
> My question is whether there is a license on OneCRL, or whether we are free
> to use it. Further, I'm wondering whether Mozilla has already thought about
> third-party users and provides a better way of getting the most recent
> version of OneCRL than fetching the above-mentioned URL and checking whether
> the content has changed.
>
> Thanks a lot already for any feedback on this!
>
> Niklas
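For the revocation-checking use case Niklas describes, matching a certificate against the downloaded records might look like the following sketch. The issuerName/serialNumber field names (base64-encoded DER) are assumptions to verify against the live JSON, and note that real OneCRL entries can also block by subject and public-key hash instead.

```python
import base64

def is_blocked(records, issuer_der, serial_der):
    """True if this (issuer, serial) pair appears in the OneCRL records."""
    needle = (base64.b64encode(issuer_der).decode(),
              base64.b64encode(serial_der).decode())
    return any((r.get("issuerName"), r.get("serialNumber")) == needle
               for r in records)
```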


Re: .tg Certificates Issued by Let's Encrypt

2017-11-06 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 6, 2017 at 6:34 AM, Fotis Loukos via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > On 04/11/2017 2:36 PM, Daniel Cater via dev-security-policy wrote:
> > I notice that on https://crt.sh/mozilla-onecrl there are lots of
> > certificates that have recently been added to OneCRL from the .tg TLD
> > (Togo), including ones for high-profile domains such as google.tg. The
> > issuances occurred 3 days ago, on 1st November.
>
> According to LE CP section 4.2.1:
> The CA SHALL develop, maintain, and implement documented procedures that
> identify and require additional verification activity for High Risk
> Certificate Requests prior to the Certificate’s approval, as reasonably
> necessary to ensure that such requests are properly verified under these
> Requirements.
>
> The same language also exists in section 4.2.1 of the CA/B Forum BRs.
>
> Has Let's Encrypt implemented the documented procedures? Is a request for
> google.tg considered a high risk certificate request based on the
> Let's Encrypt risk-mitigation criteria?
>

Does it matter? We've discussed this on the list several times in the past
- the fact is that it can be whatever a CA defines, and is itself not
meaningful for assurance. We've also seen how CAs' "high risk" lists have
ended up denying legitimate requests or causing security issues, so it
hardly seems the thing to hang our hat on, or the thing of substance worth
discussing.

Should all CAs treat .tg as high risk now? Should all domains be treated as
high risk, since, of course, registries can have issues? You can see how we
can quickly devolve into arguing everything is High Risk, while, in
practice, nothing is High Risk.


Re: .tg Certificates Issued by Let's Encrypt

2017-11-05 Thread Ryan Sleevi via dev-security-policy
Neither CAA nor DNSSEC mitigate registry compromises.

On Sun, Nov 5, 2017 at 9:15 AM Daniel Cater via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hmm, CAA records could also potentially be spoofed in this situation, in
> which case they would also not be trustworthy (save for cached records with
> a long TTL).


Re: ETSI Audits Almost Always FAIL to list audit period

2017-10-31 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 31, 2017 at 5:29 PM, Dimitris Zacharopoulos via
dev-security-policy  wrote:
>
>> I don't believe your statement is supported by the evidence - which is why
>> I'm pushing you to provide precise references. Consider from the
>> perspective as a consumer of such audits - there is zero awareness of the
>> contract as to whether or not the BRs were in scope - after all, 319 411-1
>> is meant to be inclusive of the normative requirements with respect to
>> audit supervision.
>>
>
> My statement that auditors are governed by 17065 and 403 is supported by
> evidence (section 1 of 411-1, where it says that ETSI EN 319 403 provides
> guidance to auditors that wish to audit the 411-1 standard). Also, the BRs
> are normative for 411-1 as stated in section 2.1 of the same document.
> Normative references to the BRs are all over the 411-1 document, unless I
> misunderstood your statement.


I think you did, so I'll try to repeat.

As you know, for both ETSI and WebTrust criteria, what is normative is the
requirements within the respective documents. That is, regardless of what
the BRs say (or don't say), what is audited is the criteria. Section 2.1
lists the BRs as a normative reference, but that is because specific
auditable criteria are derived from it, not because it's fully incorporated
by reference. That is, if you imagine a ballot passing the CABF (which
itself is hard to imagine) that said "CAs shall keep a rubber duck next to
the HSM", and it was adopted, this wouldn't necessarily immediately cause
ETSI or WebTrust audits to fail, because that requirement hasn't yet had
auditable criteria derived from it. That's why I'm suggesting that, for the
sake of discussion, auditors ignore what's in the BRs - unless specifically
told (by the WebTrust or ETSI documents) to examine specific sections.

As to "whether or not they were in scope", my point was that a 319 403/401
audit has the contract define the scope of the period and activities, and
that's not part of the final reporting mechanism. As such, there's no
public attestation as to the period of evidence examined. I expand on that
more below.


> But stepping back further from the contract, the claim that "the audit
>> covers operations for one year" is also not part of the 17065, 17021, or
>> 319 403 oversight. That is, the certification is forward looking (as
>> evidenced by the expiration), and while it involves historic review, it is
>> not, in and of itself, a statement of assurance of the historic
>> activities.
>> This is the core difference between the 17021/17065 evaluation of
>> processes
>> and products versus, say, the ISAE3000 assurance evaluation.
>>
>
> I read the ISAE3000 and can't find specific language to support a core
> difference in auditor guidance, especially related to the assurance of the
> historic activities. Perhaps there is a more specific section you can
> reference.


http://www.ifac.org/system/files/downloads/b012-2010-iaasb-handbook-isae-3000.pdf

Pages 304 and 305
Assurance Report Content
49. The assurance report should include the following basic elements:
...
(c) An identification and description of the subject matter information
and, when appropriate, the subject matter: this includes for example:
The point in time or period of time to which the evaluation or
measurement of the subject matter relates;


>>> The eIDAS Regulation mandates for 2-year audits (not the ETSI EN 319
>>> 411-1). This has been reflected in the ETSI EN 319 403 audit scheme,
>>> under 7.4.6 (Audit Frequency), which states:
>>>
>>> "There shall be a period of no greater than two years for a full
>>> (re-)assessment audit unless otherwise required by the applicable
>>> legislation or commercial scheme applying the present document.
>>>
>>> NOTE: A surveillance audit can be required by an entitled party at any
>>> time or by the conformity assessment
>>> body as defined by the surveillance programme according to clause 7.9."
>>>
>> I'm patently aware of that, but I'm trying to highlight to you that this
>> statement itself lacks the specificity to give confidence of the
>> operations. We've seen CAs try to present surveillance audits as full
>> audits. We've seen CAs try to present point in time assessments as periods
>> of time.
>>
>
> This means that some CAs can't tell the difference between a point-in-time
> and a period-in-time audit (nothing to do with the audit scheme).


No, it has everything to do with the audit scheme - whether or not
something is a point-in-time or period-of-time audit is very much tied to
the auditing standards being used - which is the point. That is, 17021 and
17065 are certification audits, and as such, do not have a 1:1 mapping to
the IFAC's ISAE 3000. This itself is called out by ENISA in discussing ISMS
auditing approaches -

Re: Francisco Partners acquires Comodo certificate authority business

2017-10-31 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 31, 2017 at 3:44 PM, Peter Kurrasch via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Both articles are long on names, short on dates. I don't fault the authors
> for that but it is troubling that better information wasn't made available
> to them.
>
> When can we expect a proper announcement in this forum? I would expect any
> such announcement to provide details on the skills and experience that this
> new leadership team has in running a CA. For example, are they aware of
> section 8 of the Mozilla Root Store Policy?
>

Such announcements are not part of the Mozilla Policy expectations. Could
you clarify why you expect such an announcement?


Re: Francisco Partners acquires Comodo certificate authority business

2017-10-31 Thread Ryan Sleevi via dev-security-policy
You didn't really leave room for productive discussion between your
options, did you? :)

As you can see from
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md#8-ca-operational-changes
, notification is required for certain changes - but that notification goes
to a Mozilla mail alias, not to the public lists. As such, one should not
presume that because of a lack of public discussion, there was a lack of
notice.

With respect to "rumor mill reported as fact", considering the people named
in the first article you mentioned include the CEO of Comodo CA and the
Chairman of the Board, it seems that the only way this would be "rumor
mill" is based on whether or not eweek and securityweek are reputable
organizations, right?

On Tue, Oct 31, 2017 at 1:51 PM, Kyle Hamilton via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Another article about this is
> http://www.securityweek.com/francisco-partners-acquires-comodo-ca .
>
> Notably, I'm not seeing anything in the official news announcements pages
> for either Francisco Partners or Comodo.  Is this an attempt at another
> StartCom (silent ownership transfer), or is it a case of "rumor mill
> reported as fact"?
>
> -Kyle H
>
>
>
> On 2017-10-31 06:21, Kyle Hamilton wrote:
>
>> http://www.eweek.com/security/francisco-partners-acquires-comodo-s-certificate-authority-business
>>
>>
>>


Re: ETSI Audits Almost Always FAIL to list audit period

2017-10-31 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 31, 2017 at 8:34 AM, Dimitris Zacharopoulos via
dev-security-policy  wrote:
>
>> Do you believe that the requirements stated in the policy are unclear? That
>> is, as Kathleen mentioned, the Mozilla policy states all the information
>> that must be present, as a template of what needs to be there. Perhaps
>> this
>> is just confusion as to expecting, say, Mozilla to provide a PDF of a
>> cover
>> sheet?
>>
>
> I do not believe the requirements are unclear which is why we have seen
> this information included properly in many ETSI audit reports.
> If Mozilla finds this problem repeating for some ETSI reports, perhaps a
> guidance on the expected audit template would be a good place to start.
> WebTrust had different-looking reports in the past until the WebTrust
> Committee issued templates as guidance for practitioners.


Alternatively, might it also suggest that those CABs and NABs are not
aligned with the (long-established) community norms, and as such, is
indicative of both quality and comprehensiveness (or lack thereof)?


> I think you are looking at this from the opposite side. Auditors have
> their own scheme to follow which is governed under ISO 17065 and ETSI EN
> 319 403. These schemes provide guidance on how to conduct an effective
> audit for ETSI EN 319 401, 411-1, 411-2, 421 and so on. In addition to
> these, there are National Accreditation Body schemes for specific audits
> which provide additional guidance. When the audit covers operations for one
> year (mandated by the Baseline Requirements and which finds its way into the
> contract between the CA and the CAB), the sampling must include evidence
> from the entire year.
>

I don't believe your statement is supported by the evidence - which is why
I'm pushing you to provide precise references. Consider it from the
perspective of a consumer of such audits - there is zero awareness of the
contract as to whether or not the BRs were in scope - after all, 319 411-1
is meant to be inclusive of the normative requirements with respect to
audit supervision.

But stepping back further from the contract, the claim that "the audit
covers operations for one year" is also not part of the 17065, 17021, or
319 403 oversight. That is, the certification is forward looking (as
evidenced by the expiration), and while it involves historic review, it is
not, in and of itself, a statement of assurance of the historic activities.
This is the core difference between the 17021/17065 evaluation of processes
and products versus, say, the ISAE3000 assurance evaluation.


> The eIDAS Regulation mandates for 2-year audits (not the ETSI EN 319
> 411-1). This has been reflected in the ETSI EN 319 403 audit scheme, under
> 7.4.6 (Audit Frequency), which states:
>
> "There shall be a period of no greater than two years for a full
> (re-)assessment audit unless otherwise required by the applicable
> legislation or commercial scheme applying the present document.
>
> NOTE: A surveillance audit can be required by an entitled party at any
> time or by the conformity assessment
> body as defined by the surveillance programme according to clause 7.9."
>

I'm patently aware of that, but I'm trying to highlight to you that this
statement itself lacks the specificity to give confidence of the
operations. We've seen CAs try to present surveillance audits as full
audits. We've seen CAs try to present point in time assessments as periods
of time.


> Also, as we discussed at F2F 38 in Bilbao and is covered in the minutes <
> https://cabforum.org/2016/05/25/2016-05/#ETSI-and-eIDAS-Update>, even
> though the general guidance for ETSI EN 319 403 is full re-assessment audit
> every two years, it is up to the CA/TSP to request that a full audit is
> conducted yearly and that this information is declared in the audit report.
> If you see ETSI audit reports that don't specifically state that a
> full-audit took place, then this information probably wasn't clear to the
> specific TSP. This is not related to the audit scheme.
>

Respectfully, I disagree. If the audit scheme routinely encourages audits
that are insufficient, then one MUST look at the root cause for that.

It would appear your argument - and apologies if I misrepresent, but I
rephrase it here in the hopes of highlighting our disagreement - is that
"319 401, 319 403, and 319 411-1 provide frameworks for audits, but it's up
to the CAB, NAB, and TSP to establish a procedure that will suitably
satisfy the user agents". My argument is that, especially when combined
with the fact that Regulation No 910/2014 establishes a process that is
unquestionably insufficient, and TSPs and CABs are using that as the
framework for their procedures and agreements, this is not sufficient.

Or, put differently, your argument seems to be that 319 411-1 "could" be
used to satisfy the requirements of the Mozilla Root Program (if supplemented
with additional contractual and procedural requirements, documented

Re: ETSI Audits Almost Always FAIL to list audit period

2017-10-31 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 31, 2017 at 5:21 AM Dimitris Zacharopoulos via
dev-security-policy  wrote:

>
> It is not the first time this issue is brought up. While I have a very
> firm opinion that ETSI auditors under the ISO 17065 (focused on the
> quality of products/services) and ETSI EN 319 403 definitely check
> historical data to assess the level of conformance, I will communicate
> this to our auditor and ask if they would like to provide more specific
> feedback.
>
> During the CA/Browser Forum F2F 41 in Berlin, it was stated that TUV-IT
> (CAB and chair in ACAB-c), was in discussions with Root Programs to
> determine an "ETSI audit report template" that would include all
> critical information that Root programs would like to be included in the
> public (or browser) audit letter/report. Minutes
> (https://cabforum.org/2017/06/21/2017-06-21-f2f-minutes-meeting-41-berlin/
> )
>
> --- BEGIN QUOTE ---
>
> Clemens Wanko from TÜVIT/ACABc – “Update: Addressing Browser Audit
> Requirements under eIDAS/ETSI”
>
> Clemens said that there were several discussions with the Browsers that
> resulted in an audit report template that would meet the Browser’s
> expectations.
>
> Dimitris asked if this template could be posted on the public mailing list.
>
> --- END QUOTE ---
>
> Until today, such a template has not been published or circulated either
> in the CA/Browser Forum or the m.d.s.p. I hope this discussion will push
> for this template to be published.


Do you believe that the requirements stated in the policy are unclear? That
is, as Kathleen mentioned, the Mozilla policy states all the information
that must be present, as a template of what needs to be there. Perhaps this
is just confusion as to expecting, say, Mozilla to provide a PDF of a cover
sheet?

> I believe the issue being raised here is more of an audit report issue
> and not of audit criteria. Auditors under the ETSI audit scheme, just as
> with the WebTrust scheme, in order for the audit to be "effective", must
> obtain evidence of actions that took place in the past. How far back,
> should be determined by the audit criteria and requirements. For
> example, the Baseline Requirements and Root programs require a full
> audit to occur once a year which means auditors must collect evidence
> from "at least" one year. Auditors may examine evidence even further
> back if they consider that this is required in order for them to get a
> better understanding of CA operations for their assessment.


I don’t believe this is an accurate representation. You are correct that
historical evidence must be examined, but none of the aforementioned audit
criteria establish that a year must be examined. The BRs state annual
certification, but this is both irrelevant (the audits are to 319 411, not
the BRs) and misleading (you can be annually certified without examining
annual performance).

Perhaps you can highlight where the requirement is to opine on the past
year of activities. As you know, 319 411-1 is itself insufficient in this
regard, as it expects (full) audits every other year - a problem that has
occurred with a number of auditors performing surveillance audits rather
than full audits.

> The 17065 scheme is used to assess products like food. You can't make an
> effective assessment in such a critical area by just looking at the
> "current status" without looking at historical data. In the ETSI/ISO
> audit schemes, CABs are supervised and audited by National Accreditation
> Bodies (NABs), at least annually which provides some extra level of
> assurance that the audits conducted by CABs are examined by an
> independent party. NABs also have the right to witness audits conducted
> by CABs.


That’s great, but the independence is unrelated. As to your remarks about
17065, note that no one has said that some historic data is not evaluated -
Phase 1 and Phase 2 make it clear there’s both document review and historic
evaluation - but there is no requirement to consider the fullness of
activities over a year. Similarly, if a TSP was derelict in its duties for
6 months, took 3 months to fix, and then compliant for 3 months, they
absolutely could be given a clean certification - because they’re currently
compliant.

Can you highlight where, specifically in the requirements and norms, this
scenario is forbidden?


Re: ETSI audits not listing audit periods

2017-10-30 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 30, 2017 at 5:50 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> To give us a concrete example, here's a Bugzilla Bug that I filed this
> morning:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1412950
>
> The CA's 2015-2016 audit was WebTrust.
>
> Their current audit statement is ETSI.
>
> When I filed the bug I thought there was a gap in auditing from March 10
> 2016 to January 29 2017.
>
> However, based on Ryan's explanation above, my understanding now is that
> the ETSI audit is a point-in-time audit, so the CA's activities from March
> 10 2016 until now have not been audited, with the exception of one month
> (January 30 to March 1 2017).
>
> Correct?


The auditor granted a certificate on 2017-06-21, after having made a
determination to do so at some point earlier, based on their engagement of
Phase 1 and Phase 2, which was conducted between January 30 and March 1,
2017.

There is no requirement that I can find - within 319 411-1, 319 411-2, 319
403, or 319 401, that would require the CAB to evaluate or consider
evidence from March 10 2016. In particular, 319 403 (7.4.5.2) states "The
objective of the audit is to confirm and certify that the TSP and the trust
services it provides complies with the applicable assessment criteria."

Thus, on the basis of the public information provided, I do not believe we
have a sufficient level of assurance that the CA's activities between March
10 2016 until January 29 2017 were consistent. Further, given the
opportunity for corrective actions without qualification, I do not believe
we have a sufficient level of assurance that the CA's activities between
January 30, 2017 and March 1, 2017 were consistent.


Re: ETSI audits not listing audit periods

2017-10-30 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 30, 2017 at 5:39 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > Importantly,
> > within 17065 and 17021, the way of ensuring continued compliance is done
> > based on contracts and reporting - that is, the client is responsible for
> > reporting changes to their conformity assessment body, and the CAB may
> > determine to revoke the certification or indicate corrective actions
> should
> > be taken (see ISO/IEC 17065:2012(E) 4.1.2.2, ISO/IEC 17021:2006(E)
> 8.6.3).
> > ISO/IEC 17065:2012(E), Section 7.7 describes the general reporting: the
> > name of the CAB, the date the certification is granted, the name and
> > address of the client, the scope of the certification, the validity
> > period/expiration date of the certificate, and anything else required by
> > the certification scheme (319 411-1).
>
>
> If I am understanding correctly...  an ETSI auditor performs a
> point-in-time audit and then relies on the CA to notify the auditor of
> material changes.
>

That is my understanding as well.


> Wish I could say that I believe that would work. Unfortunately, based on
> my experience with CAs I do not believe the CAs will be pro-active in this
> way. And I think the only way to really know what a CA is doing is to look
> at their data -- look at the actual certs they are issuing, and their
> documentation regarding such certs/issuance.
>

17065 and 17021 do specify that there is review of historic data - largely,
to determine whether or not the specified control was operating correctly.

However, 17065/17021 do not specify how far back the CAB needs to look -
that's left for the specific criteria being applied (e.g. EN 319 411-1, 319
401, and 319 403)

The best way I can think of to explain how the ETSI audits work is to read
Section 7.4.5 of EN 319 403 v2.2.2 (the latest version at
https://portal.etsi.org/tbsitemap/esi/trustserviceproviders.aspx ). The
audit is done in two stages. In the first stage, the TSP provides
documentation to the CAB about their systems and practices and the CAB
works with the TSP to gather documentation. After this, there's a site
visit - Stage 2 - in which the auditor attempts to confirm the TSP is
adhering to its practices.

As you can read, these are about evaluating processes going forward. There
is retrospective analysis - such as document review and the Stage 2
evidence collection - but not the period-of-time analysis as done within
WebTrust-based audits.

Further, if you look at Section 7.6, it's worth noting that:
"""
a TSP audit may be passed with pending nonconformities provided that these
do not impact the ability of the
TSP to meet the the intended service. This certification decision is
conditional upon to the implementation of
corrective actions within 3 months after conclusion of the audit (depending
on the type and criticality of the
correction(s))
"""


> > The challenge is in determining whether this is a correct understanding
> of
> > the respective differences,
>
>
> How do we verify that this is correct?
>

I would expect that it would be incumbent on the CABs and the CAs providing
EN 319 411-1 certificates to help the community better understand the level
of assurance provided. That is, I think those supporting the continued
recognition of ETSI should attempt to demonstrate where either the
understanding of WebTrust-based audits or EN 319 411-1 certificates is
incorrect or inaccurate. Otherwise, I think your conclusions - about no
longer recognizing such schemes - are reasonable.


> Based on what I've seen with CAs over the past several years, I do not
> believe the 'forward-looking' approach with self-reporting is sufficient.
>

Agreed


> I think we have to (and we do!) require the backward-looking approach.
>

As noted, there is some retrospective analysis - document review and
evidence gathering - but the fundamental process of a 319 411-1 audit seems
to have such a different objective and way of measuring that comparing
WebTrust to ETSI is like comparing apples to oranges.


> CAs can make all the "future promises" they want, but the proof is in the
> resulting data -- the certs that they issue.
>
> Based on the above information, I do not think the ETSI audits meet the
> requirements of Mozilla's Root Store Policy or the CA/Browser Forum
> Baseline Requirements. Maybe that's the real problem here.
>
> Am I missing something?


I've been increasingly thinking the same thing.


Re: ETSI audits not listing audit periods

2017-10-30 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 30, 2017 at 3:50 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> How do we get all auditors to start meeting our audit statement
> requirements?
>
> Why haven't all included CAs communicated these requirements to their
> auditors?
>
> Why am I seeing so many audit statements (particularly ETSI audit
> statements) that do not meet our requirements?
>
> I will greatly appreciate thoughtful and constructive ideas on this.
>
> Thanks,
> Kathleen
>

Kathleen,

Thanks for raising this matter! I think it's an important highlighting of
the very different approaches to auditing employed by WebTrust and ETSI,
and the underlying reliability and assurance of those audits.

Below is my attempt to summarize my understanding so far:
- ETSI EN 319 411-1 specifies generally applicable policy and security
requirements for trust service providers (TSPs) - including website
certificates.
- The purpose of EN 319 411-1 is to provide a framework for assessment of a
TSP, but it does not specify how that assessment is to be carried out (cf.
Section 1 of EN 319 411-1)
- EN 319 411-1 mentions EN 319 403 for guidance on such assessments
- EN 319 403 provides the framework for conformity assessment bodies (CABs)
to evaluate TSPs. It's based on/extends ISO/IEC 17065 specific to TSPs.
- ISO/IEC 17065 is incorporated due to Regulation (EC) No 765/2008 to
ensure consistency in CABs in evaluating TSPs

As noted within 319 403 (Introduction), several other documents are
incorporated as well - ISO/IEC 17021 (common requirements for conformity
assessment bodies evaluating management systems) and ISO/IEC 27006 (common
requirements for CABs evaluating information security management systems),
for example.

If we put the layer cake together and simplify:
* ISO/IEC 17065 - Common requirements for conformity assessment bodies
looking at products/services (e.g. "What should all auditors do")
* ISO/IEC 17021 - Common requirements for conformity assessment bodies
looking at management systems
* ISO/IEC 27006 - Common requirements for conformity assessment bodies
looking at information security management systems
* EN 319 403 - Common requirements for conformity assessment bodies
evaluating TSPs (e.g. "What makes an auditor qualified to be a CA auditor")
* EN 319 411-1 - Common requirements on the TSP for websites (combined with
EN 319 401, which 319 411-1 incorporates-and-builds-on)

In trying to understand why the reports are what they are, we need to look
in particular at 17021 and 17065, and the framework they use for both audit
engagements and reporting. 17065 describes a certification scheme.
Reproducing a paragraph from the introduction:

"""
Certification of products, processes or services is a means of providing
assurance that they comply with
specified requirements in standards and other normative documents. Some
product, process or service
certification schemes may include initial testing or inspection and
assessment of its suppliers' quality
management systems, followed by surveillance that takes into account the
quality management system and
the testing or inspection of samples from the production and the open
market. Other schemes rely on initial
testing and surveillance testing, while still others comprise type testing
only.
"""

319 411-1 certification describes a system that is based on an initial
testing or inspection, along with periodic surveillance. Importantly,
within 17065 and 17021, the way of ensuring continued compliance is done
based on contracts and reporting - that is, the client is responsible for
reporting changes to their conformity assessment body, and the CAB may
determine to revoke the certification or indicate corrective actions should
be taken (see ISO/IEC 17065:2012(E) 4.1.2.2, ISO/IEC 17021:2006(E) 8.6.3).
ISO/IEC 17065:2012(E), Section 7.7 describes the general reporting: the
name of the CAB, the date the certification is granted, the name and
address of the client, the scope of the certification, the validity
period/expiration date of the certificate, and anything else required by
the certification scheme (319 411-1).

Within the WebTrust scheme, the reports are based on either US (AICPA)
Standards of AT101, Canadian (CPA Canada) Standards of Section 5025, or
IFAC's ISAE3000 standards. What is important and notable about these is
that they are reports over information, with a scope and norm (e.g.
WebTrust for CAs) applied. This is why there's a consistent period of time
- information is collected and the auditor evaluates, on the basis of that
information, the level of assurance that is being met.

I'm still working to have a better understanding here (and this is all in a
personal capacity), but my conclusion is this:
- "ETSI" audits reflect an engagement at a particular point in time, where
a series of system controls are evaluated, and the result is a certificate
indicating the process and products comply with the relevant criteria (319
411-1). This 

Re: Mozilla’s Plan for Symantec Roots

2017-10-27 Thread Ryan Sleevi via dev-security-policy
Without commenting on the Symantec aspect of this, there is a rather
substantial correction to the behaviour of client software - including
Firefox.

Unfortunately, very few libraries and path validators support chain
building terminating at trust anchors in the way you describe. Recent
changes in Firefox itself have resulted in it preferring longer chains
(validating to the cross-signed root), rather than terminating at the trust
anchor, thus affecting measurements of per-root usage.

Examples:
- macOS versions prior to 10.11 were biased to use the presented chain -
meaning if cross-certified trust anchors were presented, they may result in
a longer chain (DigiCert should recall the effect of this). 10.11+ has a
weighting scale for certificate chains, but this is somewhat opaque
- OpenSSL versions prior to 1.0.2 were biased to prefer the presented chain
and try to build to a self-signed root. Thus if intermediates had explicit
trust settings (including if it was because a self-signed version was
trusted), these settings would be ignored. As noted in the 1.0.2 changelog,
this feature is still 'experimental'
- Firefox recently regressed with this behaviour -
https://bugzilla.mozilla.org/show_bug.cgi?id=1364159 introduced the bug
(Firefox 55), and https://bugzilla.mozilla.org/show_bug.cgi?id=1400913
(Firefox 57) tried to resolve it.
- Windows bases its path-preferences on a complex set of signals, some
internal (such as the order of store creation and the ordering of MD5/SHA-1
hashes of the certs), some external (such as the notBefore date of the
certificate). This can be further compounded by whether or not users have
AuthRoot updates disabled and/or misconfigured (e.g. improper proxy
settings). For example, if your new roots were added in a subsequent root
update, there's no guarantee that Windows users would consistently build
paths to that new root, because they may not yet know that the new root is
trusted, or the configuration of intermediates may result in a longer path
being preferred.

In short, you cannot reliably assume that in the case of cross-signing, the
shorter path to a trust anchor will be built or preferred. For this reason,
cross-signing to existing roots is complex.
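The two termination strategies at issue can be sketched with a toy model: a leaf issued by "DigiCert Root" served alongside a cross-certificate of that root signed by "Symantec Root", with both roots in the trust store. All names are hypothetical, and real validators also verify signatures, key identifiers, validity, and extensions; this only illustrates how the same inputs yield different paths.

```python
# Toy illustration: trusted-first path building stops at the first trust
# anchor; presented-first path building extends through the cross-cert.
TRUST_ANCHORS = {"DigiCert Root", "Symantec Root"}
LEAF = ("example.com", "DigiCert Root")      # (subject, issuer)
CROSS = ("DigiCert Root", "Symantec Root")   # cross-signed root certificate

def build_path(presented, trusted_first):
    chain, issuer = [LEAF], LEAF[1]
    extra = dict(presented)                  # subject -> issuer
    while True:
        if trusted_first and issuer in TRUST_ANCHORS:
            return chain + [issuer]          # terminate at the trust anchor
        if issuer in extra:                  # prefer the presented chain
            chain.append((issuer, extra[issuer]))
            issuer = extra.pop(issuer)
        elif issuer in TRUST_ANCHORS:
            return chain + [issuer]
        else:
            raise ValueError("no path to a trust anchor")

short = build_path([CROSS], trusted_first=True)
long_ = build_path([CROSS], trusted_first=False)
assert short == [LEAF, "DigiCert Root"]           # short path, anchor reached
assert long_ == [LEAF, CROSS, "Symantec Root"]    # longer cross-signed path
```

Both paths are cryptographically valid, which is exactly why per-root usage measurements and removal plans cannot assume clients pick the shorter one.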

On Fri, Oct 27, 2017 at 12:37 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Yes. Or any root that is cross signed by the Symantec sub CAs. I assume
> there would be zero impact as the chain building should stop with the
> trusted root and not look at the Symantec roots, but it’s definitely good
> to double check.
>
> On Oct 27, 2017, at 10:32 AM, Peter Bowen wrote:
>
> On Fri, Oct 27, 2017 at 9:21 AM, Jeremy Rowley wrote:
> I'm also very interested in this scenario.
>
> I'm also interested in what happens if a trusted DigiCert root is signed by
> a Symantec root. I assume this wouldn't impact trust since the chain
> building would stop at a DigiCert root, but I wanted to be sure.
>
> Jeremy,
>
> To clarify your scenario, do you mean what happens if a DigiCert owned
> and operated CyberTrust or DigiCert branded root is cross-signed by a
> DigiCert owned and operated VeriSign, Thawte, or GeoTrust branded
> root? (Assuming all the roots are roots currently listed at
> https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport)
>
> Thanks,
> Peter
>
>
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+jeremy.rowley=
> digicert.com@lists.mozilla
> .org] On Behalf Of Peter Bowen via dev-security-policy
> Sent: Friday, October 27, 2017 9:52 AM
> To: Gervase Markham
> Cc: mozilla-dev-security-pol...@lists.mozilla.org; Kathleen Wilson
> Subject: Re: Mozilla's Plan for Symantec Roots
>
> On Tue, Oct 17, 2017 at 2:06 AM, Gervase Markham wrote:
> On 16/10/17 20:22, Peter Bowen wrote:
> Will the new managed CAs, which will be operated by DigiCert under
> CP/CPS/Audit independent from the current Symantec ones, also be
> included on the list of subCAs that will continue to function?
>
> AIUI we are still working out the exact configuration of the new PKI
> but my understanding is that the new managed CAs will be issued by
> DigiCert roots and cross-signed by old Symantec roots. Therefore, they
> will be trusted in Firefox using a chain up to the DigiCert roots.
>
> Gerv,
>
> I'm hoping you can clarify the Mozilla position a little, given a
> hypothetical.
>
> For this, please assume that DigiCert is the owner and operator of the
> VeriSign, Thawte, and GeoTrust branded roots currently included in NSS and
> that they became the owner and operator on 15 November 2017 (i.e.
> unquestionably before 1 December 2017).
>
> If DigiCert 

Re: Incident Report : GoDaddy certificates with ROCA Fingerprint

2017-10-27 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 24, 2017 at 12:28 PM, Daymion Reynolds via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Godaddy LLC first became aware of possible ROCA vulnerability exposure on
> Monday October 16th 2017 at 9:30am. The following are the steps we took for
> detection, revocation, and the permanent fix of certificate provisioning:


> •   Monday October 16th 2017 AZ, first became aware of the ROCA
> vulnerability.  We downloaded and modified the open source detection tool
> to audit 100% of the non-revoked and non-expired certs we had issued.
> •   Early am Wednesday October 18th AZ we had our complete list of 7
> certs with the ROCA defect. We verified the results and proceeded to start
> the revocation process. While cert revocation was in progress we started
> researching the long-term detection and prevention of the weak CSR
> vulnerability.
> •   Early am Wednesday October 18th Rob Stradling released a list of
> certs with the vulnerability. 2 of the 7 certs we revoked were on the list.
> https://misissued.com/batch/28/
> •   Thursday October 19th by 2:02am AZ, we completed the 7 cert
> revocations. Revocations included customer outreach to advise the customer
> of the vulnerability.
> •   Thursday October 19th AZ, two CSRs were submitted for commonNames “
> scada2.emsglobal.net” & “scada.emsglobal.net” and were issued. Each
> request had used the vulnerable keys for CSR generation.  We revoked the
> certs again on Thursday October 19th AZ. During this period, we reached out
> to the customer to educate them regarding the vulnerability and informing
> them they needed to generate a new keypair from an unimpacted device.
> Customer was unreachable. Friday October 20th AZ, another cert was issued
> for commonName “scada.emsglobal.net” using a CSR generated with a weak
> key. We then took measures to prevent future certs from being issued to the
> same common name and revoked the cert on October 20th 2017 AZ.
> commonName   crt.sh-link
> scada.emsglobal.net  https://crt.sh/?id=3084867
>
> scada.emsglobal.net  https://crt.sh/?id=238721704
>
> scada.emsglobal.net  https://crt.sh/?id=238721807
>
> scada2.emsglobal.net https://crt.sh/?id=238720969
>
> scada2.emsglobal.net https://crt.sh/?id=238721559
>
> •   Saturday October 21st 2017 AZ & Sunday October 22nd 2017 AZ, we
> scanned our cert store and identified 0 vulnerable certs.
> •   Monday October 23, 2017 AZ, we have deployed a permanent fix to
> prevent future CSRs generated using weak keys from being submitted. Post
> scanning of the environment concluded 0 certificates at risk.
>
> Below is a complete list of certs under GoDaddy management impacted by
> this vulnerability.
>
> Alias  crt.sh-link
> alarms.realtimeautomation.net  https://crt.sh/?id=33966207
>
> scada.emsglobal.nethttps://crt.sh/?id=3084867
>https://crt.sh/?id=238721704
>https://crt.sh/?id=238721807
>
> www.essicorp-scada.com https://crt.sh/?id=238720405
>
> marlboro.bonavistaenergy.com   https://crt.sh/?id=238720743
>
> scada2.emsglobal.net   https://crt.sh/?id=238720969
>https://crt.sh/?id=238721559
>
> www.jointboardclearscada.com   https://crt.sh/?id=238721242
>
> *.forgenergy.com   https://crt.sh/?id=238721435
>
>
Daymion,

Thanks for providing this detailed report. I want to especially thank you
for providing an actual timeline - so many CAs have unfortunately
misunderstood what a timeline means, or how to effectively communicate it.
Your timeline provides a useful description of pre-existing state, when the
issue or incident was introduced, when it was detected, what steps were
taken initially, when the issue was resolved, and what steps will be taken
in the future.

In looking at what the expectations of CAs are, and how well GoDaddy upheld
them, the specific view is that:
- With the disclosure of the ROCA vulnerability, private keys subject to it
are noted to have suffered a Key Compromise event (BRs 1.5.1, Section
1.6.1, "Key Compromise")
- CAs are required to revoke a certificate within 24 hours if "The CA
obtains evidence that the Subscriber's Private Key corresponding to the
Public Key in the Certificate has suffered a Key compromise" (BRs 1.5.1,
4.9.1.1, "Reasons for Revoking a Subscriber Certificate", Item 3)
- CAs are required to reject CSRs if the private key does not meet the
requirements set forth in Sections 6.1.5, 6.1.6, or if it has a known-weak
Private Key (BRs 1.5.1, Section 6.1.1.3, "Subscriber Key Pair Generation")

Looking at the timing, it looks like:
- ~36 hours to detecting certificates
- ~60 hours to revoke
- ~60 hours to set up initial CSR rejection
- ~1 week to setup full scanning/rejection
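The weak-key rejection GoDaddy describes can be sketched along the lines of the published ROCA fingerprint test (Nemec et al.): a vulnerable modulus N satisfies "N mod p is a power of 65537 mod p" simultaneously for every prime p in a fixed set. Real detectors use a larger, fixed prime list; the small set below is for illustration only.

```python
# Simplified ROCA fingerprint check: for each small prime p, compute the
# multiplicative subgroup generated by 65537 mod p, and test whether
# N mod p falls inside it. A single miss rules the key out.
PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def roca_suspect(n: int) -> bool:
    for p in PRIMES:
        reachable, x = set(), 1
        while x not in reachable:            # enumerate 65537^k mod p
            reachable.add(x)
            x = (x * 65537) % p
        if n % p not in reachable:
            return False                     # definitely not ROCA-generated
    return True                              # matches fingerprint on all primes
```

A CA-side pipeline would run a check of this shape against every submitted CSR's public key, which is what makes the "same weak CSR resubmitted and reissued" failure mode preventable.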

That said, the level of detail provided - and the many challenges a number
of folks encountered with the initial ROCA code (including seemingly some
obfuscation by 

Re: Proposed policy change: require private pre-notification of 3rd party subCAs

2017-10-24 Thread Ryan Sleevi via dev-security-policy
I think this would be of great benefit to the community.

1) It provides meaningful opportunity to ensure that the Mozilla-specific
program requirements are being met. The spate of misissuances discussed in
the past few months have revealed an unfortunately common trend of CAs not
staying aware of changes.
2) It helps ensure decisions Mozilla has taken to protect users - such as
OneCRL or removal of trust - are not unilaterally bypassed and that
remediation steps are followed.
3) It helps ensure that proper policies are followed prior to issuance -
such as the correct audit reports, key generation ceremonies, etc - have
been followed prior to any signatures being created.

On Tue, Oct 24, 2017 at 11:28 AM Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> One of the ways in which the number of organizations trusted to issue
> for the WebPKI is extended is by an existing CA bestowing the power of
> issuance upon a third party in the form of control of a
> non-technically-constrained subCA. Examples of such are the Google and
> Apple subCAs under GeoTrust, but there are others.
>
> Adding new organizations to the list of those trusted is a big deal, and
> currently Mozilla has little pre-insight into and not much control over
> this process. CAs may choose to do this for whoever they like, the CA
> then bears primary responsibility for managing that customer, and as
> long as they are able to file clean audits, things proceed as normal.
>
> Mozilla is considering a policy change whereby we require private
> pre-notification of such delegations (or renewals of such delegations).
> We would not undertake to necessarily do anything with such
> notifications, but lack of action should not be considered permissive in
> an estoppel sense. We would reserve the right to object either pre- or
> post-issuance of the intermediate. (Once the intermediate is issued, of
> course, the CA has seven days to put it in CCADB, and then the
> relationship would probably become known unless the fields in the cert
> were misleading.)
>
> This may not be where we finally want to get to in terms of regulating
> such delegations of trust, but it is a small step which brings a bit
> more transparency while acknowledging the limited capacity of our team
> for additional tasks.
>
> Comments are welcome.
>
> Gerv
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: Mozilla’s Plan for Symantec Roots

2017-10-17 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 17, 2017 at 5:06 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 16/10/17 20:22, Peter Bowen wrote:
> > Will the new managed CAs, which will be operated by DigiCert under
> > CP/CPS/Audit independent from the current Symantec ones, also be
> > included on the list of subCAs that will continue to function?
>
> AIUI we are still working out the exact configuration of the new PKI but
> my understanding is that the new managed CAs will be issued by DigiCert
> roots and cross-signed by old Symantec roots. Therefore, they will be
> trusted in Firefox using a chain up to the DigiCert roots.


Hi Gerv,

That doesn't seem to line up with the discussion in
https://groups.google.com/d/topic/mozilla.dev.security.policy/_EnH2IeuZtw/discussion
to date. Do you have any additional information to share?

Note that the path you just described is the one that poses non-trivial
risk to the ecosystem, from an interoperability standpoint, and thus may
not be desirable.

See
https://groups.google.com/d/msg/mozilla.dev.security.policy/_EnH2IeuZtw/yr2vSBdhAAAJ
and
https://groups.google.com/d/msg/mozilla.dev.security.policy/_EnH2IeuZtw/BNR6gJHCBgAJ
for further technical details.


Re: PROCERT issues

2017-10-03 Thread Ryan Sleevi via dev-security-policy
Hi Kathleen,

With respect to providing a list - is there any requirement to ensure
Mozilla accepts that as a reasonable remediation?

For example, would "We plan to not do the same in the future" be an
acceptable remediation plan? As currently worded, it would seem to meet the
letter of this requirement.

This would be useful to ensure before [2]

On Tue, Oct 3, 2017 at 10:38 AM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Here's a draft of the Bugzilla Bug that I plan to file to list the action
> items for PROCERT to complete before they may re-apply for inclusion in
> Mozilla's Root Store. I will appreciate feedback on this.
>
> == DRAFT ==
> Subject: PROCERT: Action Items
>
> As per Bug #1403549 the PSCProcert certificate will be removed from
> Mozilla’s Root Store due to a long list of problems and the way that
> PROCERT responded to those problems (and to previous CA Communications).
> For details about the problems, see Bug #1391058 and
> https://wiki.mozilla.org/CA:PROCERT_Issues
>
> The purpose of this bug is to record the action items that PROCERT must
> complete before their certificate may be included as a trust anchor in
> Mozilla’s Root Store again.
>
> PROCERT may apply for inclusion of a new certificate[1] following
> Mozilla's normal root inclusion/change process[2], after they have
> completed all of the following action items.
>
> 1. Provide a list of changes that the CA plans to implement to ensure that
> there are no future violations of Mozilla's CA Certificate Policy and the
> CA/Browser Forum's Baseline Requirements.
>
> 2. Implement the changes, and update their CP/CPS to fully document their
> improved processes.
>
> 3. Provide a reasonably detailed[4] public-facing attestation from a
> licensed auditor[3] acceptable to Mozilla that the changes have been made.
> This audit may be part of an annual audit.
>
> 4. Provide auditor[3] attestation that a full performance audit has been
> performed confirming compliance with the CA/Browser Forum's Baseline
> Requirements. This audit may be part of an annual audit.
>
>
> Notes:
> [1] The new certificate (trust anchor) may be cross-signed by the removed
> certificate. However, the removed certificate may *not* be cross-signed by
> the new certificate, because that would bring the concerns about the
> removed certificate into the scope of the new trust anchor.
> [2] Mozilla's root inclusion/change process includes checking that
> certificates in the CA hierarchy comply with the CA/Browser Forum's
> Baseline Requirements.
> [3] The auditor must be an external company, and approved by Mozilla.
> [4] “detailed” audit report means that management attests to their system
> design and the controls they have in place to ensure compliance, and the
> auditor evaluates and attests to those controls. This assertion by
> management - and the auditor's independent assessment of the factual
> veracity of that assertion - will help provide a greater level of assurance
> that PROCERT has successfully understood and integrated the BRs.
> ==
>
> Thanks,
> Kathleen
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: PROCERT issues

2017-10-02 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 2, 2017 at 10:42 AM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, September 29, 2017 at 2:52:49 PM UTC-7, Eric Mill wrote:
> > That dynamic is natural, but accepting that this dynamic exists is
> > different than giving into it in some absolute way. When offering second
> > chances, requiring that the person/org fulfill certain conditions that
> > speak directly to their ability to have learned and adapted from the
> thing
> > they failed at the first time is an approach that accepts this dynamic,
> > without shutting the door on people or organizations that have grown as a
> > result of the experience.
> >
> > I think it would arguably lead to worse behavior, and less disclosure of
> > incidents and mistakes, if Mozilla adopted a posture where second chances
> > are rarely given. Not saying that's what's being said here, but I think
> > it's worth emphasizing that the first principle here should be to
> optimize
> > for incentivizing the behavior you want out of the CA community that
> > protects users and increases information sharing.
> >
> > -- Eric
>
>
> I agree with Eric on this.
>
> The last sentence in our CA Communications and Mozilla Security Blog Posts
> regarding our CA Program is frequently:
> "... we believe that the best approach to safeguard that security is to
> work with CAs as partners, to foster open and frank communication, and to
> be diligent in looking for ways to improve."
>
> Below are my personal feelings...
>
> In the case of WoSign and StartCom, I felt such a level of deception that
> it will be extremely difficult for either CA to ever gain my trust again.
> Rightly or wrongly so, I have not recognized that level of deception from
> other CAs in Mozilla's program. The deception happened before Inigo went to
> StartCom, and I appreciate all of Inigo's efforts, but due to the position
> he is now in, he will have to do an outstanding job, be test-driven, and
> demonstrate a truly clean CA hierarchy in order to regain my trust in
> StartCom. Unfortunately, I just don't feel that I've seen that so far.
>
> In regards to PROCERT, I do not believe that they have intentionally
> deceived me, but that their representatives responded to previous CA
> Communications and the Bugzilla Bug without getting their technical people
> involved. That is very bad! and I am very disappointed! But perhaps there
> are actions that they can take to demonstrate their commitment to not
> repeating those mistakes, to putting code into place to prevent
> non-BR-compliant cert issuance, and to show that they do have the level of
> technical knowledge in their organization that is needed to operate a good
> CA.
>

These are also my personal feelings, and not speaking on behalf of Mozilla
or Google.

I think these are excellent ways of framing the discussion, given the
current state of the ecosystem and the incentives. I think attempting to
blackball a CA organization is largely ineffective, for reasons others have
highlighted - such as the lack of transparency around beneficial or
controlling interests.

With respect to PROCERT, I think it's important to note that there are
several problematic practices:
- The displayed technical competence was grossly insufficient for the level
of trust afforded.
- The technical responses (continue) to demonstrate a degree of
interpretative leeway that is not supported by the text.
- The sum totality of the number and nature of the incidents raise serious
concerns about the overall operation.

Unlike other incidents, there's no one "fix this thing" incident - it's the
systemic failure of both implementation and oversight, a dozen incidents
both big and small, and the responses to those incidents, that caused the
loss of trust.

I think it's reasonable to suggest that developing technical competence is
not something that will happen overnight. And systemically addressing the
organizational and operational failures will require a careful degree of
analysis of the past issues and preventive steps (of which revocation is,
again, completely insufficient).

We've also seen from past incidents and discussions that one year is very
likely too short a time to allow such remediations. If anything, it
incentivizes hasty changes, rather than methodical analysis. It's very
likely that, for an organization like PROCERT, it would be far more
appropriate to suggest something like three or more years - to allow them
the opportunity to invest and improve.

Similarly, given the nature of these incidents, it seems reasonable to
suggest that new keys must be used, without any relation to the old keys or
infrastructure. This hopefully goes without saying.

In terms of oversight and audits, it seems beneficial to have a
community-agreed upon auditor, a Mozilla-delegated auditor, or even an open
RFP, so that auditors themselves can compete on the thoroughness and
oversight that they can provide. 

Re: DigiCert-Symantec Announcement

2017-09-23 Thread Ryan Sleevi via dev-security-policy
On Fri, Sep 22, 2017 at 1:00 PM, Peter Bowen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Ryan,
>
> As an existing Symantec customer, I'm not clear that this really
> addresses the challenges we face.
>
> So far we have found several different failure modes.  We hope that
> any solution deployed will assure that these don't trigger.
>
> First, we found that some clients have a limited set of roots in their
> trust store.   The "VeriSign Class 3 Public Primary Certification
> Authority - G5" root with SPKI SHA-256 hash of
> 25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058 is
> the only root trusted by some clients. They do, somewhat
> unfortunately, check the certificate issuer, issuer key id, and
> signature, so changing any of these will break things.  However they don't
> update their trust store.  So the (DN, key id, public key) tuple needs
> to be in the chain for years to come.
>
> Second, we have found that some applications use the system trust
> store but implement additional checks on the built and validated
> chain.  The most common case is  checking that at least one public key
> in the chain matches a list of keys the application has internally.
>
> As there is an assumption that the current root (DN, public key)
> tuples will be replaced relatively soon by some trust store
> maintainers, there needs to be a way that that both of these cases can
> work.  The only way I can see this working long term on both devices
> with updated trust stores as well as devices that have not updated the
> trust store is to do a little bit of hackery and create new (DN,
> public key) tuples with the existing public key.  This way apps with
> pinning will work on systems with old trust stores and one systems
> with updated trust stores.
>
> As a specific example, again using the Class 3 G5 root, today a chain
> looks like:
>
> 1) End-entity info
> 2) spkisha256:f67d22cd39d2445f96e16e094eae756af49791685007c76e4b66f154b7f3
> 5ec6,KeyID:5F:60:CF:61:90:55:DF:84:43:14:8A:60:2A:B2:F5:7A:F4:43:18:EF,
> DN:CN=Symantec Class 3 Secure Server CA - G4, OU=Symantec Trust
> Network, O=Symantec Corporation, C=US,
> 3) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384d
> f058,
> KeyID:7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33,
> DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
> OU=(c) 2006 VeriSign, Inc. - For authorized use only, OU=VeriSign
> Trust Network, O=VeriSign\, Inc., C=US
>
> If there is a desire to (a) remove the Class 3 G5 root and (b) keep
> the pin to its key working, the only solution I can see is to create a
> new root that uses the same key.  This would result in a chain that
> looks something like:
>
> 1) End-entity info
> 2b) spkisha256:,KeyID:, DN:CN=New Server Issuing CA, O=DigiCert,
> C=US,
> 3b) spkisha256:25b41b506e4930952823a6eb9f1d31
> def645ea38a5c6c6a96d71957e384df058,
> KeyID:6c:e5:3f:7b:45:1f:66:b4:e6:7c:70:05:86:19:79:4f:a6,
> DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
> OU=DigiCert Compatibility Root, OU=(c) 2006 VeriSign, Inc. - For
> authorized use only, OU=VeriSign Trust Network, O=VeriSign\, Inc.,
> C=US
> 3) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384d
> f058,
> KeyID:7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33,
> DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
> OU=(c) 2006 VeriSign, Inc. - For authorized use only, OU=VeriSign
> Trust Network, O=VeriSign\, Inc., C=US
>
> Note that 3b and 3 have the same public key and intersecting sets of
> attributes in the DN, but have different key IDs and different DNs.
>
> In order for this to work, 3b would have to be included in new trust
> stores.  This likely implies that 3b is created under new controls
> (e.g. by DigiCert), but presumably this cannot happen until the deal
> closes.  If this doesn't happen prior to December 1, then there will
> likely need to be an interim phase where a different issuing CA is
> created by DigiCert that is signed by #3 -- something like
> "CN=Symantec Class 3 Secure Server CA - GD1, OU=Symantec Trust
> Network, O=Symantec Corporation, C=US".  This would be the "Managed
> CA".
>
> I realize this is somewhat more complex than what you, Ryan, or Jeremy
> proposed, but it the only way I see root pins working across both
> "old" and "new" trust stores.
>
> Thanks,
> Peter


Peter,

Thanks a ton for sharing the challenges some customers face. It’s unclear,
however, why it’s necessary to re-use the existing key, particularly in
browser-based applications, so hopefully that can be expanded upon.
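For reference, the reason key re-use preserves pins is that HPKP-style pins hash the DER-encoded SubjectPublicKeyInfo, not the certificate or its DN, so two roots with different names but the same key yield an identical pin. A sketch using the `cryptography` package, with hypothetical root names:

```python
# Two self-signed roots sharing one key produce the same SPKI pin.
import datetime, hashlib
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

def spki_sha256(cert: x509.Certificate) -> str:
    """Hex SHA-256 of the cert's DER SubjectPublicKeyInfo (the 'pin')."""
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest()

def self_signed(key, common_name):
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    now = datetime.datetime(2017, 1, 1)
    return (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))
            .sign(key, hashes.SHA256()))

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
old_root = self_signed(key, "Old Root G5")            # hypothetical names
new_root = self_signed(key, "Compatibility Root G5")  # same key, new DN

assert spki_sha256(old_root) == spki_sha256(new_root)  # pin survives the rename
```

This is why the "3b" compatibility root in Peter's scenario keeps key-pinned clients working even though its DN and key ID differ from the original root's.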

If we take the existing path:
1) End-entity info
2)
spkisha256:f67d22cd39d2445f96e16e094eae756af49791685007c76e4b66f154b7f35ec6,KeyID:5F:60:CF:61:90:55:DF:84:43:14:8A:60:2A:B2:F5:7A:F4:43:18:EF,
DN:CN=Symantec Class 3 Secure Server CA - G4, OU=Symantec Trust
Network, O=Symantec Corporation, C=US,
3)

Re: DigiCert-Symantec Announcement

2017-09-21 Thread Ryan Sleevi via dev-security-policy
> 19. Thawte Primary Root CA
> 20. Thawte Primary Root CA – G2
> 21. Thawte Primary Root CA – G3
> 22. Thawte Server CA
> 23. Thawte Timestamping CA
> 24. UTN-Userfirst-Network Applications
> 25. Verisign Class 1 Public PCA
> 26. Verisign Class 3 Public Primary Certificate Authority – G4
> 27. Verisign Class 3 Public Primary Certificate Authority – G5
> 28. Verisign Class 4 Public Primary Certificate Authority – G3
> 29. Verisign Time Stamping CA
> 30. Verisign Universal Root Certificate Authority
> 31. Verisign4
> 32. Verisign Class 1 Public PCA – G3
> 33. Verisign Class 1 Public PCA – G2
> 34. Verisign Class 2 Public PCA – G3
> 35. Verisign Class 2 Public PCA – G2
> 36. Verisign Class 3 Public PCA
> 37. Verisign Class 3 Public PCA – MD2
> 38. Verisign Class 3 Public PCA – G2
> 39. Verisign Class 2 Public Primary Certification Authority – G3
>
>
>
> The current end-state plan for root cross-signing is provided at
> https://bugzilla.mozilla.org/show_bug.cgi?id=1401384. The diagrams there
> show all of the existing sub CAs along with the new Sub CAs and root
> signings planned for post-close. Some of these don’t have names so they are
> lumped in a general “Intermediate” box.
>
>
>
> We reached the same conclusion as you about segmentation by root (that
> compartmentalization by root won’t work), not to mention that
> compartmentalization will be near impossible considering what we’ve
> previously issued and how the roots are trusted in various root programs.
> Instead, we plan on creating sub CAs based on the number of entities using
> the intermediate.  For example, every 2000 or so entities will use another
> Sub CA. This will roughly segment customers based on expected volumes.  We
> also plan on providing a lot more unique intermediates on a per customer
> basis.  Permitting large companies to have a dedicated intermediate will
> help shield them from being revoked if another Sub CA needs to be revoked
> and allow browsers to selectively whitelist/blacklist entities.  Of course,
> not every company will want this, but it’ll be available for anyone who
> wants it.
>
>
>
> The plan, based on your suggestions, is to cross-sign the DigiCert Global
> G2 root with the four Symantec roots:
>
>
>
> 1.  GeoTrust Global CA
> 2.  GeoTrust Global CA 2
> 3.  Verisign Class 3 Public Primary Certificate Authority – G5
> 4.  Thawte Primary Root CA
>
>
>
> The exact roots cross-signing the DigiCert root are very much in flux.
> Until close, we aren’t reaching out to current Symantec customers for, I
> think, obvious reasons.  However, we do plan on communicating with these
> customers immediately post close to determine which roots are pinned in
> applications and what roots are required for custom applications. We are
> trying to limit the number of primary roots to six (three ECC, three RSA)
> plus one transition root to keep the chains and use manageable.
>
>
>
> The Global G2 root will become the transition root to DigiCert for
> customers who can’t move fully to operational DigiCert roots prior to
> September 2018. Any customers that require a specific root can use the
> transition root for as long as they want, realizing that path validation
> may be an issue as Symantec roots are removed by platform operators.
> Although we cannot currently move to a single root because of the lack of
> EV support and trust in non-Mozilla platforms, we can move to the existing
> three roots in an orderly fashion.
>
>
>
> If the agreement closes prior to Dec 1, the Managed CA will never exist.
> Instead, all issuance will occur through one of the three primary DigiCert
> roots mentioned above with the exception of customers required to use a
> Symantec root for certain platforms or pinning. The cross-signed Global
> root will be only transitory, meaning we’d hope customers would migrate to
> the DigiCert roots once the systems requiring specific Symantec roots are
> deprecated or as path validation errors arise.
>
>
>
> Jeremy
>
>
>
>
>
> From: Ryan Sleevi [mailto:r...@sleevi.com]
> Sent: Thursday, September 14, 2017 1:28 PM
> To: Jeremy Rowley <jeremy.row...@digicert.com>
> Cc: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: DigiCert-Symantec Announcement
>
>
>
>
>
>
>
> On Wed, Aug 2, 2017 at 5:12 PM, Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org <mailto:dev-security-policy@
> lists.mozilla.org> > wrote:
>
> Hi everyone,
>
>
>
> Today, DigiCert and Symantec announced that DigiCert is acquiring the
> Symantec CA assets, including the infrastructure, personnel, roots, and platforms.

Re: Incident Report format

2017-09-21 Thread Ryan Sleevi via dev-security-policy
Hi Gerv,

Based on the number of reports reviewed recently, I suspect we've got
opportunities for improvement, but I'm not quite sure yet what the concrete
suggestions on that should look like. A few thoughts below:

- The current report format biases itself towards "misissuance", when we
know there's a spectrum of BR compliance issues that can arise (for
example, the OCSP responders) that don't necessarily involve the
certificates and crt.sh IDs, but turn on other factors

- I would say that the majority of CAs had difficulty understanding what a
"timeline" was, often providing statements of steps they took, without any
context as to when - either date or time. They also tended to focus on when
the incident was reported to them, without taking into full
consideration when the situation causing the incident began.
  - For example, if the BRs changed at Date X, and required the CA do
something by Date Y, and they didn't, then it would seem that at least Date
X and Y are relevant to the discussion (and potentially when the change was
first discussed in the CA/B Forum, which often far pre-dates Date X).
Further, if CAs do audits or annual CPS reviews, understanding those in
the timeline is equally valuable, even when they predate the incident
itself.

- The value of the confirmation of non-issuance or resolution is unclear.
If the intent is for the CA to make a binding pledge to the community with
respect to their corrective actions, perhaps it should be expanded (and the
consequences of misrepresenting that pledge captured)

- It's unclear what the reasonable expected update period should be when
CAs are taking corrective steps. Weekly updates?

If it helps, Microsoft requires the following of CAs that participate in
its program -
https://social.technet.microsoft.com/wiki/contents/articles/31633.microsoft-trusted-root-program-requirements.aspx#D_CA_Responsibilities_in_the_Event_of_an_Incident
- so one might expect that every CA participating in both programs has such
information available to them, when it's a BR violation.

On Thu, Sep 21, 2017 at 10:34 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> It seems like the list of topics to cover on the Responding to a
> Misissuance page:
> https://wiki.mozilla.org/CA/Responding_To_A_Misissuance#Incident_Report
> has become a de facto template for incident reports.
>
> We've now had quite a few CAs use this outline to respond to issues. If
> people (CAs or others) have feedback on how this template could be
> improved, that would be a fine thing.
>
> Gerv
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Old roots to new roots best practice?

2017-09-18 Thread Ryan Sleevi via dev-security-policy
Hi Ben,

While I wasn't trying to suggest the reasoning was the same, I was trying
to highlight that for many implementations, the revocation of a single
certificate (where there may exist multiple cross-signs) induces enough
non-determinism to effectively constitute revoking all of them. That is,
clients that encounter the revoked cert - which cannot reliably be
predicted - may treat the entire chain as revoked even if alternative,
unrevoked paths exist.

This should mean that CAs should be aware of, and cautious of, such
revocations. The best mitigation to this is avoiding a large number of
cross-signs, and rotating keys or names often.

On Tue, Sep 19, 2017 at 12:28 AM Ben Wilson <ben.wil...@digicert.com> wrote:

> Ryan,
> Could you please explain what you mean by saying that if you revoke a
> single
> certificate that it is akin to revoking all variations of that certificate?
> I don't think I agree.  There are situations where the certificate is
> revoked for reasons (e.g. issues of certificate format/content) that have
> nothing to do with distrusting the underlying key pair.
> Thanks,
> Ben
>
>
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+ben=digicert....@lists.mozilla.org] On
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Sunday, September 17, 2017 7:57 PM
> To: userwithuid <userwith...@gmail.com>
> Cc: mozilla-dev-security-policy
> <mozilla-dev-security-pol...@lists.mozilla.org>
> Subject: Re: Old roots to new roots best practice?
>
> Hi there,
>
> I agree, Gerv's remarks are a bit confusing with respect to the concern.
> You are correct that the process of establishing a new root generally
> involves the creation of a self-signed certificate, and then any
> cross-signing that happens conceptually creates an 'intermediate' - so you
> have a key shared by a root and an intermediate.
>
> This is not forbidden; indeed, you can see in my recent suggestions to
> Symantec/DigiCert, it can and often is the best way for both compatibility
> and interoperability. Method #2 that you mentioned, while valid, can bring
> much greater compatibility challenges, and thus requires far more careful
> planning and execution (and collaboration both with servers and in
> configuring AIA endpoints)
>
> However, there is a criticism to be landed here - and that's using the same
> name/keypair for multiple intermediates and revoking one/some of them. This
> creates all sorts of compatibility problems in the ecosystem, and is thus
> unwise practice.
>
> As an example of a compatibility problem it creates, note that RFC5280
> states how to verify a constructed path, but doesn't necessarily specify
> how
> to discover that path (RFC 4158 covers many of the strategies that might be
> used, but note, it's Informational). Some clients (such as macOS and iOS,
> up
> to I believe 10.11) construct a path first, and then perform revocation
> checking. If any certificate in the path is rejected, the leaf is rejected
> -
> regardless of other paths existing. This is similar to the behaviour of a
> number of OpenSSL and other (embedded) PKI stacks.
> Similarly, applications which process their own revocation checks may only
> be able to apply it to the constructed path (Chrome's CRLSets are somewhat
> like this, particularly on macOS platforms). Add in caching of
> intermediates
> (like mentioned in 4158), and it quickly becomes complicated.
>
> For this reason - if you have a same name/key pair, it should generally be
> expected that revoking a single one of those is akin to revoking all
> variations of that certificate (including the root!)
>
> Note that all of this presumes the use of two organizations here, and
> cross-signing. If there is a single organization present, or if the
> 'intermediate' *isn't* intended to be a root, it's generally seen as an
> unnecessary risk (for the reasons above).
>
> Does that help explain?
>
>
> On Sun, Sep 17, 2017 at 11:37 AM, userwithuid via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Forgot the links:
> >
> > [1] https://groups.google.com/forum/#!topic/mozilla.dev.
> > security.policy/hNOJJrN6WfE
> > [2] https://groups.google.com/forum/#!msg/mozilla.dev.
> > security.policy/RJHPWUd93xE/RqnC3brRBQAJ
> > [3] https://crt.sh/?spkisha256=fbe3018031f9586bcbf41727e417b7
> > d1c45c2f47f93be372a17b96b50757d5a2
> > [4] https://crt.sh/?spkisha256=82b5f84daf47a59c7ab521e4982aef
> > a40a53406a3aec26039efa6b2e0e7244c1
> > [5] https://crt.sh/?spkisha256=706bb1017c855c59169bad5c1781cf
> > 597f12d2cad2f63d1a4aa37493800ffb80
> > [6] https://crt.sh/?spkisha256=f7cd08a27aa9df0918b4df5265580ccee590cc9b5ad677f134fc137a6d57d2e7

Re: FW: StartCom inclusion request: next steps

2017-09-18 Thread Ryan Sleevi via dev-security-policy
On Mon, Sep 18, 2017 at 8:12 AM, Inigo Barreira 
wrote:
>
> We are not seeking to identify personal blame. We are seeking to
> understand what, if any, improvements have been made to address such
> issues. In reading this thread, I have difficulty finding any discussion
> about the steps that StartCom has proposed to improve its awareness of the
> expectations placed upon it as a potential participant in the Mozilla
> store. Regardless of who bears responsibility for that, the absence of a
> robust process - and, unfortunately, the absence of a deep understanding -
> does mean that the re-establishing of trust can present a significant risk to
> the community.
>
>
>
> I think I´ve posted everything we did to improve our systems. I replied to
> every error posted in the crt.sh explaining what happened and what we did
> to fix it for not having the same issue again, but will try to recap here
> again.
>
>
>
> -  Test certificates. We issued test certificates in production
> to test the CT behaviour. After the checking those certs were revoked,
> within minutes. This was due to an incorrect configuration in the EJBCA
> roles that was changed and updated accordingly for not allowing anyone to
> issue certs from the EJBCA directly
>
> -  Use of unallowed curves. We issued certificates with P-521
> which is not allowed by Mozilla. We revoked all those certs and configure
> the system to not allow it. This remediation was put into production on
> mid-july not issuing certs with that curve.
>
> -  RSA parameter not included. We issued one certificate which no
> RSA parameter included. We revoked that certificate and started an
> investigation. The EJBCA system didn´t check the keys, concretely for this
> issue. We developed a solution to check the CSR files properly before
> sending to sign
>
> -  Country code not allowed. We issued one certificate with
> country code ZR for Zaire, which does not exist officially. We revoked the
> cert and checked our internal country code database with the ISO one. We
> made the correspondent changes. The cert was reissued with the right code
> representing the Democratic Republic of Congo.
>
>
>
> Furthermore, we have added x509lint and cablint to our issuance process.
> We have integrated crt.sh tool into our CMS system. We have developed a CSR
> checking tool. We have updated the EJBCA system to the latest patch,
> 6.0.9.5 which also came with a Key (RSA and ECC) validator, and we are also
> willing to integrate the zlint once is getting more stable. We have applied
> all these tools and we are not misissuing certificates.
>

Unfortunately, I am not sure how to more effectively communicate that this
pattern of issues indicates an organization failure in the review of,
application of, and implementation of the Baseline Requirements. Through
both coding practices and issuing practices, security and compliance are
not being responded to as systemic objectives, but rather as 'one offs',
giving the impression of 'whack-a-mole' and ad-hoc response.

For example, the country code failure indicates a deeper failing - did
you misunderstand the BRs? Were they not reviewed? Was the code simply not
implemented correctly? With the RSA parameters, it similarly indicates a
lack of attention.

I greatly appreciate the use of and deployment of x509lint and cablint, but
those merely offer technical checking that, as an aspiring trusted root CA,
you should have already been implementing - whether your own or using those
available COTS.
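To make concrete what "already implementing" linting as a systemic control (rather than a post-issuance one-off) can look like, here is a minimal pre-signature gate in Python. The lint names and the JSON shape are illustrative of zlint/cablint-style output, not the exact format of either tool:

```python
import json

# Illustrative lint report, loosely modeled on zlint-style JSON output
# (lint names and shape are assumptions for this sketch, not the real API).
REPORT = json.dumps({
    "e_subject_country_not_iso": {"result": "error"},  # e.g. the 'ZR' case
    "w_example_warning_lint":    {"result": "warn"},
    "e_example_passing_lint":    {"result": "pass"},
})

BLOCKING = {"error", "fatal"}

def gate(report_json):
    """Pre-issuance gate: return the lint names that must block signing."""
    results = json.loads(report_json)
    return sorted(name for name, res in results.items()
                  if res.get("result") in BLOCKING)

blockers = gate(REPORT)
if blockers:
    # In a real pipeline this aborts before the CA ever signs the TBS bytes.
    print("refusing to sign:", blockers)
```

The essential property is that the check runs before signing, so a policy violation never becomes a trusted artifact that then has to be revoked.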

The continued approach of issue-and-revoke, rather than holistically
reviewing the practices and taking every step possible to ensure compliance -
particularly at a CA that was previously distrusted due to non-compliance -
is an egregious oversight that hasn't been addressed.

In every response, it still feels as if you're suggesting these are
one-offs and coding errors, when the concern being raised is how deeply
indicative they are of systemic failures from top to bottom, from policy to
technology, from oversight to implementation. Rather than demonstrate how
beyond reproach StartCom is, it feels like an excessive emphasis is being
put on the ineffective revocation of these certificates, while ignoring the
issues that lead to them being issued in the first place - both from policy
and from code.

> It´s not like disagreements, but the example was about a root certificate
> private key in a USB stick, so IMO that example starts with a very
> problematic issue, because it´s about a root private key and in a USB stick
> left in the table, while the issues Startcom did was about end entity
> certificates, and nothing related to private keys and not in the root,
> that´s what I meant with “quality”.
>

I understand what you meant. I'm suggesting the community has a perception
that the issues StartCom is presently being faced with is as egregious and
as serious. I understand you disagree it's as 

Re: Old roots to new roots best practice?

2017-09-17 Thread Ryan Sleevi via dev-security-policy
Hi there,

I agree, Gerv's remarks are a bit confusing with respect to the concern.
You are correct that the process of establishing a new root generally
involves the creation of a self-signed certificate, and then any
cross-signing that happens conceptually creates an 'intermediate' - so you
have a key shared by a root and an intermediate.

This is not forbidden; indeed, you can see in my recent suggestions to
Symantec/DigiCert, it can and often is the best way for both compatibility
and interoperability. Method #2 that you mentioned, while valid, can bring
much greater compatibility challenges, and thus requires far more careful
planning and execution (and collaboration both with servers and in
configuring AIA endpoints)

However, there is a criticism to be landed here - and that's using the same
name/keypair for multiple intermediates and revoking one/some of them. This
creates all sorts of compatibility problems in the ecosystem, and is thus
unwise practice.

As an example of a compatibility problem it creates, note that RFC5280
states how to verify a constructed path, but doesn't necessarily specify
how to discover that path (RFC 4158 covers many of the strategies that
might be used, but note, it's Informational). Some clients (such as macOS
and iOS, up to I believe 10.11) construct a path first, and then perform
revocation checking. If any certificate in the path is rejected, the leaf
is rejected - regardless of other paths existing. This is similar to the
behaviour of a number of OpenSSL and other (embedded) PKI stacks.
Similarly, applications which process their own revocation checks may only
be able to apply it to the constructed path (Chrome's CRLSets are somewhat
like this, particularly on macOS platforms). Add in caching of
intermediates (like mentioned in 4158), and it quickly becomes complicated.

For this reason - if you have a same name/key pair, it should generally be
expected that revoking a single one of those is akin to revoking all
variations of that certificate (including the root!)
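The path-first behaviour described above can be modeled with a toy example (Python; the names and data layout are illustrative): two cross-signs share a subject and key, one is revoked, and a client that checks revocation only on the single path it happened to build rejects the leaf even though an unrevoked path exists.

```python
# Two CA certificates with the same subject/key, distinguished only by serial.
cross_signs = [("Example CA", "serial-A"), ("Example CA", "serial-B")]
revoked = {("Example CA", "serial-A")}

def path_first_validate(intermediate):
    """Build one path, then check revocation on that path only - the
    macOS <= 10.11-style behaviour described in the post (root and leaf
    omitted for brevity)."""
    path = [intermediate]
    return all(cert not in revoked for cert in path)

# Path discovery is not reliably predictable, so outcomes diverge depending
# on which cross-sign the client encounters first:
outcomes = {serial: path_first_validate((subject, serial))
            for subject, serial in cross_signs}
# outcomes == {"serial-A": False, "serial-B": True}
```

Since the site operator cannot control which variant a given client discovers and caches, revoking one cross-sign effectively revokes the leaf for an unpredictable slice of clients, which is the sense in which it is "akin to revoking all variations."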

Note that all of this presumes the use of two organizations here, and
cross-signing. If there is a single organization present, or if the
'intermediate' *isn't* intended to be a root, it's generally seen as an
unnecessary risk (for the reasons above).

Does that help explain?


On Sun, Sep 17, 2017 at 11:37 AM, userwithuid via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Forgot the links:
>
> [1] https://groups.google.com/forum/#!topic/mozilla.dev.
> security.policy/hNOJJrN6WfE
> [2] https://groups.google.com/forum/#!msg/mozilla.dev.
> security.policy/RJHPWUd93xE/RqnC3brRBQAJ
> [3] https://crt.sh/?spkisha256=fbe3018031f9586bcbf41727e417b7
> d1c45c2f47f93be372a17b96b50757d5a2
> [4] https://crt.sh/?spkisha256=82b5f84daf47a59c7ab521e4982aef
> a40a53406a3aec26039efa6b2e0e7244c1
> [5] https://crt.sh/?spkisha256=706bb1017c855c59169bad5c1781cf
> 597f12d2cad2f63d1a4aa37493800ffb80
> [6] https://crt.sh/?spkisha256=f7cd08a27aa9df0918b4df5265580c
> cee590cc9b5ad677f134fc137a6d57d2e7
> [7] https://crt.sh/?spkisha256=60b87575447dcba2a36b7d11ac09fb
> 24a9db406fee12d2cc90180517616e8a18
> [8] https://crt.sh/?spkisha256=d3b8136c20918725e848204735755a
> 4fcce203d4c2eddcaa4013763b5a23d81f
> [9] https://bugzilla.mozilla.org/show_bug.cgi?id=1311832
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: FW: StartCom inclusion request: next steps

2017-09-15 Thread Ryan Sleevi via dev-security-policy
On Fri, Sep 15, 2017 at 12:30 PM, Inigo Barreira via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> >
> > Hi Inigo,
> >
> > On 14/09/17 16:05, Inigo Barreira wrote:
> > > Those tests were done to check the CT behaviour, there was any other
> > testing of the new systems, just for the CT.
> >
> > Is there any reason those tests could not have been done using a parallel
> > testing hierarchy (other than the fact that you hadn't set one up)?
>
> I think I provided the reasons. We were distrusted, not re-applied yet,
> those certs lived for minutes, ...


Can you point where the Baseline Requirements and/or Root Store policies
exclude "certs lived for minutes" from being in scope? Or where revocation
absolves a CA of the issuance in the first place?


> So, I think these are the reasons. It´s not an excuse and we didn´t expect
> this maremágnum (uproar).


I believe the lack of expectation is precisely why there is significant
concern here. CAs are expected to be global stewards of trust, operating
beyond reproach - which generally means assuming all things are forbidden
unless expressly permitted, and even when it seems they _may_ be permitted,
to consider the full implications of the interpretation and to check if
there are multiple interpretations.


> There´s been long discussions about the harm long term certificates can
> make and asked CAs about short term to avoid damages. These certs, that
> it´s true was a mistake, only lived for minutes, so I don´t know what else
> I can add to this matter.
>

I think you've added a lot, and while the engagement has been valuable,
hopefully this clarifies precisely why these responses have been so deeply
concerning towards demonstrating trustworthiness.


> > But it is the job of a CA to be aware of browser policies.
>
> Yes, you´re right. And it was my fault for not have checked in deep this
> particular one.
>

We are not seeking to identify personal blame. We are seeking to understand
what, if any, improvements have been made to address such issues. In
reading this thread, I have difficulty finding any discussion about the
steps that StartCom has proposed to improve its awareness of the
expectations placed upon it as a potential participant in the Mozilla
store. Regardless of who bears responsibility for that, the absence of a
robust process - and, unfortunately, the absence of a deep understanding -
does mean that the re-establishing of trust can present a significant risk to
the community.


> > I think lack of monitoring and lack of integrity of logs are serious
> issues.
>
> There wasn´t a lack of integrity and monitoring, of course not. All PKI
> logs were and are signed, it´s just the auditors wanted to add the
> integrity to other systems which is not so clear that should have this
> enabled. For example, if you want to archive database information for not
> managing a big one, the integrity of the logs could be a problem when
> trying to "move" to an archive system. I had some discussions about the
> "scope" of the integrity.


I am wholly uncertain how to interpret what you're saying here.


> Regarding the monitoring, well, we monitor many things, in both data
> centers, 24x7, etc. For this specific issue, it´s true that we didn´t have
> it automatically but manually, but well, and we implement a solution, but
> this is not a lack of monitoring. I think the audits are to correct and
> improve the systems and don´t think any CA at the first time had everything
> correct. So, for example, I thought this finding was good because made us
> improve.
>

I agree that a well-executed audit can help a CA identify areas of
improvement. However, a well-executed audit can also identify issues of
non-compliance or identify issues of risk that the community may find
unacceptable, independent of the auditors own assessment.


> > Repairing them afterwards does not remove the uncertainty.
>
> Well, then any issue that you could find, even repaired or fixed, does not
> provide you any security and hence you should not trust anyone.
>

I do not think this demonstrates a positive awareness of the issues being
discussed. Again, as CAs are expected to be stewards of global trust, it is
expected that CAs seek to both individually improve and rise above 'the
minimum' requirements, and to seek ways to improve those minimums.

> If you said "I left the root certificate private key on a USB stick on
> the desk in my unlocked
> > office over the weekend - but it's OK, I've remediated the problem now,
> and
> > it's back in the safe", that would not remove the uncertainty about
> whether
> > someone had done something with it in the mean time.
>
> Well, I don´t think this is the same "quality" of example.
>

It is useful to know what you think, as it highlights disagreement.
However, it's more useful to know why you think this. I would suggest that,
for most people examining both the information provided on the issues, the
facts, and the discussion so far, it is very 

Re: DigiCert-Symantec Announcement

2017-09-14 Thread Ryan Sleevi via dev-security-policy
On Wed, Aug 2, 2017 at 5:12 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi everyone,
>
>
>
> Today, DigiCert and Symantec announced that DigiCert is acquiring the
> Symantec CA assets, including the infrastructure, personnel, roots, and
> platforms.  At the same time, DigiCert signed a Sub CA agreement wherein we
> will validate and issue all Symantec certs as of Dec 1, 2017.  We are
> committed to meeting the Mozilla and Google plans in transitioning away
> from
> the Symantec infrastructure. The deal is expected to close near the end of
> the year, after which we will be solely responsible for operation of the
> CA.
> From there, we will migrate customers and systems as necessary to
> consolidate platforms and operations while continuing to run all issuance
> and validation through DigiCert.  We will post updates and plans to the
> community as things change and progress.
>
>
>
> I wanted to post to the Mozilla dev list to:
>
> 1.  Inform the public,
> 2.  Get community feedback about the transition and concerns, and
> 3.  Get an update from the browsers on what this means for the plan,
> noting that we fully commit to the stated deadlines. We're hoping that any
> changes
>
>
>
> Two things I can say we plan on doing (following closing) to address
> concerns are:
>
> a.  We plan to segregate certs by type on each root. Going forward, we
> will issue all SSL certs from a root while client and email come from
> different roots. We also plan on limiting the number of organizations on
> each issuing CA.  We hope this will help address the "too big to fail"
> issue
> seen with Symantec.  By segregating end entities into roots and sub CAs,
> the
> browsers can add affected Sub CAs to their CRL lists quickly and without
> impacting the entire ecosystem.  This plan is very much in flux, and we'd
> love to hear additional recommendations.
> b.  Another thing we are doing is adding a validation OID to all of our
> certificates that identifies which of the BR methods were used to issue the
> cert. This way the entire community can readily identify which method was
> used when issuing a cert and take action if a method is deemed weak or
> insufficient.  We think this is a huge improvement over the existing
> landscape, and I'm very excited to see that OID rolled out.
>
>
>
> Thanks a ton for any thoughts you offer.
>
>
>
> Jeremy
>

Jeremy,

Thanks for sharing details about your rough plans. There’s a lot at play
here, particularly when trying to fully visualize DigiCert’s existing and
proposed hierarchy, so I’m wondering if it might be easier to explore what
the ‘ideal PKI’ may look like, and then try to work backwards to figure out
how this acquisition can help that.

At the core, we can imagine there is a root CA for each major long-term
cryptographic configuration - which, in today’s world, generally means
RSA/2048, RSA/4096, ECC/256, and ECC/384. In tomorrow’s world, this may
also accommodate additional curves Ed25519 and Ed448, such as via
https://tools.ietf.org/id/draft-ietf-curdle-pkix . In total, this means the
ideal PKI only needs four to six root certificates.

Within each root, you can build out the appropriate segmentation. For
performance reasons, it’s likely preferable to have a ‘wide’ PKI (many
sub-CAs off the root, each constrained in capability and used for a limited
amount of time) versus a ‘deep’ PKI (hierarchically reducing the
capabilities at each level in the trust graph- for example, “All TLS” ->
“All DV” -> “All first party DV” -> “All first party DV in Q12017”), even
if a deep PKI can provide better compartmentalization for some use cases.
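The wide-vs-deep trade-off above can be made concrete with toy chains (Python; the CA names echo the example in the text and are purely illustrative): every extra layer in a 'deep' hierarchy adds one certificate to each served chain and one signature verification to each validation.

```python
# 'Wide': many constrained, time-limited issuing CAs directly under the root.
wide_chain = ["leaf", "Issuing CA 2017Q1", "Root"]

# 'Deep': capabilities hierarchically reduced at each level, as in the
# "All TLS -> All DV -> All first party DV -> ... Q1 2017" example above.
deep_chain = ["leaf", "First-party DV Q1 2017", "All first-party DV",
              "All DV", "All TLS", "Root"]

def signature_checks(chain):
    """One signature verification per issuer/subject edge in the chain."""
    return len(chain) - 1

assert signature_checks(wide_chain) == 2
assert signature_checks(deep_chain) == 5
```

The extra verifications (and extra bytes per TLS handshake) are paid by every client on every connection, which is why the post prefers wide hierarchies despite deep ones compartmentalizing better.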

It isn’t clear that compartmentalizing on root provides any obvious
benefits to users, especially as it’s the same infrastructure and audits,
but it does seem that it increases the management overhead (for root
stores), the configuration challenges (for site operators), not to mention
the management (and, occasionally, network & memory) challenges for users
to support all of those roots.

It would be ideal to see DigiCert streamline its PKI to better align with
that vision, and to understand what challenges might prevent that. For
example, is there a path to transition all new DigiCert issuance to a
single root? If it can’t be done for all certs, can it be done for TLS?
Understanding if there are challenges to that goal can provide invaluable
insight into how the Managed CA transition can look.

A significant reason for the Managed CA plan was to provide a temporary
bridge for those TLS servers who had made risky and fragile technical
decisions - such as pinning to a single CA or only supporting a single CA
on a device - while minimizing the risk to the broader TLS ecosystem. As
Symantec, like other organizations wishing to operate a trusted CA, would
be permitted to apply to have new roots (and a new infrastructure)
included, once it had met the minimum required security standards, the

Re: CAA Certificate Problem Report

2017-09-11 Thread Ryan Sleevi via dev-security-policy
On Mon, Sep 11, 2017 at 3:09 PM Jonathan Rudenberg via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> > On Sep 11, 2017, at 17:41, Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > That seems like very poor logic and justification.
> >
> > Given that CAA and DNSSEC has been discussed in the CA/Browser Forum for
> > literally years now, perhaps it's worth asking why CAs are only now
> > discovering issues. That is, is the only reason we're discovering issues
> > because CAs waited for the last possible moment? If so, why.
>
> I think the BR clause that brings DNSSEC in is poorly drafted.


Why?

> It seems like the intent may be to require full DNSSEC validation for CAA
> lookups, but that’s not what it says. I don’t think the issues under
> discussion have anything to do with the last moment. There appear to be
> significant differences in understanding, which were not discussed publicly
> until now. The ideal path here would have been for CAs to consult with the
> community about the interpretation and implementation details of this
> clause well before it came into force.


I'm not sure I would agree with that, because it is completely
unmeasurable, and every CA being "compliant" with such a requirement would
still have resulted in the same outcome.

>
> Additionally, it may be a stretch to say that DNSSEC in the context of CAA
> has been discussed extensively.


I'm not sure I understand why you feel some special discussion is or was
necessary, given the discussion that occurred in IETF on this. That is, are
you asserting that these issues - such as CAs not even checking CAA - are
because of the ballot language?

> I’m not familiar with relevant discussions that are not indexed by Google,
> but when I researched this I only found a few exchanges about this specific
> requirement on the public mailing list.


This was discussed at nearly every single F2F since late 2013/early 2014.
The DNSSEC discussion was very much part of the IETF discussions.

What discussions do you feel should have happened, but didn't?

>
> > I think arguments that suggest that failing to do the right thing makes
> > it OK to do the wrong thing are the worst arguments to make :)
>
> My argument is not that it’s okay to do the wrong thing. Instead, I think
> it’s worth evaluating the DNSSEC requirement to decide whether it should
> continue to be defined as "the right thing” in the BRs. I did not see any
> such analysis on cabfpub.


I'm surprised to even see the suggestion that it isn't. Do you feel the
security considerations are insufficiently documented in the CAA RFC? Do
you feel the risks of not using DNSSEC are not sufficiently obvious?

This feels very knee-jerk, but I may be misunderstanding part of your
argument. Perhaps you could do a small write-up of why you feel things are
problematic, since the original argument - which seems to be "some CAs had
trouble" - is not at all compelling given the facts and the years of
discussion that led up to this ballot.

>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CAA Certificate Problem Report

2017-09-11 Thread Ryan Sleevi via dev-security-policy
That seems like very poor logic and justification.

Given that CAA and DNSSEC have been discussed in the CA/Browser Forum for
literally years now, perhaps it's worth asking why CAs are only now
discovering issues. That is, is the only reason we're discovering issues
because CAs waited for the last possible moment? If so, why.

Because they didn't write test suites? If not, why not? If so, what were
they?

I think arguments that suggest that failing to do the right thing makes it
OK to do the wrong thing are the worst arguments to make :)

On Mon, Sep 11, 2017 at 2:28 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I would support that.  I can't recall why it's in there.
>
> -Original Message-
> From: Jonathan Rudenberg [mailto:jonat...@titanous.com]
> Sent: Monday, September 11, 2017 3:19 PM
> To: Jeremy Rowley 
> Cc: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: CAA Certificate Problem Report
>
>
> > On Sep 11, 2017, at 17:03, Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > For a little more context, the idea is that we can speed up the CAA
> check for all customers while working with those who have DNSSEC to make
> sure they aren't killing performance.  If there's a way to group them
> easily into buckets (timeout + quick "does DNSSEC exist" check), working on
> improving the experience for that particular set of customers is easier.
> That bucket can then be improved later.
>
> Given the disaster that DNSSEC+CAA has been over the past few days for
> multiple CAs and the fact that it’s optional in the CAA RFC, what do you
> think about proposing a ballot to remove the DNSSEC requirement from the
> BRs entirely?
>
> Jonathan
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
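The CAA check at issue in both messages above starts with finding the relevant CAA RRset for a FQDN. That "tree climb" can be sketched without any DNS library (simplified to the plain label climb, ignoring the CNAME chasing in RFC 6844's original algorithm); `relevant_caa_set` and the `lookup` callback are hypothetical names, and `lookup` stands in for a real - ideally DNSSEC-validating - resolver:

```python
def relevant_caa_set(fqdn, lookup):
    """Return the CAA RRset relevant to fqdn (simplified tree climb).

    `lookup` is a stand-in for a resolver: it maps a domain name to its
    list of CAA records, or [] if none exist. Climb from the FQDN toward
    the root; the first non-empty RRset encountered is authoritative.
    """
    labels = fqdn.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        rrset = lookup(candidate)
        if rrset:
            return rrset
    return []  # no CAA anywhere in the chain: issuance is unrestricted

# Toy zone data standing in for DNS answers.
zone = {"example.com": ['0 issue "authorized-ca.example"']}
print(relevant_caa_set("www.host.example.com", lambda d: zone.get(d, [])))
```

Note that a record on example.com governs issuance for every name below it, which is why a timeout anywhere on the climb (the "bucketing" discussed above) affects the whole check.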


Re: PROCERT issues

2017-09-08 Thread Ryan Sleevi via dev-security-policy
On Fri, Sep 8, 2017 at 2:39 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 07/09/2017 17:17, Gervase Markham wrote:
>
>> Mozilla has decided that there is sufficient concern about the
>> activities and operations of the CA "PROCERT" to collect together our
>> list of current concerns. That list can be found here:
>> https://wiki.mozilla.org/CA:PROCERT_Issues
>>
>> Note that this list may expand or reduce over time as issues are
>> investigated further, with information either from our or our
>> community's investigations or from PROCERT.
>>
>> We expect PROCERT to engage in a public discussion of these issues and
>> give their comments and viewpoint. We also hope that our community will
>> make comments, and perhaps provide additional information based on their
>> own investigations.
>>
>> When commenting on these issues, please clearly state which issue you
>> are addressing on each occasion. The issues have been given identifying
>> letters to help with this.
>>
>> At the end of a public discussion period between Mozilla, our community
>> and PROCERT, which we hope will be no longer than a couple of weeks,
>> Mozilla will move to make a decision about the continued trust of
>> PROCERT, based on the picture which has then emerged.
>>
>> Gerv
>>
>>
> Although violating the same rules, and involving the same certificates;
> for purposes of risk assessment I think issue K should be divided into
> two issues:
>

Note, I was explicitly suggesting we not do this, because it introduces a
greater level of subjectivity of assessment, based on incomplete or
unknowable information. For this reason, ensuring a consistent application
of risk (e.g. the factors that allowed this to happen are the same) is far
more beneficial for the community and for consistency in application of
policy.

So I do not believe we should split these issues up, and do not think it
would help the discussions.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Idea for a stricter name constraint interpretation

2017-09-07 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 7, 2017 at 5:22 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 07/09/2017 21:00, Ryan Sleevi wrote:
>
>>> Then there is your suggestion of requiring technically constrained
>>> SubCAs (that were constrained under a previous set of relevant name
>>> types) could survive by subjecting themselves to the massive overhead of
>>> satisfying the requirements for an unconstrained SubCA (audits, dual
>>> user authentication, specially secured server facilities, geographic
>>> redundancy, etc.), where as a constrained SubCA they could operate under
>>> normal enterprise internal security rules.
>>>
>>>
>> Yup.
>>
>>
> What do you mean "Yup"?
>

This is a correct statement about what is currently required of CAs, and is
a technically viable and workable solution, albeit one with tradeoffs, and
does not require any breaking of compatibility.


> The goalposts have not moved at all.
>
> When you failed to understand the goals outlined in the first two and
> last paragraphs of my initial short post, I listed the two purposes
> explicitly in my post dated 2017-09-01 06:07 UTC (As "primary problem"
> and "secondary problem").
>

Respectfully, you are changing the goals as solutions are produced.

For example, your notation of primary/secondary fails to consider (or
explicitly ignores) the repeated attempts to highlight the principle in
https://www.mozilla.org/en-US/about/manifesto/#principle-06 outlined to you.

As I highlighted, your proposal (and all variations of it that you've
offered so far) fail to meet that. I offered you a variety of suggestions
that meet that principle - some of which do not achieve what you value, but
do achieve what Mozilla has explicitly valued.

At this point, I feel like there's not much productive communication to be
made here. I understand your goals. They are ignoring publicly-stated goals
and principles, and present compatibility issues, but I wish you the best
of luck in demonstrating how your solution can meet those goals.

I don't believe you realize you're setting value-based criteria, and that
those values are neither shared nor reasonable. In any event, you have a
solution you believe works, I've offered you several solutions that balance
for other values, and it seems profoundly unlikely that you can be
convinced that interoperability and standards-compliance is more important
in the concrete than an abstract perception of cost that doesn't actually
exist.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PROCERT issues

2017-09-07 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 7, 2017 at 11:17 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Mozilla has decided that there is sufficient concern about the
> activities and operations of the CA "PROCERT" to collect together our
> list of current concerns. That list can be found here:
> https://wiki.mozilla.org/CA:PROCERT_Issues
>
> Note that this list may expand or reduce over time as issues are
> investigated further, with information either from our or our
> community's investigations or from PROCERT.
>
> We expect PROCERT to engage in a public discussion of these issues and
> give their comments and viewpoint. We also hope that our community will
> make comments, and perhaps provide additional information based on their
> own investigations.
>
> When commenting on these issues, please clearly state which issue you
> are addressing on each occasion. The issues have been given identifying
> letters to help with this.
>
> At the end of a public discussion period between Mozilla, our community
> and PROCERT, which we hope will be no longer than a couple of weeks,
> Mozilla will move to make a decision about the continued trust of
> PROCERT, based on the picture which has then emerged.
>

(Unless stated, wearing a personal hat)

Hi Gerv,

Do you have an anticipated time period for discussion? That is, what
represents a time for which PROCERT may submit feedback to have it
considered, and at what point you will consider discussion closed?

Based on the information provided, Issue K represents an unconditional
security issue, inasmuch as names such as "autodiscover" and "owaserver"
are widely-used hostnames for Outlook Web Access. Many clients attempt to
access resources at these (unqualified) names, relying on the combination
of DNS suffix search and locally-trusted certificates to ensure proper
resolution. Issuing a publicly trusted certificate for such a name - and
then failing to revoke it - represents a critical security risk and
arguably a dereliction of responsibility.
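The reason unqualified names are categorically dangerous can be captured in a tiny linter. This is an illustrative sketch, not any CA's actual check; `flag_internal_names` and its reason strings are invented for this example:

```python
def flag_internal_names(san_dns_names):
    """Flag SAN dNSNames that are not publicly resolvable FQDNs.

    A single-label name like "owaserver" is resolved by clients via DNS
    suffix search, so a public cert for it is effectively valid on any
    network that uses that hostname -- the core of the Issue K risk.
    """
    flagged = []
    for name in san_dns_names:
        labels = name.rstrip(".").split(".")
        if len(labels) < 2:  # dotless / unqualified host name
            flagged.append((name, "unqualified"))
        elif labels[-1] in {"local", "internal", "lan"}:  # non-public suffixes
            flagged.append((name, "non-public suffix"))
    return flagged

print(flag_internal_names(["autodiscover", "owaserver", "mail.example.com"]))
```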

Combined with Issue D and Issue G, it is questionable how this certificate
was ever validated, and these issues suggest serious failings in the most
critical security control of a CA - the validation of a domain.

Combined with Issue L, Issue Q, Issue R, Issue X, and Issue W, serious
questions are raised about the oversight and technical ability of the
staff, as these are indicative of serious control failures.

Outside of Issue K, I would suggest that Issue O and Issue S show a lack of
awareness of developments in the CA ecosystem, as both of these controls
were direct responses to widely reported CA security issues. The failure to
take appropriate steps - or to appreciate the reasons behind such steps -
is indicative of a systematic misunderstanding of the security function of
a CA.

On the basis of the sum of these issues, it would seem that the criteria in
Section 7.3 of Mozilla policy -
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
- is met: "Mozilla will disable or remove a certificate if the CA
demonstrates ongoing or egregious practices that do not maintain the
expected level of service or that do not comply with the requirements of
this policy."
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Idea for a stricter name constraint interpretation

2017-09-07 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 7, 2017 at 1:20 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All but one of your suggestions would require the revocation of existing
> SubCA certificates, essentially invalidating all existing uses of
> certificates issued by those SubCAs (Each certificate holder would need
> to obtain and install at least a new SubCA cert, possibly a complete new
> end cert).
>

This is not at all an accurate representation of what it would require, but
I'm not sure it's worth pointing out the code to you, since I think the
broader conversation is whether or not that's a bad thing. While you're
presupposing it's objectively bad, you haven't demonstrated that it's an
unreasonable requirement.

> Then there is your suggestion of requiring technically constrained
> SubCAs (that were constrained under a previous set of relevant name
> types) could survive by subjecting themselves to the massive overhead of
> satisfying the requirements for an unconstrained SubCA (audits, dual
> user authentication, specially secured server facilities, geographic
> redundancy, etc.), where as a constrained SubCA they could operate under
> normal enterprise internal security rules.
>

Yup.


> Could you suggest an alternative solution that does not impose such
> significant costs?
>

Why? You're moving a goalpost as it suits you, and that's not a productive
line of discussion. That there are multiple ways to address this problem is
established. That there are paths forward that avoid radically breaking
backwards compatibility is also established.

What you haven't stated, but which is clear from your replies, is that you
view these costs as exceeding the cost of breaking backwards compatibility.
That's certainly a viewpoint you can take, and I respect your view, but you
haven't advanced any arguments to support it, merely stated it as fact.

I hope we can agree there are multiple ways to address the introduction of
new nametypes. These different approaches have different tradeoffs for them
- revocation, auditing, new functionality, breaking old functionality - and
I've presented this multitude in the hopes of demonstrating to you that
jumping to a solution which notably runs counter to Mozilla's Principles
isn't necessary - there are alternatives.

You may still wish to view a solution that breaks backwards compatibility
favorably, and it's unlikely I can convince you otherwise. But I can
highlight for those on the list that alternatives do exist, and your
solution has notable costs - costs that, in many cases, are deemed
unacceptably high.


> You seem to be ignoring my actual arguments and arguing only against
> specific words and phrases.


Hopefully, you can see from above that I haven't done so at all, but in
fact been explaining to you why your proposal is unacceptably costly. It
may be simply you disagree.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Idea for a stricter name constraint interpretation

2017-09-01 Thread Ryan Sleevi via dev-security-policy
On Fri, Sep 1, 2017 at 2:07 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> RFC2818 postdates real world https by several years.  The original de
> facto standard by Netscape/Mozilla used the commonName semantics, which
> survived for more than a decade in commonly used software (GNU wget
> added SAN support sometime in 2011 for example).


Respectfully, you're conflating two different things.

You're suggesting support for _only_ common names versus support for
subjectAltNames.

You are correct to highlight that the publication of 2818 happened in 2000,
with work beginning in '98. However, you're not correct to suggest the SAN
wasn't supported in the original implementations.

You are also correct to highlight that some implementations ignored 2818
and/or didn't implement support for SAN. However, I don't believe that
distinction is relevant to your underlying discussion of nameConstraints or
their applicability, any more than a discussion of macOS's lack of support
until macOS 10.9 (if memory serves correctly) would be.

I realize we're ratholing on trivia here, and perhaps that was a goal, but
I think it's important to note that your original statement of fact was
inaccurate, and has remained inaccurate, and to the extent that misstatement
affects the subsequent design, it bears highlighting. The application of
nameConstraints to subjectAltName has been aligned, and the use of
subjectAltName in preference to commonName has been documented, for a
considerable amount of time. The application of dNSName nameConstraints to
commonNames, while a note of historical practice for a small number of
(widely deployed) clients, is equally unnecessary as those same clients move
towards the deprecation or removal of commonName support.

That is, put differently, if we're talking about how systems will/should
change, I would argue that it's not relevant to argue how systems
previously changed, because the only thing that matters is what _new_
changes will be implemented. You can see this in my explanation about why
changing the semantics of nameConstraints (without any other signal) is a
fundamentally flawed and problematic idea, and you can see this in a
discussion about why constraining commonNames (when new clients don't,
won't, or shouldn't support commonNames) is equally a flawed and
problematic idea.
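The SAN-over-commonName behaviour being discussed is the RFC 2818/6125 rule: when any dNSName SANs are present, the CN is ignored entirely; the CN is only a (deprecated) fallback for certs with no SANs. A minimal sketch, with hypothetical function names and deliberately omitting IP-address SANs and IDNA handling:

```python
def hostname_matches(pattern, hostname):
    """RFC 6125-style match: a wildcard only as the entire leftmost label."""
    p, h = pattern.lower().split("."), hostname.lower().split(".")
    if len(p) != len(h):
        return False
    if p[0] == "*":
        return p[1:] == h[1:]  # "*" matches exactly one label
    return p == h

def verify_peer_name(hostname, san_dns_names, common_name=None):
    """Check a presented cert's names against a reference hostname.

    If the cert carries any dNSName SANs, only those are consulted;
    the commonName is considered only when no SANs exist at all.
    """
    if san_dns_names:
        return any(hostname_matches(p, hostname) for p in san_dns_names)
    return common_name is not None and hostname_matches(common_name, hostname)

# SAN present: the CN is never consulted, even if it would have matched.
print(verify_peer_name("www.example.com", ["*.example.com"], "other.test"))        # True
print(verify_peer_name("www.example.com", ["api.example.net"], "www.example.com"))  # False
```

The second call is the point of the deprecation: a matching CN cannot rescue a cert whose SANs are wrong.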


>> So, from the get-go with the standards, it was possible to name constrain
>> DNS. Unless you were referencing certificates prior to them being bound to
>> domain names, but I can't see how that would be relevant, since the
>> context
>> is about DNS names.
>>
>>
> Point was that RFC2818 (and RFC2459 which it references for SAN usage)
> changed the established interpretation of WebPKI certificates from the
> established Mozilla standard.  And that this is an obvious precedent to
> making such changes.
>

I'm sorry, this is simply factually false.


> The primary problem is the need to not weaken security for code that
> starts looking at new (or previously unused) name types after existing
> PKI restricted CAs have (obviously) not mentioned the "new" type(s) as
> "deny all" entries in their name restrictions.
>
> The secondary problem is not to burden such restricted CAs with
> additional audit or other compliance requirements when such "new" name
> types are added to standards such as the CAB/F BRs and the Mozilla root
> program polices.


I gave multiple suggestions on how to avoid both of those.


> Indeed, I am just trying to see those very requirements from the
> perspective of the already deployed PKI and its subscribers being the
> "existing users" for which interop needs to be ensured.


Unfortunately, I do not believe you are succeeding in doing so through
proposing semantic changes.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Idea for a stricter name constraint interpretation

2017-08-31 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 31, 2017 at 5:21 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 31/08/2017 22:26, Ryan Sleevi wrote:
>
>> Agreed. But in general, in order to maintain interoperability, there's a
>> process for building consensus, and repurposing extensions as you propose
>> is generally detrimental to that :)
>>
>
> But sometimes necessary.
>

There is a tremendous burden of proof to demonstrate this, and it's the
path of last resort, not first.


>>> Moving the information to a new extension would basically just bloat
>>> certificates with more redundant data to be sent in every certificate
>>> based protocol exchange.  But changing the original decision in a
>>> backwards compatible manner may still be a good idea, either as a
>>> "stricter security policy" or (better, if it works well in controlled
>>> tests) as part of an update RFC for the IETF standard that specified the
>>> original semantics.
>>>
>>
>>
>> I can understand your perspective, but I must disagree with you that it's
>> "backwards compatible". It isn't - it's a meaningful semantic change that
>> breaks interoperability.
>>
>>
> I meant backwards compatible and interoperable with the actual real
> world CAs (as opposed to all the CAs that could be built under the old
> standard).  Compare to how the standard was changed from DNS name in the
> CN element to DNS name exclusively in the SAN extension, but hopefully
> with less transition time needed.


I believe this may be operating on an incomplete knowledge of history. RFC
2818 (aka the HTTPS RFC) always indicated commonName was deprecated (and
SAN was preferred), and nameConstraints have similarly always expressed a
path for constraining DNS names.

So, from the get-go with the standards, it was possible to name constrain
DNS. Unless you were referencing certificates prior to them being bound to
domain names, but I can't see how that would be relevant, since the context
is about DNS names.


> Yes, it means that technically constrained sub-CA certificates may be
>> 'bloated' in order to ensure the desired degree of security. That's a
>> trade-off for the compromises necessary to avoid audits. That's not,
>> however, an intrinsic argument against the process, or a suggestion it
>> cannot be deployed.
>>
>>
> Avoiding audit failures is a legal, not a technical need.  Anything that
> would only fail audits could be fixed by changing audit requirements, if
> the organizations setting those (such as Mozilla and CAB/F) desire.
>

I didn't suggest avoiding audit failures, but rather, avoiding audits. That
is, the material difference between a TCSC and a CA is not one of technical
requirements (they're the same, effectively), but one of whether or not a
self-audit is seen as acceptable versus an independent third-party audit.

I highlight this because it makes the tradeoff something more concrete: An
organization that wishes to avoid the administrative hassle of an
independent audit could opt for a technically constrained sub-CA, which
would be "bloated" in your view. If they didn't want the bloat, they could
accept the administrative hassle of an independent third-party audit.

That is, there are options to satisfy an organization's needs, allowing
them to prioritize whether it's more important to have the size reduction
or to have organizational flexibility. There's no innate requirement to
allow both - and while that may be an optimization, is one that comes with
the compatibility and interoperability risks I highlighted, so it may not
actually be achievable in the world we have. But that's OK - organizations
and individuals routinely have to operate in the world we have and make
choices based on priorities, and we've made it so far :)


>
>
>>> The interaction between a nameConstraints extension not specifying
>>> directorynames and the directoryname in the Subject field would be an
>>> area
>>> needing careful specification, based on compatibility and historic
>>> concerns.
>>>
>>>
>> Yes. Which would not be appropriate for m.d.s.p (for reasons of both
>> consensus and intellectual property). That is a concern for some members,
>> and is why organizations like W3C and groups such as WICG exist :)
>>
>>
> Ok, I was simply hoping informal discussion in a place like m.d.s.p.
> would be a better place to initial evaluate such an idea before starting
> up the whole standardization process.
>

Fair enough. This makes a great venue for that, but certainly, as it shifts
to technical details, working through a process like WICG - in which you
could write up an 'explainer' explaining the idea and how

Re: Idea for a stricter name constraint interpretation

2017-08-31 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 31, 2017 at 4:13 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I am aware that this was the original specification.  However like many
> other parts of PKIX it may not be as good in 20/20 hindsight.


Agreed. But in general, in order to maintain interoperability, there's a
process for building consensus, and repurposing extensions as you propose
is generally detrimental to that :)


> Moving the information to a new extension would basically just bloat
> certificates with more redundant data to be sent in every certificate
> based protocol exchange.  But changing the original decision in a
> backwards compatible manner may still be a good idea, either as a
> "stricter security policy" or (better, if it works well in controlled
> tests) as part of an update RFC for the IETF standard that specified the
> original semantics.


I can understand your perspective, but I must disagree with you that it's
"backwards compatible". It isn't - it's a meaningful semantic change that
breaks interoperability.

At the least, that's something that bears documentation and consensus -
that is, don't "embrace, extend, extinguish" - if you will.

Yes, it means that technically constrained sub-CA certificates may be
'bloated' in order to ensure the desired degree of security. That's a
trade-off for the compromises necessary to avoid audits. That's not,
however, an intrinsic argument against the process, or a suggestion it
cannot be deployed.
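As a concrete illustration of the "bloat", a technically constrained sub-CA must enumerate constraints for each name type it does not intend to use, since an unmentioned type is unconstrained. A hypothetical OpenSSL x509v3_config fragment in the commonly documented pattern (permit one DNS subtree, exclude the entire IPv4 and IPv6 space); the domain is illustrative:

```
# nameConstraints for a technically constrained sub-CA (illustrative)
nameConstraints = critical, \
    permitted;DNS:.example.com, \
    excluded;IP:0.0.0.0/0.0.0.0, \
    excluded;IP:0:0:0:0:0:0:0:0/0:0:0:0:0:0:0:0
```

Every future name type would need its own exclusion added here, which is exactly the growth being described.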


> The interaction between a nameConstraints extension not specifying
> directorynames and the directoryname in the Subject field would be an area
> needing careful specification, based on compatibility and historic concerns.
>

Yes. Which would not be appropriate for m.d.s.p (for reasons of both
consensus and intellectual property). That is a concern for some members,
and is why organizations like W3C and groups such as WICG exist :)


> A first order approximation would be that the absence of directory name
> constraints in a non-empty nameconstraints extension would not prohibit the
> (mandatory) inclusion of a directory name as the Subject Name of a
> certificate (but would still block its inclusion as a SAN).
>

It does, as proposed, so your proposal is not 'backwards compatible'
anymore :)


> A second order approximation could apply certain non-directory-name
> constraint semantics to certain directory name elements.  For example
> an "emailAddress" directory name field could be subject to the
> constraints for rfc822name absent explicitly contradictory directory
> name constraints.  An official specification would have to enumerate
> these situations explicitly.


Indeed.


> However in practice, such a replacement of (relying party redistributed)
> SubCA certificates would be at least as difficult as simply switching to
> a new SubCA.  Either solution would be unlikely to make subscribers stop
> distributing the old SubCA cert with their existing non-expired end
> cert, but changing to a new SubCA would at least prevent accidental
> reuse of the expired/revoked SubCA cert with new end certs.


They are not relying party distributed. They're subscriber distributed. And
as the Subscriber has a direct or transitive relationship with the Issuer,
that's not unreasonable.

I agree that it's an error prone process, and I agree that changing the
name (and not just the key) is an ideal scenario to transition. However,
unless you revoke the old certificate, it's unconstrained. And if you
revoke the old certificate, then everything it's issued is no longer valid,
unless you reissue for the same name and key with new constraints. Which is
why folks thought it was a good idea at the time.

I'm not trying to defend the ideas as good - the idea of making
nameConstraints a whitelist, rather than a blacklist, has been something
bandied about for the better part of a decade. However, it's a mistake we
have to live with, and if we want to correct it, it means specifying
something new, and in a way that is interoperable, both with the old and
others. I tried to give you a suggestion on how to do so, should you feel
motivated to pursue it.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Idea for a stricter name constraint interpretation

2017-08-31 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 31, 2017 at 8:18 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Would it be beneficial to Mozilla in particular and the larger PKI
> community in general if the following was added to implementations:
>

Hi Jakob,

This was rather extensively discussed in the IETF during the production of
the nameConstraints extension - whether to fail open or fail closed, if you
will, with unrecognized name types.

The existing RFCs document the behaviour that is implemented by clients, so
no, it would not be wise to repurpose this behaviour in a contradictory way
- that would go against the mission of ensuring an interoperable Web based
on shared standards.

The approach to change the semantic meaning of extensions in such a way is
to define a new such extension - which may have the same encoding, but have
the semantic processing restrictions you mentioned. One way that could be
done in an interoperable way is to ensure both the 'legacy' nameConstraints
extension (to handle the existing Web PKI) and the 'new'
nameConstraintsVersion2 extension (poorly named, but with the same ASN.1
production and different semantic meaning for processing) be required for a
certificate to be considered 'constrained'.

Such a document would ideally go through the IETF, but COULD be incubated
in an appropriate venue (such as WICG) to allow iterative and flexible
development.

However, the implementation isn't quite as 'simple' as you describe, and so
would require more work. For example, one flaw would be that unless such a
constrained certificate specified a Directory Name constraint, every
certificate issued would be a violation of the name constraint unless it
was a fully empty subject ( see https://no-subject.badssl.com/ to
understand the compatibility issues this causes; particularly, try it on
macOS). So more work is needed.

But that general concern - of being able to extend new name types and not
necessarily have the CA be able to issue for them - has been discussed in
the past. The original idea was that as new name types are introduced that
have semantic meaning for applications, you would revoke the existing cert
and reissue a new one (with new constraints) using the same Subject/Key, as
that would ensure a continuity of validity for situations where you did
want to restrict the naming scheme. So, technically, there are options for
CAs today :)
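The fail-open property under discussion follows from how RFC 5280 constraint subtrees apply per name type: a type with no subtrees at all is simply unconstrained. A minimal sketch of dNSName processing (hypothetical function names; suffix matching on label boundaries, excluded subtrees overriding permitted ones):

```python
def dns_name_in_subtree(name, constraint):
    """RFC 5280 dNSName constraint: suffix match on label boundaries.

    "host.example.com" falls within "example.com", but
    "badexample.com" does not.
    """
    n, c = name.lower().rstrip("."), constraint.lower().rstrip(".")
    return n == c or n.endswith("." + c)

def dns_name_permitted(name, permitted, excluded):
    """Apply permitted/excluded dNSName subtrees to one SAN entry.

    With no dNSName subtrees in either list, any DNS name passes --
    the fail-open behaviour that makes adding *new* name types safe
    for old constrained certs, at the cost discussed above.
    """
    if any(dns_name_in_subtree(name, c) for c in excluded):
        return False
    if not permitted:  # empty permitted list imposes no restriction
        return True
    return any(dns_name_in_subtree(name, c) for c in permitted)

print(dns_name_permitted("mail.example.com", ["example.com"], []))     # True
print(dns_name_permitted("example.org", ["example.com"], []))          # False
print(dns_name_permitted("bad.example.com", [], ["bad.example.com"]))  # False
```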
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Violations of Baseline Requirements 4.9.10

2017-08-29 Thread Ryan Sleevi via dev-security-policy
On Tue, Aug 29, 2017 at 8:47 AM, Paul Kehrer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Symantec / GeoTrust
>
> CCADB does not list an email address. Not CC'd.
>
> DN: C=IT, O=UniCredit S.p.A., CN=UniCredit Subordinate External
> Example cert:
> https://crt.sh/?q=049462100743d2bcb10780e7c4eb2ce1197a3f8bea7fad5ef9141f008eb1e6ca
> OCSP URI: http://ocsp.unicredit.eu/ocsp


Note: There are 7 associated certificates for this CA (
https://crt.sh/?caid=294 )

Of those:
5 are issued by Symantec / GeoTrust:
  - 1 is expired ( https://crt.sh/?id=9219 )
  - 4 are revoked ( https://crt.sh/?id=12722071 / https://crt.sh/?id=6941850
/ https://crt.sh/?id=47086214 / https://crt.sh/?id=12165934)
2 are issued by Actalis
  - 2 are technically constrained sub-CAs ( https://crt.sh/?id=147626411 /
https://crt.sh/?id=47081615 )

As they are technically-constrained subordinate CAs, they are (presently)
exempted from that MUST requirement.
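
[Editor's illustration] Tallies like the one above can be scripted against
crt.sh's JSON listing. The `output=json` parameter and the exact response
field names are assumptions here; verify them against the service before
relying on them. A sketch that works over a canned sample without hitting
the network:

```python
import json
from datetime import datetime

# Rows shaped like crt.sh's JSON listing (field names are assumptions;
# in practice you would fetch e.g. https://crt.sh/?caid=294&output=json
# and check the schema first).
sample = json.loads("""[
  {"id": 9219,     "not_after": "2011-01-01T00:00:00"},
  {"id": 12722071, "not_after": "2020-01-01T00:00:00"}
]""")

as_of = datetime(2017, 8, 29)  # date of the message above
expired = [row["id"] for row in sample
           if datetime.strptime(row["not_after"], "%Y-%m-%dT%H:%M:%S") < as_of]
print(expired)  # [9219]
```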


Re: BR compliance of legacy certs at root inclusion time

2017-08-22 Thread Ryan Sleevi via dev-security-policy
On Tue, Aug 22, 2017 at 12:01 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 21/08/17 06:20, Peter Kurrasch wrote:
> > The CA should decide what makes the most sense for their particular
> > situation, but I think they‎ should be able to provide assurances that
> > only BR-compliant certs will ever chain to any roots they submit to the
> > Mozilla root inclusion program.
>
> So you are suggesting that we should state the goal, and let the CA work
> out how to achieve it? That makes sense.
>
> I agree with Nick that transparency is important.
>
> Is there room for an assessment of risk, or do we need a blanket
> statement? If, say, a CA used short serials up until 2 years ago but has
> since ceased the practice, we might say that's not sufficiently risky
> for them to have to stand up and migrate to a new cross-signed root. I
> agree that becomes subjective.


I think it'd be useful if we knew of reasons why standing up (and
migrating) to a new infrastructure was not desirable?

It helps avoid value-based judgement of risk, which, like human processes
for verifying certificates, can fail - and instead sets up objective
criteria and processes that provide greater assurance. It's also useful to
consider that the function of cost (whether fiduciary or in complexity) is
something that is amortized over time, and achieves economies of scale
through its mandate, so we should keep a critical eye in remembering that
the associated costs will go down over time as CAs develop processes to
routinely do so.


Re: BR compliance of legacy certs at root inclusion time

2017-08-20 Thread Ryan Sleevi via dev-security-policy
On Sun, Aug 20, 2017 at 3:44 PM, Peter Bowen wrote:

> From the perspective of being "clean" from a given NSS version this,
> makes sense.  However the reality for most situations is there is
> demand to support applications and devices with trust stores that have
> not been updated for a while.  This could be something as simple as
> Firefox ESR or it could be some device with an older trust store.
> Assuming there is a need to have the same certificate chain work in
> both scenarios, the TLS server may need to send a chain with multiple
> root to root cross-certificates.
>

I'm not sure it's fair to say there needs to be the 'same' certificate
chain working in both. The variety of trust stores already shows how that's
not necessary today. Merely, one needs to have 'a' certificate chain.

Perhaps I've missed a point you weren't stating, but I'm not sure why you
would need root-to-root cross-certificates, as this proposal only applies
to the roots included in Mozilla's store, and offers a transition path for
those roots.


> https://gist.github.com/pzb/cd10fbfffd7cb25bb57c38c3865f18f2 is just
> the roots in each unique disconnected graph.  Having the entries there
> does not imply that all have cross-signed each other, rather that
> there is a path from each pair of roots to a common node.  For
> example, Root A and Root B might each have a subordinate CA that have
> each cross-certified the same, third subordinate.
>

I'm not sure if you're arguing that this is a desired config, or merely, a
config that exists. I certainly would not be willing to suggest CAs have
(effectively) managed their cross-certificates well, and it would seem as
if some of these paths are reflective of business transitions/deals (and
their expirations) rather than intrinsic needs of the Web PKI.

As it sounds like you agree that the overall design is both sound and
desirable, from a Web PKI perspective, perhaps you could clarify what you
believe is a case not supported by this design. This would be useful to
understanding what, if any, material consequence there would be of
implementing this saner approach to root store management.


> Considering we already see paths like:
>
> OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US =>
> CN=VeriSign Class 3 Public Primary Certification Authority - G3,OU=(c)
> 1999 VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust
> Network,O=VeriSign\, Inc.,C=US =>
> CN=VeriSign Class 3 Public Primary Certification Authority - G5,OU=(c)
> 2006 VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust
> Network,O=VeriSign\, Inc.,C=US =>
> CN=VeriSign Universal Root Certification Authority,OU=(c) 2008
> VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust
> Network,O=VeriSign\, Inc.,C=US =>
> CN=Symantec Class 3 Extended Validation SHA256 SSL CA,OU=Symantec
> Trust Network,O=Symantec Corporation,C=US * =>
> (End-Entity Certificate)
>
> I think we need to be careful when considering root rotations.
>

While a useful real world example, I think the cross-signing activities of
several CAs (and one can examine Entrust for a similar issue, or the
multiple paths StartCom previously did) are not necessarily designed with
either interoperability or consideration of the ecosystem in mind. After
all, this is the same set of activities that make it easy to 'forget'
disclosure of critical intermediates.

Rather, with appropriate advice, one can easily end up with a linear path,
where the only 'cost' is paid by legacy systems that don't update, and the
servers that need to support such legacy systems. As there is an inherent
lifetime for how long something can be 'safely' connected to the Internet,
this doesn't seem unreasonable to build upon.


Re: BR compliance of legacy certs at root inclusion time

2017-08-18 Thread Ryan Sleevi via dev-security-policy
On Fri, Aug 18, 2017 at 11:02 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Sometimes, CAs apply for inclusion with new, clean roots. Other times,
> CAs apply to include roots which already have a history of issuance. The
> previous certs issued by that CA aren't always all BR-compliant. Which
> is in one sense understandable, because up to this point the CA has not
> been bound by the BRs. Heck, the CA may never even have heard of the BRs
> until they come to apply - although this seems less likely than it would
> once have been.
>
> What should our policy be regarding BR compliance for certificates
> issued by a root requesting inclusion, which were issued before the date
> of their request? Do we:
>
> A) Require all certs be BR-compliant going forward, but grandfather in
>the old ones; or
> B) Require that any non-BR-compliant old certs be revoked; or
> C) Require that any seriously (TBD) non-BR-compliant old certs be
>revoked; or
> D) something else?
>

D) Require that the CA create a new root certificate to be included within
Mozilla products, from which all future BR-compliant certificates will be
issued. In the event this CA has an existing root included within one or
more software products, this CA may cross-certify their new root with their
old root, thus ensuring their newly-issued certificates (which are BR
compliant) work with such legacy software.

This ensures that all included CAs operate from a 'clean slate' with no
baggage or risk. It also ensures that the slate always starts from "BR
compliant" and continues forward.

However, some (new) CAs may rightfully point out that existing, 'legacy'
CAs have not had this standard applied to them, and have operated in a
manner that is not BR compliant in the past.

To reduce and/or eliminate the risk from existing CAs, particularly those
with long and storied histories of misissuance, which similarly present
unknowns to the community (roots that may have been included for >5 years,
thus prior to the BR effective date), require the same of existing roots
who cannot demonstrate that they have had BR audits from the moment of
their inclusion. That is, require 'legacy' CAs to create and stand up new
roots, which will be certified by their existing roots, and transition all
new certificate issuance to these new 'roots' (which will appear to be
cross-signed/intermediates, at first). Within 39 months, Mozilla will be
able to remove all 'legacy' roots for purposes of website authentication,
adding these 'clean' roots in their stead, without any disruption to the
community. Note that this is separable from D, and represents an effort to
holistically clean up and reduce risk.

The transition period at present cannot be less than 39 months (the maximum
validity of a newly issued certificate), plus whatever time is afforded to
CAs to transition (presumably, on the order of 6 months should be
sufficient). In the future, it would also be worth considering reducing the
maximum validity of certificates, such that such rollovers can be completed
in a more timely fashion, thus keeping the ecosystem in a constant 'clean'
state.
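
[Editor's illustration] The timeline arithmetic in the last paragraph can be
made concrete: a ~6-month transition window for CAs to stand up new roots,
plus the 39-month maximum validity of certificates already issued from the
old roots, gives the earliest safe removal date. Dates below are purely
illustrative:

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months, clamping the day to 28 for safety."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

policy_effective  = date(2017, 9, 1)
transition_end    = add_months(policy_effective, 6)   # CAs stand up new roots
last_legacy_expiry = add_months(transition_end, 39)   # longest-lived cert from old root
print(transition_end, last_legacy_expiry)  # 2018-03-01 2021-06-01
```

Shortening maximum certificate validity shrinks the second term directly,
which is the point made above about faster rollovers.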


Re: O=U.S. Government for non-USG entity (IdenTrust)

2017-08-18 Thread Ryan Sleevi via dev-security-policy
Doesn't RFC 5280 clearly indicate that already, through its normative
description of the EKU?

That is, I can understand there being confusion or misinterpretation, but
I'm not sure that the problem itself is rooted in the documents, and thus,
may not be something the documents need to address. :)

On Fri, Aug 18, 2017 at 10:49 AM, Jeremy Rowley <jeremy.row...@digicert.com>
wrote:

> I don't (as these are the exact type of cert I've been trying to kill for
> years), but Identrust did based on their response. Looking at it from their
> POV, the language could probably be clarified to state that any cert with
> no EKU, serverAuth, or anyEKU is considered a BR cert regardless of
> other content.
>
> On Aug 18, 2017, at 8:26 AM, Ryan Sleevi <r...@sleevi.com> wrote:
>
> Do you believe https://github.com/mozilla/pkipolicy/blob/master/
> rootstore/policy.md#11-scope is ambiguous in this context? That is what
> is referenced in the text.
>
> It sounds as if you're suggesting they're in scope, via 1.1, but that
> they're out of scope, because the policy does not state that (id-kp-anyEKU
> || id-kp-serverAuth) are SSL certificates and (id-kp-anyEKU ||
> id-kp-emailProtection) are email certificates, even though this would
> logically flow (from RFC 5280 https://tools.ietf.org/
> html/rfc5280#section-4.2.1.12) stating that anyEKU places no restrictions
> on the subject key as to its purpose. Is that correct?
>
> On Fri, Aug 18, 2017 at 9:53 AM, Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Right, but can you call these SSL certs without an FQDN?
>>
>>
>>   *   Insofar as the Baseline Requirements attempt to define their own
>> scope, the scope of this policy (section 1.1) overrides that. Mozilla thus
>> requires CA operations relating to issuance of all SSL certificates in the
>> scope of this policy to conform to the Baseline Requirements.
>>
>> Is SSL certificate defined?
>>
>> On Aug 18, 2017, at 7:33 AM, Gervase Markham <g...@mozilla.org> wrote:
>>
>> On 17/08/17 20:31, Jeremy Rowley wrote:
>> Without an FQDN, I doubt they are in scope for the baseline requirements.
>>
>> Not according to the BRs themselves. However, the Mozilla Policy 2.5
>> specifically says:
>>
>> "Insofar as the Baseline Requirements attempt to define their own scope,
>> the scope of this policy (section 1.1) overrides that. Mozilla thus
>> requires CA operations relating to issuance of all SSL certificates in
>> the scope of this policy to conform to the Baseline Requirements."
>>
>> Now, whether we are right to include anyEKU in scope, given that it
>> pulls in certs such as those in question, is still something I am unsure
>> about :-) But the current policy says what it says.
>>
>> They are in scope for the Mozilla policy. The BRs require the cert to
>> be intended for web tls. These are not.
>>
>> But the Mozilla Policy re-scopes the BRs to remove the ambiguous
>> language about "intent".
>>
>> The Mozilla policy covers client certs as well as tls.
>>
>> Er, no it doesn't (except insofar as we make anyEKU in scope)? Our
>> policy covers server certs and email certs.
>>
>> Gerv
>
>


Re: O=U.S. Government for non-USG entity (IdenTrust)

2017-08-18 Thread Ryan Sleevi via dev-security-policy
Do you believe
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md#11-scope
is ambiguous in this context? That is what is referenced in the text.

It sounds as if you're suggesting they're in scope, via 1.1, but that
they're out of scope, because the policy does not state that (id-kp-anyEKU
|| id-kp-serverAuth) are SSL certificates and (id-kp-anyEKU ||
id-kp-emailProtection) are email certificates, even though this would
logically flow (from RFC 5280
https://tools.ietf.org/html/rfc5280#section-4.2.1.12) stating that anyEKU
places no restrictions on the subject key as to its purpose. Is that
correct?
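
[Editor's illustration] The scoping reading being debated, that anyEKU or
serverAuth brings a certificate into SSL scope and anyEKU or emailProtection
into email scope, can be expressed directly. A sketch of that interpretation
(treating an absent EKU extension like anyEKU, per the RFC 5280 point that
only a present EKU restricts purpose; the functions are illustrative, not
policy text):

```python
ANY_EKU     = "2.5.29.37.0"        # anyExtendedKeyUsage
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"  # id-kp-serverAuth
EMAIL_PROT  = "1.3.6.1.5.5.7.3.4"  # id-kp-emailProtection

def in_ssl_scope(ekus):
    # An absent EKU extension places no restriction on purpose
    # (RFC 5280 4.2.1.12), so treat it like anyEKU.
    return not ekus or ANY_EKU in ekus or SERVER_AUTH in ekus

def in_email_scope(ekus):
    return not ekus or ANY_EKU in ekus or EMAIL_PROT in ekus

print(in_ssl_scope([ANY_EKU]))       # True
print(in_ssl_scope([EMAIL_PROT]))    # False
print(in_email_scope([EMAIL_PROT]))  # True
```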

On Fri, Aug 18, 2017 at 9:53 AM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Right, but can you call these SSL certs without an FQDN?
>
>
>   *   Insofar as the Baseline Requirements attempt to define their own
> scope, the scope of this policy (section 1.1) overrides that. Mozilla thus
> requires CA operations relating to issuance of all SSL certificates in the
> scope of this policy to conform to the Baseline Requirements.
>
> Is SSL certificate defined?
>
> On Aug 18, 2017, at 7:33 AM, Gervase Markham wrote:
>
> On 17/08/17 20:31, Jeremy Rowley wrote:
> Without an FQDN, I doubt they are in scope for the baseline requirements.
>
> Not according to the BRs themselves. However, the Mozilla Policy 2.5
> specifically says:
>
> "Insofar as the Baseline Requirements attempt to define their own scope,
> the scope of this policy (section 1.1) overrides that. Mozilla thus
> requires CA operations relating to issuance of all SSL certificates in
> the scope of this policy to conform to the Baseline Requirements."
>
> Now, whether we are right to include anyEKU in scope, given that it
> pulls in certs such as those in question, is still something I am unsure
> about :-) But the current policy says what it says.
>
> They are in scope for the Mozilla policy. The BRs require the cert to
> be intended for web tls. These are not.
>
> But the Mozilla Policy re-scopes the BRs to remove the ambiguous
> language about "intent".
>
> The Mozilla policy covers client certs as well as tls.
>
> Er, no it doesn't (except insofar as we make anyEKU in scope)? Our
> policy covers server certs and email certs.
>
> Gerv


Re: Certificates with less than 64 bits of entropy

2017-08-18 Thread Ryan Sleevi via dev-security-policy
On Fri, Aug 18, 2017 at 1:34 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Since QuoVadis has not yet responded, let me point to a few (partial)
> answers already known from previous messages from QuoVadis or others:


I believe it would be far more productive for this community if you allow
CAs to respond rather than attempting to speak for them. While I recognize
your desire to help, your replies unfortunately tend to introduce more
confusion and new issues. While not wanting to discourage participation on
this list, some discussions on this list are transparent and public
discussions between CAs and Root Stores, and thus your involvement is
neither beneficial nor necessary. Please allow CAs to answer for themselves.

In this case, for example, you do not actually reference any answers, as
you suggest you have, because these matters have not been answered.

I appreciate your enthusiasm on this topic, and for participation, but as
your replies attempt to speak for either CAs or root stores, they
unfortunately introduce significant harm and confusion. As I am sure you
intend to make productive contributions, it may be best in such cases to
simply observe and ask questions to better understand things, rather than
attempt to provide answers on behalf of others. This applies both on-list
and in bugs.

Thank you for your interest in learning more about these topics, and
hopefully these requests will lead to more productive discussions. As this
transparency is a valuable benefit to the community, it would be truly
unfortunate if such attempts to assist were to undermine and harm such
discussions and result in removing that transparency in order to ensure
that only the authorized representatives were responding.


Re: Bugzilla Bugs re CA issuance of non-compliant certs

2017-08-15 Thread Ryan Sleevi via dev-security-policy
On Tue, Aug 15, 2017 at 4:01 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tuesday, August 15, 2017 at 12:46:36 PM UTC-7, Ryan Sleevi wrote:
> >
> > The requirement for revocation comes from the Baseline Requirements.
> >
> > Could you clarify your expectations regarding CAs' violation of the
> > Baseline Requirements with respect to these issues and Section 4.9.1.1.
>
> Are you specifically referring to item #9 of Section 4.9.1.1?
>

Yeah, #9 serves as a catch-all, but based on the reporting from Jonathan
and Alex, also #4.

For things like internal IPs / domains, there's #6 and #10

Depending on the CP/CPS, #14 applies

For some of the encoding issues, I would argue that #15 would/should apply
- some of these certificates actively harm interoperability (e.g. they're
problematic practices well beyond an acceptable timeframe for them, and
fail to work in NSS and other applications)


I think, in all of this, it's worth calling out that at least one CA has
(1) Proactively reached out to multiple root programs regarding a proposed
remediation plan
(2) Provided a detailed post-mortem that sufficiently demonstrates an
understanding of the issue
(3) Provided a reasonable and responsible set of next steps that arguably
represent good industry practice that other CAs should consider adopting.

And that's PKIoverheid.

Also worth calling out is Let's Encrypt, which was able to revoke in a
timely fashion, enact a production change, and provide a detailed
post-mortem, all within 24 hours. That's an incredible level of
responsibility and turn-around that shows a systemic understanding of the
risks and challenges.

It would be a shame if the excellent work by these two CAs to address
community concerns was not met-or-exceeded by the other CAs on the list, as
that certainly discourages future post-mortems and encourages incomplete
responses.


Re: Bugzilla Bugs re CA issuance of non-compliant certs

2017-08-15 Thread Ryan Sleevi via dev-security-policy
On Tue, Aug 15, 2017 at 3:37 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I do *NOT* necessarily expect the CAs to revoke all of these certificates.
> I expect the CAs to do a careful analysis of the situation and
> determine/explain whether or not they will revoke the certs or let the
> expire. If the choice is to let them expire, there needs to be good reasons
> and a timeline for when the bulk of certs will expire. We (Mozilla
> community) will evaluate such information and provide constructive
> feedback, and I or Gerv will add a comment in the bug to confirm if the
> plan (when not revoking) is acceptable, or to state if we/Mozilla will
> require revocation.
>

The requirement for revocation comes from the Baseline Requirements.

Could you clarify your expectations regarding CAs' violation of the
Baseline Requirements with respect to these issues and Section 4.9.1.1.

That is:
1) Do you expect a qualified audit report for any CA that has failed to
revoke within 24 hours? (I would suggest Mozilla should expect that, but
that's not explicitly stated, and other programs may already expect/require
this)
2) Are you suggesting you will, in evaluating such a qualified report, take
into consideration the explanations CAs provide, and the determination of
whether or not such a qualified report will be acceptable shall be
communicated in the bug? (I think that's a correct analysis of your
proposal, but want to confirm)
3) Do you have a plan for CAs that (1) fail to respond (2) fail to respond
in a timely fashion (3) fail to respond to a level of detail sufficient to
determine whether or not it's a 'good' reason).

I would note that any CA which does not or has not promptly revoked these
within 24 hours of contact should, at a minimum, contact all root programs
that they participate in to acknowledge this non-compliance and discuss
what expectations other, non-Mozilla Root Programs have with respect to
these certificates. Similarly, if such programs have requirements around
"Security Incident Reporting," that CAs are timely in such reports.

Given that these are a requirement in the Baseline Requirements, it is up
to each CA to work with their auditor (and supervisory body, as
appropriate) and the root store(s) they participate in to ensure their
analysis of the risk and plan of remediation is acceptable.

Is that a correct summary of the situation?


Re: Expired Certificates Listed by Certificate Manager

2017-08-15 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 15, 2017 at 10:34:27 AM UTC-4, Gervase Markham wrote:
> On 15/08/17 13:59, Ryan Sleevi wrote:
> > Note: adding to certdata.txt, at present, will have various undesirable
> > side-effects:
> > 
> > - Distrust records, without associated certs, can present UI issues when
> > viewing and editing (which is why the associated certs are included in
> > certdata.txt)
> 
> The current distrust records do have associated certs, right?

Correct. This presents them in the UI (both expired and non-expired - hence 
this thread).

> > - Distrust records, _with_ associated certs, can present UI issues when
> > viewing and editing (yes, it's a no-win, and that's the point)
> 
> I assume you mean UI issues in Firefox/Thunderbird specifically?

No, I actually mean the UI of any NSS-using application, since NSS itself does 
not ship with a UI. That's handled by PSM in Firefox/Thunderbird, a similar UI 
in Chrome, and my understanding is that several tools also exist in the Linux 
space.

We regularly see bugs from Chrome on Linux users (and Chrome on ChromeOS, where 
we've adopted a similar approach) complaining about confusion about 
certificates being listed in the UI but that explicitly aren't trusted (or the 
subtle change that these flags had depending on NSS version).

> > - Distrust records, _with_ associated certs, can present new challenges for
> > distributions that patch (failing to include a new root = things don't work
> > that should. failing to distrust an old certificate = things that shouldn't
> > work, do)
> 
> However, these are existing rather than new challenges, given that we
> already have such certificates in the store.

Yup. But it's something to be aware of when folks propose, say, adding the 
OneCRL-set of distrust, which would be several hundred more certificates (463, 
it looks like)


Re: Expired Certificates Listed by Certificate Manager

2017-08-15 Thread Ryan Sleevi via dev-security-policy
On Tue, Aug 15, 2017 at 8:31 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 01/08/17 09:21, userwithuid wrote:
> > In this context @Mozilla: Those additional distrust entries are
> > coming from NSS, but they are all pre-OneCRL afaics. Is this
> > coincidence (= there wasn't any "high-profile" enough distrust
> > warranting nss addition) or has the certdata-based distrust been
> > entirely obsoleted by OneCRL (= there will never be any new distrust
> > entries in certdata)?
>
> OneCRL does not obsolete certdata.txt-based distrust because not
> everyone checks OneCRL. While we can't add every cert in OneCRL to
> certdata.txt, we should add the big dis-trusts to it. Do you think
> there's anything missing?
>

Note: adding to certdata.txt, at present, will have various undesirable
side-effects:

- Distrust records, without associated certs, can present UI issues when
viewing and editing (which is why the associated certs are included in
certdata.txt)
- Distrust records, without associated certs, creates issues for various
tools consuming certdata.txt
- Distrust records, _with_ associated certs, can present UI issues when
viewing and editing (yes, it's a no-win, and that's the point)
- Distrust records, _with_ associated certs, can present new challenges for
distributions that patch (failing to include a new root = things don't work
that should. failing to distrust an old certificate = things that shouldn't
work, do)

Could you indicate what you believe 'big' distrusts are versus 'little'
distrusts? Are we talking root vs subordinate CA? Something else?

Given that distrusting a certificate (whether because CA requested - such
as a cessation of operation - or imposed - such as compromised) presents
path building risks and challenges, the current approach of placing it
within OneCRL minimizes the risk to certdata.txt consumers, which are
fairly consistently poorly suited for path discovery, and generally only
possess limited path validation capabilities. That is, introducing distrust
records could 'break' legitimate chains, given the common path "building"
implementation, which is why it's useful to keep separate.
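
[Editor's illustration] The breakage mode described, a distrust record
pruning a chain that a simple path "builder" would otherwise have used, can
be sketched with a toy issuer map. This is illustrative only; real path
building (RFC 4158) is considerably more involved:

```python
def build_path(leaf, issuer_of, trusted_roots, distrusted):
    """Walk issuer links from a leaf; fail if any link is distrusted."""
    path, cert = [leaf], leaf
    while cert not in trusted_roots:
        cert = issuer_of.get(cert)
        if cert is None or cert in distrusted:
            return None  # dead end, or a distrust record hit mid-chain
        path.append(cert)
    return path

issuer_of = {"leaf": "intermediate", "intermediate": "root"}
print(build_path("leaf", issuer_of, {"root"}, set()))
# ['leaf', 'intermediate', 'root']
print(build_path("leaf", issuer_of, {"root"}, {"intermediate"}))  # None
```

Because this builder follows a single linear issuer link per certificate,
distrusting one link kills the whole chain even when a legitimate alternate
path exists, which is the risk to certdata.txt consumers noted above.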


Re: Certificates with less than 64 bits of entropy

2017-08-15 Thread Ryan Sleevi via dev-security-policy
On Tue, Aug 15, 2017 at 4:53 AM, Stephen Davidson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Update on Siemens - Certificates with less than 64 bits of entropy
> The following is regarding the topic https://groups.google.com/
> forum/#!topic/mozilla.dev.security.policy/vl5eq0PoJxY regarding the
> “Siemens Issuing CA Internet Server 2016” that is root signed by QuoVadis
> and independently audited and disclosed.
>
> At the time the issue was reported, Siemens agreed to immediately take the
> CA offline, and it remains offline pending resolution.  This was reported
> to the listserv by me on 7/20.
>
> Siemens confirmed a bug in their internally-developed CA software which
> meant it was issuing TLS/SSL certificates with 32-bit serial numbers, although the serial
> numbers were non sequential.  Siemens informed their external auditors of
> the situation.
>
> It was found that 1201 currently valid certificates chained to the
> QuoVadis root were affected.  An additional 137 currently valid
> certificates were issued under the previous "Siemens Issuing CA Internet
> 2013" chained to a Digicert root, noted in an email from Ben Wilson of
> Digicert yesterday.  In the case of the QuoVadis-chained certificates, the
> certificates are virtually all of one year validity with expirations
> balanced across the calendar months (there are a handful of two and three
> year certificates, similar to the Digicert-chained population).  The
> remaining Digicert-chained certificates all expire by end of November
> 2017.  All certificates were issued to Siemens entities and
> Siemens-controlled domains.
>
> Next steps
> Siemens has moved to accelerate the previously planned replacement of
> their existing inhouse CA platform with a well-known open source CA with
> which QuoVadis is well familiar.  QuoVadis and Siemens' auditors are
> coordinating with Siemens to confirm that the new CA configuration meets
> Baseline Requirements.  It is worth noting that some BR controls,
> particularly related to vetting, are imposed by the Siemens certificate
> lifecycle system which will continue to be used with the new CA.  Siemens
> will not recommence their inhouse SSL issuance until the new CA is active
> and confirmed compliant.  The new CA is expected to come online in the
> second week of September.  Siemens commits to logging new SSL from that CA
> in Certificate Transparency.
>
> Replacement
> Although the Siemens PKI is centralised, the certificates are issued to a
> wide variety of Siemens group companies around the world and are used on
> both infrastructure and high traffic websites.  A rushed revocation and
> replacement of these certificates would have a negative business impact on
> Siemens that they believe outweighs the risk of the lower serials entropy
> (particularly given that they are nonsequential).
>
> We propose that Siemens begin the early replacement of the affected
> certificates as soon as the new CA infrastructure is approved, with the
> goal of completing the task by January 31, 2018.  This will include all the
> affected certificates (ie those chained from both the QuoVadis and Digicert
> roots).  While Siemens acknowledges that the affected certificates should
> not have occurred, we point out that they will all be replaced far in
> advance of the September 2019 date when industry-wide the last certificates
> issued before the BR change (to larger serial numbers) are scheduled to
> expire.
>
> We request that Siemens be allowed this expanded scope to conduct an
> orderly replacement of the affected certificates.
>
> Many thanks, Stephen Davidson
> QuoVadis
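
[Editor's illustration] For contrast with the 32-bit serials described in
the report above, the post-2016 Baseline Requirements require serial numbers
containing at least 64 bits of CSPRNG output. A minimal sketch of compliant
generation (the helper name is illustrative):

```python
import secrets

def new_serial(entropy_bits=64):
    """Positive, non-zero serial with >= entropy_bits of CSPRNG output.
    The DER INTEGER encoding of the result must also fit in 20 octets
    per BR section 7.1 (satisfied here, since 64 bits is 8 octets)."""
    serial = 0
    while serial == 0:  # BR serials must be non-zero
        serial = secrets.randbits(entropy_bits)
    return serial

s = new_serial()
print(s > 0 and s.bit_length() <= 64)  # True
```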


Stephen,

Thanks for posting your update regarding Siemens. Unfortunately, however,
it's lacking in many critical details necessary to take appropriate next
steps.

On the positive side, it is good to see that QuoVadis immediately took (and
kept) the Siemens CA offline. This represents a minimum necessary step when
faced with misissuance from a subordinate, and is a step expected of all
CAs while they investigate the issue and its causes, to prevent future
misissuance.

However, the assessment of what went wrong, what steps are being taken, and
the risk are all deficient and, at worst, potentially demonstrate a
misunderstanding of both the risk of these certificates and the purposes of
these discussions.

To understand an appropriate level of detail, and the set of questions that
both QuoVadis and Siemens should be asking, I think a postmortem to the
level of detail provided by PKIoverheid, in
https://groups.google.com/d/msg/mozilla.dev.security.policy/vl5eq0PoJxY/W1D4oZ__BwAJ
, is a _minimum_ necessary step to take. In particular, it's useful to
understand:

1) Siemens has maintained it was a "bug" that caused 32-bit serial numbers.
However, it's unclear to the community whether or not Siemens actually
took steps to appropriately implement the necessary controls - meaning it
was a bug in process - or whether code was implemented, but 

Re: Certificates with reserved IP addresses

2017-08-12 Thread Ryan Sleevi via dev-security-policy
Do you have an estimate on when you can provide an explanation to the
community about how/why this happened, how many certificates it affected,
and what steps DigiCert is taking to prevent these issues in the future? Do
you have details about why DigiCert failed to detect these, and what steps
DigiCert has in place to ensure compliance from its subordinate CAs?

On Sat, Aug 12, 2017 at 10:19 PM, Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thanks.  We've sent an email to the operators of the first two CAs (TI
> Trust Technologies and Cybertrust Japan) that they need to revoke those
> certificates.
> Thanks again,
> Ben
>
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-bounces+ben=digicert@lists.mozilla.org]
> On Behalf Of Jonathan Rudenberg via
> dev-security-policy
> Sent: Saturday, August 12, 2017 7:53 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Certificates with reserved IP addresses
>
> Baseline Requirements section 7.1.4.2.1 prohibits ipAddress SANs from
> containing IANA reserved IP addresses and any certificates containing them
> should have been revoked by 2016-10-01.
>
> There are seven unexpired unrevoked certificates that are known to CT and
> trusted by NSS containing reserved IP addresses.
>
> The full list can be found at: https://misissued.com/batch/7/
>
> DigiCert
> TI Trust Technologies Global CA (5)
> Cybertrust Japan Public CA G2 (1)
>
> PROCERT
> PSCProcert (1)
>
> It’s also worth noting that three of the “TI Trust Technologies”
> certificates contain dnsNames with internal names, which are prohibited
> under the same BR section.
>
> Jonathan
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
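The BR section 7.1.4.2.1 prohibition discussed in this thread can be sketched with Python's standard-library `ipaddress` module. This is a rough approximation, not the full BR logic: `is_global` flags addresses in the IANA special-purpose registries (RFC 1918 space, loopback, link-local, etc.), which covers the common cases of "reserved" addresses that should never appear in an ipAddress SAN.

```python
import ipaddress

def looks_reserved(ip_text: str) -> bool:
    # Approximation of BR 7.1.4.2.1: flag addresses that fall in the
    # IANA special-purpose/reserved registries rather than globally
    # routable space. Works for both IPv4 and IPv6 literals.
    return not ipaddress.ip_address(ip_text).is_global

# SANs like these should never appear in publicly trusted certificates:
assert looks_reserved("10.1.2.3")       # RFC 1918 private
assert looks_reserved("127.0.0.1")      # loopback
assert not looks_reserved("8.8.8.8")    # globally routable
```

A production lint would check the exact IANA special-purpose registries rather than relying on `is_global`, but the shape of the check is the same.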


Re: 2017.08.10 Let's Encrypt Unicode Normalization Compliance Incident

2017-08-11 Thread Ryan Sleevi via dev-security-policy
On Fri, Aug 11, 2017 at 1:22 PM, Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, 11 August 2017 16:49:29 UTC+1, Ryan Sleevi  wrote:
> > Could you expand on this? It's not obvious what you mean.
>
> I guess I was unclear. My concern was that one obvious way to approach
> this is to set things up so that after the certificate is signed, Boulder
> runs cablint, and if it finds anything wrong with that signed certificate
> the issuance fails, no certificate is delivered to the applicant and it's
> flagged to Let's Encrypt administrators as a problem.
>
> [ Let's Encrypt doesn't do CT pre-logging, or at least it certainly didn't
> when I last looked, so this practice would leave no trace of the
> problematic cert ]
>
> In that case any bug in certlint (which is certainly conceivable) breaks
> the entire issuance pipeline for Let's Encrypt, which is what my employer
> would call a "Severity 1 issue", ie now people need to get woken up and
> fix it immediately. That seems like it makes Let's Encrypt more fragile.
>

I'm not sure this is a particularly compelling argument. By this logic, the
most reliable thing a CA can or should do is sign anything and everything
that comes from applicants, since any form of check or control is a
potentially frail operation that may fail.


> > Could you expand on what you mean by "cablint breaks" or "won't complete
> in
> > a timely fashion"? That doesn't match my understanding of what it is or
> how
> > it's written, so perhaps I'm misunderstanding what you're proposing?
>
> As I understand it, cablint is software, and software can break or be too
> slow. If miraculously cablint is never able to break or be too slow then I
> take that back, although as a programmer I would be interested to learn how
> that's done.


But that's an argument that applies to any change, particularly any
positive change, so it does not appear as a valid argument _against_
cablint.

That is, you haven't elaborated any concern that's _specific_ to
certlint/cablint, merely an abstract argument against change or process of
any form. And while I can understand that argument in the abstract -
certainly, every change introduces some degree of risk - we have plenty of
tools to manage and mitigate risk (code review, analysis, integration
tests, etc). Similarly, we can also assume that this is not a steady-state
of issues (that is, it is not to be expected that every week there will be
an 'outage' of the code), since, as code, it can and is continually fixed
and updated.

Since your argument applies to any form of measurement or checking for
requirements - including integrating checks directly into Boulder (for
example, as Let's Encrypt has done, through its dependency on ICU / IDNA
tables) - I'm not sure it's an argument against these checks and changes. I
was hoping you had more specific concerns, but it seems they're generic,
and as such, it still stands out as a good idea to integrate such tools
(and, arguably, prior to signing, as all CAs should do - by executing over
the tbsCertificate). An outage, while unlikely, should be managed like all
risk to the system, as the risk of misissuance (without checks) is arguably
greater than the risk of disruption (with checks).
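The pre-signing integration described above — executing lints over the tbsCertificate before a signature exists — can be sketched as a simple gate. Here `lint_tbs` and `sign` are stand-in callables (assumptions, not the actual certlint/cablint or CA signing APIs): the point is only that a failing check never yields a live certificate.

```python
class MisissuanceBlocked(Exception):
    """Raised when a lint error prevents signing."""

def issue_certificate(tbs_certificate: bytes, sign, lint_tbs) -> bytes:
    # Lint the to-be-signed certificate *before* signing, so that a
    # failed check blocks issuance rather than leaving a misissued
    # certificate to be found later in CT logs.
    errors = lint_tbs(tbs_certificate)
    if errors:
        raise MisissuanceBlocked(errors)
    return sign(tbs_certificate)
```

An outage of the lint step then fails closed (issuance pauses) instead of failing open (misissuance), which is the trade-off argued for here.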
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: 2017.08.10 Let's Encrypt Unicode Normalization Compliance Incident

2017-08-11 Thread Ryan Sleevi via dev-security-policy
On Fri, Aug 11, 2017 at 11:40 AM, Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, 11 August 2017 14:19:57 UTC+1, Alex Gaynor  wrote:
> > Given that these were all caught by cablint, has Let's Encrypt considered
> > integrating it into your issuance pipeline, or automatically monitoring
> > crt.sh (which runs cablint) for these issues so they don't need to be
> > caught manually by researchers?
>
> The former has the risk of being unexpectedly fragile,


Could you expand on this? It's not obvious what you mean.


> This way: If cablint breaks, or won't complete in a timely fashion during
> high volume issuance, it doesn't break the CA itself. But on the other hand
> it also doesn't wail on Comodo's generously offered public service crt.sh.
>

Could you expand on what you mean by "cablint breaks" or "won't complete in
a timely fashion"? That doesn't match my understanding of what it is or how
it's written, so perhaps I'm misunderstanding what you're proposing?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: dNSName containing '/' / low serial number entropy

2017-08-11 Thread Ryan Sleevi via dev-security-policy
Mark,

Thanks for providing a detailed report about this, including the steps
being taken to prevent future events like this. Your proposed remediation
plans sound like excellent steps to ensure future conformance, and
demonstrate an understanding as to the root causes and how to prevent them
in the future. More importantly, they demonstrate a level of attention that
hopefully all CAs engaging in third-party cross-signing should aspire to -
namely, the oversight and supervision of the scope of audits to ensure all
necessary controls are in place, the integration of automated checks, and
greater transparency.

On Fri, Aug 11, 2017 at 10:39 AM, Policy Authority PKIoverheid via
dev-security-policy  wrote:

> Dear Mozilla Security Policy Community,
>
> My apologies for the delayed follow up response.
>
> As stated in my email from 07/25/2017, Digidentity (DDY), one of our
> TSP’s, issued 777 certificates from September 30th 2016 which were not
> compliant with BR ballot 164.
>
> DDY has fixed the problem with the serial generation and is in the process
> of replacing all 777 non-compliant certificates.
>
> Below you will find the answers to the following questions:
> 1. Why did Logius/Policy Authority PKIoverheid not detect, identify,
> disclose, and resolve this matter prior to public notification?
> 2. Why did DDY not implement the serial number entropy as required by the
> Baseline Requirements?
> 3. Was this detected by the auditor? If not, why not?
>
> ANSWER ON QUESTION 1:
> Logius PKIoverheid was notified by Gervase Markham to draw the issue to
> their attention. See the timeline for further details.
>
> Logius PKIoverheid relies on the audits performed by external auditors to
> make sure that the Trusted Service Providers (TSPs) aka CAs within the
> PKIoverheid/Staat der Nederlanden hierarchy fully comply with applicable
> requirements (like the BR, ETSI and our own Program of Requirements).
>
> For further details about the PKIoverheid architecture aka hierarchy see
> one of these bugs: https://bugzilla.mozilla.org/show_bug.cgi?id=529874 or
> https://bugzilla.mozilla.org/show_bug.cgi?id=1016568
>
> Inform
> • Our TSPs are responsible for following relevant changes in the BR. Besides
> that the Policy Authority (PA) PKIoverheid informs all PKIoverheid-TSPs
> about (intended) relevant changes to the Baseline Requirements.
>
> Check
> • We require a yearly full ETSI EN 319 411-1 audit. In the case of DDY the
> most recent full audit is of November 2016.
> • We require an ETSI-accredited auditor. In the case of DDY the auditor is
> BSI and in 2016 it was accredited by the RvA (In Dutch: Raad voor
> Accreditatie), the Dutch accreditation body (for more information see:
> https://www.rva.nl/en/our-organization/about-the-rva).
> • We manually take samples of the issued certificates from our TSPs using
> CT logs. Unfortunately DDY was not part of the latest samples (see new
> measure 2).
>
> Approve
> • The PA PKIoverheid reviews the audit reports from the TSPs and if
> necessary will take measures to make sure the TSP conforms to the
> applicable audit requirements.
>
>
> Timeline (all times are UTC):
>
> 19 July 2017 00:27: A posting on mozilla.dev.security.policy stating a
> non-compliant certificate issued by DDY.
>
> https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/vl5eq0PoJxY
>
> 20 July 2017 16:45: Mozilla (Gerv) notifies the Policy Authority (PA)
> PKIoverheid on non-compliant certificates from DDY
>
> 20 July 2017 16:45: START INCIDENT
>
> 20 July 2017 17:27: PA PKIoverheid starts investigating the issue and
> almost immediately raises an internal incident.
>
> 21 July 2017 09:08: PA PKIoverheid instructs DDY to postpone further issuing
> of certificates and requests an action plan from DDY how they will resolve
> the issue by revoking and reissuing all certificates involved.
>
> 21 July 2017 09:50: DDY confirms postponing the issuing of certificates.
>
> 21 July 2017 09:50: FURTHER CERTIFICATE ISSUING SUSPENDED
>
> 24 July 2017 08:53: DDY delivers action plan including two newly generated
> and compliant test certificates as proof that they fixed the issue.
>
> 24 July 2017 16:25: Based on the provided certificates the PA PKIoverheid
> requests DDY to start executing the action plan including the approval to
> restart issuing certificates.
>
> 24 July 2017 16:25: ISSUING RESTARTED
>
> 25 July 2017 14:37: DDY installs first production certificate on website (
> https://www.digidentity.eu/nl/home/)
>
> 25 July 2017 14:37: DDY starts revoking and replacing certificates
>
> 25 July 2017 21:20: PA PKIoverheid posts a message on
> mozilla.dev.security.policy stating that DDY has proven to be able to
> generate compliant certificates and is allowed to restart the issuing of
> new (compliant) certificates. A link to the compliant new DDY certificates
> is included in this post as evidence.
>
> 26 July 2017 17:40: PA PKIoverheid requests Mozilla to honor 

Re: Certificates with improperly normalized IDNs

2017-08-10 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 10, 2017 at 5:31 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> This raises the question if CAs should be responsible for misissued
> domain names, or if they should be allowed to issue certificates to
> actually existing DNS names.
>

No. It doesn't. That's been addressed several times in the CA/Browser Forum
with other forms of 'invalid' (non-preferred name syntax) domain names,
such as those with underscores.

It's not permitted under RFC 5280, thus, CAs are responsible. Full stop.


> I don't know if the bad punycode encodings are in the 2nd level names (a
> registrar/registry responsibility, both were from 2012 or before) or in
> the 3rd level names (locally created at an unknown date).
>
> An online utility based on the older RFC349x round trips all of these.
> So if the issue is only compatibility with a newer RFC not referenced from
> the current BRs, these would probably be OK under the current BRs and
> certLint needs to accept them.
>

No, it's a newer RFC not referenced in RFC 5280, so it's not permitted
under the current BRs.

There's no retroactive immunity.
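The normalization gap at issue can be seen directly with Python's standard-library `idna` codec, which implements the older IDNA2003 rules (RFC 3490/3491) — the generation referenced, via nameprep, from RFC 5280's era. IDNA2008 treats some labels differently (the German eszett is the classic example), which is exactly the kind of round-trip mismatch being discussed:

```python
# Python's built-in "idna" codec implements IDNA2003 (RFC 3490/3491).
# An ordinary IDN label maps to its punycode A-label:
assert "bücher".encode("idna") == b"xn--bcher-kva"

# Under IDNA2003 nameprep, ß is case-folded to "ss", so the original
# label cannot be recovered from the encoded form; IDNA2008 instead
# preserves ß and produces an xn-- A-label.
assert "straße".encode("idna") == b"strasse"
```

A punycode string that decodes and re-encodes to a different byte sequence under one RFC generation but not the other is precisely the "bad encoding" a lint flags.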
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certificates with invalidly long serial numbers

2017-08-10 Thread Ryan Sleevi via dev-security-policy
Could you explain how it benefits Mozilla users to optimize for OV or EV,
given that it does not provide any additional security value?

It seems far better for the security of users, and the ecosystem, to have
such certificates revoked in 24 hours. If the subscriber's selection of
certificate type (e.g. OV or EV) makes it difficult to replace, then that's
a market choice they've made, given that it offers no objective security
value over DV, and it being possible to replace that certificate with a DV
certificate in a timely fashion.

24 hours is enough for most subscribers to get a reissued certificate. I
don't think we should speculate about what cost it is (that's between them
and the CA) or their selection of validation type (of which, for objective
security value, only the domain name matters).

On Thu, Aug 10, 2017 at 5:39 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> But that would require the issuer of the replacement cert (which might
> not be a fast-issue DV cert) to complete validation in something like 36
> hours, which is much shorter than the normal time taken to do proper OV
> and/or EV validation.
>
> I have previously suggested 14 days for live certificates that don't
> cause actual security issues.  This would be enough for most subscribers
> to either get a reissued certificate (for free) from the original CA or
> set up an account with a competing CA and get at least a basic OV cert.
>
> On 10/08/2017 03:02, Jeremy Rowley wrote:
>
>> No objection to 72 hours v. 1 business day.  I agree it should be short
>> and
>> 72 hours seems perfectly reasonable .
>>
>> -Original Message-
>> From: dev-security-policy
>> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla.org]
>> On Behalf Of Paul Kehrer via dev-security-policy
>> Sent: Wednesday, August 9, 2017 4:57 PM
>> To: mozilla-dev-security-pol...@lists.mozilla.org
>> Subject: Re: Certificates with invalidly long serial numbers
>>
>> On Wednesday, August 9, 2017 at 9:20:02 AM UTC-5, Jeremy Rowley wrote:
>>
>>> All CAS are required to maintain the capability to process and receive
>>>
>> revocation requests 24x7 under the baseline requirements. The headache is
>> not with the CA. Rather, it's notifying the customer that their
>> certificate
>> will be revoked before the start of the next business day. Having a one to
>> two business day rule  instead of 24 hours for non compromise issues gives
>> the end entity time to receive the notification and replace their
>> certificate with a compliant version.
>>
>> I'm sure many customers would absolutely prefer that and on the surface it
>> does sound like a good solution. However, I think it's another example of
>> the general difference of opinion between people on this list around
>> whether
>> we should be holding CAs to the highest standards or not. These mis-issued
>> certificates are typically not a security concern, but they speak to
>> either
>> ignorance on the part of CA operators or a pattern of lackadaisical
>> controls
>> within the issuance systems. Neither of these is acceptable behavior at
>> this
>> juncture. Conformance with the BRs has been mandatory for over 5 years
>> now.
>> Customers need to be made aware of the failures of their chosen providers
>> and the responsibilities incumbent upon them as subscribers, and if their
>> own certificate installation/replacement processes are sufficiently
>> archaic
>> as to make it difficult to replace a certificate in an automated fashion
>> then they should rectify that immediately.
>>
>> That said, to continue the thought experiment, what does "1-2 business
>> days"
>> really mean? Does the CA get 1-2 business days followed by 1-2 for the
>> customer? What if there's a holiday in the CA's country of operations
>> followed by a holiday in the customer's home country? How quickly does
>> this
>> window extend to 2+ weeks? If you were to go down this path I'd strongly
>> prefer it to be a hard deadline (e.g. 72 hours) and not anything related
>> to
>> business days.
>>
>>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certificates with common names not present in their SANs

2017-08-10 Thread Ryan Sleevi via dev-security-policy
CFCA stated this, in
https://cabforum.org/pipermail/public/2017-July/011733.html

Since then, no further evidence of this claim has been provided.

SHECA ( https://cabforum.org/pipermail/public/2017-July/011737.html ) and
GDCA ( https://cabforum.org/pipermail/public/2017-July/011736.html ) are
more restrained in claiming local law, although made similarly problematic
claims :)

On Thu, Aug 10, 2017 at 2:20 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Didn't someone recently float the argument that the native u-label was
> required by local regulation / custom (in China) to be included and so they
> stuffed it into the CN?
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certificates with metadata-only subject fields

2017-08-10 Thread Ryan Sleevi via dev-security-policy
Can you provide an example of what you believe is a bigger issue that has
been masked? Otherwise, it sounds like you're saying "Ignore the obvious
errors, because maybe someone will find something non-obvious, and we don't
want to miss out" - but that's a deeply flawed argument, and I would hope
isn't the substance of what you're saying.

Note: I still disagree with you about the artificial ontology; all of these
errors equally speak to the CA's ability to execute on Best Practices, such
as using available tools that have been evangelized for over a year as
something that can (and arguably should) be integrated into issuance
pipelines. Discussions at this point are extremely relevant, as they speak
to how well the CA is staying abreast of changes, as well as how
effectively they're managing their subsidiaries - both issues that are key
to public trust.

On Thu, Aug 10, 2017 at 2:17 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I strongly disagree. The discussion around errors like these masks the
> bigger issues in the noise.  If there are bigger issues, let's find those.
>
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla.org]
> On Behalf Of David E. Ross via dev-security-policy
> Sent: Wednesday, August 9, 2017 4:35 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Certificates with metadata-only subject fields
>
> On 8/9/2017 2:54 PM, Jonathan Rudenberg wrote:
> >
> >> On Aug 9, 2017, at 17:50, Peter Bowen  wrote:
> >>
> >> The point of certlint was to help identify issues.  While I
> >> appreciate it getting broad usage, I don't think pushing for
> >> revocation of every certificate that trips any of the Error level checks
> is productive.
> >
> > I agree, and I don't really have a position on the revocation of
> certificates with errors that do not appear to have any security impact
> like
> these.
> >
> > Jonathan
> >
> >
>
> I strongly disagree.  Errors like this make me question whether the
> certification authority is sufficiently competent to be trusted.  Small
> errors can indicate an increased likelihood of serious errors.
>
> --
> David E. Ross
> 
>
> President Trump demands loyalty to himself from Republican members of
> Congress.  I always thought that members of Congress -- House and Senate --
> were required to be loyal to the people of the United States.  In any case,
> they all swore an oath of office to be loyal to the Constitution.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Misissued certificates

2017-08-10 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 10, 2017 at 11:55 AM, identrust--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thursday, August 10, 2017 at 12:23:55 AM UTC-4, Lee wrote:
> > What's it going to take for mozilla to set up near real-time
> > monitoring/auditing of certs showing up in ct logs?
> >
> > Lee
> >
> > On 8/9/17, Alex Gaynor via dev-security-policy
> >  wrote:
> > > (Whoops, accidentally originally CC'd to m.d.s originally! Original
> mail
> > > was to IdenTrust)
> > >
> > > Hi,
> > >
> > > The following certificates appear to be misissued:
> > >
> > > https://crt.sh/?id=77893170&opt=cablint
> > > https://crt.sh/?id=77947625&opt=cablint
> > > https://crt.sh/?id=78102129&opt=cablint
> > > https://crt.sh/?id=92235995&opt=cablint
> > > https://crt.sh/?id=92235998&opt=cablint
> > >
> > > All of these certificates have a pathLenConstraint value with CA:FALSE,
> > > this violates 4.2.1.9 of RFC 5280: CAs MUST NOT include the
> > > pathLenConstraint field unless the cA boolean is asserted and the key
> usage
> > > extension asserts the keyCertSign bit.
> > >
> > > Alex
> > >
> > > --
> > > "I disapprove of what you say, but I will defend to the death your
> right to
> > > say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> > > "The people's good is the highest law." -- Cicero
> > > GPG Key fingerprint: D1B3 ADC0 E023 8CA6
> > >
> > >
> > >
> > >
> > > --
> > > "I disapprove of what you say, but I will defend to the death your
> right to
> > > say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> > > "The people's good is the highest law." -- Cicero
> > > GPG Key fingerprint: D1B3 ADC0 E023 8CA6
> > > ___
> > > dev-security-policy mailing list
> > > dev-security-policy@lists.mozilla.org
> > > https://lists.mozilla.org/listinfo/dev-security-policy
> > >
> We are aware of this situation and had previously introduced logic into our
> certificate authority that a pathLengthConstraint will never be set for a
> certificate other than a CA.  We have confirmed that only the stated
> five (5)
> certificates contain the issue.  Three (3) of these are real certificates;
> however, one has expired. We have revoked the other two certificates. The
> remaining two (2) are pre-certificates.


It might be helpful if you can share more details regarding this situation,
to better help the community understand the procedures Identrust has in
place.

1) Were you aware of this issue before it was reported? It's unclear, based
on this reply, whether this was something you were previously aware of,
given the logic you mentioning having introduced.
2) Given this issue, have you examined other Identrust-issued certificates
for issues - for example, running the corpus of issued certificates over
the past year (whether from your own DB or logged in CT) - for other forms
of violations, such as by using tools as certlint or cablint?
3) What processes and procedures are in place at Identrust to help ensure
certificates properly adhere to RFC 5280? Why did these not detect the
issue? What steps are being taken in the future to provide greater
assurance of future conformance?

While it's useful to hear that you've revoked those certificates, it's
equally useful to help the community understand what, if any, changes that
Identrust is making. If the answer is "There was a bug, we fixed it," then
it's useful to understand what, if any, changes are being made to detect
and/or prevent such bugs in the future.

Cheers
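The RFC 5280 section 4.2.1.9 rule violated here is mechanical enough to lint automatically. A minimal sketch of the check (field names are illustrative, not a real X.509 library's API):

```python
from typing import Optional

def path_len_constraint_allowed(ca: bool, key_cert_sign: bool,
                                path_len: Optional[int]) -> bool:
    # RFC 5280 4.2.1.9: CAs MUST NOT include the pathLenConstraint
    # field unless the cA boolean is asserted and the key usage
    # extension asserts the keyCertSign bit.
    if path_len is None:
        return True  # absent field is always permitted
    return ca and key_cert_sign

# The misissued certificates had pathLenConstraint with CA:FALSE:
assert not path_len_constraint_allowed(ca=False, key_cert_sign=False, path_len=0)
assert path_len_constraint_allowed(ca=True, key_cert_sign=True, path_len=0)
```

Running such a predicate over every tbsCertificate before signing is the kind of control that would have caught all five certificates.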
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certificate issued by D-TRUST SSL Class 3 CA 1 2009 with short SerialNumber

2017-08-10 Thread Ryan Sleevi via dev-security-policy
Under the Baseline Requirements, v1.4.8 (current version), 4.9.1.1,

"The CA SHALL revoke a Certificate within 24 hours if one or more of the
following occurs:
 9. The CA is made aware that the Certificate was not issued in accordance
with these requirements or the CA's Certificate Policy or Certification
Practice Statement"

Since the passage of Ballot 164 (
https://cabforum.org/2016/07/08/ballot-164/ ), adopted in version 1.3.7:
"Effective September 30, 2016, CAs SHALL generate Certificate serial
numbers greater than zero (0) containing at least 64 bits of output from a
CSPRNG."

So these were not issued in accordance with these Requirements, and thus
subject to revocation.
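The Ballot 164 requirement quoted above is straightforward to satisfy; a minimal sketch using Python's `secrets` CSPRNG (a real CA would typically draw from an HSM's RNG, and many draw well over 64 bits):

```python
import secrets

def generate_serial(entropy_bits: int = 64) -> int:
    # Ballot 164 / BR: serials must be positive, greater than zero, and
    # contain at least 64 bits of CSPRNG output. The loop guards against
    # the (astronomically unlikely) all-zero draw.
    while True:
        serial = secrets.randbits(entropy_bits)
        if serial > 0:
            return serial
```

Certificates issued after 2016-09-30 without such entropy are misissued regardless of how unpredictable the low-entropy serials happen to look.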

On Thu, Aug 10, 2017 at 7:55 AM, Fiedler, Arno via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello Jonathan,
>
> the certificate has 64 bits of entropy in the "DNqualifier" field instead
> of the serial number field.
>
> Since 2012 we used this way of adding random bits to certificates to
> mitigate  preimage attacks
> From a security perspective the amount of Entropy in the certificate
> should be reasonable.
>
> Do you see a security need for revoking the certificate?
>
> Viele Grüße
>
> Arno Fiedler
> Standardization & Consulting
> Bundesdruckerei GmbH
> Kommandantenstraße 18 · 10969 Berlin · Deutschland
>
> Tel. :+ 49 30 25 98 - 3009
> Mobil: + 49 172 3053272
>
> arno.fied...@bdr.de · www.bundesdruckerei.de
>
> Sitz der Gesellschaft: Berlin · Handelsregister: AG Berlin-Charlottenburg
> HRB 80443. USt.-IdNr.: DE 813210005
> Aufsichtsratsvorsitzender: Willi Berchtold
> Geschäftsführer: Ulrich Hamann (Vorsitzender), Christian Helfrich
>
> This message is intended only for the use of the individual or entity to
> which it is addressed, and may contain information that is privileged,
> confidential and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, or the employee or agent
> responsible for delivering the message to the intended recipient, we hereby
> give notice that any dissemination, distribution or copying of this
> communication is strictly prohibited. If you have received this message in
> error, please delete the message and notify us immediately.
>
> Diese Nachricht kann vertrauliche und gesetzlich geschützte Informationen
> enthalten. Sie ist ausschließlich für den Adressaten bestimmt. Wenn Sie
> nicht der beabsichtigte Adressat sind, möchten wir Sie hiermit darüber
> informieren, dass das Weiterleiten, Verteilen oder Kopieren dieser Mail
> nicht gestattet ist. Wenn Sie diese Mail irrtümlicherweise erhalten haben,
> informieren Sie uns bitte schnellstmöglich und löschen Sie bitte die Mail.
>
>
> -Ursprüngliche Nachricht-
> Von: Jonathan Rudenberg [mailto:jonat...@titanous.com]
> Gesendet: Dienstag, 8. August 2017 19:12
> An: Fiedler, Arno
> Cc: mozilla-dev-security-pol...@lists.mozilla.org
> Betreff: Re: Certificate issued by D-TRUST SSL Class 3 CA 1 2009 with
> short SerialNumber
>
>
> > On Aug 8, 2017, at 08:58, Fiedler, Arno via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > Dear Mozilla Security Policy Community,
> >
> > Thanks for the advice about the short serial numbers and apologies for
> the delayed response.
> >
> > Since 2016, all D-TRUST TLS certificates based on electronic Certificate
> Requests have a certificate serial number which includes 64 bits of entropy.
> >
> > Between 2012 and July 6th, 2017 we produced a small number of
> certificates with  paper-based Certificate Registration Requests using 64
> bits of entropy in the "DNqualifier" field instead of the serial number
> field.
> >
> > Since the 7th of July, 2017, all D-TRUST TLS-Certificates have 64 bits
> of entropy in the serial number.
> >
> > I hope this helps and please do not hesitate to contact us if there are
> any further questions.
>
> Hi Arno,
>
> It doesn’t look like this certificate has been revoked yet?
> https://crt.sh/?id=174827359&opt=cablint
>
> Can you explain why it hasn’t been revoked yet and when it will be?
>
> Thanks,
>
> Jonathan
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certificates with invalidly long serial numbers

2017-08-09 Thread Ryan Sleevi via dev-security-policy
On Wednesday, August 9, 2017 at 12:05:32 AM UTC-4, Peter Gutmann wrote:
> Matthew Hardeman via dev-security-policy 
>  writes:
> 
> >I merely raise the point that IF the framers of the 20 bytes rule did, in
> >fact, simultaneously intend that arbitrary SHA-1 hash results should be able
> >to be stuffed into the serial number field AND SIMULTANEOUSLY that the DER
> >encoded integer field value must be a positive integer and that insertion of
> >a leading 0x00 byte to ensure that the high order bit would be 0 (thus
> >regarded as a positive value per the coding), THEN it must follow that at
> >least in the minds of those who engineered the rule, that the inserted 0x00
> >byte must not be part of the 20 byte maximum size of the value AS legitimate
> >SHA-1 values of 20 bytes do include values where the high order bit would be
> >1 and without pre-padding the proper interpretation of such a value would be
> >as a negative integer.
> 
> That sounds like sensible reasoning.  So you need to accept at least 20 + 1
> bytes, or better yet just set it to 32 or 64 bytes and be done with it because
> there are bound to be implementations out there that don't respect the 20-byte
> limit.  At the very least though you'd need to be able to handle 20 + 1.
> 
> Peter.

I see. So the solution to standards non-compliance that creates compatibility 
issues is to invent arbitrary standards (32 or 64 bytes)? How does that align 
with https://www.mozilla.org/en-US/about/manifesto/#principle-06 ?

The original language in RFC 2459 restricted it to INTEGER and imposed no 
length limit (thus, unbounded). This was reformed in RFC 3280, which introduced 
the language limiting the upper bound to 20 octets - which clearly intends to 
be the encoded value, by virtue of X.690. Similarly, when coupled with the 
'positive integer', this would hopefully have clearly limited the length to 20 
octets - there's no "20 plus padding" because the guarantee of a positive 
integer is a transformation that happens  before the conversion to octets, and 
the result is limited to 20 octets, and those octets are the result of encoding 
to the appropriate rules (BER or DER).

So no, this attempt at retro-analyzing 'large enough to fit SHA-1' does not fit 
the historic context, does not fit the text, and the argument for arbitrary 
bytes (e.g. actively ignoring 3280) is equally silly.



Re: Certificates with invalidly long serial numbers

2017-08-08 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 11:26:11 AM UTC-7, Jakob Bohm wrote:
> On 08/08/2017 18:43, Ryan Sleevi wrote:
> > On Tuesday, August 8, 2017 at 11:05:06 PM UTC+9, Jakob Bohm wrote:
> >> I was not advocating "letting everyone decide".  I was advocating that
> >> Mozilla show some restraint, intelligence and common sense in wielding
> >> the new weapons that certlint and crt.sh have given us.
> >>
> >> This shouldn't be a race as to who wields the weapon first, forgiving CAs
> >> only if they happen to report faster than some other newsgroup
> >> participant.
> >>
> >> This is similar to if a store boss gets a new surveillance camera in the
> >> shop and sees that some employees are taking extra breaks when there are
> >> few customers in the store.  It would be unreasonable for such a store
> >> boss to discipline this with similar zeal as seeing some employees
> >> genuinely stealing cash out of the register or selling stolen items out
> >> of the back door.  Instead the fact that they work less when there is
> >> less work to do should inspire reevaluation of any old rule that they
> >> are not allowed to have a watercooler chat during work hours.
> >>
> >> Now such a reevaluation might result in requiring them to use such
> >> occasions to clean the floors or do some other chores (Mozilla equiv:
> >> Deciding that the rule is important for good reason and needs to be
> >> followed in the future) or it could result in relaxing the rule as
> >> long as they stand ready the moment a customer arrives (Mozilla equiv:
> >> Relaxing the requirement, initially just for Mozilla, later perhaps as a
> >> BR change).
> >>
> >> Dogmatically insisting on enforcing rules that were previously not
> >> enforced due to lack of detection, just because "rules are rules" or
> >> other such arguments seems overzealous.
> >>
> > 
> > Such tools have been available for over a year. CAs have been aware of 
> > this, the ability to run it over their own corpus and self-detect and 
> > self-report. These tools, notably, were created by one of the newest CA 
> > applicants - Amazon - based on a methodical study of what is required of a 
> > CA.
> > 
> > Your attempts to characterize it as overzealous ignore this entirely. At 
> > this point, it's gross negligence, and attempts to argue otherwise merely 
> > suggest a lack of understanding or concern for standards compliance and 
> > interoperability.
> > 
> > Mozilla has already communicated to CAs these tools exist and their 
> > relevance to CAs.
> > 
> > Perhaps we can move on from misguided apologetics and instead focus on how 
> > to make things better. Suggestions we ignore these, at this point, are 
> > neither productive nor relevant. Attempts to suggest tortured metaphors are 
> > like attempting to suggest rich people deserve to be robbed, or poor people 
> > just need to work harder - arguments that are both hollow and borderline 
> > offensive in their reductionism. A pattern of easily preventable 
> > misissuance has been happening, CAs have been repeatedly told to 
> > self-detect, and clearly, some CAs, like presumably some businesses, aren't 
> > taking security seriously. That needs to stop.
> > 
> 
> I am questioning the fairness of applying these tools, which did not
> exist when the rules were written, to enforce every rule with the same
> high weight. 

Did anything prevent the CAs responsible from writing these tools?

Do you believe there is any excuse for issuance in 2017 that violates these 
rules?

Is your view that until someone else does the CA's work for them (reading and 
understanding the rules), the CA should not be responsible for reading or 
understanding them itself?

You're arguing against a strawman. It's 2017 - it's both time to stop making 
excuses and time to recognize that the ability of CAs to adhere to the rules is 
core to their trustworthiness. Technical rules are but a proxy for procedural 
rules.

> I am not apologizing for bad behavior, I am saying if
> everybody gets scrutinized this hard, we will eventually have to
> distrust pretty much all the CAs, because there is no such thing as a
> perfect CA organization.

No, you are apologizing for their bad behaviour by suggesting they shouldn't be 
held to an objective standard, because someone else hadn't done the work for 
them. The compliance or noncompliance is extremely relevant to the CA's ability 
to react and respond to changes, and you continue to offer a view that suggests 
CAs shouldn't have to respond consistently or should no

Re: Certificates issued with HTTPS OCSP responder URL (IdenTrust)

2017-08-08 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 8:52:54 PM UTC+9, Jakob Bohm wrote:
> On 08/08/2017 12:54, Nick Lamb wrote:
> > On Monday, 7 August 2017 22:31:34 UTC+1, Jakob Bohm  wrote:
> >> Since the CT made it possible, I have seen an increasing obsession with
> >> enforcing every little detail of the BRs, things that would not only
> >> have gone unnoticed, but also been considered unremarkable before CT.
> > 
> > Even if I had no other reason to be concerned about violations of the BRs 
> > (and I do have plenty - as we saw here, in this case it looks like the 
> > certificate can be revoked but effectively can't) the Brown M&M Rider 
> > reason is enough.
> > 
> > The rider (hospitality and technical requirements for a performing artist) 
> > can be pretty detailed; some venues may glance at it and agree to whatever 
> > is inside without knowing the details. This is a _huge_ problem, and Van 
> > Halen is famous for a clause in their rider (requiring a bowl of M&Ms but 
> > with the brown ones removed) which they say existed not out of spite but 
> > precisely to check that the venue had actually read the rider in full and 
> > not just skimmed it, so that they would have early warning if a particular 
> > venue were sloppy and might cause surprise problems with technical 
> > implementation.
> > 
> > We need CAs to be detail oriented. It is not enough to "kinda, mostly" get 
> > this job right. If you can't do _exactly_ what it says in the BRs, don't 
> > bother doing it at all. Neither Mozilla nor any other trust store compel 
> > CAs to stay in this business, if they decide they'd rather sell pancakes or 
> > mow lawns, that's up to them. So long as they want to be trusted public 
> > CAs, they need to obey the rules that are in place to make that safe for 
> > everybody.
> > 
> 
> While the Brown M&M Rider argument would be good if, like the Van Halen
> clause, there were only a small number of such intentional gotcha rules,
> in this case we are dealing with a large corpus of rules, some explicit,
> some in documents referenced from other documents etc.
> 
> That makes the situation much more like the situation with other large
> sets of rules for people to follow, which means that there will, in
> practice, always be rules more important than others.  And hence a
> natural expectation that those tasked with enforcing the rules actually
> know the difference and don't issue large penalties for what are
> obviously minor infractions.
> 
> Now in this *specific* case, it has been found that Mozilla's NSS
> doesn't handle HTTPS OCSP URLs in a good way, and thus Mozilla has a
> specific need to prevent their public use until NSS gains the ability to
> handle them safely (because there is a benefit to it).  Discussing that
> benefit and planning appropriate transition plans is an issue for
> another thread, possibly in another forum.

While you may feel this way, it is again inaccurate to present it as such. It 
is an intentional design decision by the NSS and Firefox developers, made 
independently by folks at Apple, Google, and Microsoft as well for nearly two 
decades. This isn't "oops, NSS doesn't support it" - it is "this is a terrible 
idea".

I realize you are trying to present it as a bug, in order to justify the 
presence of brown M&Ms as "not important," but you are failing to recognize 
such URLs DO break implementations and are forbidden for legitimate reasons.

But certainly, further arguments on this point are best suited for another 
forum, because there is no good reason to change here.


Re: Certificate issued by D-TRUST SSL Class 3 CA 1 2009 with short SerialNumber

2017-08-08 Thread Ryan Sleevi via dev-security-policy
On Wednesday, August 9, 2017 at 12:22:53 AM UTC+9, Fiedler, Arno wrote:
> Dear Mozilla Security Policy Community,
> 
> Thanks for the advice about the short serial numbers and apologies for the 
> delayed response.
> 
> Since 2016, all D-TRUST TLS certificates based on electronic Certificate 
> Requests have a certificate serial number which includes 64 bits of entropy.
> 
> Between 2012 and July 6th, 2017 we produced a small number of certificates 
> with  paper-based Certificate Registration Requests using 64 bits of entropy 
> in the "DNqualifier" field instead of the serial number field.
> 
> Since the 7th of July, 2017, all D-TRUST TLS-Certificates have 64 bits of 
> entropy in the serial number.
> 
> I hope this helps and please do not hesitate to contact us if there are any 
> further questions.
> 
> Best regards
> Arno Fiedler
> Standardization & Consulting
> Bundesdruckerei GmbH
> Kommandantenstraße 18 · 10969 Berlin · Deutschland
> 
> Tel. :+ 49 30 25 98 - 3009
> Mobil: + 49 172 3053272
> 
> arno.fied...@bdr.de · www.bundesdruckerei.de
> 
> Sitz der Gesellschaft: Berlin · Handelsregister: AG Berlin-Charlottenburg HRB 
> 80443. USt.-IdNr.: DE 813210005
> Aufsichtsratsvorsitzender: Willi Berchtold
> Geschäftsführer: Ulrich Hamann (Vorsitzender), Christian Helfrich
> 

Thanks for acknowledging this, Arno, but I can't help but feel this is an 
insufficient and incomplete analysis.

Could you share more along the following:
1) How many certificates were affected?
2) How did you determine this?
3) Did you detect this prior to July 7?
4) If not, why not, given the availability of tools?
5) Have you completed an analysis of what the root cause of your failure to 
follow the Baseline Requirements was?
6) If so, what was it? If not, why not?
7) Was this detected by your audits?
8) If so, why was it not noted at the time? If not, what would you suggest be 
added to prevent this in the future?
9) What systematic steps have you taken to ensure compliance with the BRs in a 
timely fashion?

The goal here is not penance from a CA, nor is it granting indulgences or 
special dispensation - it's about demonstrating an awareness of the 
requirements and the opportunity to improve how a CA is managed to comply to 
these requirements, if it is to continue to be trusted as a CA.
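For context on the 64-bits-of-entropy requirement discussed above: one common way CAs satisfy it is to draw well over 64 random bits from a CSPRNG and clear the top bit so the DER encoding never needs a sign-padding octet. A minimal sketch of that approach - my own illustration, not D-TRUST's actual procedure:

```python
import secrets

def random_serial(num_bytes: int = 16) -> int:
    """Positive serial with ~126 bits of entropy, comfortably above the
    Baseline Requirements' 64-bit minimum (illustrative only)."""
    n = int.from_bytes(secrets.token_bytes(num_bytes), "big")
    n &= (1 << (num_bytes * 8 - 1)) - 1  # clear top bit: no sign padding needed
    return n | 1                         # ensure the value is positive

s = random_serial()
assert 0 < s.bit_length() <= 127         # fits in 16 octets without padding
```

Note that putting the entropy anywhere other than the serialNumber field (such as a DN qualifier, as described above) does not satisfy the requirement, since the serial is what collision attacks against the signature target.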


Re: Certificates with invalidly long serial numbers

2017-08-08 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 11:05:06 PM UTC+9, Jakob Bohm wrote:
> I was not advocating "letting everyone decide".  I was advocating that
> Mozilla show some restraint, intelligence and common sense in wielding
> the new weapons that certlint and crt.sh have given us.
> 
> This shouldn't be a race as to who wields the weapon first, forgiving CAs
> only if they happen to report faster than some other newsgroup
> participant.
> 
> This is similar to if a store boss gets a new surveillance camera in the
> shop and sees that some employees are taking extra breaks when there are
> few customers in the store.  It would be unreasonable for such a store
> boss to discipline this with similar zeal as seeing some employees
> genuinely stealing cash out of the register or selling stolen items out
> of the back door.  Instead the fact that they work less when there is
> less work to do should inspire reevaluation of any old rule that they
> are not allowed to have a watercooler chat during work hours.
> 
> Now such a reevaluation might result in requiring them to use such
> occasions to clean the floors or do some other chores (Mozilla equiv:
> Deciding that the rule is important for good reason and needs to be
> followed in the future) or it could result in relaxing the rule as
> long as they stand ready the moment a customer arrives (Mozilla equiv:
> Relaxing the requirement, initially just for Mozilla, later perhaps as a
> BR change).
> 
> Dogmatically insisting on enforcing rules that were previously not
> enforced due to lack of detection, just because "rules are rules" or
> other such arguments seems overzealous.
> 

Such tools have been available for over a year. CAs have been aware of this, 
the ability to run it over their own corpus and self-detect and self-report. 
These tools, notably, were created by one of the newest CA applicants - Amazon 
- based on a methodical study of what is required of a CA.

Your attempts to characterize it as overzealous ignore this entirely. At this 
point, it's gross negligence, and attempts to argue otherwise merely suggest a 
lack of understanding or concern for standards compliance and interoperability.

Mozilla has already communicated to CAs these tools exist and their relevance 
to CAs.

Perhaps we can move on from misguided apologetics and instead focus on how to 
make things better. Suggestions we ignore these, at this point, are neither 
productive nor relevant. Attempts to suggest tortured metaphors are like 
attempting to suggest rich people deserve to be robbed, or poor people just 
need to work harder - arguments that are both hollow and borderline offensive 
in their reductionism. A pattern of easily preventable misissuance has been 
happening, CAs have been repeatedly told to self-detect, and clearly, some CAs, 
like presumably some businesses, aren't taking security seriously. That needs 
to stop.


Re: Certificates issued with HTTPS OCSP responder URL (IdenTrust)

2017-08-07 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 6:31:34 AM UTC+9, Jakob Bohm wrote:
> On 07/08/2017 23:05, Vincent Lynch wrote:
> > Jakob,
> > 
> > I don't see what is wrong with Jonathan reporting these issues. The authors
> > and ratifiers of the BRs made the choice to specify these small details.
> > While a minor encoding error is certainly not as alarming as say, issuing
> > an md5 signed certificate, it is still an error and is worth reporting.
> > 
> > I believe it is decidedly off-topic to debate what BR violations are worth
> > reporting.
> > 
> > If you think certain BR rules are outdated or sub-par, I am sure the
> > community would welcome that discussion but it should be its own thread.
> > 
> 
> Since the CT made it possible, I have seen an increasing obsession with
> enforcing every little detail of the BRs, things that would not only
> have gone unnoticed, but also been considered unremarkable before CT.
> 
> Do we really want the CA community to be filled with bureaucratic
> enforcement of harsh punishments for every slight misstep?  This is the
> important question that any organization (in this case this community)
> needs to ask itself whenever new surveillance abilities make it possible
> to catch microscopic infractions.
> 
> Do we want to be the kind of place where people are punished for not
> polishing their boots perfectly or having a picture of their wife on
> their desk?  (To mention other rules that some organizations have
> overzealously enforced a long time ago).
> 
> 
> 
> Enjoy
> 
> Jakob
> -- 
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded

As mentioned earlier, you would be best to start your own thread.

As a peer for the module, I greatly appreciate the work Jonathan is doing, and 
encourage him to do more. If you feel otherwise, it may be best if you simply 
don't participate in those discussions, since the contrariness or explanations 
are not providing significant value to these discussions.

As others suspected, the use of an HTTPS URI for OCSP effectively prevents 
clients from being able to check, as clients including NSS, CryptoAPI, Chrome, 
and SecureTransport all refuse to check OCSP via HTTPS due to circular 
dependencies. This means that the inclusion of such a URL does not provide 
revocation services, and as those are presently required by the BRs, fails to 
meet those objectives.

Your proposed approach - of dividing things you feel are serious or minor - is 
actively harmful to the efforts of this community, in part due to seeming to 
lack sufficient context to assess the seriousness or minorness of the impact 
(as shown in this week's threads). Issues you have felt were minor are, in 
fact, serious, and the prevalence of a myriad of minor issues has historically 
been and continues to be a reasonable signal for more serious issues in either 
policy or practice. Perhaps it may be worth holding back on sharing opinions at 
first, so that these technical details can be shared and a better sense of 
serious and minor developed.
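The rule at issue - OCSP responder URLs in the AIA must use plain http, because validating an https responder's own certificate would itself require revocation checking - is exactly the kind of thing a lint check catches mechanically. A minimal sketch of such a check; the function name and messages are my own, not taken from certlint or any real tool:

```python
from urllib.parse import urlparse

def ocsp_url_problems(aia_ocsp_urls):
    """Flag OCSP responder URLs that clients will refuse to use."""
    problems = []
    for url in aia_ocsp_urls:
        scheme = urlparse(url).scheme.lower()
        if scheme == "https":
            problems.append(f"{url}: HTTPS creates a circular dependency - "
                            "validating the responder's cert needs revocation info")
        elif scheme != "http":
            problems.append(f"{url}: unexpected scheme {scheme!r}")
    return problems

print(ocsp_url_problems(["https://ocsp.example.com"]))  # one problem reported
print(ocsp_url_problems(["http://ocsp.example.com"]))   # []
```

In a real linter this would run over the parsed authorityInfoAccess extension of every issued certificate rather than a bare list of URLs.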


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 12:51:40 AM UTC+9, Matthew Hardeman wrote:
> It is what it is, I'm sure, but that definition in RFC5280 is rather tortured 
> and leads to ambiguity as to whether or not the leading 0x00 is included.  In 
> fact, I would say that it is not part of the integer value but rather an explicit 
> sign flag required by the encoding mechanism.
> 
> Wouldn't it have been easier just to say that despite what the ASN.1 INTEGER 
> type says, serial number shall be regarded as an explicitly unsigned integer 
> of up to 20 bytes length, to be represented as a positive integral value?
> 
> Pragmatically, does anything known break on the extra byte there?

Yes. NSS does. Because NSS properly implements 5280.


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 5:27:13 AM UTC+9, Jakob Bohm wrote:
> On 07/08/2017 22:12, Alex Gaynor wrote:
> > You seem to be suggesting that the thoroughness of testing is somehow
> > related to how long it takes.
> > 
> > I'd expect any serious (or even not particularly serious...) CA to have a
> > comprehensive automated test suite that can verify that the software is
> > regression free and correct in minutes or hours. If you can't deploy
> > changes of any size with confidence in less than several months, I think
> > you have some serious process problems.
> > 
> 
> For non-essential changes, 

I think this may be the first, and most serious, flaw in your argument. 
Compliance with Mozilla policy, and with standards that are, at this point, 
over twenty years old, is essential. Full stop: these changes need to be made, 
made quickly, and should never have reached production.

> it may be a good idea to supplement the fast
> automated tests by tests that take a lot longer.  This could be manual
> tests, or it could be tests that verify expiry procedures in real time
> (e.g. issue a cert at the start of the test and verify that the OCSP
> component acts as intended near the end of the test).

Your examples are things you can and should write automated tests for.

> 
> The need to deploy some changes quickly inevitably represents a
> compromise between speed and quality, both in testing and coding.  So
> not using the rushed procedures for non-urgent changes is good general
> practice.

Creating such a taxonomy is otherwise an attempt to legitimize poor 
software development practices and poor business practices. It is a perspective 
no doubt shared by legacy software development firms, but much has been 
done in the past 20 years to support models of continuous development and 
continuous deployment. This is aided by rigorous test-driven development, which 
is the cornerstone of objective confidence, rather than subjective unease.

> 
> Consider that most end-users are not encouraged to run Firefox nightlies
> and that enterprise users tend to use special ESR releases with longer
> release cycles than end users.  These practices represent the same
> fundamental speed/quality trade-off.
> 

While it may sound like compelling support for your argument, it rests on 
several unsupported or inaccurate claims. For example, Firefox users are 
encouraged to run other channels (like Aurora or Beta) precisely because the 
thoroughness of the automated testing represents a higher degree of confidence 
- which enables such updated versions to ship every six weeks. Similarly, 
Enterprise users who run ESR generally hold back the Web Platform and face 
greater risks, despite the considerable effort to support ESR, by virtue of 
running perpetually outdated software. Again, this is an area where the past 
twenty years have shown notions such as shipping "long term stable" software - 
whether it be a browser, an OS, or CA software - is actively detrimental to the 
ecosystem.

I realize much of your message is expressing a philosophy on software 
development, and while I've responded to point out that alternative 
philosophies exist - and carry more cachet in modern software development - it 
is likely that our entire philosophical musings are moot. Whatever the approach 
to development, participants in the Mozilla Root CA Program have an obligation 
to comply with the program requirements, and for the safety and security of 
Mozilla users, that compliance needs to be timely. If certain outdated and 
insecure approaches to that development are used, then that is the CA's fault 
and risk, and not something the community should be asked to bear.


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Ryan Sleevi via dev-security-policy
On Tuesday, August 8, 2017 at 5:18:21 AM UTC+9, Jakob Bohm wrote:
> On 07/08/2017 16:54, Peter Bowen wrote:
> > On Mon, Aug 7, 2017 at 12:53 AM, Franck Leroy via dev-security-policy wrote:
> >> Hello
> >>
> >> I checked only one but I think they are all the same.
> >>
> >> The integer value of the serial number is 20 octets, but when encoded into 
> >> DER a starting 00 may be necessary to mark the integer as a positive value 
> >> :
> >>
> >> 0 1606: SEQUENCE {
> >> 4 1070:   SEQUENCE {
> >> 83: [0] {
> >>101:   INTEGER 2
> >>   :   }
> >>13   21: INTEGER
> >>   :   00 A5 45 35 99 1C E2 8B 6D D9 BC 1E 94 48 CC 86
> >>   :   7C 6B 59 9E B3
> >>
> >> So the serialNumber (integer) value is 20 octets long, but the length can 
> >> be more depending on the encoding representation.
> >>
> >> Here it is in ASCII hex (a common representation when stored in a 
> >> database): "A54535991CE28B6DD9BC1E9448CC867C6B599EB3" - it is 40 octets 
> >> long, so VARCHAR(40) is needed.
> > 
> > The text from 5280 says:
> > 
> > " CAs MUST force the serialNumber to be a non-negative integer, that
> > is, the sign bit in the DER encoding of the INTEGER value MUST be
> > zero.  This can be done by adding a leading (leftmost) `00'H octet if
> > necessary.  This removes a potential ambiguity in mapping between a
> > string of octets and an integer value.
> > 
> > As noted in Section 4.1.2.2, serial numbers can be expected to
> > contain long integers.  Certificate users MUST be able to handle
> > serialNumber values up to 20 octets in length.  Conforming CAs MUST
> > NOT use serialNumber values longer than 20 octets."
> > 
> > This makes it somewhat ambiguous whether the `00'H octet is to be included 
> > in the 20 octet limit or not. While I can see how one might view it
> > differently, I think the correct interpretation is to include the
> > leading `00'H octet in the count.  This is because
> > CertificateSerialNumber is defined as being an INTEGER, which means
> > "octet" is not applicable.  If it was defined as OCTET STRING, similar
> > to how KeyIdentifier is defined, then octet could be seen as applying
> > to the unencoded value.  However, given this is an INTEGER, the only
> > way to get octets is to encode and this requires the leading bit to be
> > zero for non-negative values.
> > 
> > That being said, I think that it is reasonable to add "DER encoding of
> > Serial must be 20 octets or less including any leading 00 octets" to
> > the list of ambiguities that CAs must fix by date X, rather than
> > something that requires revocation.
> > 
> 
> (Thinking in a multi-year future perspective):
> 
> Given the age of RFC5280 and the (suspicious) fact that 20 is also the
> length of SHA-1 hashes, maybe there should be work in CAB/F and
> implementations to actually raise this maximum (and one day perhaps the
> minimum) to a larger value, such as 64 plus optional zero.
> 
> Doing so would allow future requirements to increase the minimum serial
> entropy to more than 160 bits, should a relevant attack scenario emerge.

This is entirely unnecessary and would present serious stability issues due to 
backwards compatibility.

It may not be appropriate for this thread - discussing specific misissuances - 
but there is zero benefit from extending the serial number, and obvious serious 
detriment to the wide variety of applications - including, of course, NSS and 
CryptoAPI - that specifically expect serial numbers to be less than or equal to 
20 bytes, when encoded.

I appreciate your multiyear perspective, but given that it provides no 
articulated value, and of which significant discussion around the limits of 
other fields, such as commonName, are both relevant and informative, it would 
merely be change for change sake.
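The compatibility constraint mentioned here - parsers such as NSS and CryptoAPI expecting the encoded serial to be at most 20 octets - is also why a CA drawing 20 random bytes must discard a bit: otherwise roughly half of all serials would encode to 21 octets. A short sketch, with helper names of my own:

```python
import secrets

def encoded_len(n: int) -> int:
    """Length in octets of the DER INTEGER contents for non-negative n."""
    raw = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    return len(raw) + (1 if raw[0] & 0x80 else 0)  # +1 for the sign octet

def serial_within_20_octets() -> int:
    # 20 random bytes, shifted down to 159 bits so the encoding never
    # needs a 21st (sign-padding) octet; still far above 64 bits of entropy.
    n = int.from_bytes(secrets.token_bytes(20), "big")
    return (n >> 1) | 1

assert encoded_len(1 << 159) == 21      # a full 160-bit value would overflow
assert all(encoded_len(serial_within_20_octets()) <= 20 for _ in range(1000))
```

So the existing 20-octet ceiling already leaves 159 usable bits, which is why extending the field buys nothing against current entropy requirements.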


Re: StartCom cross-signs disclosed by Certinomis

2017-08-03 Thread Ryan Sleevi via dev-security-policy
On Friday, August 4, 2017 at 8:02:16 AM UTC+9, Kathleen Wilson wrote:
> On Thursday, August 3, 2017 at 3:09:25 PM UTC-7, Kurt Roeckx wrote:
> > I would really like to see that they have at least opened a bug to
> > request the inclusion of that CA before it's cross-signed. 
> 
> Here's StartCom's current root inclusion request:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1381406
> 
> 
> > It should
> > have already all the requirements that Mozilla has for including the
> > root CA certificate before it's cross signed.
> 
> Correct. That should be true for all subCAs that are cross-signed by 
> currently-included CAs.
> 
> > 
> > I would prefer that it's even included in the Mozilla root store
> > before it's cross signed, 
> 
> That might not be fair, given how long Mozilla's root inclusion process 
> takes, and that we don't require this of other CAs who are new to our program.

Kathleen,

Doesn't this depend on your perspective of whether or not "new CA" refers to 
the key or the organization?

There's no doubt StartCom is not a "new CA" - it is a CA that Mozilla entrusted 
with the ability to issue certificates, and the organization - and its 
management - egregiously and repeatedly violated that trust.

The decision to remove - and to instill new requirements - was to ensure that 
the organization meaningfully underwent change to ensure that it would not do 
the same if further keys operated by it were included as trusted by Mozilla 
products - whether directly or indirectly.

Further, the process of restricting it to going through the Mozilla process 
ensures that there is a sufficient level of community review of those 
remediation steps, so that the Mozilla community can be assured that StartCom, 
the organization, is both competent enough and trustworthy enough to be 
entrusted with keys to the Internet.

In this light, both the past actions of StartCom the organization AND the many 
technical failures are relevant in that evaluation. Cross-signing directly 
undermines that review process, and in doing so, undermines both trust in 
StartCom the organization and in the Mozilla process, if a removed CA need only 
pay for a cross-sign to bypass every remediation step required of 
them, as StartCom is effectively doing.

My understanding had been that when a remediation is imposed on an 
organization, due to failing to follow policies, then those remediation steps 
must be taken, and subsequently reviewed by the community, before any direct 
(root key inclusion) or indirect (cross-signing) restoration of trust in the 
Mozilla Root Store. Without that guarantee, any corrective actions are hollow, 
and with it, trust in the whole system itself is lost.

I do hope you can clarify whether remediations apply to keys operated by 
organizations, or whether they apply to the organization themselves. If they 
apply to the organization, one would naturally expect they apply to root 
inclusion or cross-signs, and the organization is no longer "treated like a new 
CA," because they are no longer a new CA - they are an existing one.

It is also worth noting that in the past, Mozilla directed other CAs that 
cross-signing of their (new) roots would be expressly forbidden until the 
corrective actions were taken and publicly reviewed. For example, allowing 
CNNIC to be cross-signed prior to remediation would have defeated the entire 
purpose of removal.

In this larger light, it would also seem that StartCom, having misissued a 
number of certificates already under their new hierarchy, which present a risk 
to Mozilla users (revocation is neither an excuse nor a mitigation for 
misissuance), should be required to take corrective steps and generate a new 
hierarchy that is not, out of the gate, presenting risk to the overall 
community due to its past misissuances. We can and should expect more of new 
keys being included, because the compatibility risk of expecting adherence to 
the Root Policy is non-existent.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Miss-issuance: URI in dNSName SAN

2017-07-22 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 21, 2017 at 4:04 AM ramirommunoz--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> El jueves, 20 de julio de 2017, 16:49:15 (UTC+2), Gervase Markham
> escribió:
> > On 19/07/17 14:53, Alex Gaynor wrote:
> > > I'd like to report the following instance of miss-issuance:
> >
> > Thank you. Again, I have drawn this message to the attention of the CAs
> > concerned (Government of Venezuela and Camerfirma).
> >
> > Gerv
>
> Hi all
>
> Regarding Camerfirma certificates, we have followed the rules imposed by the
> local public administration to regulate the profiles of several
> certificates, SSL certificates for public administration websites included.
> There is an entry in the cabforum archives where this issue is described:
> https://cabforum.org/pipermail/public/2016-June/007896.html.
> The new eIDAS regulation has forced the Spanish Administration to fix this
> problem, so from now on we can issue certificates that fully fulfil the
> cabforum rules.
> AC Camerfirma will offer our public administration customers the option to
> renew their SSL certificates with our new eIDAS 2016 CAs.


Could you point to where the regulation requires (or required) that the CN and
SAN (of type dNSName) contain a URI?

The past discussion was in the context of additional SAN types not permitted by
the BRs, but the issue highlighted in this thread is a clear violation of RFC
5280 semantics, and it is difficult to believe that it was encompassed by
Camerfirma's previous disclosure.
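As an aside, the RFC 5280 constraint at issue — a dNSName SAN must use the "preferred name syntax" and can never carry a URI — is mechanically checkable. The following is a minimal illustrative linter, not anything from the thread or from any CA's actual software:

```python
import re

# RFC 1034/5280 "preferred name syntax": dot-separated LDH labels.
# A dNSName SAN must be a bare host name -- never a URI with a scheme,
# path, or port. (A single left-most wildcard label is a BR allowance.)
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_dns_name(san_value: str) -> bool:
    if len(san_value) > 253:
        return False
    labels = san_value.split(".")
    if labels and labels[0] == "*":   # permit one wildcard label
        labels = labels[1:]
    return bool(labels) and all(_LABEL.match(label) for label in labels)

assert is_valid_dns_name("www.example.com")
assert not is_valid_dns_name("https://www.example.com/")  # URI, not a dNSName
```

A pre-issuance check of this kind would have rejected the certificates reported in this thread before they were signed.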


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 20, 2017 at 8:13 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> My purpose in writing this was to illustrate just how easily someone with
> quite modest resources and the right skill set can presently overcome the
> technical checks of DNS based domain validation (which includes things such
> as HTTP validation).
>

Sure, and this was an excellent post for that. But I note that you
discounted, for example, registry attacks (which are, sadly, all too
common).

And I think that's an important thing to consider. The use of BGP attacks
against certificate issuance are well-known and long-documented, and I also
agree that it's not something we've mitigated by policy. I also appreciate
the desire to improve issuance practices - after all, it's telling that
it's only 2017 before we're likely to finally do away with "CA invents
whatever method to validate it wants" - but I think we should also look
holistically at the threat scenario.

I mention these not because I wouldn't want to see mitigations for these,
but to make sure that the mitigations proposed are both practical and
realistic, and that they look holistically at the threat model. In a
holistic look, one which accepts that the registry can easily be
compromised (and/or other forms of DNSSEC shenanigans), it may be that the
solution is better invested on detection than prevention.

I'll write separately in a less sensationalized post to describe each risk
> factor and appropriate mitigations.
>
> In closing I wish to emphasize that Let's Encrypt was only chosen for this
> example because it was convenient as I already had a client installed and
> also literally free for me to perform multiple validations and certificate
> issuances.  (Though I could do that with Comodo's domain validation 3 month
> trial product too, couldn't I?)  A couple of extra checks strongly suggest
> that quite several other CAs which issue domain validation products could
> be just as easily subverted.  As yet, I have not identified a CA which I
> believe is well prepared for this level of network manipulation.  To their
> credit, it is clear to me that the people behind Let's Encrypt actual
> recognize this risk (on the basis of comments I've seen in their discussion
> forums as well as commentary in some of their recent GitHub commits.)
> Furthermore, there is evidence that they are working toward a plan which
> would help mitigate the risks of this kind of attack.  I reiterate again
> that nothing in this article highlights a risk surfaced by Let's Encrypt
> that isn't also exposed by every other DV issuing CA I've scrutinized.


Agreed. However, we're still figuring out with CAs how not to follow
redirects when validating requests, so we've got some very, very
low-hanging fruit in the security space to improve on. And this improvement
is, to some extent, a limited budget, so we want to go for the big returns.


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 20, 2017 at 4:23 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I would be willing to take a stab at this if the subject matter is of
> interest and would be willing to commit some time to work on it providing
> that it would appear a convenient time to discuss and contemplate the
> matter.  Can anyone give me a sense of whether the matter of the potential
> vulnerabilities that I see here -- and of the potential mitigations I might
> suggest -- are of interest to the community?
>

Broadly, yes, but there's unfortunately a shade of IP issues that make it
more difficult to contribute as directly as Gerv proposed. Gerv may accept
any changes to the Mozilla side, but if the goal is to modify the Baseline
Requirements, you'd need to sign the IPR policy of the CA/B Forum and join
as an Interested Party before changes.

And realize that the changes have to be comprehensible by those with
limited to no background in technology :)


> Quite separately, it appears that 3.2.2.8's "As part of the issuance
> process..." text would strongly suggest that CAA record checking be
> performed upon each instance of certificate issuance.  I presume that
> applies even in the face of a CA which might be relying upon previous DNS /
> HTTP domain validation.  I grant that the text goes on to say that issuance
> must occur within the greater of 8 hours or the CAA TTL, but it does appear
> that the intent is that CAA records be queried for each instance of
> issuance and for each SAN dnsName.  If this is the intent and ultimately
> the practice and we are already requiring blocking reliance on DNS query
> within the process of certificate issuance, should the validity of domain
> validation itself be similarly curtailed?  My argument is that if we are
> placing a blocking reliance upon both the CA's DNS validation
> infrastructure AS WELL AS the target domain's authoritative DNS
> infrastructure during the course of the certificate issuance process
>  , then there is precious little extra point of failure in just requiring
> that domain validation occur with similarly reduced validity period.
>

This is indeed a separate issue. Like patches, it's best to take as small
as you can go.

The question about the validity/reuse of this information is near and dear
to Google's heart (hence Ballots 185 and 186), and the desire to reduce this
time substantially exists. That said, the Forum as a whole has mixed
feelings on this, and so it's still an active - and separate - point of
discussion.
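The "greater of 8 hours or the CAA TTL" rule from BR 3.2.2.8, quoted above, reduces to a one-line computation. This is an illustrative sketch of that rule only, not CA software:

```python
from datetime import datetime, timedelta

def caa_recheck_deadline(checked_at: datetime, ttl_seconds: int) -> datetime:
    # BR 3.2.2.8: a CAA check remains usable for the greater of 8 hours
    # or the TTL of the CAA record set that was examined, so a short TTL
    # cannot force the CA below the 8-hour floor.
    window = max(timedelta(hours=8), timedelta(seconds=ttl_seconds))
    return checked_at + window

t0 = datetime(2017, 7, 20, 12, 0)
assert caa_recheck_deadline(t0, 3600) == t0 + timedelta(hours=8)    # 8h floor wins
assert caa_recheck_deadline(t0, 86400) == t0 + timedelta(hours=24)  # long TTL wins
```

Note that the check (and thus the deadline) applies per dnsName in the request, which is exactly why the quoted text asks whether domain validation reuse should be similarly curtailed.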


> > > I believe there would be a massive improvement in the security of DNS
> query and HTTP client fetch type validations if the CA were required to
> execute multiple queries (ideally at least 3 or 4), sourced from different
> physical locations (said locations having substantial network and
> geographic distance between them) and each location utilizing significantly
> different internet interconnection providers.
> >
> > How could such a requirement be concretely specced in an auditable way?
>
> I can certainly propose a series of concrete specifications / requirements
> as to a more resilient validation infrastructure.  I can further propose a
> list of procedures for validating point-in-time compliance of each of the
> requirements in the aforementioned list.  Further, I can propose a list of
> data points / measurements / audit data that might be recorded as part of
> the validation record data set by the CA at the time of validation which
> could be used to provide strong support that the specifications /
> requirements are being followed through the course of operations.  If those
> were written up and presented does that begin to address your question?


I think it's worth exploring.

Note that there's a whole host of process involved:

- Change the CA/B documents (done through the Validation WG, at present -
need to minimally execute an IPR agreement before even members can launder
ballots for you)
- Change to the WebTrust TF audit criteria (which would involve
collaboration with them, and in general, they're not a big fan of precise
auditable controls)
- Change to the ETSI audit criteria (similar collaboration)

Alternatively, if exploring the Mozilla side, it's fairly easy to make it
up as you go along - which is not a criticism of the root store policy, but
praise :) You just may not get as much feedback.

That said, I think it's worthwhile to make sure the threat model, more than
anything, is defined and articulated. If the threat model results in us
introducing substantive process without objective security gain, then
it may not be worthwhile. Enumerating the threats both addressed and
unaddressable is thus useful in that scope.
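The multi-vantage-point proposal quoted earlier in the thread (several queries from network- and geographically-diverse locations) reduces to a quorum rule over the answers. The sketch below is hypothetical — the quorum threshold and the notion of an "answer" are assumptions for illustration, not a concrete specification:

```python
from collections import Counter

def quorum_result(answers, required=3):
    """Accept a validation answer only if at least `required` vantage
    points agree on the same value. A single hijacked network path can
    then corrupt one answer, but cannot forge a successful validation."""
    counts = Counter(a for a in answers if a is not None)
    if not counts:
        return None
    value, n = counts.most_common(1)[0]
    return value if n >= required else None

# Four vantage points, one of them seeing a BGP-hijacked answer:
assert quorum_result(["1.2.3.4", "1.2.3.4", "6.6.6.6", "1.2.3.4"]) == "1.2.3.4"
# Only two agreeing answers -- no quorum, validation must not proceed:
assert quorum_result(["1.2.3.4", "6.6.6.6", None, "1.2.3.4"]) is None
```

Making such a rule auditable is the hard part the thread identifies: the diversity of the vantage points, not the counting, is what an auditor would need evidence for.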


Re: Certificate with invalid dnsName issued from Baltimore intermediate

2017-07-18 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 18, 2017 at 8:05 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 17/07/2017 21:27, Nick Lamb wrote:
> > On Monday, 17 July 2017 16:22:22 UTC+1, Ben Wilson  wrote:
> >> Thank you for bringing this to our attention.  We have contacted Intesa
> Sanpaolo regarding this error and have asked them to correct it as soon as
> possible.
> >
> > "Correcting" the error is surely the smaller of the two tasks ahead.
> >
>
> Depends if the only error is allowing double dots (while correctly
> validating the domain as if spelled without the extra dot).  Things are
> much worse if double dots bypass domain validation completely.
>
> Since at least two CA systems have now been found to accept double dots,
> where only single dots should be allowed, it is reasonable to assume
> that some relying parties also allow double dots.


It is not reasonable to conclude there is a pattern based on two samples,
nor is it reasonable to conclude there is a pattern in an unrelated system.
If you are aware of any relying party libraries based on CA validation
libraries, that would be useful in establishing the reasonableness of the
conclusion.


This makes it
> essential that any certificates with this syntax error have been
> completely validated for the equivalent single-dotted name.
>
> I also notice that this is apparently an unconstrained
> intermediate/SubCA.
>
> Since this appears to be a certificate for the cert holders own domains,
> it is also possible domain validation was done manually, as in "we know
> first hand that we control these domains", making this an OV cert, not a
> DV cert.


All OV certs are DV certs - they are subsets.

Perhaps you're confusing DNS validation done at an Authorization Domain
Name, rather than at the FQDN. That would be consistent with allowing the
customer to enter labels below the validated domain (under 3.2.5 of the BRs)
without validating that the result is a valid DNS label or a well-formed
domain name.
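That failure mode — a validated Authorization Domain Name combined with unchecked customer-entered labels — is what produces "double dot" names. A hypothetical sketch of the missing check (the function name and its narrow scope are mine, and it deliberately tests only for empty labels):

```python
def well_formed_fqdn(customer_labels: str, validated_domain: str) -> bool:
    # When a customer may prepend labels beneath an Authorization Domain
    # Name (BR 3.2.5), the CA must still reject empty labels -- the
    # "double dot" case -- before issuing for the combined name.
    fqdn = f"{customer_labels}.{validated_domain}"
    return all(label != "" for label in fqdn.split("."))

assert well_formed_fqdn("www", "example.com")
assert not well_formed_fqdn("www.", "example.com")  # "www..example.com"
```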


Re: Leaking private keys through web servers

2017-07-14 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 14, 2017 at 2:07 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> That's my point.  The current situation is distinct from weak keys, and
> we shouldn't sacrifice the weak keys BR to make room for a compromised
> keys BR.


But a weak key is always suspected of having suffered a Key Compromise - is
it not?

That is, changing from "weak keys" to "suspected or known to have
suffered Key Compromise" in 6.1.1.3 would fully include weak keys (which
are already in scope) as well as those currently excluded (strong keys
that are compromised). This applies in addition to the requirements already present in
6.1.5/6.1.6 regarding key sizes and strengths (which already counter your
hypothetical), and 4.9.1.1/4.9.1.2 address the situation if a strong key,
post issuance, becomes either weak or compromised.


Re: Leaking private keys through web servers

2017-07-14 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 14, 2017 at 11:11 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 14/07/2017 15:53, Ryan Sleevi wrote:
>
>> On Fri, Jul 14, 2017 at 1:29 AM, Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>>
>>> But that doesn't clearly include keys that are weak for other reasons,
>>> such as a 512 bit RSA key with an exponent of 4 (as an extreme example).
>>>
>>>
>> Yes. Because that's clearly not necessary - because it's already covered
>> by
>> 4.9.1.1 #3 and 6.1.5/6.1.6. So I don't think this serves as a valid
>> criticism to the proposed update.
>>
>>
> That's why I called it an "extreme example".  Point was that the current
> wording requires CAs to reject public keys that fail any reasonable test
> for weakness not just the explicit cases listed in the BRs (such as too
> short RSA keys or small composite public exponents).
>
> For example if it is published that the RSA requirements in 6.1.6 are
> insufficient (for example that moduli with more than 80% 1-bits are
> weak), then the current wording of 6.1.1.3 would require CAs to
> instigate such a test without waiting for a BR update.
>

Sure, but that's unrelated to the discussion at hand, at least from what
you've described. However, if I've misunderstood you, it might help if you
rephrase the argument from what was originally being discussed - which is
CAs issuing certificates for compromised keys - which are arguably distinct
from weak keys (which was the point I was making).

It sounds like you're saying "No, they're weak" - but I both acknowledged
and refuted that interpretation in my message, so perhaps I simply do not
understand the relation of what you're discussing above to the general
issue Hanno raised.


> Maybe it would be better stylistically to add this to one of the other
>>> BR clauses.
>>>
>>>
>> Considering that the goal is to make it clearer, I'm not sure this
>> suggestion furthers that goal.
>>
>>
> It could be in a new clause 6.1.1.3.1 (not applicable to SubCAs) or a
> new clause 6.1.1.4 (applicable to all public keys, not just subscribers)
> or a new clause 6.1.6.1 (ditto), or it could be added as an additional
> textual paragraph in 6.1.1.3 or 6.1.6 .
>

I'm afraid at this point, I'm completely lost as to the point you're trying
to make.

But at least 4.9.1.1 #3 requires them to revoke without waiting for a
> new report.


No, "it depends". And that was the point I was trying to make.


> And it would be obviously and patently bad faith to revoke
> the same key every 24 hours and claim all is well (once or twice may be
> an understandable oversight, since this is not such a common scenario,
> but after that they should start automating the rejection/revocation).


I don't think you can or should ascribe bad faith here; my entire point was
to highlight the possible interpretation issues - but to further highlight
why the thing you call "patently bad faith" is itself fraught with peril,
and thus it's reasonable to argue that it's not bad faith.

If this is not desirable, we should strive to make it clearer - but that
means acknowledging the edge cases, determining what is appropriate, and
providing sufficient guidance so that, in the future, it might be more
successfully argued as bad faith.


Re: Leaking private keys through web servers

2017-07-14 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 14, 2017 at 9:44 AM, Hanno Böck via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> So there are several questions and possible situations here.
>
> I think it's relatively clear that a CA could prevent reissuance of
> certs if they know about a key compromise.
>

I actually don't think it's clear, that's why I was trying to highlight the
challenges.

That is, I think we can all agree that for situations where you reported
directly to the CA, it was clear that the CA had knowledge that the
associated private key was compromised. Presumably, a requirement to
prevent issuance would mean that the CA maintains a blacklist of
'compromised keys' and refuses to issue certificates for them.

However, if we say that the CA shall not issue for keys known to have
suffered compromise, the following interpretations exist, and we should first
try to figure out what we want (as an ecosystem) and then how to specify
that.
- Are we expecting the CA to maintain a database of compromised private
keys (I believe the implied answer is 'yes' - but today, they only need
maintain the database of revoked certificates, which is different)
- Is the CA obligated to check other sources of compromise information
prior to issuing the certificate?
  - Example: Should they check other CAs' CRLs? The CRLs themselves don't
provide information about the key, so one would presumably _also_ need to
check sources like Certificate Transparency.
  - Tortured example: What happens if a (different CA's) cert is not logged
in CT, revoked in the CRL (for keyCompromise), and then subsequently
disclosed to first CA. Are they obligated to revoke (under 4.9.1.1 #3)? Are
they obligated to not issue (under the proposed change)?

The reason I highlight this is that preventing CA "Foo" from issuing a
second cert for (compromised) key X doesn't prevent CA "Bar" from doing the
same. Because of this, it's a reasonable question about what security value
we're obtaining, if the party with Key X can simply go to another CA to get
the cert.

From a CA perspective, requiring that Foo reject a request that Bar can
accept would be unappealing to Foo - it's effectively giving business to
Bar (whether or not this is actually the case, and however illogical it is,
there are plenty of CAs who think this way)

From a security perspective, requiring that Foo not issue for key X doesn't
ensure that a cert for key X will not be introduced, unless we make that a
requirement of all CAs.

So that's why I'm not sure how much value we'd get from such a requirement
- and wanted to highlight the challenges in finding a way to establish it
for all CAs, and why it's important (for CAs and relying parties) for a
consistent requirement.
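A "database of compromised keys" of the kind discussed above would be keyed on a digest of the public key itself rather than on any certificate, so the same key is caught no matter which certificate carries it. A hypothetical sketch (the function names and the use of a SHA-256 SPKI digest are illustrative assumptions, not a stated requirement):

```python
import hashlib

compromised_spki_hashes = set()  # persisted by the CA across all requests

def spki_fingerprint(spki_der: bytes) -> str:
    # Hash the DER-encoded SubjectPublicKeyInfo: this identifies the key
    # pair itself, independent of any certificate that contains it.
    return hashlib.sha256(spki_der).hexdigest()

def record_compromise(spki_der: bytes) -> None:
    compromised_spki_hashes.add(spki_fingerprint(spki_der))

def may_issue_for_key(spki_der: bytes) -> bool:
    return spki_fingerprint(spki_der) not in compromised_spki_hashes

key = b"\x30\x82\x01\x22"  # placeholder DER bytes for illustration
assert may_issue_for_key(key)
record_compromise(key)
assert not may_issue_for_key(key)
```

As the surrounding text notes, such a blacklist only binds the CA that maintains it — which is precisely why a per-CA obligation provides limited ecosystem-wide security value.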


> Ultimately I'm inclined to say that there really shouldn't be any good
> reason at all to ever reuse a key. (Except... HPKP)


I see. I think I'd strongly disagree with that assertion. There are lots of
good reasons to reuse keys. The most obvious example being for
shorter-lived certificates (e.g. 90 days), which allow you to rotate the
key in case of compromise, but otherwise don't require you to do so.
Considering revocation information is no longer required to be provided
once a certificate expires, it _also_ means that in the CA Foo case, with
Key X compromised, the subscriber could get another cert for it once the
original cert has expired (and thus revocation information no longer able
to be provided)


Re: Leaking private keys through web servers

2017-07-14 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 14, 2017 at 1:29 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> But that doesn't clearly include keys that are weak for other reasons,
> such as a 512 bit RSA key with an exponent of 4 (as an extreme example).
>

Yes. Because that's clearly not necessary - because it's already covered by
4.9.1.1 #3 and 6.1.5/6.1.6. So I don't think this serves as a valid
criticism to the proposed update.


> Maybe it would be better stylistically to add this to one of the other
> BR clauses.
>

Considering that the goal is to make it clearer, I'm not sure this
suggestion furthers that goal.


> Anyway, I think this is covered by BR 4.9.1.1 #3, although it might not
> be obvious to the CA that they should have set up checks for this, since
> most key compromise reports come from the subscriber, who would be a lot
> less likely to make this mistake after revoking the key themselves,
> except when the revocation was mistaken (this happens, and in that case,
> reusing the key is not a big problem).
>

I'm afraid you may have misunderstood the point. Certainly, 4.9.1.1 #3
covers revocation. However, my suggestion was about preventing issuance,
which is why I was talking about 6.1.1.3. That is, unquestionably, if a CA
revokes a certificate for key compromise, then issues a new one for that
same key, they're obligated under 4.9.1.1 #3 to revoke within 24 hours. My
point was responding to Hanno's suggestion of preventing them from issuing
the second certificate at all.


Re: WoSign new system passed Cure 53 system security audit

2017-07-13 Thread Ryan Sleevi via dev-security-policy
In the description of the remediation of the vulnerabilities, aspects of
the design are shared, particularly in discussing remediation. These
aspects reveal design decisions that do not comply with the BRs, and are
significant enough to require re-design.

I agree that this can be difficult to independently evaluate. However, it
should hopefully be possible for all participants to understand that, given
the Mozilla required remediations, it seems unwise to audit a system before
you've made all of the necessary changes, or demonstrated a comprehensive
awareness of what is required of the BRs. It is good as an incremental
approach, particularly if you don't have a team of qualified security
engineers that can provide that in-house during the design and
implementation phase, but a holistic approach will involve making sure the
system is both compliant and secure, and both should be tackled together.

On Thu, Jul 13, 2017 at 3:13 PM, Percy via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > You will fail #4. Because your system, as designed, cannot and does not
> > comply with the Baseline Requirements.
>
> Is there a design outline in the security audit as well? No one in the
> community can judge either yours or WoSign's statement as this information
> is not shared with us. I suggest either WoSign or Mozilla/Google share such
> information with the community if it's not under NDA. Otherwise, this
> discussion is rather unproductive as we have crucial information missing.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: WoSign new system passed Cure 53 system security audit

2017-07-13 Thread Ryan Sleevi via dev-security-policy
You will fail #4. Because your system, as designed, cannot and does not
comply with the Baseline Requirements.

As such, you will then
(4.1) Update new system, developing new code and new integrations
(4.2) Engage the auditor to come back on side
(4.3) Hope you get it right this time
(4.4) Generate a new root
(4.5) Do the PITRA audit and hopefully pass
(4.6) Hope that the security audit from #1 still applies to #4.1 [but
because the changes needed are large, it's hard to imagine]
(5) Apply for the new root inclusion

The system you had security audited in #1 cannot pass #4. That's why
working with an auditor to do a readiness assessment in conjunction with or
before the security assessment can help ensure you can meet the BRs, and
then ensure you can meet them securely.

On Thu, Jul 13, 2017 at 11:04 AM, Richard Wang <rich...@wosign.com> wrote:

> Hi Ryan,
>
> I really don't understand how the new system can't meet the BR; we don't
> use the new system to issue any certificates, so how does it violate the BR?
>
> Our step is:
> (1) develop a new secure system in the new infrastructure, then do the new
> system security audit, pass the security audit;
> (2) engage a WebTrust auditor onsite to generate the new root in the new
> system;
> (3) use the new audited system to issue certificate;
> (4) do the PITRA audit and WebTrust audit;
> (5) apply the new root inclusion.
> When we start the new root application, we will follow the
> requirements here: https://bugzilla.mozilla.org/show_bug.cgi?id=1311824
> to demonstrate that we meet the 6 requirements.
>
> We will discard the old system and facilities, so the right order should
> be: have the new system first, then audit the new system, then apply for
> new root inclusion. We cannot use the old system to do the BR audit.
>
> Please advise, thanks.
>
>
> Best Regards,
>
> Richard
>
> On 13 Jul 2017, at 21:53, Ryan Sleevi <r...@sleevi.com> wrote:
>
> Richard,
>
> That's great, but the system that passed the full security audit cannot
> meet the BRs, you would have to change that system to meet the BRs, and
> then that new system would no longer be what was audited.
>
> I would encourage you to address the items in the order that Mozilla posed
> them - such as first systematically identifying and addressing the flaws
> you've found, and then working with a qualified auditor to demonstrate both
> remediation and that the resulting system is BR compliant. And then perform
> the security audit. This helps ensure your end result is most aligned with
> the desired state - and provides the public the necessary assurances that
> WoSign, and their management, understand what's required of a publicly
> trusted CA.
>
> On Wed, Jul 12, 2017 at 10:24 PM, Richard Wang <rich...@wosign.com> wrote:
>
>> Hi Ryan,
>>
>> We got confirmation from Cure 53 that new system passed the full security
>> audit. Please contact Cure 53 directly to verify this, thanks.
>>
>> We don't start the BR audit now.
>>
>> Best Regards,
>>
>> Richard
>>
>> On 12 Jul 2017, at 22:09, Ryan Sleevi <r...@sleevi.com> wrote:
>>
>>
>>
>> On Tue, Jul 11, 2017 at 8:18 PM, Richard Wang <rich...@wosign.com> wrote:
>>
>>> Hi all,
>>>
>>> Your reported BR issues are from StartCom, not WoSign; we don't use the
>>> new system to issue any certificates now, since the new root is not generated.
>>> PLEASE DO NOT mix them up, thanks.
>>>
>>> Best Regards,
>>>
>>> Richard
>>>
>>
>> No, the BR non-compliance is demonstrated from the report provided to
>> browsers - that is, the full report associated with this thread.
>>
>> That is, as currently implemented, the infrastructure for the new roots
>> would not be able to receive an unqualified audit. Further system work is
>> necessary, and that work is significant enough that it will affect the
>> conclusions from the report.
>>
>>
>


Re: WoSign new system passed Cure 53 system security audit

2017-07-13 Thread Ryan Sleevi via dev-security-policy
Richard,

That's great, but the system that passed the full security audit cannot
meet the BRs, you would have to change that system to meet the BRs, and
then that new system would no longer be what was audited.

I would encourage you to address the items in the order that Mozilla posed
them - such as first systematically identifying and addressing the flaws
you've found, and then working with a qualified auditor to demonstrate both
remediation and that the resulting system is BR compliant. And then perform
the security audit. This helps ensure your end result is most aligned with
the desired state - and provides the public the necessary assurances that
WoSign, and their management, understand what's required of a publicly
trusted CA.

On Wed, Jul 12, 2017 at 10:24 PM, Richard Wang <rich...@wosign.com> wrote:

> Hi Ryan,
>
> We got confirmation from Cure 53 that new system passed the full security
> audit. Please contact Cure 53 directly to verify this, thanks.
>
> We don't start the BR audit now.
>
> Best Regards,
>
> Richard
>
> On 12 Jul 2017, at 22:09, Ryan Sleevi <r...@sleevi.com> wrote:
>
>
>
> On Tue, Jul 11, 2017 at 8:18 PM, Richard Wang <rich...@wosign.com> wrote:
>
>> Hi all,
>>
>> Your reported BR issues are from StartCom, not WoSign; we don't use the
>> new system to issue any certificates now, since the new root is not generated.
>> PLEASE DO NOT mix them up, thanks.
>>
>> Best Regards,
>>
>> Richard
>>
>
> No, the BR non-compliance is demonstrated from the report provided to
> browsers - that is, the full report associated with this thread.
>
> That is, as currently implemented, the infrastructure for the new roots
> would not be able to receive an unqualified audit. Further system work is
> necessary, and that work is significant enough that it will affect the
> conclusions from the report.
>
>


Re: Leaking private keys through web servers

2017-07-13 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 13, 2017 at 7:07 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 12/07/17 15:47, Ryan Sleevi wrote:
> > One challenge to consider is how this is quantified. Obviously, if you
> > reported to Comodo the issue with the key, and then they issued another
> > certificate with that key, arguably that's something Comodo should have
> > caught.
>
> I'd say so.
>
> > However, if you reported the compromise to, say, ACME CA, and then
> > Comodo issued an equivalent cert, that's questionable.
>
> Sure. This would be a provision to deter accidental stupidity, not
> wilful stupidity. The common case is a clueless person just resubmits
> the same keypair to their current CA.
>

Right. My point in that was that it's easy to rathole into a definition of
"known Key Compromise" that implies an obligation to check other CAs'
revocation lists to see if a certificate with the same public key has been
revoked due to keyCompromise, which is further undesirable given that CAs
can't be trusted with the revocation reasons in the first place.

My goal was to try to capture that, to some extent, the burden of knowledge
can only be on the CA's own direct knowledge - which means it only prevents
the subscriber from getting a cert (from the same CA), and not from another
CA. This is both a limitation of the mitigation - it doesn't prevent
another CA from issuing - and a potential concern from CAs - others can
issue what they cannot.


Re: How long to resolve unaudited unconstrained intermediates?

2017-07-12 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 12, 2017 at 10:40 AM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2017-07-12 16:12, Ryan Sleevi wrote:
>
>> I don't know if this currently happens, but I would like to see all CA
>>> certificates that are in OneCRL but are not revoked to be added to the
>>> root
>>> store as distrusted too.
>>>
>>>
>> Why? I can share reasons why it might not be desirable, but rather than
>> start out negatively, I was hoping you could expand upon the reasons for
>> including.
>>
>
> My understanding is that certdata.txt is what is the trust of the root
> store is, and that OneCRL is mostly a browser only thing to get revocation
> information, but is also (ab)used to distrust something.
>
> The certdata.txt currently does explicitly list CA certificates that
> shouldn't be trusted.
>
> As far as I know external user of the trust information currently only use
> certdata.txt. So only adding it to OneCRL will not reach all the users of
> the trust store.
>
> It could be that maybe the combination is what should be used, but as far
> as I know it's not documented as such and I doubt it gets used much outside
> Mozilla products.


You're correct that OneCRL is specific to Firefox. OneCRL has the (highly
desirable) properties of being able to be rapidly updated, much like
CRLSets. In times of compatibility issues, it's possible to 'un'revoke a
certificate - as has been necessary, in the past, due to high-profile
revocations causing various path building issues. As a concrete example,
both Symantec and Comodo had revocations which - while, on a pure technical
level, were entirely correct - the processing of these revocations caused
issues for clients as diverse as Apple macOS, Microsoft Windows, and,
perhaps unsurprisingly, Google Chrome.

The risk in moving these to certdata.txt (which is consumed by a wide
variety of clients - and in particular, those not using the current version
of Mozilla NSS as Mozilla Firefox) is generally that carried out by
https://wiki.mozilla.org/CA:FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F
. That is, it is patently unwise (and, at times, unsafe) to consume the
Mozilla Root CA Store without using and validating certificates using the
same code as Mozilla Firefox. I know this dismays some members, but that's
the reality due to the complexity of chain and path building.

Consider, for example, a client that does not support path discovery
(which, for example, includes most actively-deployed OpenSSL versions). If
one were to extract certdata.txt into trust and distrust records, with the
algorithm that OpenSSL uses, this would actively break connections to a
number of sites, as it would encounter the distrusted certificates and
cease path building. Mozilla Firefox, on the other hand, uses mozilla::pkix
and implements a robust path discovery mechanism - the presence of a
distrust record will have it 'unwind' on path discovery and continue trying
alternative paths.

One can see this having played out in other situations in the past as well
- such as Red Hat's decision to (temporarily) ship 1024-bit roots that were
removed from the Mozilla Root CA store, due to their need to support
OpenSSL clients that could not build alternative paths to the (included)
2048-bit roots.

In this sense, by keeping them separated - into certdata.txt and OneCRL -
Mozilla is able to ensure certdata.txt is more usable by these clients.
Including them in certdata.txt, while certainly more complete and
comprehensive, would conversely mean more clients would break when
consuming certdata.txt - or, if Mozilla were to try to maintain
certdata.txt as an 'interoperable source of truth', would prevent the
necessary changes to ensure users are safe.

Further, consider that while the use of OCSP or CRLs, and in particular,
hard fail, is unsuitable for a client such as Mozilla Firefox, other
products may have different requirements for both performance and
availability. For example, for a mutually authenticating batch processing
system, the additional latency and/or unreliability imposed by these
revocation checking methods is not as significant to the overall product
flow, and thus offers a better alternative than relying on either OneCRL or
certdata.txt updates.

Because the situation varies by client - and, again, I want to stress that
a "Web PKI" client that wishes to remain interoperable with 'the browsers'
truly needs to be using the same code as 'the browsers' (and this is true
across all major browser platforms) - keeping it distinct best serves the
needs of various consumers, and allows the few distrust records included to
be ones that minimize the large-scale compatibility impact that might
otherwise be introduced.
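
The path-building difference described above - a client that commits to the
first chain it finds versus one that unwinds and tries alternatives - can be
sketched in miniature. The issuer graph and names below are invented for
illustration; this is not real NSS, mozilla::pkix, or OpenSSL code.

```python
# Sketch: why a distrust record in certdata.txt breaks a client without
# path discovery, while a path-discovery client keeps searching.

# Hypothetical issuer graph: certificate -> candidate issuers
CHAINS = {
    "leaf": ["old-1024-bit-intermediate", "cross-signed-intermediate"],
    "old-1024-bit-intermediate": ["removed-1024-bit-root"],
    "cross-signed-intermediate": ["included-2048-bit-root"],
}
TRUSTED = {"included-2048-bit-root"}
DISTRUSTED = {"removed-1024-bit-root"}

def first_path_only(cert):
    """OpenSSL-style: follow the first issuer found; fail on a dead end."""
    while cert not in TRUSTED:
        if cert in DISTRUSTED or cert not in CHAINS:
            return False
        cert = CHAINS[cert][0]  # commits to the first candidate issuer
    return True

def path_discovery(cert):
    """mozilla::pkix-style: on a dead end, unwind and try alternatives."""
    if cert in TRUSTED:
        return True
    if cert in DISTRUSTED:
        return False
    return any(path_discovery(issuer) for issuer in CHAINS.get(cert, []))

print(first_path_only("leaf"))  # False: dead-ends on the distrusted root
print(path_discovery("leaf"))   # True: finds the 2048-bit alternative
```

The same leaf verifies or fails depending purely on whether the verifier can
back out of the distrusted path, which is the Red Hat 1024-bit situation in
two functions.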


Re: Leaking private keys through web servers

2017-07-12 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 12, 2017 at 10:19 AM, Hanno Böck via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> * Comodo re-issued certs with the same key. I wonder if there should be
>   a rule that once a key compromise event is known to the CA it must
>   make sure this key is blacklisted. (Or maybe one of the existing
>   rules already apply, I don't know.)
>

BRs 1.4.5, Section 6.1.1.3, only requires the CA to reject a certificate if it
doesn't meet 6.1.5/6.1.6 or uses a known weak private key.

While the example is given (e.g. Debian weak keys), one could argue that
'weak' includes 'disclosed'. Of course, given that the specific term "Key
Compromise" is also provided in the BRs, that seems a stretch.

One could also argue 6.1.2 is applicable - that is, revocation was
immediately obligated because of the awareness - but that also seems
tortured.

Probably the easiest thing to do is update the BRs in 6.1.1.3 to replace
"known weak private key" to just say "If the private key is suspected or
known to have suffered Key Compromise" - which includes known weak private
keys, as well as the broader sense.
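
To make the proposed wording concrete, here is a sketch of the CA-side check
it would imply: compare the request's public key, by SPKI digest, against the
CA's own record of keys reported compromised. The function names and data are
invented for illustration, not from any CA's actual software.

```python
# Sketch: a CA-local compromised-key blacklist, keyed by SPKI digest.
import hashlib

compromised_spki_digests = set()  # the CA's own direct knowledge

def record_compromise(spki_der: bytes) -> None:
    """Called when a Key Compromise report for this key is accepted."""
    compromised_spki_digests.add(hashlib.sha256(spki_der).hexdigest())

def may_issue(spki_der: bytes) -> bool:
    """Reject issuance for any key the CA itself knows is compromised."""
    return hashlib.sha256(spki_der).hexdigest() not in compromised_spki_digests

leaked = b"\x30\x82stand-in"        # placeholder for a DER-encoded SPKI
record_compromise(leaked)
assert not may_issue(leaked)        # same key, same CA: must refuse
assert may_issue(b"some other key") # a different key is unaffected
```

Note the limitation discussed below: this only captures the CA's own direct
knowledge, not compromises reported to other CAs.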

One challenge to consider is how this is quantified. Obviously, if you
reported to Comodo the issue with the key, and then they issued another
certificate with that key, arguably that's something Comodo should have
caught. However, if you reported the compromise to, say, ACME CA, and then
Comodo issued an equivalent cert, that's questionable. I'm loath to make
CAs rely on each other's keyCompromise revocation reasons, simply because we
have no normative guidance in the BRs (yet) that requires CAs to be honest or
competent with their revocation reasons (... yet). Further, we explicitly
don't want to have a registry (of compromised keys, untrustworthy orgs,
etc), for various non-technical reasons.

I'm curious if you have thoughts there - particularly, how you reported the
private key was compromised (did you provide evidence - for example, a
signed message, or simply a link to "Here's the URL, go see for yourself"?)
- and how you see it working cross-CA boundaries.
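
For the "signed message" style of evidence mentioned above, one minimal
sketch: the reporter signs a CA-chosen nonce with the allegedly compromised
private key, proving possession rather than merely pointing at a URL.
Textbook RSA with toy parameters stands in for a real signature scheme here;
actual CAs define their own intake formats, and these numbers are purely
illustrative.

```python
# Sketch: proof of possession of a (compromised) private key via a
# CA-supplied challenge nonce. Toy RSA parameters (p=61, q=53) -- never
# use key sizes like this in practice.
import hashlib

n, e, d = 3233, 17, 2753  # public modulus/exponent, private exponent

def sign(nonce: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(digest, d, n)          # only the private-key holder can do this

def verify(nonce: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(sig, e, n) == digest   # anyone can check with the public key

challenge = b"CA-chosen-nonce-2017-07-12"
assert verify(challenge, sign(challenge))
```

A fresh, CA-chosen nonce matters: replaying an old signature from the web
server would not demonstrate current possession of the key.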


Re: How long to resolve unaudited unconstrained intermediates?

2017-07-12 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 12, 2017 at 6:03 AM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2017-07-11 15:56, Nick Lamb wrote:
>
>> On Tuesday, 11 July 2017 10:56:43 UTC+1, Kurt Roeckx wrote:
>>
>>> So at least some of them have been notified more than 3 months ago, and
>>> a bug was filed a month later. I think you already gave them too much
>>> time to at least respond to it, and suggest that you sent a new email
>>> indicating that if they don't respond immediately that they will get
>>> added to OneCRL.
>>>
>>
>> Agreed. It may also make sense to add telemetry that allows Mozilla to
>> determine whether listing such subCAs in the OneCRL are ever actually
>> blocking anything. This makes  a difference in my opinion as to the
>> severity of the breach of policy by the CA in question.
>>
>
> I don't know if this currently happens, but I would like to see all CA
> certificates that are in OneCRL but are not revoked to be added to the root
> store as distrusted too.
>

Why? I can share reasons why it might not be desirable, but rather than
start out negatively, I was hoping you could expand upon the reasons for
including.


Re: WoSign new system passed Cure 53 system security audit

2017-07-12 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 11, 2017 at 8:18 PM, Richard Wang  wrote:

> Hi all,
>
> Your reported BR issues is from StartCom, not WoSign, we don't use the new
> system to issue any certificate now since the new root is not generated.
> PLEASE DO NOT mix it, thanks.
>
> Best Regards,
>
> Richard
>

No, the BR non-compliance is demonstrated from the report provided to
browsers - that is, the full report associated with this thread.

That is, as currently implemented, the infrastructure for the new roots
would not be able to receive an unqualified audit. Further system work is
necessary, and that work is significant enough that it will affect the
conclusions from the report.


Re: WoSign new system passed Cure 53 system security audit

2017-07-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 11, 2017 at 12:09 PM, Percy via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tuesday, July 11, 2017 at 8:36:33 AM UTC-7, Ryan Sleevi wrote:
>
> > comply with the Baseline Requirements, nor, as designed, can it. The
> system
> > would need to undergo non-trivial effort to comply with the Baseline
> > Requirements.
>
> If the system needs significant changes to meet the BR, then does it mean
> the current security audit will no longer applies to the BR-complaint
> system, assuming WoSign is ever able to produce one?


That will be a question for Mozilla to assess with respect to its WoSign
remediation actions.


Re: WoSign new system passed Cure 53 system security audit

2017-07-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 11, 2017 at 11:40 AM, Alex Gaynor <agay...@mozilla.com> wrote:

> Is this a correct summary:
>
> - The report included here is supposed to fulfill the network security
> test portion of the BRs
>

No. This is #5 from https://bugzilla.mozilla.org/show_bug.cgi?id=1311824 ,
and relates to the overall security design of the system which in part
stemmed from issues such as the ability to cause arbitrary (backdated)
issuance via manipulation of API parameters. That is, it's orthogonal to
the BRs, and intended to take a more systemic approach to the system design.


> - This report does not attest to BR compliance (or non-compliance)
>

Correct


> - To complete an application for the Mozilla Root Program, WoSign would be
> required to additionally provide a WebTrust audit (or equivalent, as
> described in the Mozilla PKI Policy section 3.1)
>

Correct, as required by #3 and #4.


> - Based on your reading of the complete network security test, you would
> not expect WoSign to be able to pass a BR Audit without qualifications
>

Correct.


>
> Alex
>
> On Tue, Jul 11, 2017 at 11:35 AM, Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Tue, Jul 11, 2017 at 11:16 AM, Jonathan Rudenberg via
>> dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>>
>> >
>> > > On Jul 11, 2017, at 06:53, okaphone.elektronika--- via
>> > dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>> > >
>> > > On Monday, 10 July 2017 08:55:38 UTC+2, Richard Wang  wrote:
>> > >>
>> > >> Please note this email topic is just for releasing the news that
>> WoSign
>> > new system passed the security audit, just for demonstration that we
>> > finished item 5:
>> > >> " 5. Provide auditor[3] attestation that a full security audit of the
>> > CA’s issuing infrastructure has been successfully completed. "
>> > >> " [3] The auditor must be an external company, and approved by
>> Mozilla.
>> > "
>> > >
>> > > It also seems a bit strange to report item 5 "successfully completed"
>> > before we hear anything about the other items. How about starting with
>> item
>> > 1? What are your plans for fixing the problems?
>> >
>> > It’s worth noting that the problems have not stopped yet. There are a
>> > bunch of certificates issued over the past few months that do not comply
>> > with the Baseline Requirements issued from the new "StartCom BR SSL
>> ICA”,
>> > for example:
>> >
>> > https://crt.sh/?opt=cablint=8BDFE4A526BFB35C8A417B10F4D0AB
>> > E9E1D60D28A412539D5BC71C19B46FEF21
>> > https://crt.sh/?opt=cablint=124AAD38DAAC6B694D65F45226AB51
>> > 52FC46D229CBC203E0814D175F39977FF3
>> > https://crt.sh/?opt=cablint=9B78C78B32F4AC717B3DEFDABDACC4
>> > FEFA61BFD17782B83F75ADD82241147721
>> > https://crt.sh/?opt=cablint=AAB0B5A08F106639A5C9D720CD37FD
>> > B30E7F337AEBAF9407FD854B5726303F7B
>> > https://crt.sh/?opt=cablint=9DCE6A924CE837328D379CE9B7CDF4
>> > A2BA8A0E8EC01018B9DE736EBC64442361
>> > https://crt.sh/?opt=cablint=62A9A9FDCDC04A043CF2CB1A5EAFE3
>> > 3CF9ED8796245DE4BD5250267ADEFF005A
>> > https://crt.sh/?opt=cablint=6A72FA5DCC253D2EE07921898B9A9B
>> > B263FD1D20FE61B1F52F939C0C1C0DCFEE
>> > https://crt.sh/?opt=cablint=238E2E96665748D2A05BAAEEC8BAE6
>> > AFE7B7EF4B1ADA4908354C855C385ECD81
>> > https://crt.sh/?opt=cablint=C11C00EB0E14EEB30567D749FFD304
>> > 45E0B490D1DCA7B7E082FD1CB0A40A71C0
>> > https://crt.sh/?opt=cablint=4DEF4CFD21A969E8349E4428FDEC73
>> > 767C01DE6127843312511B71029F4E3836
>>
>>
>> It's worth noting that, on the basis of the security audit report full
>> details shared by WoSign, the system that was security audited does not
>> comply with the Baseline Requirements, nor, as designed, can it. The
>> system
>> would need to undergo non-trivial effort to comply with the Baseline
>> Requirements.
>> ___
>> dev-security-policy mailing list
>> dev-security-policy@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security-policy
>>
>
>


Re: WoSign new system passed Cure 53 system security audit

2017-07-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 11, 2017 at 11:16 AM, Jonathan Rudenberg via
dev-security-policy  wrote:

>
> > On Jul 11, 2017, at 06:53, okaphone.elektronika--- via
> dev-security-policy  wrote:
> >
> > On Monday, 10 July 2017 08:55:38 UTC+2, Richard Wang  wrote:
> >>
> >> Please note this email topic is just for releasing the news that WoSign
> new system passed the security audit, just for demonstration that we
> finished item 5:
> >> " 5. Provide auditor[3] attestation that a full security audit of the
> CA’s issuing infrastructure has been successfully completed. "
> >> " [3] The auditor must be an external company, and approved by Mozilla.
> "
> >
> > It also seems a bit strange to report item 5 "successfully completed"
> before we hear anything about the other items. How about starting with item
> 1? What are your plans for fixing the problems?
>
> It’s worth noting that the problems have not stopped yet. There are a
> bunch of certificates issued over the past few months that do not comply
> with the Baseline Requirements issued from the new "StartCom BR SSL ICA”,
> for example:
>
> https://crt.sh/?opt=cablint=8BDFE4A526BFB35C8A417B10F4D0AB
> E9E1D60D28A412539D5BC71C19B46FEF21
> https://crt.sh/?opt=cablint=124AAD38DAAC6B694D65F45226AB51
> 52FC46D229CBC203E0814D175F39977FF3
> https://crt.sh/?opt=cablint=9B78C78B32F4AC717B3DEFDABDACC4
> FEFA61BFD17782B83F75ADD82241147721
> https://crt.sh/?opt=cablint=AAB0B5A08F106639A5C9D720CD37FD
> B30E7F337AEBAF9407FD854B5726303F7B
> https://crt.sh/?opt=cablint=9DCE6A924CE837328D379CE9B7CDF4
> A2BA8A0E8EC01018B9DE736EBC64442361
> https://crt.sh/?opt=cablint=62A9A9FDCDC04A043CF2CB1A5EAFE3
> 3CF9ED8796245DE4BD5250267ADEFF005A
> https://crt.sh/?opt=cablint=6A72FA5DCC253D2EE07921898B9A9B
> B263FD1D20FE61B1F52F939C0C1C0DCFEE
> https://crt.sh/?opt=cablint=238E2E96665748D2A05BAAEEC8BAE6
> AFE7B7EF4B1ADA4908354C855C385ECD81
> https://crt.sh/?opt=cablint=C11C00EB0E14EEB30567D749FFD304
> 45E0B490D1DCA7B7E082FD1CB0A40A71C0
> https://crt.sh/?opt=cablint=4DEF4CFD21A969E8349E4428FDEC73
> 767C01DE6127843312511B71029F4E3836


It's worth noting that, on the basis of the full security audit report
details shared by WoSign, the system that was security audited does not
comply with the Baseline Requirements, nor, as designed, can it. The system
would need to undergo non-trivial effort to comply with the Baseline
Requirements.


Re: SRVNames in name constraints

2017-07-06 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 6, 2017 at 10:48 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> What EKU(s) get used with certs containing SRVName? I confess I don't
> understand this technology as well as I might.
>

Relevant to this group, id-kp-serverAuth (and perhaps id-kp-clientAuth)


Re: Machine- and human-readable format for root store information?

2017-07-06 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 6, 2017 at 10:57 AM, Gervase Markham <g...@mozilla.org> wrote:

> On 05/07/17 18:08, Ryan Sleevi wrote:
> > That is, the difference between, say:
> > "label": "Verisign/RSA Secure Server CA"
> > And
> > CKA_LABEL "Verisign/RSA Secure Server CA"
>
> Not much, but you've picked the clearest part of certdata.txt to compare
> :-)
>

Sure - because you haven't given much of a sense for what human readability
means. That is, whether or not \104\143 is more or less readable than 68:8F
(hex) or aI8= (base64) or NCHQ (base32), as an example.

The presumption here seems to be "format that I'm familiar with", but
that's a fairly subjective read. We already have machine-readability, and
we've established that tool-generation is strongly preferred (both for
correctness and consistency), so human-writability does not seem like it's
an agreed-upon goal. So where does human-readability factor in, and does it
make more sense to derive human-readability from the existing
machine-readability?


>
> > It isn't, because JSON can't.
>
> As Rob notes, you can basically have them in all but name.
>

I don't think that really holds, but I'm surprised to see no one pointing
it out yet.

For example, there is a meaningful difference between

# This is the CA with serial abcd
CKA_LABEL UTF8 "Verisign/RSA Secure Server CA"

# This is the hash 00:ab:cd:ef
CKA_CERT_SHA1_HASH MULTILINE_OCTAL
\104\143\305\061\327\314\301\000\147\224\141\053\266\126\323\277
\202\127\204\157
END

If you wanted to express that in JSON, using Rob's bit, you'd end up with
{
  "label": "VeriSign/RSA Secure Server CA",
  "comment": "This is the CA with serial abcd"
},
{
  "sha1_hash": "\x00\xab\xcd\xef",
  "comment": "This is the hash 00:ab:cd:ef"
}

Except that wouldn't be a valid JSON string (or at least, not all
expressible byte sequences are, as they'd result in invalid unicode
sequences), so you'd have to do a further transformation, such as base64
encoding and decoding (or hex), which means it's once again less
human-maintainable.
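
As a rough sketch of that extra transformation (an assumed schema, not a real
Mozilla format): the MULTILINE_OCTAL bytes would have to be decoded and
re-encoded as base64 before they could live in a JSON string, and decoded
again by every consumer.

```python
# Sketch: round-tripping certdata.txt's MULTILINE_OCTAL byte syntax through
# a JSON-safe base64 encoding.
import base64
import re

octal_block = r"""
\104\143\305\061\327\314\301\000\147\224\141\053\266\126\323\277
\202\127\204\157
"""

def octal_to_bytes(block: str) -> bytes:
    """Parse \\NNN octal escapes into raw bytes."""
    return bytes(int(o, 8) for o in re.findall(r"\\([0-7]{3})", block))

raw = octal_to_bytes(octal_block)                # 20 bytes: a SHA-1 hash
b64 = base64.b64encode(raw).decode("ascii")      # what the JSON would store
assert base64.b64decode(b64) == raw              # lossless, but opaque
print(raw.hex(":"))  # 44:63:c5:31:... -- the familiar fingerprint form
```

Each representation is machine-equivalent; the question in this thread is
which one a human editor can actually read and review.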

I suspect we're at the risk of ratholing here, but the lack of JSON
comments is a well-known limitation that continually negatively affects
those who pursue JSON schemas, so we should not be so quick to brush away
what is frequently a maintenance complaint.


> > Would you see it being as independent, or subservient to Firefox? If you
> > saw it as independent, then you would presumably need to ensure that -
> like
> > today - Firefox-specific needs, like EV or trust restrictions, did not
> > creep into the general code.
>
> I don't think that follows. EV trustworthiness is a property of the root
> store. The root program makes those decisions, and it's entirely
> appropriate that they be encoded in root program releases. We also make
> decisions on "trust restrictions", so I'm not sure why you call that a
> "Firefox-specific need".
>

EV trustworthiness is an aspect of the application code - in this case, a
Web browser with UI surface being exposed. Do you believe EV makes sense
for, say, a utility like cURL or wget? Or for an application like PHP? Does
the EV issuance status of a CA affect something like Thunderbird?

Or consider other stores - like Chrome - in which EV-SSL status is granted
not solely by the presence of policy, but the associated Certificate
Transparency information. One cannot equivalently determine EV status
solely based on a policy status - it's more than that.


> > Of course, it seems like your argument is you want to express the Firefox
> > behaviors either directly in NSS (as some trust restrictions are, via
> code)
> > or via some external, shared metafile, which wouldn't be relevant to
> > non-Firefox consumers.
>
> Perhaps this is the disconnect. Several non-Firefox consumers have said
> they are very interested in an encoding of the root program's partial
> trust decisions.
>

Could you recall where this happened? It doesn't seem from this thread,
beyond Kai's remarks, but perhaps you're evaluating against the previous
threads?

> No, because they could consume whatever copy of the upstream file
> Firefox had imported.
>
> I don't expect "Mozilla's root store's trust view" and "Trusted by
> Firefox" ever to diverge, apart from due to time skew, and perhaps
> occasionally due to unencodeable restrictions.
>

But they already do, regularly. Compare Firefox ESR with Firefox Beta with
Firefox stable, and then compare that with NSS releases (and different OS
distributions of those releases). There is already an inherent divergence.


Re: FW: P-521

2017-07-06 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 6, 2017 at 10:46 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 05/07/17 14:49, Alex Gaynor wrote:
> > Is it really true that additional curves are just additional parameters?
> I
>
> That was my assumption; additional clue on this point would be welcome.


As Alex mentioned - it's generally not the case. While you can implement
with generic parameters, this can create both security and performance
issues, and so the preference within cryptographic libraries is to maintain
optimized versions (optimized for constant time, which is not always easy,
but also optimized for performance).

For NSS, consider the contributions from Intel -
https://bugzilla.mozilla.org/show_bug.cgi?id=1073990 , the performance
analysis in https://bugzilla.mozilla.org/show_bug.cgi?id=1125028 , the
performance optimizations in
https://bugzilla.mozilla.org/show_bug.cgi?id=653236 , and the performance
issues in https://bugzilla.mozilla.org/show_bug.cgi?id=1293936 . In short,
it generally gravitates towards per-platform, per-curve optimizations.

I think it's also worthwhile to consider the performance impact -
https://www.imperialviolet.org/2010/12/21/eccspeed.html . Note where P-521
falls on that graph. While this is 7 years ago, the numbers have not (to my
knowledge) substantially improved in relation to each other.

It's also useful to think of this similar to RSA. The Baseline Requirements
do not set a maximum bound on the RSA modulus size - merely specifying a
minimum of 2048. However, in practice, >= 8192 is not supported, due to
limitations that many platforms impose, due to the computational cost. So
the Web PKI does determine an effective limit, even if NSS supports up to
16K RSA moduli sizes (but imposes 16K as the limit, again, for performance
reasons).

So the Web PKI certainly imposes limits - for performance, security, and
interoperability - so it's not unreasonable to impose this same limit. The
performance gulf, and the added overhead, do not make it significantly
compelling to add support for, and the security boundary between 192-bits
and 256-bits is somewhere in the "heat death of the universe" level
security (see
https://www.imperialviolet.org/2014/05/25/strengthmatching.html )
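
The strength-matching arithmetic behind that last point can be approximated
in a few lines. The constants below are the usual textbook estimates
(Pollard's rho for ECC, the GNFS heuristic for RSA), not any standard's
official table, so treat the outputs as rough comparisons only.

```python
# Sketch: approximate symmetric-equivalent security of ECC curves and RSA
# moduli, illustrating why P-521's extra margin buys little in practice.
import math

def ecc_bits(curve_bits: int) -> int:
    """Generic-attack (Pollard rho) bound: about half the curve size."""
    return curve_bits // 2

def rsa_bits(modulus_bits: int) -> int:
    """GNFS cost heuristic, converted from an e-exponent to bits."""
    ln_n = modulus_bits * math.log(2)
    cost = 1.923 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return round(cost / math.log(2))

for name, bits in [("P-256", ecc_bits(256)), ("P-384", ecc_bits(384)),
                   ("P-521", ecc_bits(521)), ("RSA-2048", rsa_bits(2048))]:
    print(name, "~", bits, "bit security")
```

P-256 already lands at ~128 bits, matching AES-128; the jump from P-384's
~192 bits to P-521's ~260 is the "heat death of the universe" territory
referenced above.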


Re: Machine- and human-readable format for root store information?

2017-07-05 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 5, 2017 at 4:32 AM Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 29/06/17 16:27, Ryan Sleevi wrote:
> > Well, the current certdata.txt is a text file. Do you believe it's
> human-readable, especially sans-comments?
>
> Human readability is, of course, a little bit of a continuum. You can
> open it in a text editor and get some sense of what's going on, but it's
> far from ideal.


Unfortunately, your answers don't really help capture your goals - and thus
make this a very difficult endeavor to satisfy.

You haven't really established on what principles you believe JSON (which
seems to be your preferred format, and which does not support comments) is
more favorable than the current format.

That is, the difference between, say:
"label": "Verisign/RSA Secure Server CA"
And
CKA_LABEL "Verisign/RSA Secure Server CA"

I would argue there isn't a meaningful difference for "human readability",
and it's more a subjective preference. Before we fixate on those, I'm
hoping we should get objective use cases nailed down. That's why I'm trying
to understand how you're evaluating that spectrum. Is it because it's
something you'd like to maintain, because you think it should be "readable"
on a webpage, etc?


> How it is sans-comments is irrelevant, because it has comments. :-)


It isn't, because JSON can't.


> Of course, those changing the root store might need access to the
> compilation tool. But from a Mozilla PoV, that's just Kai normally. And
> if people were used to editing and consuming certdata.txt, they could
> continue to do it that way.


I'm thinking you may have misunderstood? Are you suggesting certdata.txt is
canonical or not? Otherwise, they can't continue doing it that way - they
would have to use whatever format you adopt, and whatever new tools.

>
> Thought experiment for you: if we decided to make the root store its own
> thing with its own repo and its own release schedule, and therefore NSS
> became a downstream consumer of it, where on occasion someone would
> "take a release" by generating and checking in certdata.txt from
> whatever format we decided to use, what problems would that cause?


Would you see it being as independent, or subservient to Firefox? If you
saw it as independent, then you would presumably need to ensure that - like
today - Firefox-specific needs, like EV or trust restrictions, did not
creep into the general code.

Of course, it seems like your argument is you want to express the Firefox
behaviors either directly in NSS (as some trust restrictions are, via code)
or via some external, shared metafile, which wouldn't be relevant to
non-Firefox consumers.

More broadly, that proposal simply adds more work and moving parts, and
arguably undermines your stated goals - because downstream parties like
those identified are not interested in what the "upstream root store" is
doing - they're interested in what Firefox is doing, and to get that, they
would need to consume certdata.txt as well.

I'm fairly certain we're not on the same page as to what problems consumers
are facing in this space, and this may be contributing to the
misunderstanding. If you look at major parties doing stuff in this space -
Cloudflare's CFSSL, SSLLabs, Censys - the goal is generally "trusted by
Firefox," as the goal is debugging and helping users properly configure.
crt.sh is more interested in "trusted by NSS," due to the policy
enforcement.

That is - there are two separate problems - trusted by browser X, and
trusted by root program Y. We should at least recognize these as related,
but separable problems. The need to identify the former is why, for
example, folks scrape the historic releases (or maintain copies, such as of
the Microsoft CTLs).

>
> > So clearly, we get in situations where not all restrictions are
> expressible.
>
> Sure. As I said, I'm not interested in an arbitrarily complex file
> format, so it will always be possible to come up with restrictions we
> can't encode.


I'm still not sure I understand what you believe is arbitrarily complex.
All restrictions can be encoded - it's a question of whether the complexity
is useful. For example, you could encode a BPF-like state machine for
restrictions - which can be fully encoded and processed, but which would
add code. But one could easily make the argument that a BPF-like filter
library is useful and worthwhile for any number of root stores.

It's very easy to get lost in these games, and so perhaps it may be useful
if you could contemplate what your core goals are, for Mozilla. I'm not
sure it would be fair to express hypothetical's for Apple or others, in
their absence, but I hope you can appreciate why this feels like a lot of
"ambiguous make work," as specified.

But whatever format Ap

Re: Machine- and human-readable format for root store information?

2017-07-03 Thread Ryan Sleevi via dev-security-policy
On Mon, Jul 3, 2017 at 11:53 AM, Kai Engert wrote:

> > > I suspect, means anyone could plug
> > > in a modern CI
>
> CI meaning Continuous Integration ?
>

Yes. Gerv's proposal rests on the idea of having a file committed that
explains it in human-readable and machine-readable (simultaneously) form,
and then have a continuous integration build translate that into something
consumable by NSS, and then commit that generated file back into the tree
(as I understand it). For example, the resulting certdata.txt or certdata.c.


> I'd prefer a bit more control. Any conversion script, which translates
> from a
> new high level file format, to a specific technical file format used by our
> software, could have bugs.
>
> If everything is automated, there's more risk that changes might not get
> reviewed, and bugs aren't identified.
>

Agreed


> > I don't believe the state of NSS infrastructure is well-placed to
> support that
> > claim. I'd be curious for Kai's/Red Hat's feedback.
>
> I'm not sure I correctly understand this sentence, but if you're asking if
> we
> have such conversion magic, we don't.
>

That's what I was asking about.


> There's the technical possibility of having commit hooks. But I'm not
> sure I
> like that approach.
>

I agree.


> I would discourage a few things when introducing a JSON file format, like,
> avoid
> unnecessary changes in line wrapping or reordering, to make it easier to
> compare
> different revisions.
>

Right. And JSON can't have comments. So we'd lose substantially in
expressiveness.
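(As an aside, the usual workaround - stripping comment lines before parsing - illustrates the cost. The sketch below handles only whole-line //-style comments and is an illustration of the extra tooling such a convention demands, not a proposal:

```python
import json
import re

# Strip whole-line // comments before handing the text to the JSON
# parser. Naive on purpose: it shows that comments in JSON require a
# preprocessing step and an out-of-band convention.
def strip_comments(text: str) -> str:
    return re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)

doc = """
// Name-constrained per the associated bug (identifier omitted)
{"label": "Example Root CA", "trust": ["server"]}
"""
record = json.loads(strip_comments(doc))
```

The comment's content is, of course, lost to every consumer that parses the stripped form - which is exactly the loss of expressiveness at issue.)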


> > > No, because NSS consumers could choose to continue consuming the
> > > (autogenerated by the CI tool) certdata.txt.
> >
> > The CI tools don't check in artifacts.
>
> What does artifact mean in this context?
>

"Artifact" = generated file run as part of a build process, and then
checked back in.


> > Thought experiment: Why not have certdata.txt generate a CI artifact that
> > interoperates for other consumers to use?
>
> Are you suggesting that we should convert certdata.txt into a file format
> that
> others would prefer to consume?
>
> Yes, that's another option.
>
> But it wouldn't solve the idea to also store the Mozilla EV attributes in
> the
> common place. Firefox developers would have to start converting information
> found inside NSS to Firefox application code.
>

I'm not sure I fully understand your response. The suggestion was that if
there's some 'other format' that provides interoperability to downstream
consumers, it 'could' be a path to take certdata.txt and have a tool that
generates that 'other format' from certdata.txt.

The purpose of this thought experiment was to find what, if any,
limitations exist in certdata.txt. You've highlighted a very apt and
meaningful one, in theory - which is that EV data is a Mozilla Firefox (and
exclusively Firefox) concept, while trust records are an aspect of the root
store, hence, the dual expression between Mozilla Firefox source and NSS
source. If we wanted to make "EV" a portion of NSS (which makes no sense
for, say, Thunderbird), we could certainly express that - but it means
carrying around unneeded and unused attributes for other NSS consumers.


> > > Mozilla's opinions on roots are defined by the sum total of:
> > >
> > > 1) certdata.txt
> > > 2) ExtendedValidation.cpp
> > > 3) The changes listed on
> > > https://wiki.mozilla.org/CA/Additional_Trust_Changes
> >
> > 1 & 2 for sure. I don't believe #3 can or should be, certainly not
> effectively
> > maintained.
>
> I think Mozilla could and should try to. See my suggestion to use invented
> identifiers for describing each category of invented partial distrust.
>

I don't disagree that we can - on a technical level. But I don't agree that
the ontology of invented partial distrust holds, nor is it terribly useful
to expect us to generalize distrust for the various ways in which CAs fail
the community. That said, even when thinking about the concepts, the
fact that the goal is presently woefully underspecified means we cannot
have a good objective discussion about why "Apply the WoSign policy" is
better or worse than a notion of "Distrust certificates after this date" -
or perhaps even a more complex policy, like "Distrust X certificates after
A date, Y certificates after B date, Z certificates after C date, unless
conditions M, N, O are also satisfied"
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Unknown Intermediates

2017-06-29 Thread Ryan Sleevi via dev-security-policy
On Thu, Jun 29, 2017 at 3:56 PM, Bruce via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I'm trying to understand this posting. I think the CAs have an obligation
> to disclose all Intermediate certificates to the CCADB. I don't think that
> the CAs have an obligation to disclose through CT. Am I right?
>

Correct.


Re: Machine- and human-readable format for root store information?

2017-06-29 Thread Ryan Sleevi via dev-security-policy
On Wednesday, June 28, 2017 at 7:39:37 PM UTC-4, Gervase Markham wrote:
> Well, we should ask Kai what methods he uses to maintain it right now,
> and whether he uses a tool.

For the recent name constraints, it was a tool.

> > You can have a JSON file, but that doesn't mean it's human-readable in the 
> > least.
> 
> You mean you can stick it all one one line? Or you can choose opaque key
> and value names? Or something else?

Well, the current certdata.txt is a text file. Do you believe it's 
human-readable, especially sans-comments?

> 
> > The CI tools don't check in artifacts. You're proposing giving some piece 
> > of infrastructure the access to generate and check in files?
> 
> I am led to understand this is a fairly common pattern these days.

Please realize that this makes it impossible to effectively test changes
without running said tool. This is, again, why generating certdata.c from
certdata.txt is part of the build - so that when you change the file, the
change is reflected in the build and you can effectively test.

Moving to a CI system undermines the ability to effectively contribute and test.

That's why "machine-readable" is, in effect, a must-have. Whether or not 
"human-readable" is (and what constitutes human-readable) is the point of 
discussion, but if you check in the machine-readable form, then anyone can 
generate the human-readable form at any time.
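As a sketch of that last point - producing the human-readable view on demand from a checked-in machine-readable form. The schema and field names below are invented for illustration, not a proposed format:

```python
import json

# If the checked-in source of truth is machine-readable, the
# human-readable view can be generated at any time by any consumer.
STORE = """
[
  {"label": "Example Root CA",
   "trust": ["server", "email"],
   "constraints": {"notAfter": "2019-01-01"}}
]
"""

def render(records):
    """Produce a simple human-readable summary of trust records."""
    lines = []
    for r in records:
        trust = ", ".join(r["trust"])
        lines.append(f"{r['label']}: trusted for {trust};"
                     f" constraints: {r.get('constraints') or 'none'}")
    return "\n".join(lines)

summary = render(json.loads(STORE))
```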

> 
> >> If Apple said "we are happy to use the MS format", I guess the next
> >> thing I would do is find Kai or whoever maintains certdata.txt and say
> >> "hey, it's not ideal, but what do you think, for the sake of everyone
> >> using the same thing?".
> > 
> > Thought experiment: Why not have certdata.txt generate a CI artifact that 
> > interoperates for other consumers to use?
> 
> Because certdata.txt's format is not rich enough to support all the data
> we would want to encode in a root store. We could consider extending it,
> but why would we roll our own container format when there exist
> perfectly good ones?

Could you explain how you arrive at that conclusion? That may simply be a 
technical misunderstanding, as certdata.txt's format allows for the expression 
of arbitrary attributes (as recently added with the "Mozilla Root" attribute) 
in an appropriate form.

Which may be why we're at cross-purposes here - the existing certdata.txt is 
already technically capable of expressing the constraints. However, it is a 
complex technical burden to express that in metadata, rather than in code - and 
that is true no matter what format you choose.

If your understanding was based on a misunderstanding that "certdata.txt cannot 
be extended to support arbitrary metadata", then I can easily tell you that's 
not the case. It's a matter of changing NSS to, rather than express something 
simply and cleanly in code (relatively speaking), finding an ontology to 
express the constraint in a machine-readable (but not-code) format, and then 
code to parse that and apply in 100 lines what might take 5 lines in code.

This is the same with authroot.stl - both are quite robust,
arbitrarily-extensible formats. The choice not to extend is not about a
technical limitation, but about an unreasonable return for the cost to implement.

> 
> >> Mozilla's opinions on roots are defined by the sum total of:
> >>
> >> 1) certdata.txt
> >> 2) ExtendedValidation.cpp
> >> 3) The changes listed on
> >> https://wiki.mozilla.org/CA/Additional_Trust_Changes
> > 
> > 1 & 2 for sure. I don't believe #3 can or should be, certainly not 
> > effectively maintained. Certainly, Google cannot and would not be able to 
> > find an acceptable solution on #3, just looking at things like CT, without 
> > introducing otherwise meaningless ontologies such as "Follows 
> > implementation #37 for this root".
> 
> There are seven items on the list in #3. The first one is item 2, above.
> The second is not a root store modification, technically. The third,
> fifth and sixth would be accommodated if the new format had a "notAfter"
> field. The fourth and seventh would be accommodated if the new format
> had a "name constraints" field.
> 
> So putting all of #3, as it currently stands, into a new format seems
> eminently doable. That doesn't mean every restriction we ever think of
> could be covered, but the current ones (which are ones I can see us
> using again in the future) could be.

That takes a very Mozilla-centric view, but that doesn't align with, say, the 
goal of supporting Apple.

For example, Apple has three CAs where only certain, previously disclosed (via 
CT) certificates are trusted - 
https://opensource.apple.com/source/security_certificates/security_certificates-55070.30.7/certificates/allowlist/
 - CNNIC and WoSign. In a machine-readable form, either you put that in a 
unified file, or you come up with an ontology for expressing dependencies that 
stretches well beyond the sane bounds.
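A sketch of that allowlist approach - the point being that what gets encoded is membership, not policy. The hash choice and data below are assumptions for illustration:

```python
import hashlib

# Allowlist check in the spirit of Apple's CNNIC/WoSign handling: only
# certificates whose hashes were previously collected (e.g. via CT
# disclosure) remain trusted under the affected roots.
ALLOWLIST = {
    hashlib.sha256(b"previously-disclosed-cert-der").hexdigest(),
}

def is_allowed(cert_der: bytes) -> bool:
    """Membership test: was this exact certificate disclosed before?"""
    return hashlib.sha256(cert_der).hexdigest() in ALLOWLIST
```

Note that such a file is per-incident and per-root - which is exactly why a generic "ontology of dependencies" is hard to design up front.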

Mozilla's solution to this was, unsurprisingly, with code ( see 

Re: Machine- and human-readable format for root store information?

2017-06-28 Thread Ryan Sleevi via dev-security-policy
On Wednesday, June 28, 2017 at 5:29:19 PM UTC-4, Gervase Markham wrote:
> Well, the fact that we now use Git, I suspect, means anyone could plug
> in a modern CI tool that did "Oh, you changed file X. Let me regenerate
> file Y and check it in alongside". Without really needing anyone's
> permission beyond checkin access.

I don't believe the state of NSS infrastructure is well-placed to support that 
claim. I'd be curious for Kai's/Red Hat's feedback.

> Well, I don't do the actual maintenance of certdata.txt, but I assume
> (perhaps without evidence) that telling whoever does that "hey, you now
> need to use this tool to edit the canonical information store, instead
> of the text editor you have been using" might not go down well. It
> wouldn't if it were me.

It already (effectively) requires a tool to make sure it's done right, AIUI :)

But I think you're still conflating "text" with "human readable", and I'm not 
sure the two are equivalent. That is, "human readable" introduces a 
subjective element that can easily lead to ratholes about whether or not 
something is "readable enough", or coming up with sufficient ontologies so that 
it can "logically map" - just look at XML for the case study in this.

You can have a JSON file, but that doesn't mean it's human-readable in the 
least.

That's why I'm pushing very hard on that.

> No, because NSS consumers could choose to continue consuming the
> (autogenerated by the CI tool) certdata.txt.

The CI tools don't check in artifacts. You're proposing giving some piece of 
infrastructure the access to generate and check in files? I believe Mozilla may 
do that, but NSS does not, and the infrastructure is separately maintained.

> You want me to rank my goals in order of preference? :-)

More that I'd like the goals to be explicit. I'm trying to figure out how 
'much' interoperability is being targeted here :)

> If Apple said "we are happy to use the MS format", I guess the next
> thing I would do is find Kai or whoever maintains certdata.txt and say
> "hey, it's not ideal, but what do you think, for the sake of everyone
> using the same thing?".

Thought experiment: Why not have certdata.txt generate a CI artifact that 
interoperates for other consumers to use?

Which is all still a facet of the original question: Trying to determine what 
your goals are / what the 'necessary' vs 'nice to have' features are :)

> It's not a massive improvement if we are the only group using it. I
> think there is value to Mozilla even if MS and Apple don't get on board,
> because our root store gets more descriptive of reality, but that value
> alone might not be enough to convince someone like the two people who
> have expressed interest thusfar to take the time to work on the spec. I
> don't know.

But why doesn't certdata.txt meet that already, then? It's a useful thought 
experiment to find out what you see the delta as, so that we can understand 
what are and are not acceptable solutions.

> Mozilla's opinions on roots are defined by the sum total of:
> 
> 1) certdata.txt
> 2) ExtendedValidation.cpp
> 3) The changes listed on
> https://wiki.mozilla.org/CA/Additional_Trust_Changes

1 & 2 for sure. I don't believe #3 can or should be, certainly not effectively 
maintained. Certainly, Google cannot and would not be able to find an 
acceptable solution on #3, just looking at things like CT, without introducing 
otherwise meaningless ontologies such as "Follows implementation #37 for this 
root".

(Which, for what it's worth, is what Microsoft does with the authroot.stl, 
effectively)


Re: P-521

2017-06-28 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 27, 2017 at 2:44 PM, Alex Gaynor via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'll take the opposite side: let's disallow it before it's use expands :-)
> P-521 isn't great, and there's really no value in proliferation of crypto
> algorithms, as someone told me: "Ciphersuites aren't pokemon, you shouldn't
> try to catch 'em all". There's no real use cases P-521 enables, and not
> supporting it means one less piece of code to drag around as we move
> towards better curves/signature algorithms like Ed25519 and co.


+1 to this.

P-521 is specified for negotiation because negotiation is just that -
negotiation. It's not mandatory to implement all of those algorithms, and
it's not necessarily desirable to either (e.g., rsa_pkcs1_sha1).

P-521 does not have widespread deployment on the Web PKI, and it does not
meaningfully or substantially improve security against the relevant
attacks, at a computational and interoperability cost that is not justified.


Re: Machine- and human-readable format for root store information?

2017-06-28 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 27, 2017 at 3:52 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 27/06/17 12:17, Ryan Sleevi wrote:
> > This was something the NSS developers explicitly moved away from with
> > respect to certdata.c
>
> It would be interesting to know the history of that; but we are in a
> different place now in terms of the SCM system we use and the CI tools
> available, versus what we were a few years ago.
>

Not really, at least from the NSS perspective. There's been the CVS ->
Mercurial -> Git(ish) transitions, but otherwise, the tools and
dependencies have largely remained the same.


> If you were able to elaborate on the relevant history here, as you
> obviously know it, that would be helpful.
>

Well, the obvious issue remains cross-compiling and what dependencies are
acceptable. So the minimal set was - in order to maintain compatibility
with NSS consumers like Red Hat and Oracle - the set of tools already
integrated into the build system. Even the transition to GTests has not
been without controversy, and not all GTests are run by all NSS consumers,
due to the dependency on (modern) C++.

I highlight this because the "Mozilla build environment" is not necessarily
aligned with the "NSS build environment".


> >> That's one option. I would prefer something which is both human and
> >> computer-readable, as certdata.txt (just about) is.
> >
> > Why? Opinions without justification aren't as useful ;)
>
> :-) Because human-readable only is clearly silly, and computer-readable
> only is harder to maintain (requires tools other than a text editor). I
> want it to be easily maintainable, easily browseable and also
> unambiguously consumable by tools.
>

Put differently: If a human-readable version could be generated from a
machine-readable file, is the objective met or not?

You've put a very particular constraint here (both human and machine
readable), which is very much a subjective question (as to whether it's
human readable), and which arguably can be produced from anything that
meets a machine readable format.

For example, you highlight that computer-readable only requires other tools
to maintain, but that's not intrinsically true (you can have
machine-readable text files, for example), and one in which you're just
shifting the tooling concern from "NSS maintainers" to "NSS consumers"
(which is worth calling out here; it's increasing the scale and scope of
impact).

I can understand the preference, but I'm trying to suss out what the actual
hard requirements and goals are, since as exciting as the prospect is, not
only does it require work (to define said schema), but it requires work to
integrate that schema, and wanting to understand what the long-term payout
is.


> > Apple suggested they'd like to make this data available; my hope would
> >> be that if a format could be defined, they might be persuaded to adopt
> it.
> >
> > And if they can't, is that justified?
> >
> > That is, it sounds like you're less concerned about cross-vendor
> > interoperability, and only concerned with Apple interoperability. Is that
> > correct?
>
> I'm after interoperability with whoever wants to interoperate.


That doesn't really helpfully answer the question, but apologies for not
making it explicit.

You've proposed solutions and goals that appear to align with "We want
Apple to use our format", and are explicitly rejecting "We will
interoperate with Microsoft using their format", while presenting it as "We
want interoperability"

1) Is it correct that you value Apple interoperability (because they've
privately expressed some interest, or which you hope to convince them to,
given their public statements)
2) Is it correct that you do not value Microsoft interoperability (because
you're explicitly defining criteria that would reject that interoperability)
3) If neither party arrives at an interoperable solution, are your goals
met and is the work justified?


> The other
> benefits I see for Mozilla are being able to better (if not perfectly)
> express our root store's opinions on our level of trust for roots in a
> single computer-readable file, rather than the combination of a text
> file, a C++ file and a wiki page.
>

Well, regardless, you need the C file, unless you're also supposing that
NSS directly consume the computer-readable file (adding both performance
and security implications).

The wiki page you mention is already automatically generated (by virtue of
Salesforce), and you're certainly not eliminating that burden of
maintenance, so it seems like you still have three files - the 'source in
tree', the generated code, and the Salesforce-driven output. Can you
explain to me the benefit there?


> Given that t

Re: Machine- and human-readable format for root store information?

2017-06-27 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 27, 2017 at 1:49 PM Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 27/06/17 10:35, Ryan Sleevi wrote:
> > If that is the goal, it may be useful to know what the proposed
> limitations
> > / dependencies are. For example, the translation of the txt to the c file
> > generated non-trivial concern among the NSS development team to support.
>
> I propose it be part of the checkin process (using a CI tool or similar)
> rather than part of the build process. Therefore, there would be no new
> build-time dependencies for NSS developers.


This was something the NSS developers explicitly moved away from with
respect to certdata.c

> For example, one possible suggestion is to adopt a scheme similar to, or
> > identical to, Microsoft's authroot.stl, which is PKCS#7, with attributes
> > for indicating age and expiration, and the ability to extend with
> > vendor-specific attributes as needed. One perspective would be to say
> that
> > Mozilla should just use this work.
>
> That's one option. I would prefer something which is both human and
> computer-readable, as certdata.txt (just about) is.



Why? Opinions without justification aren't as useful ;)

(To be fair, this is broadly about articulating and agreeing on use cases
before too much effort is spent)

Apple suggested they'd like to make this data available; my hope would
> be that if a format could be defined, they might be persuaded to adopt it.



And if they can't, is that justified?

That is, it sounds like you're less concerned about cross-vendor
interoperability, and only concerned with Apple interoperability. Is that
correct?

> Further, one could
> > reasonably argue that an authroot.stl approach would trouble Apple, much
> as
> > other non-SDO driven efforts have, due to IP concerns in the space.
> > Presumably, such collaboration would need to occur somewhere with
> > appropriate IP protections.
>
> Like, really? Developing a set of JSON name-value pairs to encode some
> fairly simple structured data has potential IP issues? What kind of mad
> world do we live in?


It doesn't matter the format - it matters how and where it was developed.


Re: Machine- and human-readable format for root store information?

2017-06-27 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 27, 2017 at 9:58 AM Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 27/06/17 04:16, Rob Stradling wrote:
> > If the aim is to replace certdata.txt, authroot.stl, etc, with this new
> > format, then I'm more interested.
>
> I can't speak for other providers, but if such a spec existed, I would
> be pushing for Mozilla to maintain our root store in that format, and
> auto-generate certdata.txt (and perhaps ExtendedValidation.cpp) on
> checkin for legacy uses.
>


If that is the goal, it may be useful to know what the proposed limitations
/ dependencies are. For example, the translation of the txt to the c file
generated non-trivial concern among the NSS development team to support.

For example, one possible suggestion is to adopt a scheme similar to, or
identical to, Microsoft's authroot.stl, which is PKCS#7, with attributes
for indicating age and expiration, and the ability to extend with
vendor-specific attributes as needed. One perspective would be to say that
Mozilla should just use this work.

However, the NSS developer would rightfully point out the complexity
involved in this - such as what language or tools should be used to
translate this form into the native code. Perl or Python (part of MozBuild)
may be acceptable to the Mozilla developer, but challenging for the NSS
developer. A native tool integrated into the build system (as signtool is
for updating the chk tool) presents a whole host of challenges for
cross-compilers.

Yet if the goal is cross-vendor compatibility, one can argue that is the
best approach, as it changes the number of vendors implementing it to two,
from the present one, and thus achieves that goal. As you introduce the
concept of Apple, which has historically been a non-participant here, it
becomes hard to design a system acceptable to them. Further, one could
reasonably argue that an authroot.stl approach would trouble Apple, much as
other non-SDO driven efforts have, due to IP concerns in the space.
Presumably, such collaboration would need to occur somewhere with
appropriate IP protections.

These criticisms are not meant to suggest I disagree with your goal, merely
that it seems there would be a number of challenges in achieving your goal
that discussion on m.d.s.p. would not resolve. The way to address these
challenges seems to involve getting firm commitments and collaboration with
other vendors (since that is your primary goal), as well as to explore the
constraints and limits of the NSS (and related) build systems, since the
combination of those two factors will determine whether this is just
another complex transition (as changing certdata.c to be generated and not
checked in was) with limited applicability.


> Gerv


Re: Machine- and human-readable format for root store information?

2017-06-26 Thread Ryan Sleevi via dev-security-policy
On Mon, Jun 26, 2017 at 9:50 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> A few root store operators at the recent CAB Forum meeting informally
> discussed the idea of a common format for root store information, and
> that this would be a good idea. More and more innovative services find
> it useful to download and consume trust store data from multiple
> parties, and at the moment there are various hacky solutions and
> conversion scripts in use.
>

Gerv,

Do you anticipate this being used to build trust decisions in other
products, or simply inform what CAs are trusted (roughly)?

My understanding from the discussions is that this is targeted at the
latter - that is, informative, and not to be used for trust decision
capability - rather than being a full expression of the policies and
capabilities of the root store.

The reason I raise this is that you quickly get into the problem of
inventing a domain-specific language (or vendor-extensible, aka
'non-format') if you're trying to express what the root store does or what
constraints it applies. And that seems a significant amount of work, for
what is an unclear consumption / use case.

I'm hoping you can clarify with the concrete intended users you see Mozilla
wanting to support, and if you could share what the feedback these other
store providers offered.

FWIW, Microsoft's (non-JSON, non-XML) machine readable format is
http://unmitigatedrisk.com/?p=259

