Re: Misissued/Suspicious Symantec Certificates

2017-02-17 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Two more questions to add to the list already pending:

In [1], in response to question 5, Symantec indicated that Certisign was a
WebTrust audited partner RA, with [2] provided as evidence to this fact.
While we discussed the concerns with respect to the audit letter,
specifically in [3], questions 3 - 6, and while Symantec noted that it
would cease to accept future EY Brazil audits, I have confirmed with CPA
Canada that during the 2016 and 2017 periods, EY Brazil was not a
licensed WebTrust practitioner, as indicated at [4].

Given that EY Brazil was not a licensed WebTrust auditor, it appears that
Symantec failed to uphold Section 8.2 of the Baseline Requirements, v1.4.1
[5], namely, that "(For audits conducted in accordance with the WebTrust
standard) licensed by WebTrust", which is a requirement clearly articulated
in Section 8.4 of the Baseline Requirements, namely, that "If the CA is not
using one of the above procedures and the Delegated Third Party is not an
Enterprise RA, then the CA SHALL obtain an audit report, issued under the
auditing standards that underlie the accepted audit schemes found in
Section 8.1, ..."

1) Was Symantec's compliance team involved in the review of Certisign's
audit?
2) Does Symantec agree with the conclusion that, on the basis of this
evidence, Symantec failed to uphold the Baseline Requirements, independent
of any action by a Delegated Third Party?

[1] https://bug1334377.bmoattachments.org/attachment.cgi?id=8831933
[2] https://bug1334377.bmoattachments.org/attachment.cgi?id=8831929
[3] https://bug1334377.bmoattachments.org/attachment.cgi?id=8836487
[4] http://www.webtrust.org/licensed-webtrust-practitioners-international/item64419.aspx
[5] https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.1.pdf
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Misissued/Suspicious Symantec Certificates

2017-02-17 Thread Ryan Sleevi via dev-security-policy
On Fri, Feb 17, 2017 at 5:17 PM, urijah--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, February 17, 2017 at 7:50:31 PM UTC-5, uri...@gmail.com wrote:
> > On Friday, February 17, 2017 at 7:23:54 PM UTC-5, Ryan Sleevi wrote:
> > > I have confirmed with CPA
> > > Canada that during the 2016 and 2017 periods, EY Brazil was not a
> > > licensed WebTrust practitioner, as indicated at [4].
> > >
> > > [4]
> > > http://www.webtrust.org/licensed-webtrust-practitioners-international/item64419.aspx
> >
> >
> > The footnote at the above makes that a little hard to understand--
> >
> > "EY refers to a member firm of Ernst & Young Global Limited.  Through a
> license with Ernst & Young Global Limited all EY members are licensed to
> provide WebTrust for Certification Authorities services."
>

Thanks for highlighting this. Indeed, while confirming the list was up to
date, I had missed the footnote.


> Additionally "Ernst Young Brazil" was listed as late as March 20, 2016
> apparently.
>
> https://web-beta.archive.org/web/20160320161225/http://www.webtrust.org/licensed-webtrust-practitions-international/item64419.aspx
>
>
The audit was dated 2017/01/24, so the historic status would be irrelevant.


Re: SHA-1 collision

2017-02-23 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 23, 2017 at 5:16 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> For example, in a certificate request, while the attacker can "choose"
> such a bunch of bits in the public key, the value also has to be a valid
> public key for which the attacker can generate at least one digital
> signature (the one on the CSR).


I do not believe this is correct. Can you please point me to the Baseline
Requirements' requirement that the Applicant demonstrate proof of
possession of a private key through a CSR?
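For readers unfamiliar with the mechanism under discussion: when a CA does choose to rely on a PKCS#10 CSR, proof of possession comes from the requester self-signing the request body with the private key matching the enclosed public key, which the CA then verifies with that same public key. A toy sketch of that check, using textbook RSA with deliberately tiny, insecure parameters (real CSRs use proper padding and real key sizes; this only illustrates the sign-then-verify shape):

```python
import hashlib

# Toy textbook-RSA parameters -- insecure, for illustration only.
p, q, e = 1009, 1013, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign(request_body: bytes) -> int:
    """Self-sign the request with the private key (proof of possession)."""
    h = int.from_bytes(hashlib.sha256(request_body).digest(), "big") % n
    return pow(h, d, n)

def verify(request_body: bytes, sig: int) -> bool:
    """What the CA checks: the signature verifies under the enclosed public key."""
    h = int.from_bytes(hashlib.sha256(request_body).digest(), "big") % n
    return pow(sig, e, n) == h
```

Only the holder of `d` can produce a `sig` that verifies, which is all the CSR self-signature demonstrates; as noted above, the Baseline Requirements do not actually mandate this check.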



> For RSA certificates, this might be done by requesting a longer than 2048
> bit public key found by searching for primes whose most significant 1024
> bits multiply to form the attack value (such prime searching is a standard
> part of legitimate key generation).  The length of the request DN would
> then need to be adjusted to align the public key just right for the public
> key to start at a 64 byte boundary after adding the stuff the victim CA
> adds.  And after that, the actual 128 byte block would need the very costly
> calculation for a prefix corresponding to the combination of the chosen
> request parameters and whatever the CA is predicted to add, including any
> random serial number bits and whatever "predictable" serial number and
> timestamp will be used after the multi-month calculation period.
>

What you describe is not wrong, but it's also not right either. Perhaps
it'd be easier to simply point to the original, more accurate, and more
comprehensive description of the MD5 Rogue CA, which better addresses
where the weaknesses are in the issuance process -
https://www.win.tue.nl/hashclash/rogue-ca/
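One mitigation the Rogue CA work motivated is unpredictable serial numbers: the Baseline Requirements now require at least 64 bits of CSPRNG output in the serial, precisely so an attacker cannot predict the CA-controlled bytes that a chosen-prefix collision must target. A minimal sketch (the fixed-length framing below is an illustrative choice, not something the BRs mandate):

```python
import secrets

def new_serial(entropy_bits: int = 64) -> int:
    """Serial carrying `entropy_bits` of CSPRNG entropy, with a fixed top
    bit so the value is always positive and encodes at a fixed length."""
    return (1 << entropy_bits) | secrets.randbits(entropy_bits)
```

With the default parameters every serial is a positive 65-bit integer whose low 64 bits are unpredictable to the requester.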


> As a further countermeasure, I would suggest that CAs operating
> continued SHA-1 services (such as Windows XP compatible code signing
> certificate issuance and signature timestamping) should run a
> variation of the "sha1collisiondetection" check released Tuesday by the
> CWI/Google team, and simply refuse requests that this check flags as
> suspicious.


Checking for SHA-1 collision detection has been available from Marc et
al. since 2012. There have been several improvements over the years
since, but that basic piece of advice is one the community has already
been sharing for five years now.


Re: Misissued/Suspicious Symantec Certificates

2017-02-24 Thread Ryan Sleevi via dev-security-policy
On Wed, Feb 22, 2017 at 8:32 PM, Ryan Sleevi  wrote:

> Hi Steve,
>
> Thanks for your continued attention to this matter. Your responses raise
> many new and important questions, which give serious cause to question
> whether the proposed remediations are sufficient. To keep this short, and
> thereby allow Symantec a more rapid response:
>
> 1) Please provide the CP, CPS, and Audit Letter(s) used for each RA
> partner since the acquisition by Symantec of the VeriSign Trust Services
> business in 2010.
>

Hi Steve,

Have you had the opportunity to review and complete this? This is hopefully
a simple task for your compliance team, given the critical necessity of
maintaining such records, so I'm hoping that you can post within the next
business day.

Regards,
Ryan


Re: Let's Encrypt appears to issue a certificate for a domain that doesn't exist

2017-02-22 Thread Ryan Sleevi via dev-security-policy
There is no definition or requirement for what a high risk domain is.
That's the point/problem.

WoSign may determine "apple", "google", "microsoft", and "github" as High
Risk.
Amazon may determine certificates issued on the first of the month are more
likely to be High Risk (because it may be that the 1st of the month is the
most lucrative time for credit card scammers to use their ill-gotten gains
to produce dangerous domains)

On Wed, Feb 22, 2017 at 7:55 PM, Richard Wang  wrote:

> I don't agree this.
> If "apple", "google", "Microsoft" is not a high risk domain, then I don’t
> know which domain is high risk domain, maybe only "github".
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: Peter Bowen [mailto:pzbo...@gmail.com]
> Sent: Thursday, February 23, 2017 11:53 AM
> To: Richard Wang 
> Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org; Tony
> Zhaocheng Tan ; Gervase Markham 
> Subject: Re: Let's Encrypt appears to issue a certificate for a domain that
> doesn't exist
>
> On Wed, Feb 22, 2017 at 7:35 PM, Richard Wang via dev-security-policy
>  wrote:
> > As I understand, the BR 4.2.1 required this:
> >
> > “The CA SHALL develop, maintain, and implement documented procedures that
> > identify and require additional verification activity for High Risk
> > Certificate Requests prior to the Certificate’s approval, as reasonably
> > necessary to ensure that such requests are properly verified under these
> > Requirements.”
> >
> > Please clarify this request, thanks.
>
> Richard,
>
> That sentence does not say that domain names including "apple", "google",
> or
> any other string are High Risk Certificate Requests
> (HRCR).   I could define HRCR as being those that contain domain names
> that contain mixed script characters as defined in UTS #39 section 5.1.
> "apple-id-2.com" is not mixed script so it is not a HRCR based on this
> definition.
>
> Thanks,
> Peter
>


Re: Misissued/Suspicious Symantec Certificates

2017-02-22 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Thanks for your continued attention to this matter. Your responses raise
many new and important questions, which give serious cause to question
whether the proposed remediations are sufficient. To keep this short, and
thereby allow Symantec a more rapid response:

1) Please provide the CP, CPS, and Audit Letter(s) used for each RA partner
since the acquisition by Symantec of the VeriSign Trust Services business
in 2010.



On Fri, Feb 17, 2017 at 8:32 PM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Our third response to questions, including these two below, is posted at
> Bugzilla, and directly at
> https://bug1334377.bmoattachments.org/attachment.cgi?id=8838825.
>
>
>
>
>
> From: Ryan Sleevi [mailto:r...@sleevi.com]
> Sent: Friday, February 17, 2017 6:54 PM
> To: Ryan Sleevi 
> Cc: Gervase Markham ; mozilla-dev-security-policy@
> lists.mozilla.org; Steve Medin 
> Subject: Re: Misissued/Suspicious Symantec Certificates
>
>
>
> Hi Steve,
>
>
>
> Two more questions to add to the list already pending:
>
>
>
> In [1], in response to question 5, Symantec indicated that Certisign was a
> WebTrust audited partner RA, with [2] provided as evidence to this fact.
> While we discussed the concerns with respect to the audit letter,
> specifically in [3], questions 3 - 6, and while Symantec noted that it
> would cease to accept future EY Brazil audits, I have confirmed with CPA
> Canada that during the 2016 and 2017 periods, EY Brazil was not a
> licensed WebTrust practitioner, as indicated at [4].
>
>
>
> Given that EY Brazil was not a licensed WebTrust auditor, it appears that
> Symantec failed to uphold Section 8.2 of the Baseline Requirements, v1.4.1
> [5], namely, that "(For audits conducted in accordance with the WebTrust
> standard) licensed by WebTrust", which is a requirement clearly articulated
> in Section 8.4 of the Baseline Requirements, namely, that "If the CA is not
> using one of the above procedures and the Delegated Third Party is not an
> Enterprise RA, then the CA SHALL obtain an audit report, issued under the
> auditing standards that underlie the accepted audit schemes found in
> Section 8.1, ..."
>
>
>
> 1) Was Symantec's compliance team involved in the review of Certisign's
> audit?
>
> 2) Does Symantec agree with the conclusion that, on the basis of this
> evidence, Symantec failed to uphold the Baseline Requirements, independent
> of any action by a Delegated Third Party?
>
>
>
> [1] https://bug1334377.bmoattachments.org/attachment.cgi?id=8831933
>
> [2] https://bug1334377.bmoattachments.org/attachment.cgi?id=8831929
>
> [3] https://bug1334377.bmoattachments.org/attachment.cgi?id=8836487
>
> [4] http://www.webtrust.org/licensed-webtrust-practitioners-international/item64419.aspx
>
> [5] https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.1.pdf
>

Re: Let's Encrypt appears to issue a certificate for a domain that doesn't exist

2017-02-22 Thread Ryan Sleevi via dev-security-policy
Hi Richard,

There are no policies in the Baseline Requirements or Mozilla Requirements
that normalize or define a high risk domain, which I believe your suggestion
presupposes.

Perhaps you (or Qihoo 360, as the voting member of the Forum of the
Qihoo/WoSign/StartCom collection) would consider proposing a Ballot to the
Baseline Requirements to address this. Alternatively, perhaps you would
have a concrete suggestion for Mozilla Root CA Inclusion Policy 2.5 that
might be able to address this in a consistent and auditable way and in a
manner consistent with Mozilla's policy goals regarding misissuance.

This is https://github.com/mozilla/pkipolicy/issues/1 for what it's worth,
recently resolved in Policy 2.4

On Wed, Feb 22, 2017 at 5:08 PM, Richard Wang via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think "apple-id-2.com" is a high risk domain that must be blocked to
> issue DV SSL to those domains.
>
> Here is the list of some high risk domains related to Microsoft and Google
> that Let's Encrypt issued DV SSL certificates to those domains:
> https://crt.sh/?id=77034583  for microsoftonline.us.com, a fake Office
> 365 login site
> https://crt.sh/?id=71789336  for mail.google-androids.ru
> https://crt.sh/?id=82075006  for marketgoogle.xyz
> https://crt.sh/?id=65208905  for google.ligboy.org
>
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-bounces+richard=
> wosign@lists.mozilla.org] On Behalf Of Gervase Markham via
> dev-security-policy
> Sent: Thursday, February 23, 2017 8:30 AM
> To: Tony Zhaocheng Tan ; mozilla-dev-security-policy@
> lists.mozilla.org
> Subject: Re: Let's Encrypt appears to issue a certificate for a domain
> that doesn't exist
>
> On 22/02/17 14:42, Tony Zhaocheng Tan wrote:
> > On 2017-01-03, Let's Encrypt issued a certificate for apple-id-2.com.
> > However, until today, the domain apple-id-2.com has apparently never
> > been registered. How was the certificate issued?
>
> On Hacker News, Josh Aas writes:
>
> "Head of Let's Encrypt here. Our team is looking into this and so far we
> don't see any evidence of mis-issuance in our logs. It looks like the
> domain in question, 'apple-id-2.com', was registered and DNS resolved for
> it successfully at time of issuance. Here is the valid authorization record
> including the resolved IP addresses for 'apple-id-2.com':
>
> https://acme-v01.api.letsencrypt.org/acme/authz/uZGv2KXUJ6Hl...
>
> We can't be sure why the reporter was unable to find a WHOIS record, we
> can only confirm that validation properly succeeded at time of issuance.
>
> Update: Squarespace has confirmed that they did register the domain and
> then released it after getting a certificate from us."
>
> There is currently an entry in WHOIS, because some well-meaning but
> unhelpful person registered it today. I assume that if a domain is
> registered and then released, and then re-registered, the "Creation"
> date is of the re-registration, not the first ever registration.
>
> So unless someone can show it was unregistered at the time of issuance, I
> don't see an issue here.
>
> Gerv


Re: Let's Encrypt appears to issue a certificate for a domain that doesn't exist

2017-02-22 Thread Ryan Sleevi via dev-security-policy
Hi Richard,

My point was that the policy requirement simply states that there needs to be
a procedure, but does not establish any normative requirements. For example,
a CA could develop, maintain, and implement procedures which state that
any certificate that is qualified as High Risk requires Gerv Markham's
personal seal of approval - and could define the way to do so - but they
could also say that "Only certificates requested by Gerv Markham are
considered High Risk."

Alternatively, a CA could deem that all High Risk certificates will require
a second DNS resolution after 30 seconds, to make sure that the first was
not forged. And that can be developed, maintained, and implemented - but
again, doesn't do what you want.

Your suggestion - which was worded as specific domains, but if I might be
as bold as to suggest what you meant to say, likely meant substrings in
domains - implies that there is a certain common requirement either on how
to handle such High Risk requests (block/manually approve/require Gerv's
personal seal of approval imbued upon a wax maintained at a 78 degree
centigrade temperature for no less than 30 minutes prior to imbuing) or
how to define what is high risk (e.g. substring, CAA, "people we don't like
because they embarrassed us")

Because the BRs (and Mozilla policy) neither specify what _is_ High Risk nor
what is _acceptable_ when a High Risk request is processed, it does not
make any sense to suggest particular domains (or substrings) be prohibited
without tackling those two issues first. This is why I suggested that the
solution you've proposed, under the current policies in play, has zero
effect. If you believe your solution is right/appropriate, then the first
step would be to change the policies.
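To make the gap concrete, here is a sketch of the kind of substring screen Richard's suggestion implies. Both the brand list and the outcome ("block") are assumptions: as argued above, the BRs mandate neither a definition of High Risk nor any particular handling.

```python
# Hypothetical brand substrings -- an assumed policy choice, not a BR rule.
HIGH_RISK_SUBSTRINGS = {"google", "apple", "microsoft", "github"}

def is_high_risk(domain: str) -> bool:
    """Flag a request if any DNS label contains a watched brand substring."""
    labels = domain.lower().split(".")
    return any(s in label for s in HIGH_RISK_SUBSTRINGS for label in labels)

# Catches "apple-id-2.com", but would also catch legitimate names such as
# "applegate-farm.com" -- one reason the definitional question is non-trivial.
```

A CA could implement exactly this and be compliant; a CA could implement nothing like it and also be compliant, which is the point.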

On Wed, Feb 22, 2017 at 7:35 PM, Richard Wang  wrote:

> Hi Ryan,
>
>
>
> As I understand, the BR 4.2.1 required this:
>
> “The CA SHALL develop, maintain, and implement documented procedures that
> identify and require additional verification activity for High Risk
> Certificate Requests prior to the Certificate’s approval, as reasonably
> necessary to ensure that such requests are properly verified under these
> Requirements.”
>
>
>
> Please clarify this request, thanks.
>
>
>
>
>
> Best Regards,
>
>
>
> Richard
>
>
>
> *From:* Ryan Sleevi [mailto:r...@sleevi.com]
> *Sent:* Thursday, February 23, 2017 11:21 AM
> *To:* Richard Wang 
> *Cc:* Gervase Markham ; Tony Zhaocheng Tan <
> t...@tonytan.io>; mozilla-dev-security-pol...@lists.mozilla.org
>
> *Subject:* Re: Let's Encrypt appears to issue a certificate for a domain
> that doesn't exist
>
>
>
> Hi Richard,
>
>
>
> There are no policies in the Baseline Requirements or Mozilla Requirements
> that normalize or define a high risk domain, which I believe your suggestion
> presupposes.
>
>
>
> Perhaps you (or Qihoo 360, as the voting member of the Forum of the
> Qihoo/WoSign/StartCom collection) would consider proposing a Ballot to the
> Baseline Requirements to address this. Alternatively, perhaps you would
> have a concrete suggestion for Mozilla Root CA Inclusion Policy 2.5 that
> might be able to address this in a consistent and auditable way and in a
> manner consistent with Mozilla's policy goals regarding misissuance.
>
>
>
> This is https://github.com/mozilla/pkipolicy/issues/1 for what it's
> worth, recently resolved in Policy 2.4
>
>
>
> On Wed, Feb 22, 2017 at 5:08 PM, Richard Wang via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> I think "apple-id-2.com" is a high risk domain that must be blocked to
> issue DV SSL to those domains.
>
> Here is the list of some high risk domains related to Microsoft and Google
> that Let's Encrypt issued DV SSL certificates to those domains:
> https://crt.sh/?id=77034583  for microsoftonline.us.com, a fake Office
> 365 login site
> https://crt.sh/?id=71789336  for mail.google-androids.ru
> https://crt.sh/?id=82075006  for marketgoogle.xyz
> https://crt.sh/?id=65208905  for google.ligboy.org
>
>
> Best Regards,
>
> Richard
>
>
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-bounces+richard=
> wosign@lists.mozilla.org] On Behalf Of Gervase Markham via
> dev-security-policy
> Sent: Thursday, February 23, 2017 8:30 AM
> To: Tony Zhaocheng Tan ; mozilla-dev-security-policy@
> lists.mozilla.org
> Subject: Re: Let's Encrypt appears to issue a certificate for a domain
> that doesn't exist
>
> On 22/02/17 14:42, Tony Zhaocheng Tan wrote:
> > On 2017-01-03, Let's Encrypt issued a certificate for apple-id-2.com.
> > However, until today, the domain apple-id-2.com has apparently never
> > been registered. How was the certificate issued?
>
> On Hacker News, Josh Aas writes:
>
> "Head of Let's Encrypt here. Our team is looking into this and so far we
> don't see any evidence of mis-issuance in our logs. It looks like the
> domain in question, 'apple-id-2.com', was 

Re: Misissued/Suspicious Symantec Certificates

2017-02-22 Thread Ryan Sleevi via dev-security-policy
On Wed, Feb 22, 2017 at 8:36 PM, Jeremy Rowley 
wrote:

> Webtrust doesn't have audit criteria for RAs so the audit request may
> produce interesting results. Or are you asking for the audit statement
> covering the root that the RA used to issue from? That should all be public
> in the Mozilla database at this point.


Hi Jeremy,

I believe the previous questions already addressed this, but perhaps I've
misunderstood your concern.

"Webtrust doesn't have audit criteria for RAs so the audit request may
produce interesting results."

Quoting the Baseline Requirements, v1.4.2 [1], Section 8.4:
"If the CA is not using one of the above procedures and the Delegated Third
Party is not an Enterprise RA, then the CA SHALL obtain an audit report,
issued under the auditing standards that underlie the accepted audit schemes
found in Section 8.1, that provides an opinion whether the Delegated Third
Party’s performance complies with either the Delegated Third Party’s
practice statement or the CA’s Certificate Policy and/or Certification
Practice Statement. If the opinion is that the Delegated Third Party does
not comply, then the CA SHALL not allow the Delegated Third Party to
continue performing delegated functions."

Note that Symantec has already provided this data for the four RA partners
involved for the 2015/2016 (varies) period, at [2]. Specifically, see the
response to Question 5 at [3].

"Or are you asking for the audit statement covering the root that the RA
used to issue from? That should all be public in the Mozilla database at
this point."

Again, referencing Question 5 at [3], and the overall topic of the thread,
no, I am not asking for the audit statement covering the root that the RA
used to issue from. I'm asking for the audit report, issued under the
auditing standards that underlie the accepted audit schemes found in
Section 8.1, that provides an opinion whether the Delegated Third Party's
performance complies with either the Delegated Third Party's practice
statement or the CA's Certificate Policy and/or Certification Practice
Statement.

[1] https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.2.pdf
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1334377
[3] https://bug1334377.bmoattachments.org/attachment.cgi?id=8831933


Re: Intermediates Supporting Many EE Certs

2017-02-14 Thread Ryan Sleevi via dev-security-policy
On Tue, Feb 14, 2017 at 5:47 AM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> -  The caching I’m talking about is not header directives, I mean
> how CAPI and NSS retain discovered path for the life of the intermediate.
> One fetch, per person, per CA, for the life of the CA certificate.
>

Right, which has problematic privacy issues, and is otherwise not advisable
- certainly not advisable in a world of technically constrained sub-CAs (in
which the subscriber can use such caching as a supercookie). So if a UA
doesn't do such 'permacache' and instead respects the HTTP cache, you get
those issues.

(Also, NSS doesn't do that behaviour by default; that was a Firefox-ism)


> -  Ever since Vista, CAPI’s root store has been pulled over a wire
> upon discovery. Only kernel mode driver code signing roots are shipped.
>

No, this isn't accurate.


> -  Once the mass market UAs enable dynamic path discovery as an
> option, server admins can opt in based on analytics.
>

Not really. Again, you're largely ignoring the ecosystem issues, so perhaps
this is where the tennis ball remark comes into play. There are effectively
two TLS communities that matter - the browser community, and the
non-browser community. Mozillian and curl maintainer Daniel Stenberg pretty
accurately captures this in
https://daniel.haxx.se/blog/2017/01/10/lesser-https-for-non-browsers/

> -  PKCS#7 chains are indeed not a requirement, but see point 1.
> It’s probably no coincidence that IIS supports it given awareness of the
> demands placed on enterprise IT admins.
>

My point was that PKCS#7 is an abomination of a format (in the general
sense), but to the specific technical choice, is a poor technical choice
because the format lacks any structure of expressing order/relationship. A
server supporting PKCS#7 needs not just support PKCS#7, but the complexity
of chain building, in order to reorder the unstructured PKCS#7. And if the
server supports chain building, then it could be argued just as well, that
the server supports AIA. Indeed, if you're taking an ecosystem approach,
the set of clouds to argue at is arguably the TLS server market improving
their support to match IIS's (which, I agree, is quite good). That includes
basic things like OCSP stapling (e.g.
https://gist.github.com/sleevi/5efe9ef98961ecfb4da8 ) and potentially
support for AIA fetching, as you mention. Same effect, but instead of
offloading the issues to the clients, you centralize at the server. But
even if you set aside PKCS#7 as the technical delivery method and set aside
chain building support, you can accomplish the same goal, easier, by simply
utilizing a structured PEM-encoded file.

My point here is that you're advocating a specific technology here that's
regrettably poorly suited for the job. You're not wrong - that is, you can
deliver PKCS#7 certs - but you're not right either that it represents the
low-hanging fruit.
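The ordering point can be made concrete with a toy model: given an unordered bag of certificates (modeled here as (subject, issuer) pairs only), the server has to do chain building to recover the order the TLS handshake needs, whereas a structured PEM file hands it that order for free.

```python
def order_chain(end_entity, unordered):
    """Order an unordered bag of (subject, issuer) pairs into a chain
    starting from the end-entity certificate -- the chain-building work
    an unordered PKCS#7 bundle forces onto the server."""
    by_subject = {subj: (subj, iss) for subj, iss in unordered}
    chain, cur = [end_entity], end_entity[1]  # walk up from the EE's issuer
    while cur in by_subject:
        nxt = by_subject.pop(cur)  # pop so a self-signed root terminates the loop
        chain.append(nxt)
        cur = nxt[1]
    return chain
```

Real chain building is harder still (multiple issuance paths, cross-signs, signature checks); this only shows that ordering is work PKCS#7 does not do for you.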


Philosophically, the discussion here is where the points of influence lie -
with a few hundred CAs, with a few thousand server software stacks (and a
few million deployments), and a few billion users. It's a question about
whether the solution only needs to consider the browser (which, by
definition, has a fully functioning HTTP stack and therefore _could_
support AIA) or the ecosystem (which, in many cases, lacks such a stack -
meaning no AIA, no OCSP, and no CRLs either). We're both right in that
these represent technical solutions to the issues, but we disagree on which
offers the best lever for impact - for end-users and for relying parties.
This doesn't mean it's a fruitless argument of intractable positions - it
just means we need to recognize our differences in philosophy and approach.

You've highlighted a fair point - which is that if CAs rotate intermediates
periodically, and if CAs do not (a) deliver the full chain (whether through
PEM or PKCS#7) or (b) subscribers are using software/hardware that does not
support configuring the full chain when installing a certificate, then
there's a possibility of increased errors for users due to servers sending
the wrong intermediate. That's a real problem, and what we've described are
different approaches to solving that problem, with different tradeoffs. The
question is whether that problem is significant enough to prevent or block
attempts to solve the problem Gerv highlighted - intermediates with
millions of certificates. We may also disagree here, but I don't believe
it's a blocker.


Re: Intermediates Supporting Many EE Certs

2017-02-13 Thread Ryan Sleevi via dev-security-policy
On Mon, Feb 13, 2017 at 11:56 AM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Patrick, thanks, it appears my attempt at brevity produced density.
>
> - No amount of mantra, training, email notification, blinking text and
> certificate installation checkers make 100% of IT staff who install
> certificates on servers aware that issuing CAs change and need to be
> installed with the server certificate when they do.
> - Many servers do not support PKCS#7 installation.
> - When you roll an intermediate issuer and you modify the end entity
> certificate's AIA CA Issuers URI at the same time, the server presents an
> EE
> to the browser that provides a remedy to path validation failure.
> - The browser does its normal path discovery using cached discovered
> intermediates.
> - At rollover, the browser doesn't find the EE's issuer cached locally.
> - The browser chases AIA to the issuer that the EE asserts is its issuer,
> validates that, and caches the issuer for another  years.
> It's a one-validation latency cost per end user given cached path
> discovery.
>
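The rollover flow described above can be modeled in a few lines (a toy sketch: real clients cache verified certificates rather than strings, and apply full policy checks to anything fetched via AIA):

```python
def discover_issuer(ee, cache, fetch_aia):
    """Return the issuer cert for `ee`: consult the local intermediate
    cache first, and chase the AIA caIssuers URL only on a cache miss."""
    issuer = cache.get(ee["issuer"])
    if issuer is None and "aia_ca_issuers" in ee:
        issuer = fetch_aia(ee["aia_ca_issuers"])  # the one-time fetch at rollover
        cache[ee["issuer"]] = issuer
    return issuer
```

The second and later validations hit the cache, which is the "one-validation latency cost per end user" being claimed.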

In the absence of AIA, this quickly becomes discoverable for servers. The
only reason it represents a burden on CAs today is precisely because of
customers' (inadvertant) reliance on AIA to correct for server
misconfiguration.

As mentioned, I'm a strong proponent of AIA - I think it serves a valuable
role in ecosystem agility for root migrations - but I don't think it's
necessarily good for users when it's used to paper over (clear) server
misconfigurations, which is the situation you describe - where the path
from the EE to the Intermediate is improper. I'm more thinking about
situations for where the Intermediate to Root path may change, in order to
accommodate changes in the Root (from Root 1 to Root 2).
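The cached-discovery-plus-AIA flow described above can be sketched as a toy model. This is pure illustration: the lookup tables stand in for real certificate parsing and HTTP fetches, and none of the names below come from any actual browser or library API.

```python
# Toy model of browser path discovery with AIA chasing. The dicts stand
# in for real certificate parsing and HTTP fetches; all names here are
# illustrative, not any actual implementation.
ISSUER = {"leaf": "int2", "int2": "root"}          # subject -> issuer name
AIA_URI = {"leaf": "http://ca.example/int2.cer"}   # CA Issuers URI, if any
HOSTED = {"http://ca.example/int2.cer": "int2"}    # what each URI serves

cache = {"root"}  # trust anchors plus previously discovered intermediates

def build_path(cert):
    """Walk toward a trust anchor, chasing AIA for missing issuers."""
    path = [cert]
    while cert in ISSUER:
        issuer = ISSUER[cert]
        if issuer not in cache:
            uri = AIA_URI.get(cert)
            if uri is None:
                raise ValueError("issuer not cached and no AIA to chase")
            cache.add(HOSTED[uri])  # one extra fetch; cached afterwards
        path.append(issuer)
        cert = issuer
    return path

print(build_path("leaf"))  # -> ['leaf', 'int2', 'root']
print(build_path("leaf"))  # second build hits the cache; no fetch needed
```

The one-time fetch followed by caching is the "one-validation latency cost per end user" described in the quoted message; remove the AIA entry and a server that omits its intermediate fails path building outright.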

Ultimately, it seems like a question of whether it's "too burdensome" to
expect servers to properly configure their TLS certificate chain, and
therefore whether browsers should employ logic to obviate that need.
However, given tools like CFSSL, is that really a good or compelling
argument - particularly one to suggest it's a gating factor for
improvement? Isn't it largely a question of how CAs engage with their
customers for the provisioning and deployment of certificates, rather than
a holistic ecosystem issue (of which I consider root migration to be part
of the latter)?


Re: Misissued/Suspicious Symantec Certificates

2017-02-13 Thread Ryan Sleevi via dev-security-policy
On Mon, Feb 13, 2017 at 4:48 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Steve,
>
> On 12/02/17 15:27, Steve Medin wrote:
> > A response is now available in Bugzilla 1334377 and directly at:
> > https://bugzilla.mozilla.org/attachment.cgi?id=8836487
>
> Thank you for this timely response. Mozilla continues to expect answers
> to all reasonable and polite questions posed in our forum, and is happy
> that Symantec is taking this approach.


Indeed Steve, thank you for your continued attention as we try to gain the
information and understanding necessary to determine how best to protect
users from misissued certificates.

I note that Symantec's answer to question 1 in [1] reiterates that, in
Symantec's view, the set of misissuance previously was solely related to a
specific internal tool, and as such, the remediation steps Symantec engaged
in focused on the process and controls related to that tool.

I highlight this because it seems difficult to understand the distinction
between the previous event and this current event, and understanding that
distinction seems relevant to understanding whether the steps Symantec took
previously were reasonable and complete to address this set of issues and
the community trust, as well as understanding the steps Symantec is
proposing or has taken in response to this current set of issues.

In the previous misissuance event, my understanding is that Symantec
asserts that the totality of the misissuance was related to a single,
specific tool. Symantec's initial response [2] was to assert that the issue
was limited to rogue actions of a few individuals contrary to Symantec's
policies and procedures. The proposed remediation of this was a termination
of relationship with those specific individuals. However, it was pointed
out by browsers based on a simple cursory examination that such a statement
was not consistent with the data - that the full set of issues were not
identified by Symantec in their initial investigation, and only upon
prompting by Browsers with a specific deadline did Symantec later recognize
the scope of the issues. In recognizing the scope, it was clear that the
issues did not simply relate to the use of a particular tool, but also to
the practices of employees with respect to asserting that things were
correct when they were not. A specific example is that the role of
Validation Specialist - which is tasked with independently reviewing the
certificate request for non-compliance - was designed in such a way that it
could be bypassed or overridden without following the appropriate policies.
These were actions independent of any particular tooling.

These issues were then amplified by the fact that Symantec was failing to
ensure that its policies and practices adhered to the appropriate version
of the Baseline Requirements, and that employees and staff were trained on
the appropriateness of ensuring the appropriate policies were followed,
regardless of the tools being employed.

In response to this issue, Symantec took a series of corrective steps, such
as:
- A comprehensive review of its Policies and Practices to ensure compliance
with the Baseline Requirements, as requested in [3] (and available at [4])
- The establishment of a centralized Compliance team to ensure compliance
across Symantec branded-CAs
- Internal training, which on the basis of [1] appears to have been limited
to a specific tool, rather than to the overall auditable criteria or
policies


In the current misissuance event, my understanding is that Symantec asserts
that the totality of the misissuance was related to RAs. Symantec's initial
response to the set of questions posed by Google [5] indicated that "At
this time we do not have evidence that warrants suspension of privileges
granted to any other RA besides CrossCert" in the same message that
provided the CP/CPS for other RAs besides CrossCert, itself a follow-up to
Symantec's initial response to the Mozilla community [6], which
acknowledged the potential for audit issues in the statement "We are
reviewing E’s audit work, including E’s detailed approach to
ascertaining how CrossCert met the required control objectives." This
appears to be similar to the previous event, in that the proposed
remediation was first a termination of relationship with specific
individuals. However, in Symantec's most recent reply [1], it seems that
again, browser questions arising from a simple cursory examination showed
that such a statement was not consistent with the data - that is, the full
set of issues was not identified by Symantec in their initial
investigation, and only upon prompting by browsers with a specific deadline
did Symantec later recognize the scope of the issues. In recognizing the
scope, it was clear that the issues did not simply relate to the use of a
particular RA or auditor, but also to the practices of RAs with respect to
asserting things were correct when they were not.

Re: Google Trust Services roots

2017-02-10 Thread Ryan Sleevi via dev-security-policy
On Fri, Feb 10, 2017 at 8:00 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I am trying to say that I use the word "issue" as the weakest category,
> orders of magnitude less serious than an absolute cause for rejection.


And I'm trying to suggest that it's in the category that is below the floor
acceptable for discussion in the group, because it's purely speculative and
without ability to evaluate.

It might be appropriate to raise as a "suggestion" during a public review
phase (which this is not), but that it's misleading and misrepresenting to
suggest it's an "issue" with any weight whatsoever, precisely because it's
absent factual detail and cannot be evaluated in any way - much like
statements such as "Government X might do Y to this CA", which is no more
valuable than a statement "Unicorns might exist and be morally repulsed by
this particular arrangement of words in the CPS". Yes, it's a possibility,
but it's in no way actionable, and it's certainly not appropriate to
suggest that, say, Mozilla should require the CA to change the sequence, to
avoid offending the Unicorns and thus bringing about the destruction of
Earth.


Re: Google Trust Services roots

2017-02-09 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 9, 2017 at 3:39 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Additional issue #2: The information at https://pki.goog/ about how to
> report misissuance directs visitors to a generic reporting page for
> code vulnerabilities, which (by their nature) tends to require reaction
> times measured in days/weeks rather than the 1 day maximum specified
> in Google's CPS.
>

(To be clear, I am responding only as an individual, neither as a Mozilla
peer nor as a Google employee, although I recognize you will likely
disregard my remarks regardless.)

In the past, such comments have generally been seen as offtopic/accusatory,
because they are inherently absent of evidence of any malfeasance. Indeed,
your very comment seems to suggest that Google is not adhering to its
CP/CPS, but without evidence, and such implication comes not based on any
action that Google has taken, but based on your view of what 'others' do or
the 'class' of bugs.

I highlight this because we (the community) see the occasional remark like
this; most commonly, it's directed at organizations in particular
countries, on the basis that we shouldn't trust "them" because they're in
one of "those countries". However, the Mozilla policy is structured to
provide objective criteria and assessments of that.

In this case, I do not believe you are being accurate or fair to present it
as an "issue"; you are implying that Google will not adhere to its CP/CPS,
but without evidence. The nature of incident reporting via this method may
indeed be risky, but it's neither forbidden nor intrinsically wrong. If you
look at many members in the Mozilla program, you will see far less
specificity as to a problem report and the acceptable means of reporting
this.

So while it's useful for you to draw attention to this, it's without
evidence or basis for you to suggest that this is an "issue", per se - that
is, it seemingly in no way conflicts with Mozilla policy or industry
practice.


Re: Intermediates Supporting Many EE Certs

2017-02-14 Thread Ryan Sleevi via dev-security-policy
On Tue, Feb 14, 2017 at 10:13 AM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I mention P7 because IIS inhales them in one click and ensures that the
> intermediate gets installed.


Yes, but that's not because of PKCS#7, as I tried to explain and capture.
That's because of a host of other things - including AIA fetching - that
IIS does.

You're not wrong that PKCS#7 could partially address this. But your desired
end-state has no intrinsic relationship to the use of PKCS#7 - it's because
of all the other server implementation decisions - so it'd be wrong to
assume PKCS#7 is the lever to be pulled.


Re: Google Trust Services roots

2017-02-10 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 9, 2017 at 11:40 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> For clarity, I was pointing out that GTS seems to have chosen a method
> likely to fail if an when actually needed, due to the typical dynamics
> of large human organizations.  Presumably an organization of such
> magnitude is likely to have contact points more dedicated to
> time-sensitive action-required messages than the contact point they chose.
>
> So while it's useful for you to draw attention to this, it's without
>> evidence or basis for you to suggest that this is an "issue", per se -
>> that is, it seemingly in no way conflicts with Mozilla policy or
>> industry practice.
>>
>
> I find that it is an issue, but not an absolute cause for rejection.


I think Peter's response basically highlights why this is not an issue, at
least as Mozilla has historically determined them:

"I think the point is that issues raised about CAs need to be grounded in
fact. "Universal Trust Services wrote Y in their CPS but did not do Y as
demonstrated by Z" is something that can be evaluated factually. "UTS wrote
Y in their CPS but might not be doing Y" without any evidence is not
something that can be evaluated factually."

Basically, the issue you're raising is, even in the most charitable sense,
not an actionable grounds for rejection - even in part - which you seem to
believe it is ("but not an absolute cause for rejection" - implying it
contributes to some sum total of issues). It might be an opportunity for
the CA to reconsider things, but in the same way that "But the Government
of X might require the CA to do something" is free of evidence and cannot
be evaluated factually, "But they might not abide by the BRs" is free of
evidence and cannot be evaluated factually. It's simply noise.


Re: Misissued/Suspicious Symantec Certificates

2017-02-28 Thread Ryan Sleevi via dev-security-policy
On Fri, Feb 24, 2017 at 4:51 PM, Ryan Sleevi  wrote:

>
>
> On Wed, Feb 22, 2017 at 8:32 PM, Ryan Sleevi  wrote:
>
>> Hi Steve,
>>
>> Thanks for your continued attention to this matter. Your responses open
>> many new and important questions, which raise serious doubt as to
>> whether the proposed remediations are sufficient. To keep this short, and
>> thereby allow Symantec a more rapid response:
>>
>> 1) Please provide the CP, CPS, and Audit Letter(s) used for each RA
>> partner since the acquisition by Symantec of the VeriSign Trust Services
>> business in 2010.
>>
>
> Hi Steve,
>
> Have you had the opportunity to review and complete this? This is
> hopefully a simple task for your compliance team, given the critical
> necessity of maintaining records, so I'm hoping that you can post within
> the next business day.
>

Hi Steve,

I think we would have expected this would be fairly easy to obtain, given
the record keeping requirements and the fact that these were relationships
pre-existing to the Symantec acquisition.

Can you say more about the reason for the delay, and when Symantec expects this
information to be available?


Re: GlobalSign BR violation

2017-02-28 Thread Ryan Sleevi via dev-security-policy
On Tue, Feb 28, 2017 at 8:53 AM, douglas.beattie--- via dev-security-policy
 wrote:

>
> Yes, we're working to do just this now.


While that's good and well, I do hope GlobalSign will produce an incident
report regarding this matter, as to how the situation in
https://groups.google.com/d/msg/mozilla.dev.security.policy/luxlU5TL2ew/qkL1ZdThAQAJ
came to be in the first place.

I think GoDaddy's recent explanation -
https://groups.google.com/d/msg/mozilla.dev.security.policy/Htujoyq-pO8/uRBcS2TmBQAJ
- may provide a model for GlobalSign here.

The intent is that we don't just deal with the symptom, but that we work to
understand the root cause, the scope of impact, and the opportunities for
improvement.

I look forward to reading GlobalSign's analysis of both this and other
recent issues.


Re: GlobalSign BR violation

2017-02-28 Thread Ryan Sleevi via dev-security-policy
On Tue, Feb 28, 2017 at 12:02 PM, douglas.beattie--- via
dev-security-policy  wrote:

> Ryan,
>
> GlobalSign certificate issuance has been referenced in several different
> threads recently and I think most of them are closed; however, if you feel
> otherwise, let me know.
>

Hi Doug,

Right, I realize there were several threads - you've addressed some of the
scenarios for both Incapsula and the test certificates - however, I haven't
seen an explanation as to how the spaces were introduced into these SANs,
the scope of how many GlobalSign certs this affected, how long the issue
persisted, and what GlobalSign is doing to correct it.

While I understand you plan to reach out to Vietnam Airlines regarding this
specific cert, what I think is relevant here is understanding both the root
cause and the steps GlobalSign is taking to redress it.


> And lastly this ticket.  The Domain name was validated in accordance with
> the BRs, but there was a bug that allowed a user entered space to be
> included in some of the SAN values.  While the value is not compliant with
> RFC 5280 or the BRs, there was no security issue with the certificate that
> was issued (it was likely not able to secure the intended subdomains).
> We'll provide an incident report for this.
>
> If this isn't sufficient for some reason, I'm sure you will let us know.


Right, I think an incident report on this would be useful. I think I would
be quite cautious to suggest "there is no security issue with the
certificate that was issued" - I think many a CA would have said that about
encoding, say, a null byte (\0) within a SAN, prior to realizing the issues.

For example, as a systemic issue, it seems this suggests that GlobalSign
does not validate what appears in the SAN, so long as the validated domain
appears within it. This could range from a SERIOUS security issue (for
example, if GlobalSign's systems are themselves not robust against NULL
bytes) to a benign one. Understanding the root cause, scope, and
remediation plans is useful here to assure the relying parties of
GlobalSign's commitment to security.


Re: (Possible) DigiCert EV Violation

2017-02-27 Thread Ryan Sleevi via dev-security-policy
On Mon, Feb 27, 2017 at 2:19 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The requirements don't specify what to do with this information. I know
> our product team interpreted this as part of the validation methods and
> exchange of key information, not something that was included in a
> certificate. We can include this information, but the guidelines are
> unclear what we do with this.


Yeah, let's fix this in the EVGs over in the CA/Browser Forum.

As you know from our private and public conversations, Jeremy, Google's
support for allowing this issuance was contingent upon that extension
appearing within the certificate, as that was the only mitigation .onion
owners had to detect different-key, same-name collisions. It was this
property - the combined implicit logging (due to Chrome's CT policy,
although not explicitly required of CAs) and explicit extension that
provided the safety bar for sites. This is also why we pushed for the
revocation of the existing certs.


(Possible) DigiCert EV Violation

2017-02-27 Thread Ryan Sleevi via dev-security-policy
The EV Guidelines require that certificates issued for .onion names include 
the cabf-TorServiceDescriptor extension, defined in the EV Guidelines. This 
is required by Section 11.7.1 (1) of the EV 
Guidelines, reading: "For a Certificate issued to a Domain Name with .onion in 
the right-most label of the Domain Name, the CA SHALL confirm that, as of the 
date the Certificate was issued, the Applicant’s control over the .onion Domain 
Name in accordance with Appendix F. "

The intent was to mitigate collisions in .onion names, which are derived 
from a truncated SHA-1 hash of the service's key: a collision would allow 
two parties with distinct keys to respond on the same hidden service 
address.
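The truncation in question is small enough to show with the standard library. The sketch below assumes the v2 hidden-service scheme, in which the address label is the base32 encoding of the first 80 bits of the SHA-1 hash of the service's DER-encoded RSA public key; the input bytes here are placeholders, not a real key.

```python
import base64
import hashlib

def onion_v2_address(public_key_der: bytes) -> str:
    """v2 .onion label: base32 of the first 80 bits of SHA-1(key DER)."""
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# Two distinct keys whose SHA-1 hashes agree in those first 80 bits would
# yield the same address -- the collision risk the extension was meant to
# make detectable.
print(onion_v2_address(b"placeholder DER bytes"))  # 16-char label + ".onion"
```

Because only half the SHA-1 output survives the truncation, a full SHA-1 collision more than suffices to produce two keys sharing one address, which is why the announcement below matters.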

Last week, a SHA-1 collision was announced.

In examining the .onion precertificates DigiCert has logged, available at 
https://crt.sh/?q=facebookcorewwwi.onion , I could not find a single one 
bearing this extension, which suggests these are all misissued certificates and 
violations of the EV Guidelines.

During a past discussion of precertificates, at 
https://groups.google.com/d/msg/mozilla.dev.security.policy/siHOXppxE9k/0PLPVcktBAAJ
 ,  Mozilla did not discuss whether or not it considered precertificates 
misissuance, although one module peer (hi! it's me!) suggested they were.

This interpretation seems consistent with the discussions during the WoSign 
issues, as some of those certificates examined were logged precertificates.

Have I missed something in examining these certificates? Am I correct that they 
appear to be violations?


Re: Notice of Intent to Deprecate and Remove: Trust in Symantec-issued Certificates

2017-03-23 Thread Ryan Sleevi via dev-security-policy
(Posting in an official capacity)

Jakob,

As the initial message said:
"You can participate in this discussion at
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/eUAKwjihhBs
"

I've removed the cross-post, to ensure that threads do not fork due to
members being subscribed to one list versus the other.

I know this is a new approach, and appreciate your understanding as we try
to work through the challenges.


On Thu, Mar 23, 2017 at 3:54 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 23/03/2017 20:27, Ryan Sleevi wrote:
>
>> On Thu, Mar 23, 2017 at 1:38 PM, Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>> On 23/03/2017 17:09, Ryan Sleevi wrote:
>>>
>>> (Posting in a Google Capacity)

 I just wanted to notify the members of this Forum that we have started
 an
 Intent to Deprecate and Remove, consistent with our Blink process,
 related
 to certain certificates issued by Symantec Corporation.

 This is a proposed plan, not a final commitment, and we welcome all
 feedback from members of this Forum to understand the risks and
 challenges.
 To understand the goals of this process, you can find more details at
 https://www.chromium.org/blink

 You can participate in this discussion at
 https://groups.google.com/a/ch
 romium.org/forum/#!topic/blink-dev/eUAKwjihhBs


 According to the linked document, Google is intending to distrust *all*
>>> Symantec issued certificates with a validity longer than 9 months,
>>> which is less than the 12 month validity normally being the minimum
>>> that site operators can purchase from CAs such as Symantec.
>>>
>>> It is also worth noting that this is apparently scheduled to occur less
>>> than 12 months from now (The document refers to Chrome/Blink version
>>> numbers with no associated dates, but contains a mention that one of
>>> the relevant releases would happen over the "winter holiday",
>>> presumably Christmas 2017).
>>>
>>> Since I know of no commercial (as opposed to free) CAs that routinely
>>> sell certificates with a duration of less than 12 months, this seems
>>> highly draconian and designed to drive Symantec out of the CA business.
>>>
>>> It also seems to ignore every mitigating factor discussed in this
>>> group, including those posted by Symantec themselves.
>>>
>>> For example the cited number of "30,000" affected certificates seems to
>>> come from the number of certificates that Symantec is actively double
>>> checking to ensure they were *not* misissued in a way similar to the
>>> original 127.
>>>
>>> It would seem that the only way to remain interoperable with both
>>> Chrome and the legacy devices and systems that trust only Symantec
>>> owned roots, would be if Chrome's TLS implementation somehow identified
>>> itself to servers as being a Chrome-based implementation before servers
>>> present their certificate.
>>>
>>> The computing world at large would be significantly inconvenienced if
>>> Symantec was forced to close down its CA business, in particular the
>>> parts of that business catering to other markets than general WebPki
>>> certificates.
>>>
>>
>>
>>
> The above message (and one by Symantec) were posted to the
> mozilla.dev.security.policy newsgroup prior to becoming aware of
> Google's decision to move the discussion to its own private mailing
> list and procedures.  I would encourage everyone concerned to keep the
> public Mozilla newsgroup copied on all messages in this discussion,
> which seems to have extremely wide repercussions.
>
>
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>


Re: Symantec: Next Steps

2017-03-24 Thread Ryan Sleevi via dev-security-policy
(Wearing an individual hat)

On Fri, Mar 24, 2017 at 10:35 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> One common scenario that a new wording should allow is a "fully
> outsourced CA", where all the technical activities, including CA
> private key storage, CRL/OCSP distribution, ensuring policy compliance
> and domain/IP validation are outsourced to a single entity which is
> fully audited as a CA operator, while the entity nominally responsible
> for the CA acts more like an RA or reseller.
>

Can you highlight why you believe this is a common scenario? During that
same conversation, only one party was identified that meets such a
definition, and CAs otherwise did not highlight any of their customers or
awareness of others.


Re: Symantec: Next Steps

2017-03-24 Thread Ryan Sleevi via dev-security-policy
On Fri, Mar 24, 2017 at 1:30 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Examples discussed in the past year in this group include the Taiwan
> GRCA roots and several of the SubCAs hosted by Verizon prior to the
> DigiCert transition.


Apologies for not remembering, but I don't recall the relationship of
either of those discussions to what you described. However, it's quite
possible I'm wrong.

Could you link to the threads (ideally, the messages) you believe that
captures this description, so that I can better understand?

Peter is correct, we discussed something slightly different, so apologies
for misunderstanding what you were proposing versus what we discussed. It
sounds like what you're describing is what we discussed (white-label),
except the person signing the management assertion is also acting as a
Delegated Third Party for validation. However, because they're the ones
signing the assertion, they're the ones in scope for the audit presented to
root stores - correct?


Re: Next CA Communication

2017-03-28 Thread Ryan Sleevi via dev-security-policy
On Tue, Mar 28, 2017 at 10:00 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In principle any source of information could change just one minute
> later.  A domain could be sold, a company could declare bankruptcy, a
> personal domain owner could die.
>

Yup. And we balance the usability tradeoff.


> For smaller organizations (i.e. not Google), requesting and deploying
> new certificates every few years is a real hassle,


And that's a bug and needs to change. Plain and simple, that doesn't work
for security. But perhaps we're getting further off-topic, other than I
think the crux of your objection is that "Replacing certificates is hard",
when the reality is we should be striving to replace certificates every 90
days or less, and work to address the systemic and organizational issues
that prevent this.


> and often a
> non-trivial expense.  Forcing the paid, carefully validated
> certificates to be repurchased and reinstalled a lot more often imposes
> a real burden on real websites and real e-mail accounts.
>
> The previous CAB/F rule of 3 years max seemed to be a useful
> compromise, only slightly more difficult than the old 5 year offering
> from some CAs, and well within reason as to handling the frequency of
> ordinary changes in domain and company ownership/status that occur in
> the real world.
>

Unfortunately, that's long been held as undesirably long by browser
members, based on the surveys for several years. Unfortunately, CAs have
not been terribly interested in aligning this.


> The somewhat sudden (to outsiders) tendency to force frequent
> certificate replacements for those not using "Let's encrypt" seems
> arbitrary, harmful and mostly pointless.


Right, I think this philosophical difference - one which I very much think
is actively harmful to security, even though it's a totally understandable
and reasonable position for you to hold - is perhaps the crux of the
objection to validating information. And that's useful to acknowledge up
front; since we've arguably beaten this horse to death, we can acknowledge
that it's merely a position statement being provided, and the philosophical
differences mean it's unlikely for everyone to be happy.


Re: EKU in Google sub CAs in violation of RFC5280?

2017-03-27 Thread Ryan Sleevi via dev-security-policy
On Mon, Mar 27, 2017 at 9:45 AM, tpg0007--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On https://pki.goog, all 5 of Google's newer subCAs have Extended Key
> Usage extension of serverAuth and clientAuth, unusual for CAs but not
> forbidden I guess. Their Key Usage extension contains the expected cert and
> CRL sign bits. Put together though they appear to be noncompliant with RFC
> 5280 4.2.1.12, which states that if both extensions are present then the
> certificate should not be used for any purpose unless that purpose is
> consistent across both extensions. The digitalSignature key usage that
> might make them consistent with the above EKU is clearly not present.
>

This sounds like a misunderstanding over the RFCs, rather than a violation.

While you highlight the presence of EKUs in CA certificates as unusual but
not forbidden, it's actually quite common, and something that both Microsoft
and Mozilla have explored mandating in the past. You can find over 10 years
of discussion on this matter within the IETF PKIX WG, but effectively, an
EKU within an intermediate acts as a constraint upon the EKUs of
certificates it issues. That is, it behaves similar to Certificate Policies
by describing an 'effective' EKU set.

Virtually every major PKI library deployed as part of the Web PKI
recognizes this, and uses it as an effective way to scope the issuance of
types of certificates. That is, if an intermediate contains an EKU
extension that includes neither the anyExtendedKeyUsage identifier nor
serverAuthentication, then these libraries WILL NOT accept certificates
issued by that sub-CA as valid for serverAuthentication.
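That chaining behavior can be modeled as a set intersection walked from the root toward the leaf. This is an illustrative sketch only, not any real library's API: `None` stands for a certificate with no EKU extension (unconstrained), and the string "any" stands for anyExtendedKeyUsage.

```python
def effective_ekus(chain_leaf_first):
    """Intersect EKU sets from root to leaf; None means unconstrained."""
    allowed = None
    for ekus in reversed(chain_leaf_first):  # walk root -> leaf
        if ekus is None or "any" in ekus:
            continue  # this certificate adds no EKU constraint
        allowed = set(ekus) if allowed is None else allowed & set(ekus)
    return allowed

# A leaf asserting serverAuth under an intermediate scoped to clientAuth
# has an empty effective set: the chain is not valid for serverAuth.
print(effective_ekus([{"serverAuth"}, {"clientAuth"}, None]))  # -> set()
# The same leaf under a serverAuth+clientAuth intermediate is accepted.
print(effective_ekus([{"serverAuth"}, {"serverAuth", "clientAuth"}, None]))
```

Under this model, the serverAuth+clientAuth EKUs on the sub-CAs in question never shrink the effective set of a serverAuth leaf, which is why their presence is a constraint, not a violation.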

To this end, the purpose of Certificate Signatures is entirely consistent.

The digitalSignature purpose, as a key usage, as it is used in TLS, relates
to the ciphersuites employed.

Thus, this is also not a contradiction to 5280.

Does that help explain?


Re: Notice of Intent to Deprecate and Remove: Trust in Symantec-issued Certificates

2017-03-23 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 23, 2017 at 12:54 PM, tarah.symantec--- via dev-security-policy
 wrote:

> What will be the process for critical infrastructure such as medical
> devices and payment systems when they're affected by this?


To avoid fragmentation of discussion, would it be possible to reply to the
blink-dev@ list?

I totally realize the overhead for participants on either side - Mozilla
dev.security.policy members having to post to a different list vs blink-dev
members potentially needing to post to this list. We've opted for blink-dev@
in this case, and welcome feedback on how to improve this process in the
future.

Given the interest and role this community has played in these issues, we
wanted to inform and solicit feedback, but we're not quite to the point
where the primary discussion would happen on this list.

Thanks for understanding


Re: Google Trust Services roots

2017-03-23 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 23, 2017 at 8:37 AM, Peter Kurrasch via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> ‎I would be interested in knowing why Google felt it necessary to purchase
> an existing root instead of, for example, pursuing a "new root" path along
> the lines of what Let's Encrypt did? All I could gather from the Google
> security blog is that they really want to be a root CA and to do it in a
> hurry. ‎Why the need to do it quickly, especially given the risks (attack
> surface)?


Clarification: I'm not speaking on behalf of Google.

I think this demonstrates a lack of understanding of what Let's Encrypt
did. Let's Encrypt obtained a cross-signed certificate (from IdenTrust),
which is "purchasing" a signature for their key. This is one approach.
Purchasing a pre-existing signature (and key) is another. They are
functionally equivalent.

So what Google has done is what Let's Encrypt did.
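One way to see that equivalence is a toy path-building model (all names below are illustrative — not real CAs, and not a real validation API): a new intermediate cross-signed by an established root, and a new intermediate issued from an acquired root, both chain to the same trust anchor that legacy clients already ship.

```python
def build_path(leaf, intermediates, trusted_roots):
    """Depth-first search for a chain from leaf up to a trusted root.
    Certificates are modeled as (subject, issuer) pairs."""
    subject, issuer = leaf
    if issuer in trusted_roots:
        return [leaf]
    for cert in intermediates:
        if cert[0] == issuer:  # candidate whose subject matches our issuer
            rest = build_path(cert, intermediates, trusted_roots)
            if rest:
                return [leaf] + rest
    return None

trusted = {"Established Root"}  # what relying parties already trust

# Cross-signing (the Let's Encrypt approach): buy a signature on your key.
cross_sign = [("New CA (cross-signed)", "Established Root")]
# Acquisition (the Google approach): buy the established root's key and
# issue your own intermediate from it.
acquired = [("New CA (from acquired root)", "Established Root")]

# Either way, a site certificate chains to the pre-existing anchor.
assert build_path(("example.com", "New CA (cross-signed)"), cross_sign, trusted)
assert build_path(("example.org", "New CA (from acquired root)"), acquired, trusted)
```

From the relying party's perspective the outcome is identical; the differences are operational (who holds which private key), which is the functional equivalence noted above.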


Notice of Intent to Deprecate and Remove: Trust in Symantec-issued Certificates

2017-03-23 Thread Ryan Sleevi via dev-security-policy
(Posting in a Google Capacity)

I just wanted to notify the members of this Forum that we have started an 
Intent to Deprecate and Remove, consistent with our Blink process, related to 
certain certificates issued by Symantec Corporation.

This is a proposed plan, not a final commitment, and we welcome all feedback 
from members of this Forum to understand the risks and challenges. To 
understand the goals of this process, you can find more details at 
https://www.chromium.org/blink

You can participate in this discussion at 
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/eUAKwjihhBs


Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-29 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 29, 2017 at 7:30 AM, Hector Martin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> We actually have *five* levels of trust here:
>
> 1. HTTP
> 2. HTTPS with no validation (self-signed or anonymous ciphersuite)
> 3. HTTPS with DV
> 4. HTTPS with OV
> 5. HTTPS with EV
>

No, we actually only have three levels.

1. HTTP
2. "I explicitly asked for security and didn't get it" (HTTPS with no
validation)
3. HTTPS

> Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5)
> full EV banner. (a) still correlates reasonably well with (4) and (5).
> HTTPS is no longer optional. All those phishing sites get a neutral URL
> bar. We've already educated users that their bank needs a green lock in the
> URL.


And that was a mistake - one which has been known since the very
introduction of EV in the academic community, but sadly, like Cassandra,
was not heeded.

http://www.adambarth.com/papers/2008/jackson-barth-b.pdf should be required
reading for anyone who believes OV or EV objectively improves security,
because it explains how, since the very beginning of browser support for
SSL/TLS (~1995), there's been a security policy in place that determines
equivalence - the Same Origin Policy.

While the proponents of SSL/TLS then - and now - want certificates to be
Something More, the reality has been that, from the get-go, the only
boundary has been the Origin.

I think the general community here would agree that making HTTPS simple and
ubiquitous is the goal, and efforts by CAs - commercial and non-commercial
- towards those efforts, whether it be through making certificates more
affordable to obtain or simpler to install or easier to support - are
well-deserving of praise.

But if folks want OV/EV, then they also have to accept there needs to be an
origin boundary, like Barth/Jackson originally called for in 2008
(httpsev://), and that any downtrust in that boundary needs to be blocked
(similar to mixed content blocking of https -> http, as those degrade the
effective assurance). Further, it seems as if it would be necessary to
obtain the goals of 4, 5, or (a) that the boundary be 'not just'
httpsev://, but somehow bound to the organization itself - an
origin-per-organization, if you will.

And that, at its core, is fundamentally opposed to how the Web was supposed
to and does work. Which is why (4), (5), and (a) are unreasonable and
unrealistic goals: despite having been around for over 20 years, no new
solutions have been put forward since Barth/Jackson called out the obvious
one nearly a decade ago - an approach no one was interested in.


Re: Researcher Says API Flaw Exposed Symantec Certificates, Including Private Keys

2017-03-29 Thread Ryan Sleevi via dev-security-policy
https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.2.pdf

Section 6.1.2

On Wed, Mar 29, 2017 at 3:22 AM, okaphone.elektronika--- via dev-security-policy wrote:

> Weird.
>
> I expect there are no requirements for a CA to keep other people's private
> keys safe. After all handling those is definitely not part of being a CA.
> ;-)
>
> CU Hans


Re: Question: Transfering the private key of an EV-approved CA

2017-03-27 Thread Ryan Sleevi via dev-security-policy
On Mon, Mar 27, 2017 at 3:09 PM, Kai Engert via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Are there existing rules, in the CABForum BRs, or in the Mozilla CA
> policy, that
> define under which circumstances the private key of an actively used EV
> approved
> root CA may be transferred to a different company, that hasn't been
> audited for
> EV compliance?
>

A root CA is not simply approved for EV. A root CA has one or more policies
that indicate compliance with EV, and those policies are recognized for the
associated root certificates.

To your question, no, there are no policies _specific to EV_ related to
that. The general Mozilla Policy handling all root key transfer and
cross-certifications applies.


> As soon as the private key has been given to another company, the receiving
> company technically has the ability to issue EV certificates (even if they
> never
> intend to do so), right?
>

Correct, but as per the above, the actual _use_ of that is governed by the
CP/CPS and associated audits, no different than any other CA.


> I would have naively assumed that a company, that owns an EV approved
> CA, is
> expected to strictly protect their EV issuing power, and must never share
> it
> with another company that hasn't been approved for issuing EV certificates.
>

That's not stated in any special case for EV. In fact, even without the
transfer of root key material, it's possible for an EV-enabled root to
cross-certify another CA for EV issuance, by authorizing them for the
relative certificate policies. It is incumbent upon the issuing CA to
ensure that the subordinate CA's policies and practices are wholly aligned
with the parent CA's CP/CPS.


> If this makes sense, and if there aren't any rules yet, I suggest to add
> them to
> the appropriate policy documents.
>

Given the bug you were asking questions on "this morning",
https://bugzilla.mozilla.org/show_bug.cgi?id=1349727 , it sounds like this
is related to the discussion on
https://groups.google.com/d/msg/mozilla.dev.security.policy/1PDQv0GUW_s/oxDWH07VDgAJ
, which has significantly more details on this, including statements from
various Mozilla peers and module owners.


Re: Google Trust Services roots

2017-03-27 Thread Ryan Sleevi via dev-security-policy
I clarified this on the new thread you started; I don't believe there's any
inconsistency. Further details are on that thread.

On Mon, Mar 27, 2017 at 10:02 AM, Roland Fantasia via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Anyone care to comment on the fact that Google's new subCAs under the GTS
> branding have inconsistent EKU and KU bits? What's more disturbing is FF
> doesn't make a fuss about it when connecting to the test sites (after
> adding the roots manually of course).


Re: Next CA Communication

2017-03-27 Thread Ryan Sleevi via dev-security-policy
Gerv,

I'm curious whether you would consider 18 months an appropriate target for
a deprecation to 1 year certificates. That is, do you believe a transition
to 1 year certificates requires 24 months or 18 months, or was it chosen
simply for its appeal as a staggered number (1 year -> 2 year certs, 2
years -> 1 year certs)?

On Mon, Mar 27, 2017 at 5:10 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 17/03/17 15:30, Gervase Markham wrote:
> > The URL for the draft of the next CA Communication is here:
> > https://mozilla-mozillacaprogram.cs54.force.com/Communications/
> CACommunicationSurveySample?CACommunicationId=a050S00G3K2
> >
> > Note that this is a _draft_ - the form parts will not work, and no CA
> > should attempt to use this URL or the form to send in any responses.
>
> Here is another proposed question:
>
> Certificate Validity Periods
>
> Your attention is drawn to CAB Forum ballot 193, which recently passed.
> This reduces the maximum permissible lifetime of certificates from 39 to
> 27 months, as of 1st March 2018. In addition, it reduces the amount of
> time validation information can be reused, from 39 to 27 months, as of
> 31st March 2017. Please be aware of these deadlines so you can adjust
> your practices accordingly.
>
> Mozilla is interested in, and the CAB Forum continues to discuss, the
> possibility of further reductions in certificate lifetime. We see a
> benefit here in reducing the overall turnover time it takes for an
> improvement in practices or algorithms to make its way through the
> entire WebPKI. Shorter times, carefully managed, also encourage the
> ecosystem towards automation, which is beneficial when quick changes
> need to be made in response to security incidents. Specifically, Mozilla
> is currently considering a reduction to 13 months, effective as of 1st
> March 2019 (2 years from now). Alternatively, several CAs have said that
> the need for contract renegotiation is a significant issue when reducing
> lifetimes, so in order that CAs will only have to do this once rather
> than twice, another option would be to require the reduction from 1st
> March 2018 (1 year from now), the current reduction date.
>
> Please explain whether you would support such a further reduction dated
> to one or both of those dates and, if not, what specifically prevents
> you from lending your support to such a move. You may wish to reference
> the discussion on the CAB Forum public mailing list to familiarise
> yourself with the detailed arguments in favour of certificate lifetime
> reduction.
>
>
> Comments, as always, are welcome.
>
> Gerv
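The lifetime caps quoted above reduce to a simple arithmetic check. A sketch, assuming the 27-month limit corresponds to the 825-day cap as later codified in the BRs and approximating the old 39-month limit as 1186 days (both day counts are my gloss, not figures from this thread):

```python
from datetime import date, timedelta

BALLOT_193_EFFECTIVE = date(2018, 3, 1)   # effective date quoted above
NEW_CAP = timedelta(days=825)             # ~27 months (assumed figure)
OLD_CAP = timedelta(days=1186)            # ~39 months (approximation)

def lifetime_compliant(not_before: date, not_after: date) -> bool:
    """True if a certificate's validity period fits the applicable cap."""
    cap = NEW_CAP if not_before >= BALLOT_193_EFFECTIVE else OLD_CAP
    return not_after - not_before <= cap

# ~823 days: fits the post-2018 cap.
assert lifetime_compliant(date(2018, 3, 1), date(2020, 6, 1))
# ~915 days: exceeds it.
assert not lifetime_compliant(date(2018, 3, 1), date(2020, 9, 1))
```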


Re: Next CA Communication

2017-03-27 Thread Ryan Sleevi via dev-security-policy
On Mon, Mar 27, 2017 at 10:18 AM, Ryan Sleevi  wrote:

> Gerv,
>
> I'm curious whether you would consider 18 months an appropriate target for
> a deprecation to 1 year certificates. That is, do you believe a transition
> to 1 year certificates requires 24 months or 18 months, or was it chosen
> simply for its appeal as a staggered number (1 year -> 2 year certs, 2
> years -> 1 year certs)
>

I suppose one further consideration - the proposal you outline would forbid
issuance. As we saw with the SHA-1 deprecation, there are a variety of PKI
communities which may rely on long-lived certificates for other purposes,
but otherwise in no way interact with Mozilla applications.

Would it be useful to also ask whether there would be impact from Mozilla
applications failing to trust such certificates, while otherwise
continuing to permit their issuance? While this carries with it some
compatibility and interoperability risk - due to the issuance continuing
independent of applications - I suspect that if applications could agree
upon a target date to reduce the trust in acceptance, this might be a
sufficient safeguard against the "first mover" problem and allow Mozilla to
obtain its objectives without explicitly prohibiting issuance.

That is a separate, but related, question, but useful to consider if you
will be asking all CAs, some of whom may have reasons due to other PKIs
that would make them concerned about potential impact. However, if
Mozilla's goals and desires would include seeing those PKIs are operated
independently of the Web PKI, then forbidding issuance would be appropriate.


Re: Notice of Intent to Deprecate and Remove: Trust in Symantec-issued Certificates

2017-03-23 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 23, 2017 at 1:38 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 23/03/2017 17:09, Ryan Sleevi wrote:
>
>> (Posting in a Google Capacity)
>>
>> I just wanted to notify the members of this Forum that we have started an
>> Intent to Deprecate and Remove, consistent with our Blink process, related
>> to certain certificates issued by Symantec Corporation.
>>
>> This is a proposed plan, not a final commitment, and we welcome all
>> feedback from members of this Forum to understand the risks and challenges.
>> To understand the goals of this process, you can find more details at
>> https://www.chromium.org/blink
>>
>> You can participate in this discussion at https://groups.google.com/a/ch
>> romium.org/forum/#!topic/blink-dev/eUAKwjihhBs
>>
>>
> According to the linked document, Google is intending to distrust *all*
> Symantec issued certificates with a validity longer than 9 months,
> which is less than the 12 month validity normally being the minimum
> that site operators can purchase from CAs such as Symantec.
>
> It is also worth noting that this is apparently scheduled to occur less
> than 12 months from now (The document refers to Chrome/Blink version
> numbers with no associated dates, but contains a mention that one of
> the relevant releases would happen over the "winter holiday",
> presumably Christmas 2017).
>
> Since I know of no commercial (as opposed to free) CAs that routinely
> sell certificates with a duration of less than 12 months, this seems
> highly draconian and designed to drive Symantec out of the CA business.
>
> It also seems to ignore every mitigating factor discussed in this
> group, including those posted by Symantec themselves.
>
> For example the cited number of "30,000" affected certificates seems to
> come from the number of certificates that Symantec is actively double
> checking to ensure they were *not* misissued in a way similar to the
> original 127.
>
> It would seem that the only way to remain interoperable with both
> Chrome and the legacy devices and systems that trust only Symantec
> owned roots, would be if Chrome's TLS implementation somehow identified
> itself to servers as being a Chrome-based implementation before servers
> present their certificate.
>
> The computing world at large would be significantly inconvenienced if
> Symantec was forced to close down its CA business, in particular the
> parts of that business catering to other markets than general WebPki
> certificates.


(In Google Capacity)

By no means do I want to insist you must discuss on blink-...@chromium.org,
but I do want to highlight that the process follows our Blink Process for
assessing risk, and you're more than welcome and encouraged to share this
feedback there to ensure it's considered in relation to the proposed plan
for Chrome.

If you wish to only address this relative to the Mozilla community, please
feel free to do so here, and I in no means want to tell you where or how to
do so. I can only state that communication to blink-...@chromium.org is
what will inform Google Chrome's approach to this matter.

All the best,
Ryan


Re: Symantec: Next Steps

2017-03-16 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 16, 2017 at 6:01 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 09/03/17 13:32, Ryan Sleevi wrote:
> > (Wearing Google hat only for this statement)
> > Have you considered having this discussion in the CA/Browser Forum?
> Google
> > had planned to discuss this very topic at our upcoming F2F about how to
> > address this, and would be very interested in collaborating with Mozilla
> on
> > this. I mentioned this recently to Kathleen at the WebTrust TF meetings,
> > but apologies for not mentioning to you as well.
>
> This sounds like a good idea. Do we want to get this added in an open
> slot? There may still be time.
>

Unconference future discussion. If CAs aren't interested in it, and it
doesn't get discussed, then that seems like a suitable signal to discuss in
the browser policies, doesn't it?


> > I don't understand why you
> > believe it's relevant the act of "Mozilla requiring disclosure of the
> > audits". Can you help me understand where, in the policy, that's
> required?
>
> I'm not sure where your text in quotes comes from, and nor can I work
> out the referent of "it", so I don't understand this question.
>

The quoted text was attempting to summarize the following paragraph from
you:

"""No, because in the case of a sub-CA, we require audits. And when we
receive them, if they were done by unqualified parties, the CA would
need to flag that, and we would make a judgement about that party's
suitability at the time. The issue here arises that, because of the way
things are set up, these RA's audits were not submitted to Mozilla, and
so Symantec didn't have to resolve the Schrodinger's Cat of
(qualified|not qualified and need us to make a judgement)."""

The question here is that it seems you have hinged the
acceptability/unacceptability of the auditor on the basis of whether or not
it was required to be disclosed.

Or, put differently, it sounds as if you suggest the only obligation a CA
has to ensure their DTP auditors are qualified for the task at hand is if,
and only if, Mozilla requests those audits. In the absence of that request,
the CA is allowed to make their own individual determination. Further, it
seems that you are suggesting that if a CA makes that determination, and
it's incorrect, that's not a failure upon the CAs part, because they made
'a decision', and the relevant portions of Mozilla policy only apply to the
'next' audit.

In effect, it makes the question of 'qualified' auditor one which can never
look retrospectively to prevent issues or instill a duty of care, and it
only applies forward thinking, to the 'next' audits. Or, put differently,
it sounds as if you're suggesting that Symantec, having made a
determination of qualified without input from Mozilla, has sufficiently
abided by Mozilla's policy.

I'm not sure that's a consistent read with the goals or policy stated.
Rather, by making that determination without input from Mozilla, Symantec
has instead taken on full liability for that audit. If, as in this case,
evidence appears that suggests the auditor is not qualified, then the root
issue rests with Symantec for not ensuring that the auditor was qualified.
Similarly, all other CAs who are accepting audits from third-parties
(whether DTPs or sub-CAs), and which are not ensuring those meet the
definition of qualified, similarly accept risk of violation. That risk can
be mitigated - for example, showing that the auditor is appropriately
licensed at the time they conducted the audit, rejecting audits that are
clearly problematic - but it's a risk born through exercising the
capability to delegate.

Put one last way (since this is such a thorny issue), I read your reply in
the above quoted text to say "Mozilla requires that the CA make a decision.
But it doesn't have to be a right one, and it doesn't have to use the same
data we would." I'm trying to push back on that, which is every CA has an
obligation to make the Right Decision - they have the tools at their
disposal to do so, but uncertainty or perceived risk can and should only be
mitigated by public consultation before - not after.


Re: Include Renewed Kamu SM root certificate

2017-03-14 Thread Ryan Sleevi via dev-security-policy
On Tue, Mar 14, 2017 at 5:10 PM, tugba onder via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Upon your request, we re-examined the current version of CAB BR (v.1.4.2)
> with our CPS document that describes our way of doing business. We did this
> work under these main headings; Identity Proofing, Technologies, Life Cycle
> Management, Certificate Profiles and Auditing Requirements. We read all
> related titles in CPS and CAB BR 1.4.2. Besides, so as not to miss any
> amendment item stated in section 1.2.2 (Relevant Dates) of CAB BR v1.4.2,
> we have stated the Kamu SM approach for each item. The table is in this link:
>
> https://drive.google.com/file/d/0B3Yp-DkgL_W-OTR3cWxuOE84bmM/view?usp=
> sharing
>
>
> As a result, we could not notice any major difference between our
> practices and CAB BR v.1.4.2. The minor differences stated in the table
> will be fixed as soon as possible and be ready for the next audit. We hope
> that our examination meets your request and if there exists any other point
> you want to know please do not hesitate to ask.
>

Fantastic! I really appreciate you taking a second look, and I'm glad the
extent of the misalignment was limited to the previously identified
sections. I think that should be sufficient information to proceed.


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 3, 2017 at 11:18 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 04/04/2017 05:03, Ryan Sleevi wrote:
>
>> On Mon, Apr 3, 2017 at 7:18 PM, Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> I see it as part of the underlying reasoning.  Mozilla et al wants
>>> disclosure in order to take action if the disclosed facts are deemed
>>> unacceptable (under policy or otherwise).  Upon receiving the
>>> disclosure, the root program gains the ability to take counteractions,
>>> such as protesting to the issuing CA, or manually blocking the unwanted
>>> SubCA cert via mechanisms such as OneCRL.  The rules don't make the CAs
>>> wait for the root programs to get upset, but must allow at least zero
>>> time for this to happen.
>>>
>>
>>
>> That's not correct.
>>
>>
> So why does Mozilla want disclosure and not just a blanket X on a form
> stating that all SubCAs are adequately audited, follow BRs etc.?
>
> What use does Mozilla have for any of that information if not to act on
> it in relation to trust decisions?
>

The incorrect part is that you're assuming it's a blocking process. It's
not - it's entirely asynchronous. Us folks who actually review CP/CPSes are
barely handling it at the root layer, let alone the intermediate. That's
why the CCADB - and the automation being developed by Microsoft and the
standardization I've been pushing - is key and useful.

The tradeoff has always been that CAs are granted the flexibility to
delegate, which intentionally allows them to bypass any blocking browser
dependencies, but at the risk to the issuing CA that the issuing CA may be
suspended if they do an insufficient job. It's a distribution of workload,
in which the issuing CA accepts the liability to be "as rigorous" as the
browser programs in return for the non-blocking flexibility of the
subscriber CA. In turn, that risk proposition (of the issuing CA) is offset
by the cost they impose on the subscriber CA.

The goal is not to manually approve every Sub-CA.


>>> Yes, across all root programs, that is the key point, see #0.
>>
>> You're still incorrect here.
>
> Not an argument.
>

I already presented the argument demonstrating why the goal - of explicitly
having every CA aligned - is not part of the goals. The goal is not to
introduce conflicting requirements, but you've demonstrated no such
conflicting requirements beyond the abstract hypothetical (and
non-participant and unknown) browser. No one is requesting what you're
proposing they are. It's a strawman.


> I have highlight how the goals that I perceive must underlie the
> disclosure requirement, combined with the general imperative of the
> Golden Rule ("do onto others..." or "You must love thy neighbor as
> thyself") leads to a logical conclusion through a number of logical
> steps.
>

I understand that, but you're misguided and incorrect in its application,
which I've tried several ways to highlight for you.


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 3, 2017 at 7:18 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I see it as part of the underlying reasoning.  Mozilla et al wants
> disclosure in order to take action if the disclosed facts are deemed
> unacceptable (under policy or otherwise).  Upon receiving the
> disclosure, the root program gains the ability to take counteractions,
> such as protesting to the issuing CA, or manually blocking the unwanted
> SubCA cert via mechanisms such as OneCRL.  The rules don't make the CAs
> wait for the root programs to get upset, but must allow at least zero
> time for this to happen.


That's not correct.


>> I believe you're suggesting simultaneously across all root programs, is
>> that correct? But that's not a requirement (and perhaps based on the
>> incorrect and incomplete understanding of point 1)
>
> Yes, across all root programs, that is the key point, see #0.
>>
You're still incorrect here.


>>> Also, it is argued as a logical consequence of #3, #2, #0, i.e.
>>> assume another root program enacts similar rules.  Once the SubCA cert
>>> is disclosed on the CCADB for Mozilla and Chrome, the SubCA operator
>>> can download the SubCA cert from the CCADB and use it to make users of
>>> that other root program trust issued certificates before that other
>>> root program received the disclosure.
>>>
>>
>> I see zero problem with the SubCA receiving the certificate
>> immediately from the issuing CA, even prior to disclosure in the
>> CCADB.  The proposed requirement is that the SubCA not issue prior to
>> confirming the disclosure has been made.
>>
>
I agree with Peter here.


> Not receiving the certificate prevents a rogue or rookie SubCA from
> meaningfully issuing prematurely.  After all, SubCA operators are only
> humans, and usually less experienced in all this than long time major
> CA operators.
>

That's not a problem we're trying to solve here. That's great that you
care, but you've also highlighted the many problems with your proposal, so
perhaps it is a bad goal no longer worth discussing?


>>> By symmetry, if Mozilla has to shut down the CCADB for maintenance for
>>> 2 days, another root program might receive and publish the disclosure
>>> first, causing the same problem for users of Mozilla and Chrome
>>> products.
>>>
>>
>> I'm not sure where you see the "problem for users" here.  This is no
>> different than what happens today for many CAs.
>>
>>
> The problem for users is that their Browser/client trusts a certificate
> from a SubCA that their trusted root program has never seen, and thus
> not even had a chance to form an opinion about.


That's great. That's not the goal. The rest logically shakes out as
irrelevant.


Re: Symantec Response D

2017-04-10 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 10, 2017 at 10:55 AM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Issue D: Test Certificate Misissuance (April 2009 - September 2015)
>
> Symantec has provided complete investigation results for this issue. They
> can be found at https://www.symantec.com/page.jsp?id=test-certs-update#
>
> We would like to further clarify the following statement in this issue
> summary: "Some of the test certificates (including one for www.google.com)
> left Symantec's network because they were logged in CT; Symantec claims no
> others did."
>
> We believe this statement is inaccurate for two reasons.
>
> First, the action of logging certificates to CT does not necessarily mean
> that the certificates left Symantec's network. Beginning January 1, 2015,
> Symantec began logging all EV certificates in CT log servers. Given that
> certificates are logged in CT at the time of creation in our system, any
> distribution of certificates that we issue is a second, independent step.
>
> Moreover, at the time we investigated this incident, we conducted multiple
> scans for domains used in test certificates. Following a thorough
> investigation process, we found no evidence that these certificates were
> used on external servers. Accordingly, we have no evidence that any of the
> test certificates involved in this investigation left Symantec's network.
>

Hi Steve,

Quick questions.

1) It's clear that some of the test certificates did leave Symantec's
network. The act of logging them in CT establishes this. Did you mean to
state no private keys left Symantec's networks?

2) If so, doesn't your audit conclude that you lack sufficient evidence and
documentation to objectively determine this, and merely _believe_ no
private keys left Symantec's network? This would be consistent with your
conclusion (beginning "Accordingly"); however, the existence of your
response is inherently contradictory. Clarification would be greatly
appreciated.


Re: Symantec Response P

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick questions:

1) Why was Symantec unable to operate the CRL service for Unicredit?
2) Pursuant to Section 5.7.1 of the Baseline Requirements, Symantec, and
all of its sub-CAs, are required to document business continuity and
disaster recovery procedures. Had Unicredit been operating according to the
Baseline Requirements, it would have documented such a plan for review.
  a) What are Symantec's conditions for activating this plan for Symantec?
  b) How regularly do you test this plan for Symantec?
  c) What requirements do you have regarding awareness and education?
3) Pursuant to the Baseline Requirements, Section 4.9.1.2, Item 8, Symantec
was only permitted to not revoke this subordinate if the Issuing CA
(Symantec) had made arrangements to continue maintaining the CRL/OCSP
repository.
  a) Can Symantec clarify what it believes is permitted and not permitted
under their interpretation of this section?
  b) Please specifically document what arrangements were made, if any,
such as by providing contracts and agreements.
  c) Please specifically document what steps Symantec took, if any, to
ensure those requirements were met.


Re: Symantec Response J

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick question:

1) You identified that the root cause was related to a deprecated, but not
removed, interface. Your remediation was to remove that interface.
  a) How many deprecated, but unremoved, interfaces does Symantec have, as
of 2017-04-10?


Re: Symantec Response B

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Some quick follow-ups:

1) You're arguing that "the issuance of this cert didn't impose risk on
anyone but this specific customer"
  a) What factors led you to that decision?
  b) What process does Symantec have in place to make such a determination?
  c) Does such process continue to exist?
  d) If Symantec is incorrect in its determination, for this incident,
past, or future incidents, what do you believe should be an appropriate
response?

2) You've noted that you did not disclose it due to "contractual
obligations to protect the customer's privacy", which "remains in force".
  a) If a contractual obligation is in conflict with the Baseline
Requirements, do you have a process defined to resolve that conflict? If
so, please fully describe it.
  b) If a contractual obligation is in conflict with other Root Program
requirements, do you have a process defined to resolve that conflict? If
so, please fully describe it.
  c) Please share the details of that contract, as well as any other such
contracts that may exist, to the extent of such privacy requirements. If
you're unable to do so, please fully explain why.
  d) Specifically, how many such contracts exist?
  e) Does Symantec have a procedure in place for when no such contracts
exist (e.g. in the case of Example D, where Symantec failed to disclose to
affected parties, citing "confidentiality", where no such contract existed)?
  f) What steps has Symantec taken, if any, to eliminate such clauses, in
order to ensure that appropriate transparency for the ecosystem supersedes
that of customer obligations, particularly when faced with situations like
1.d?


Re: Symantec Response E

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick questions:

1) To confirm, your response states nothing about any improved procedures
or testing put into place regarding this.
  a) Can you describe what, if anything, Symantec did, besides "fix the bug"?
  b) What assurances should the community have regarding Symantec's
commitment to proactively identify bugs versus reactively respond to them,
on the basis of this disclosure?

2) Symantec did not disclose the number of certificates affected. That is,
the response speaks of "exploitation" or "adverse impact", but that
assessment is based on Symantec's judgement.
  a) How many certificates were affected?
  b) What steps did Symantec take regarding such certificates?
  c) Did you revoke them, pursuant to Baseline Requirements, Section
4.9.1.1, Items 4 and 9?
  d) If not, why not?


Re: Symantec Response L

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick questions:

1) You identified that Symantec believed it was its responsibility to
ensure your customers' businesses remained uninterrupted.
  a) What is Symantec's process for determining which of these concerns
(Baseline Requirements vs customer business) has priority?
  b) Has that process changed in response to this incident?

2) You stated that "browsers didn't process certificate policy extensions
content during path building". This fails to clarify whether you believe it
was a Baseline Requirements violation; the Baseline Requirements make no
such statements regarding policy processing. Further, no browser has,
except for EV, made use of any policy IDs beyond path building.
  a) Does Symantec believe this was a Baseline Requirements violation?
  b) If so, why did Symantec fail to revoke this certificate, consistent
with Baseline Requirements, Section 4.9.1.2, Item 5?
  c) If so, why did Symantec fail to revoke this certificate, consistent
with Baseline Requirements, Section 4.9.1.2, Item 10?

3) Recognizing this risk: under the Baseline Requirements, Section 9.6.3,
the CA is contractually obligated to include in its Terms of Use a series
of requirements, including Item 8, "An acknowledgement and acceptance that
the CA is entitled to revoke the certificate immediately if the Applicant
were to violate the terms of the Subscriber Agreement or Terms of Use"
  a) Does Symantec's Subscriber Agreement or Terms of Use with the FPKI
include an obligation to issue consistent with Symantec's CP/CPS?
  b) Does Symantec's relevant CP/CPS state that it complies with the
Baseline Requirements?
  c) If so, does Symantec believe that such a requirement flows down to
subordinate CAs?
  d) If not, why not?

4) What steps has Symantec taken, if any, with regard to its Subscriber
Agreements or Terms of Use in light of this?

5) What steps has Symantec taken, if any, to ensure there is appropriate
transparency regarding Symantec's responsibility to their customers versus
responsibility to Root Program requirements?
  a) Specifically, what steps has Symantec taken to ensure all necessary
and sufficient information to independently evaluate that tradeoff is
available publicly?
  b) Specifically, what steps has Symantec taken to ensure that if one or
more Root Programs disagree with their assessment, that appropriate steps
can and will be taken by Symantec?
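As context for question 2, the policy processing that browsers historically
skipped can be sketched as a simplified version of the RFC 5280 valid-policy
intersection (an illustrative model only, not a full path-validation
implementation; the OIDs below are arbitrary examples):

```python
# Simplified RFC 5280 certificate-policy processing: a verifier intersects
# the certificatePolicies OID sets asserted along the chain; an empty
# intersection means no acceptable policy survives path validation.
ANY_POLICY = "2.5.29.32.0"  # the anyPolicy OID matches everything

def valid_policies(chain_policies):
    """Intersect policy OID sets from root to leaf; anyPolicy is a wildcard."""
    current = {ANY_POLICY}
    for cert_policies in chain_policies:
        asserted = set(cert_policies)
        if ANY_POLICY in current:
            current = asserted
        elif ANY_POLICY not in asserted:
            current &= asserted
    return current

# A cross-certified chain whose intermediate asserts a policy the leaf lacks:
chain = [{"2.16.840.1.101.3.2.1.3.1"}, {"2.23.140.1.2.2"}]
print(valid_policies(chain))  # set() -- no policy common to the whole chain
```

A browser that ignores this step accepts the chain regardless, which is the
crux of the "meddling browsers" framing in the response.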


Re: Symantec Response Q

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick questions:

1) What does Symantec believe is a reasonable timeframe to remedy these
issues?
2) You stated 18 months, but the issues were present from the 2013/2014
audits, the 2014/2015 audits, and the 2015/2016 audits, all as noted in
Issue V. In total, this period spans 30 months, if we assume the split
audits beginning 2016-06-16.
  a) How do you explain this discrepancy between 18 months and 30 months?
  b) How should the community see this matter?


Re: Symantec Response F

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick follow-up.

1) Your audit reports failed to identify what steps Symantec was taking to
proactively resolve these issues. As further demonstrated by Issue Q,
Symantec failed to remedy these issues.
  a) What steps, if any, did Symantec take upon receiving a qualified audit?
  b) Why did these steps fail?
2) What is materially different between Symantec's past attempts to remedy
the issues (in Issue F and Issue Q) and any proposed response to the latest
set of issues (Issue V, Issue X)?

In particular, while Issue F is "problematic", it is more concerning that
this recurred in Issue Q. Highlighting any changes Symantec made in
response to these is useful, as would be highlighting the delta between
Issue Q and the current audits, which speak to Issue V and Issue X. I
encourage Symantec to reconsider what it considers appropriate to disclose,
because this fundamentally affects the perceived trustworthiness of any
Symantec proposals for remediation.


Re: Symantec Response N

2017-04-10 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Quick questions:

1) What steps, specifically, has Symantec taken to ensure such clarity is
provided in the future?
2) What steps, specifically, has Symantec taken to ensure appropriate
review prior to the execution of such processes?

These questions apply to any process involving CA key material, including,
but not limited to, certificate signing ceremonies or the bringing online
of an offline root.


Re: Symantec Response B

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 6:02 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ryan,
>
> On 10/04/17 16:38, Ryan Sleevi wrote:
> > 1) You're arguing that "the issuance of this cert didn't impose risk on
> > anyone but this specific customer"
> >   a) What factors lead you to that decision?
>
> Can you lay out for us a scenario where this issuance might impose risk
> on someone else?
>

Sure. Consider the ecosystem risk if every CA were to continue issuing
1024-bit certs. This imposes a risk on the collective users of the
ecosystem, but notably Mozilla users, when accessing these sites, because
it provides a weaker security guarantee than other sites. That is, it means
the 'effective' security of the lock is gated on 1024-bit.

Similarly, if we accept that 1024-bit does no one but the subscriber any
harm, then it meaningfully prevents disabling 1024-bit support for leaf
certs, both for Mozilla and the ecosystem.
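To make that point concrete, the check at issue is trivially automatable; a
minimal sketch of a lint against the BR 6.1.5 minimum RSA modulus size
(illustrative only; the certificate list and names below are hypothetical):

```python
# Baseline Requirements section 6.1.5: subscriber certificates must carry an
# RSA modulus of at least 2048 bits. A CA or relying party can lint issuance
# against that floor; the entries below are hypothetical examples.
MIN_RSA_MODULUS_BITS = 2048

def flag_weak_keys(certs):
    """Given (subject, rsa_modulus_bits) pairs, return subjects below the BR floor."""
    return [subject for subject, bits in certs if bits < MIN_RSA_MODULUS_BITS]

issued = [
    ("example.com", 2048),
    ("legacy.example.net", 1024),  # the kind of leaf key under discussion
    ("example.org", 4096),
]
print(flag_weak_keys(issued))  # ['legacy.example.net']
```

If each CA grants itself exceptions to this floor, no client can ever turn
off 1024-bit support, which is the ecosystem harm described above.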

Importantly though, I think the question highlights the principle at play
here - which is that Symantec seems to view the Baseline Requirements as "The
Baseline Suggestions that should be Requirements for our Competitors but
Recommendations for Us". That is deeply problematic, and it's useful to
understand from Symantec what factors go into such determinations, in
order to determine whether or not Symantec is, has been, or can be
trustworthy.


> > 2) You've noted that you did not disclose it due to "contractual
> > obligations to protect the customer's privacy", which "remains in force".
> >   a) If a contractual obligation is in conflict with the Baseline
> > Requirements, do you have a process defined to resolve that conflict? If
> > so, please fully describe it.
>
> Do you think this particular contractual obligation to privacy _is_ in
> conflict with the BRs? If so, which section?
>

The obligation itself, no, but the results of that obligation
unquestionably are.

I think it's in conflict with trustworthiness for Symantec to have policies
that would prevent it from meaningfully disclosing certificates that are
misissued (whether according to the Baseline Requirements or Symantec's
CP/CPS), because it prevents and impairs the ability to understand the
scope of the issues or the truthfulness of Symantec's claims.

I'm deeply concerned with the suggestion that details of BR violations
either cannot or should not be disclosed.


Re: Symantec Response L

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 6:31 AM, Gervase Markham  wrote:

> Hi Ryan,
>
> On 10/04/17 17:03, Ryan Sleevi wrote:
> > 2) You stated that "browsers didn't process certificate policy extensions
> > content during path building". This fails to clarify whether you believe
> it
> > was a Baseline Requirements violation, which makes no such statements
> > regarding policy building. Further, no such browser has, except for EV,
> > made use of any policy IDs beyond path building.
>
> Can you clarify: are you asking if Steve believes that the BRs require
> _browsers_ to do such processing of certificate policy extensions?
>

No. I'm asking if Symantec, through Steve, is intending to sound like a
Scooby Doo villain, or whether it's merely accidental that this reads as "I
would have gotten away with it, if not for you meddling browsers".

More specifically, Symantec has failed to respond as to whether or not they
agree with the facts presented and, if so, whether or not this represents a
Baseline Requirements violation, as suggested. The reply could be read as
suggesting "This was meaningfully technically controlled, it is simply that
browsers failed to enforce that."

This is problematic on multiple fronts, least of all because policy mapping
and IDs have never been a meaningful form of technical control in the Web
PKI, and so I'm hoping for further elaboration on the statement to ensure
it is not misinterpreted.


> Or if he believes that cross-certifying into a hierarchy which relies
> upon such extensions is a BR violation?
>

This.


Re: Symantec Response X

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 6:21 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ryan,
>
> On 10/04/17 17:20, Ryan Sleevi wrote:
> > 1) You stated that this partner program applies to non-TLS certificates.
> > The audit for both STN and for the RAs fails to make this distinction.
> For
> example, audits are listed related to the issuance of TLS
> certificates.
>
> The audits linked to from the wiki page relating to E-Sign and MSC
> TrustGate don't seem to have any mention of TLS certificates. Can you
> explain which audits you are referring to above that do mention them?
>

The audits mention the CP/CPS has been evaluated as part of the scope of
the audit.

The CP/CPS mentions the issuance of TLS certificates as part of the
hierarchy. For example,

"E-Sign provides its services in accordance with its Certificate Policy and
Certification Practices Statement"


Re: Symantec Response T

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 12:42 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In various rounds of questioning at the time we were focussing purely on
> this incident, I asked Symantec what processes they had in place for
> checking that the RAs were doing what they should. Their answer was
> "WebTrust audits". So I believe they have already said that no such
> examination was done. I'm sure they'd be happy to clarify, though.
>

In attempting to make an objective evaluation of the trustworthiness of
Symantec, in either its past operations or as a future predictor, we
essentially need to understand

1) That Symantec understood the gravity of the situation
2) That Symantec took it seriously and responded appropriately relative to
the trust it was granted
3) That Symantec remains committed to doing so in the future, and with
specific plans to identify and remedy the issues

On the basis of the information provided, the answer to #1 appears to be
that they did not (or that they "disagreed"), the answer to #2 "they did
not", and the answer to #3 "they are not, and have no specific plans".

Symantec is asserting its processes were trustworthy, but the evidence
provided wholly contradicts that conclusion (in my opinion). I'm looking to
develop a meaningful understanding of what Symantec did, so that it can
demonstrate that what it did was reasonable and expected, or to acknowledge
there were deficiencies that have a remediation plan. The current statement
appears to be that the processes were appropriate and had no deficiencies -
despite the Baseline Requirements clearly contradicting this - and thus it
seems appropriate to suggest that Symantec should not be trusted and/or
have its trust meaningfully reduced to negate the impact that these
deficient practices have on the ecosystem.

The burden is two-fold:
1) Are the facts correct? It does not appear Symantec has disputed these,
except with respect to the RA partner audits (for which it provided
evidence that supports the current conclusions and refutes their
disagreement).
2) Are there plans for the future, or an approach to the past, that are
meaningful to consider when evaluating trustworthiness? At present,
Symantec has not shared any, beyond the RA remediation plans, which are in
conflict with the Baseline Requirements, its CP/CPS, and its Subscriber
Agreements.


Re: Symantec Issues doc updated

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 6:49 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I have attempted to integrate the information provided by Symantec into:
> https://wiki.mozilla.org/CA:Symantec_Issues
> and started to draw some conclusions where that is warranted.
>
> There are of course still open questions from myself, Ryan and others,
> and so the truth relating to some incidents is not yet clear.
>

Can you clarify which issues you believe this relates to?

Many of my questions concerned Symantec's approach to handling incidents,
and went unaddressed or received meaningfully deficient answers in ways
that substantially undermine trust, but they bore only a limited relation
to the material facts.

The only point where I believe Symantec disagreed with the summary was
Issue W, but it does not appear Symantec bothered to support that claim
with objective data, rather than with statements. That is, that Symantec
disagrees is useful to know, but if Symantec has failed to provide any
evidence with that disagreement, I do not believe it should block reaching
a conclusion.

Given that Symantec has a routine habit of exceeding any reasonable
deadline for response, at what point do you believe it is appropriate for
the Mozilla Root Store to begin discussing what steps can or should be
taken with respect to the documented and supported incidents, for which
Symantec has not provided counter-factual data?

Does the Mozilla Root Program seek to consider the intent of the CA that
violated the Baseline Requirements repeatedly for a span of several years?
If so, does it have a process at which point it will stop considering
feedback, versus allowing a CA to indefinitely delay meaningful action to
protect users?

It's unclear from your remark "Started to draw some conclusions where that
is warranted" what you see as the process and next steps. Perhaps you could
clarify what you imagine happening next, and on what timeline, to provide
clarity both to Symantec and the general population here. I must admit, I'm
quite confused as to where things stand, given that many items have
conclusions to them.

With respect to the conclusion to Issue T "Symantec's reaction to the
discovery of these problems was unarguably swift and comprehensive.", I
would disagree with this. Symantec's response was not swift, relative to
other CAs that have been informed of issues. It was not comprehensive -
Symantec failed to identify the issues until questioned, and still maintains,
in the latest response, that there is a conclusion unsupported by the
evidence they have shared with the community. Their timeline for
responsiveness was not swift - we're still discussing this specific issue,
and it was first reported in Issue T. I would be happy to find evidence of
issues from other CAs that demonstrate a more thorough response or a more
timely response.

With respect to the conclusion to Issue T, "Their case is that WebTrust
audit monitoring should have been sufficient," it's unclear if you are
agreeing with that conclusion or simply restating Symantec's claims.

With respect to the conclusion to Issue V, "to specifically address the
GeoRoot audit status and remediation plan" - this was not reflected within
https://www.symantec.com/content/en/us/about/media/repository/23_Symantec_GeoTrust_WTBR_period_end_11-30-2016.pdf
, the relevant audit for the roots, ending on 2016-11-30. Do you believe
this should play into any determination about the reliability of KPMG
audits (to discover this) and/or of Symantec (to disclose this to their
auditors)?


Re: Symantec Response T

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 10, 2017 at 10:57 AM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Issue T: RA Program Misissuances (January 2010 - January 2017)
>
> Program Background:
>
> Symantec has operated an RA program designed to deliver a superior
> customer experience in global markets where language skills, understanding
> of local business requirements, and physical local presence are necessary.
> RA partners have supported various certificate types, including those for
> publicly trusted SSL/TLS.
>
> The RA program for publicly trusted SSL/TLS authorized appropriately
> trained personnel at select RA partners to complete all steps for
> authentication, review, and certificate issuance.
>
> In 2011, prior to ratification of the CA/Browser Forum Baseline
> Requirements, Symantec scaled back the scope of the RA program for publicly
> trusted SSL/TLS to support only those partners whose scale of business and
> investment in the future success of that business warranted the additional
> cost associated with supporting the then-new BRs. Since 2013, there have
> been only 4 RA partners still capable of processing and issuing publicly
> trusted SSL/TLS certificates: CrossCert, Certisign, Certsuperior, and
> Certisur.
>
> Symantec has had multiple controls in place to ensure these RAs'
> compliance with the BRs:
>
> Documentation:
> 1. Symantec operates an internal Knowledge Base ("KB") for its
> authentication staff and RA partners that contains detailed step-by-step
> procedures for performing each of the tasks required to validate the
> identity asserted in a certificate request.
> 2. The KB reinforces acceptable and unacceptable sources of validation
> information and processes using a subset of the information in the BRs.
> 3. The KB explains request flagging, flag reasons, and flag clearing
> procedures.
>
> Training & Exams:
> 1. Topics include BR changes, CPS changes, process changes resulting from
> industry incidents regardless of the CA involved, and a review of
> Symantec's procedures that extend the Baseline Requirements.
> 2. Exams are modified and retaken annually as criteria to renew individual
> access certificates or after significant internal or external process
> changes.
>
> Technology During Authentication:
> 1. Each request is screened for trade compliance, high-risk names,
> potential phishing (strings used in scam domains, high-profile brands), and
> other potentially risky content such as "test". Potential failures are
> flagged, preventing RA issuance, until and unless further review and
> analysis is completed.
> 2. Risk flags require manual override by authentication personnel -
> internal or RA personnel as appropriate - for certificate issuance to
> proceed. Flag clearing privileges are only granted to personnel who
> have completed the requisite training and passed appropriate exams.
>
> Technology Pre- and Post- Issuance:
> 1. Each request is screened to ensure elements outside of the subject
> information are BR compliant (e.g. SAN fields are complete, proper validity
> limits are in place, 2048 bit or higher key lengths are used, etc.). This
> scan is done after Authentication personnel approve the request and before
> it is issued. These checks cannot be overridden.
> 2. Daily, we rescan all certificates issued on the prior day using these
> same checks.
>
> Audit:
> 1. We requested independent WebTrust audit reports from RAs and assessed
> them for material findings pursuant to BR 8.4 regarding WebTrust audited
> Delegated Third Parties. See issue V addressing audits.
>
> Customer-Facing Controls:
> 1. Symantec supports Certification Authority Authorization, putting
> control of authorized CAs in the hands of customers.
> 2. Symantec logs publicly trusted certificates to Certificate Transparency
> Logs and offers a CT monitor to provide visibility for all customers to
> enable detection of suspect certificates.
>
> CrossCert Test Certificate Issue:
>
> On January 19, 2017, Andrew Ayer, an independent researcher posted the
> results of an analysis of public Certificate Transparency logs through
> which he identified roughly 270 instances of suspicious certificates issued
> by multiple CAs, including Symantec, containing the words "test" or
> "example" in the subject information.
>
> Symantec determined that 127 of these certificates were issued from
> Symantec operated CAs and that all 127 had been issued by the RA CrossCert.
> All but 31 had already expired or been revoked.
>
> Immediate Response
> Andrew Ayer's report was a certificate problem report under BR 4.9.5 which
> required us to begin an investigation within 24 hours, which we did. We
> determined that 127 certificates were in scope of the problem report.
>
> 1. On January 19, 2017, after becoming aware of this issue, Symantec
> disabled issuance privileges for all CrossCert staff.
> 2. On January 20, 2017, Symantec revoked the 31 still valid and active
> certificates. These 

Re: Symantec Response V

2017-04-11 Thread Ryan Sleevi via dev-security-policy
Hi Steve,

Some follow-up questions:

1) Symantec stated "This information was in their management assertions,
and repeated in the audit findings. So the poor audit situation was ongoing
and known."
  a) Symantec did not meaningfully provide any explanation, now, or in the
past, as to why it took multiple audit periods to resolve these issues. In
order to establish for Relying Parties that Symantec is trustworthy and
competent, please supply additional details as to why it took so long.
  b) On the basis of the provided information, it does not appear Symantec
asked their GeoRoot partners for audits. This is also consistent with the
reports from UniCredit's management, and we would be happy to reach out to
other GeoRoot partners regarding Symantec's communications over the past
several years. Given the issues such as Aetna, do you believe Symantec had
a material obligation to be diligent in obtaining an audit?
  c) What provisions, if any, did Symantec contractually have to ensure
such audits and compliance with Symantec's CP/CPS?
  d) Did such provisions include the ability for Symantec to revoke such
certificates for non-compliance, as required by the Baseline Requirements,
Section 9.6.3?
  e) If not, what steps have been taken to address this in all existing and
future business relationships?
  f) If it already existed, why did Symantec not exercise that option, as
required by the Baseline Requirements, Section 4.9.1.2?
  g) What assurances, if any, should Relying Parties have that Symantec
will execute its Baseline Requirements required obligations in the future,
given its documented failures in the past?

2) Symantec states "Because GeoRoot only operates under GeoTrust roots and
the associated CPS, the Symantec Trust Network and Thawte audits are fairly
stated."
  a) It has been identified that Symantec has failed to provide
BR-compliant audits for your RAs. Do you still believe this statement is
accurate?
  b) If so, why?
  c) If not, have you re-evaluated every statement Symantec has made in
response to these issues, to ensure that Symantec has not overlooked any
other material or contradictory evidence?

3) Do you believe the actions taken with respect to Aetna and Unicredit
were consistent with the Baseline Requirements?
  a) If so, specifically, what provisions?
  b) If not, what steps have you taken to ensure Symantec will abide by the
Baseline Requirements in the future, as is necessary and expected for
continued trust?


Re: Symantec Response B

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 11:44 AM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> The reply indicated that it was a non-browser application. So I understand
> that a browser should never see that certificate.
>

There's no way to objectively quantify or assess that, however. My question
still remains - what are the criteria for determining this, and what
process is in place for disagreement about this risk?


> The question is, can that certificate be used for authenticating something
> it shouldn't? And I guess that's not the case.
>

No. That is not the question.


Re: Symantec Response X

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 12:33 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> E-Sign's CPS URL is given in its audit statement as:
> https://www.e-sign.cl/uploads/cps_esign_388.pdf
>
> Grepping that document for "TLS" gives no hits. Can you help me some more?
>

para Certificados de servidor Web ("for Web server Certificates") - Table 4

Section 3.1.1.1 "subjectAltName" including type dNSName and iPAddress

Also, search SSL. Not TLS :)


> E-sign appear to be a Symantec SSL reseller:
> https://www.e-sign.cl/soluciones/seguridad
> but of course, I'm sure many companies are, and that's not necessarily a
> problem.
>

Sure, but then such activities would not be audited or part of its CP/CPS,
as that would be handled by the issuing CA that performs these roles.


>
> MSC Trustgate's audit statement gives no CPS URL.
> https://cert.webtrust.org/SealFile?seal=2127=pdf


https://www.msctrustgate.com/repository.htm

https://www.msctrustgate.com/pdf/MSC%20Trustgate%20CPS%2001OCT2012%20V3%203%208%20final.pdf

Which has Symantec's logo on it. And states

"At this time, the domain-validated and organization-validated SSL
Certificates issued by MSC
Trustgate.com CAs under this CP are governed by the CABF Requirements. "

Further, its CPS states

"MSC Trustgate.com is a “Processing Center,” as described in CP § 1.1.2.1.2,
which means MSC Trustgate.com has established a secure facility housing,
among other things, CA systems, including the cryptographic modules holding
the private keys used for the issuance of Certificates. MSC Trustgate.com
acts as a CA in the STN and performs all Certificate lifecycle services of
issuing, managing, revoking, and renewing Certificates. "


Re: Symantec Issues doc updated

2017-04-11 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 11, 2017 at 12:53 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > "to specifically address the
> > GeoRoot audit status and remediation plan" - this was not reflected
> within
> > https://www.symantec.com/content/en/us/about/media/
> repository/23_Symantec_GeoTrust_WTBR_period_end_11-30-2016.pdf
> > , the relevant audit for the roots, ending on 2016-11-30.
>
> I'm a little confused - I think Symantec are saying that the cover
> letter explains the plan to wind down the two sub-CAs, not that the
> audit does?


I believe you are correct that they are claiming the letter sent addresses
that.

I am highlighting, however, that such a statement was not recorded in their
audit, despite it being a violation of the Baseline Requirements during
the period covered by the audit.

That is, if you accept that such a letter is relevant to the discussion
(and I believe it's fair to consider it as such), then you should also
consider as relevant the failure either to disclose this to the auditors,
or of the auditors to note it (whichever it may be).


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 3, 2017 at 12:58 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> taking a holiday and not being able to process a disclosure of a new
> SubCA.
>

Considering that the CCADB does not require any of these parties to process
a disclosure, can you again explain why the proposed wording would not be
sufficient?

I think you may be operating on incomplete/incorrect assumptions about
disclosure, and it would be useful to understand what you believe happens,
since that appears to have factored in to your suggestion. Given that the
proposal allows the CA to fully self-report (if they have access) or to
defer until they do have access, that does seem entirely appropriate and
relevant to allow for one week.


Re: Symantec Issues List

2017-04-03 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 3, 2017 at 12:46 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> How about this simple explanation (purely a guess, not at all checked):
>

I think we should focus on objective facts and statements. While there are
a number of possible ways to interpret things, both positively and
negatively, the fact that multiple interpretations exist highlights a
problem that needs public clarification and resolution - both on behalf of
Symantec and their auditors.


Re: Symantec Issues List

2017-03-31 Thread Ryan Sleevi via dev-security-policy
On Fri, Mar 31, 2017 at 2:39 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As we continue to consider how best to react to the most recent incident
> involving Symantec, and given that there is a question of whether it is
> part of a pattern of behaviour, it seemed best to produce an issues list
> as we did with WoSign. This means Symantec has proper opportunity to
> respond to issues raised and those responses can be documented in one
> place and the clearest overall picture can be seen by the community.
>
> So I have prepared:
> https://wiki.mozilla.org/CA:Symantec_Issues
>
> I will now be dropping Symantec an email asking them to begin the
> process of providing whatever comment, factual correction or input they
> feel appropriate.
>
> If anyone in this group feels they have an issue which it is appropriate
> to add to the list, please send me email with the details.
>

(Wearing a Google hat)

Gerv,

Thanks for organizing this information, as much of it was related to and
relevant to Google's recent announcement. I want to take this opportunity
to share additional details regarding the interactions for UniCredit, which
I believe may be useful and relevant both for understanding that issue and
the GeoTrust audits.

As the Chrome team announced at
https://security.googleblog.com/2015/10/sustaining-digital-certificate-security.html
, we took steps to require that all Symantec-issued certificates be
disclosed via Certificate Transparency.

In March of last year, Symantec provided us a list of five sub-CAs which
they termed GeoRoots: Apple, Google, Unicredit, Aetna, NTT Docomo - and
requested they be excluded from this requirement. We asked Symantec to
provide current audit statements for each of these CAs.

Symantec indicated that the audit information for these sub-CAs would be
added to the CCADB. This was on 3/29.

We then followed up with Symantec again because, as of 6/28, there were
several outstanding issues with Symantec's disclosures:

- Apple IST CA 3 was not covered by the general set of Apple audits
- No audit information for Aetna was provided, and its CPS was dated in 2011
- No audit information for Unicredit was provided
- NTT Docomo (DKHS and DKHS CA2) were disclosed as being part of Symantec's
audit

Upon follow-up, Symantec provided Aetna's WebTrust for BRs audit. It
carried 15 qualifications, some of which spanned the totality of the CA's
operation. If Symantec is not willing or able to provide this audit, we
would be happy to, in the interest of public transparency.

This audit was dated May 11, 2016, but covered the period January 1, 2015
through December 31, 2015. I highlight this, because this means it was
provided 132 days after the close of the period, or 42 days after provided
for by the Baseline Requirements. This audit was performed by Symantec's
auditors, KPMG, and thus demonstrates a pattern of delayed audits on
KPMG's part, one that extends beyond just Symantec.
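The day counts above can be checked directly; note that the 90-day figure below is my reading of the BR delivery window, stated here as an assumption:

```python
# Verifying the day counts above: the audit was dated 2016-05-11 for a
# period ending 2015-12-31; 90 days is assumed as the BR delivery window.
from datetime import date

delay = (date(2016, 5, 11) - date(2015, 12, 31)).days
print(delay)        # 132 days after the close of the period
print(delay - 90)   # 42 days beyond the 90-day allowance
```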

I want to highlight some of the qualifications on this audit:
"It was noted that:
 physical access to the cage housing the CA system was not logged;
 event logs for the CA prior to July 2015 were not available; and
 logs are not reviewed periodically."
"It was noted that no training programs were in place for Trusted Role
personnel."
"It was noted that a security risk assessment of the Aetna CA operations
was not performed during the examination period."
"It was noted that penetration testing was not performed on the PKI
environment during the examination period."

Symantec's proposed remediation was to allow their contract to expire,
with revocation of this certificate proposed for October, still nearly 3
months later.

Regarding NTT Docomo, Symantec repeatedly asserted they controlled issuance
for these CAs. However, I highlight this because the GeoTrust audits note
5 sub-CAs, so the fifth sub-CA, if not NTT Docomo, remains unknown and
unidentified by Symantec during the same time as their audit. If NTT Docomo
was issued and managed by Symantec, then their auditor (KPMG) did not
examine this.

Regarding Apple's IST CA 3 (and subsequently issued by Symantec, IST CA 8 -
G1), Apple requested and Google accepted a discussion with their CA
operations team. After discussions with the team, we received suitable
assurances from Apple that these two CAs were operated in accordance with
their CP/CPS and would be part of the next audit. As has been discussed in
other threads, there is a natural period of time where the scope of the
audit does not cover any newly issued intermediates, but we ensured that
these were listed within Apple's next CP/CPS, and thus scoped for the next
audit. For this reason alone, we excluded these sub-CAs from our CT
requirement.

Regarding Unicredit, Google requested that 

Re: Grace Period for Sub-CA Disclosure

2017-03-31 Thread Ryan Sleevi via dev-security-policy
On Fri, Mar 31, 2017 at 12:24 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> As previously stated, I think this will be too short if the issuance
> happens at a time when a non-CCADB root program (or the CCADB
> infrastructure) is closed for holidays, such as the following:
>

I'm not sure I've heard of many web pages being closed for the holidays.
Are yours?

I think Rob Stradling's suggestion more than addresses this - within 1 week
of the intermediate being issued or the CA being granted access to the
CCADB, whichever is the later?

Considering CAs must have 24/7 uptime and be able to review and respond to
certificate problem reports within 24 hours, I think the suggestion of how
to define holidays is unnecessary.
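The "whichever is the later" rule is simple enough to state as code; the function name and dates below are hypothetical, used only to illustrate the proposal:

```python
# The proposed rule as code: disclosure is due one week after whichever
# comes later -- intermediate issuance or the CA gaining CCADB access.
from datetime import date, timedelta

def disclosure_deadline(issued: date, ccadb_access: date) -> date:
    return max(issued, ccadb_access) + timedelta(weeks=1)

# Issued over the holidays; access granted in January:
print(disclosure_deadline(date(2016, 12, 23), date(2017, 1, 3)))  # 2017-01-10
```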


Re: Symantec Issues List

2017-04-01 Thread Ryan Sleevi via dev-security-policy
On Sat, Apr 1, 2017 at 12:57 AM, Peter Bowen  wrote:

> (Wearing my personal hat)
>
> Ryan,
>
> I haven't reviewed the audit reports myself, but I'll assume all you
> wrote is true.  However, I think it is important to consider it in the
> appropriate context.


> The GeoRoot program was very similar to that offered by many CAs a few
> years ago.  CyberTrust (then Verizon, now DigiCert) has the OmniRoot
> program, Entrust has a root signing program[1], and GlobalSign Trusted
> Root[2] are just a few examples.
>
> In almost every case the transition to requiring complete unqualified
> audits of the subordinates by a licensed practitioner was a rocky one.
> See DigiCert's thread
> (https://groups.google.com/d/msg/mozilla.dev.security.
> policy/tHUcqnWPt3o/U2U__7-UBQAJ)
> about the OmniRoot program or look at the audits available for some of
> the Entrust subordinates.
>
> I'm not suggesting that the GeoRoot subordinate issues should not be
> considered, but it seems the GeoRoot program was not notably
> exceptional a few years ago.
>

(Wearing a personal hat)

Peter,

There are a few issues to unpack from your reply. I think we're in
agreement that GeoRoot was by no means unique as an offering. I think, when
considering severity, it's important to instead focus on what the CAs
obligations were, what they were aware of, and what they did in response.
Further, in considering the broader scope of attempted remediation, it's
important to consider what risks were or are present as a result of this,
because it significantly impacts the ability to trust the existing set of
issued certificates.

On 2014-05-13, Mozilla requested all participating CAs disclose their
externally operated subordinate CAs. [1]
On 2014-06-03, Symantec reported it disclosed its sub-CAs in [2]
On 2015-04-06, Kathleen pointed out Symantec's disclosure was incomplete,
in [3] and [4]
On 2016-03-29, Symantec informed Google that there were 5 participants in
their GeoRoot program - Aetna, Google, Unicredit, Apple, NTT Docomo (DKHS).
On 2016-05-11 (or later), Symantec received Aetna's audit.
On 2016-05-13, Symantec's most recent audit for the GeoTrust roots was made
available [5], which states there were 5 external partner subordinate CAs.
The timing of Aetna's letter suggests that this may be the audit that
"Symantec subsequently received an audit report for the other" - but that
cannot be confirmed without further detail from KPMG and Symantec.
On 2016-06-28, Symantec informs Google that NTT Docomo is part of
Symantec's audit, not separately audited.

This timeline hopefully highlights a particularly serious issue: If NTT
Docomo is operated as part of Symantec's operations, then there are several
ways to interpret Symantec's audit statements:
1) KPMG failed to include NTT Docomo as part of the 5 externally operated
sub-CAs noted, and instead treated it as part of Symantec's audit. If this
is true, then there is an as-yet-unidentified intermediate certificate
issued as part of the GeoRoot program
2) KPMG was treating NTT Docomo as part of the 5 externally operated
sub-CAs noted. If this is correct, then it is in one of three sets
  a) The 3/5 sub-CAs for which KPMG identified as having audit reports
  b) The 1/5 sub-CAs for which KPMG identified as having a deficient audit
report (not appropriate to the scheme)
  c) The 1/5 sub-CAs for which KPMG identified Symantec as having later
received an audit report for.

If 2 is correct, then it's unclear of which set Aetna belongs to - that is,
if NTT Docomo is 2a, then Aetna is either 2b/2c, and it suggests that KPMG
may have been incomplete in its examination of the 2a set. If NTT Docomo is
2b, then Aetna is either 2a/2c, but calls into question Symantec's
operations if they were themselves operating this root, as it was not part
of the scope of the audit. If NTT Docomo is 2c, then Aetna is either 2a/2b,
both of which would call into question KPMG. Any of these possibilities is
quite troubling, but nowhere near as troubling as the possibility of 1,
which would imply an undisclosed sub-CA.

Based on the information provided by Unicredit, Unicredit would appear to
be 2b, because its audit was not performed by a licensed WebTrust
practitioner to the appropriate standards. Based on the information provided, Aetna would
seem to be 2c, but that would require confirmation from Symantec or KPMG.
This means that NTT Docomo is either 2a or 2c - either of which should be
concerning.

Independent of any questions regarding how other CAs (such as the
critically mismanaged Omniroot program) responded to disclosure, the
questions about the scope of "which sub-CAs were examined by KPMG" is very
much relevant to the discussion at hand, and gets to the heart of whether
or not there can be sufficient confidence to trust the existing set of
certificates. This also sets aside the question about whether or not
Symantec can/should be trusted going forward. It also highlights the limits
of relying on a report such 

Re: Policy 2.5 Proposal: Expand requirements about what an Audit Statement must include

2017-04-12 Thread Ryan Sleevi via dev-security-policy
On Wed, Apr 12, 2017 at 10:15 AM, Peter Bowen <pzbo...@gmail.com> wrote:

> On Wed, Apr 12, 2017 at 5:57 AM, Ryan Sleevi via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
> >
> > A certificate hash does provide distinct value.
> >
> > The certificate hash is what is desired. Yes, there could be multiple
> > certificates. But within the context of the scope of an audit and a
> > 'logical' CA, the auditor can and should be clear about what physical
> > certificates corresponded to the logical operations of that CA.
>
> What portions of the certificate(s) naming that CA as the subject will
> impact the audit?
>
> As I see it, the only certificates that are relevant to the audit are
> those that have the CA as the issuer.  It really doesn't matter who
> cross-signs the CA.
>

So we talked about this (briefly) during the CA/Browser Forum F2F 40 in
Raleigh, but:

As you know, RFC 5280 defines a trust anchor as a DN/Key tuple that serves
as the basis for trust. That is, if a thing signed by a CA bears a
particular DN in the Issuer field, we say that it was 'issued' by that CA
  - CAs can issue different things using a single key, governed by the
relevant specification
  - For example, if a TBSCertificate (
https://tools.ietf.org/html/rfc5280#section-4.1 ) contains the given DN in
the Issuer field, and is signed by the associated key (creating a
Certificate), then we say the CA issued the Certificate
  - For example, if a TBSCertList (
https://tools.ietf.org/html/rfc5280#section-5.1 ) contains the given DN in
the Issuer field, and is signed by the associated key (creating a
CertList), then we say the CA issued the CRL
  - For example, if a CA's key is used to sign a ResponseData (
https://tools.ietf.org/html/rfc6960#section-4.2.1 ) in the production of a
BasicOCSPResponse, then we say the CA issued the OCSP response (notably,
there's no encoding of the Issuer DN within the ResponseData beyond that of
the CertID, which comes from the request and contains the hash of the
Issuer DN and Issuer Key, but not their actual values; the binding to the
CA comes from the unsigned portion of the BasicOCSPResponse which
establishes the certificate chain of the issuer, or is implied to be the
issuer of the current CertID if absent)
  - For example, if a CA's key is used to sign a TBSCertificate (
https://tools.ietf.org/html/rfc5280#section-4.1 ) containing the given DN
in the Issuer field, a critical poison extension (
https://tools.ietf.org/html/rfc6962#section-3.1 ) and signed by the
associated key (creating a Certificate), then we say the CA issued the
Precertificate (the confusion and complexity here about whether a
Precertificate-is-a-Cert is well known)

I mention all of these examples to illustrate that the act is with the key,
and whether or not something was 'issued' depends on where, how, and whether
the given ASN.1 structure encodes the DN. There's a whole host of complexity
there - for example, if I create a Sleevi-ID and submit to the IETF that
uses the same ASN.1 structure of a Certificate/TBSCertificate, but name it
differently (and perhaps use slightly different encoding, such as omitting
the DEFAULT production rule for some fields in the syntax), is that or is
that not a certificate?

Now further, imagine a given CA has multiple certificates bearing the
associated DN in the Subject, and sharing the same key. This might be the
common case of having a self-signed certificate and one which may be
cross-signed by either the same legal entity or a different legal entity.

One of these certificates contains no nameConstraints extension (and the
subject and issuer match)
Another of these certificates contains a nameConstraints extension
restricting its issuance practices to test.example (and a different issuer)

I take that private key and copy it between two distinct infrastructures.
The first infrastructure is my publicly trusted infrastructure. The second
is what I call my 'test' instance. Both are independently maintained and
operated, and responsible for their own serial number production (e.g. they
may collide)

I issue all sorts of 'evil' certs from the latter infrastructure (e.g. I
don't perform domain validation). All of these I claim are benign, because
nameConstraints means they are not processed as valid. Except for the fact
that all of these 'evil' certs could be interpreted as chaining to the first
CA (and thus be actively used for nefarious purposes).
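To make the chaining ambiguity concrete, here is a minimal model of the scenario above. This is not real X.509 processing; the DNs and keys are string stand-ins, a sketch of why path building hinges on the (DN, key) tuple rather than any one certificate:

```python
from dataclasses import dataclass
from typing import List, Optional

# Minimal model of the scenario above: a leaf naming issuer
# "CN=Example CA" and signed by key K1 chains to EVERY certificate
# binding that DN to K1 -- constrained or not.
@dataclass(frozen=True)
class Cert:
    subject: str
    subject_key: str
    issuer: str
    issuer_key: str
    name_constraints: Optional[str] = None

def chains_to(leaf: Cert, candidates: List[Cert]) -> List[Cert]:
    """Return every candidate CA cert the leaf verifies under."""
    return [ca for ca in candidates
            if ca.subject == leaf.issuer and ca.subject_key == leaf.issuer_key]

self_signed  = Cert("CN=Example CA", "K1", "CN=Example CA", "K1")
cross_signed = Cert("CN=Example CA", "K1", "CN=Other Root", "K2",
                    name_constraints="test.example")

# 'Evil' cert from the hypothetical test infrastructure:
evil = Cert("CN=victim.example", "K-leaf", "CN=Example CA", "K1")

print(len(chains_to(evil, [self_signed, cross_signed])))  # 2 -- both paths
```

The point of the sketch: the name-constrained cross-sign offers no protection, because nothing forces a verifier to build the path through it rather than through the unconstrained self-signed certificate.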

Now, if the auditor only comes in and examines the first infrastructure -
the one that is acting properly - and issues an audit report, then they
will have only examined one part of the issuance infrastructure, and only
in the 'context' of the self-signed, well-behaving certificate. Without
binding that audit to the certificate, my evil self can take that audit
report and present it as being binding to my 'evil' infrastructure as proof
that I have acted good and well, despit

Re: Symantec Response B

2017-04-12 Thread Ryan Sleevi via dev-security-policy
On Wed, Apr 12, 2017 at 4:24 AM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I don't think 2) applies. It's only their software, that obviously can't
> be updated yet, and so won't enforce such limit. That doesn't prevent the
> rest of us to set such limit.
>

Hi Kurt,

I appreciate that you're engaged and offering your thoughts. I would
appreciate, however, if you allowed Steve to respond on behalf of Symantec.
I do not agree with your conclusions or interpretation of matters, but more
importantly, the questions are for Symantec. #2 absolutely applies as a
principle.


Re: Criticism of Google Re: Google Trust Services roots

2017-04-06 Thread Ryan Sleevi via dev-security-policy
On Thu, Apr 6, 2017 at 1:42 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Here are some ideas for reasonable new/enhanced policies (rough
> sketches to be discussed and honed before insertion into a future
> Mozilla policy version):
>

Are you suggesting that the current policies that have been pointed out are
insufficient to address these cases?


Re: Policy 2.5 Proposal: Remove BR duplication: reasons for revocation

2017-04-20 Thread Ryan Sleevi via dev-security-policy
Gerv,

I must admit, I'm not sure I understand what you consider irrelevant
reasons for 4.9.1 in the context of e-mail addresses.

The only one I can think of is
"7. The CA is made aware that a Wildcard Certificate has been used to
authenticate a fraudulently misleading
subordinate Fully-Qualified Domain Name;"

But that's because such e-mail CAs are effectively wildcards (e.g. they can
issue for any host within a domain when the nameConstraint carries a
leading '.', rather than being limited to a single host)

But given that e-mail addresses include Domain portions (after all, that is
the definition, localpart@domain), and Fully-Qualified Domain Name doesn't
imply a sAN of type dNSName, this all seems... ok as is?
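For reference, here is how rfc822Name constraint matching behaves under my reading of RFC 5280 section 4.2.1.10 (a bare host name matches only that exact host; a leading '.' matches any host within the domain). The helper is hypothetical, and constraints that specify a full mailbox are omitted:

```python
# Sketch of RFC 5280 s4.2.1.10 rfc822Name constraint matching (my
# reading): bare host name -> exact host only; leading '.' -> any host
# within the domain. Full-mailbox constraints are omitted here.
def rfc822_matches(mailbox: str, constraint: str) -> bool:
    host = mailbox.rsplit("@", 1)[1].lower()
    constraint = constraint.lower()
    if constraint.startswith("."):
        return host.endswith(constraint)
    return host == constraint

print(rfc822_matches("a@example.com", "example.com"))        # True
print(rfc822_matches("a@mail.example.com", "example.com"))   # False
print(rfc822_matches("a@mail.example.com", ".example.com"))  # True
```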


Re: Policy 2.5 Proposal: Remove BR duplication: reasons for revocation

2017-04-20 Thread Ryan Sleevi via dev-security-policy
On Thu, Apr 20, 2017 at 6:15 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Technically, the part after the @ could also be a bang!path, though
> this is rare these days.
>

No, technically, it could not.

RFC 5280, Section 4.2.1.6.  Subject Alternative Name
   When the subjectAltName extension contains an Internet mail address,
   the address MUST be stored in the rfc822Name.  The format of an
   rfc822Name is a "Mailbox" as defined in Section 4.1.2 of [RFC2821].
   A Mailbox has the form "Local-part@Domain".  Note that a Mailbox has
   no phrase (such as a common name) before it, has no comment (text
   surrounded in parentheses) after it, and is not surrounded by "<" and
   ">".  Rules for encoding Internet mail addresses that include
   internationalized domain names are specified in Section 7.5.

Note that RFC 2821 was OBSOLETEd by RFC 5321. RFC 5321 Section 4.1.2 states

   Mailbox          = Local-part "@" ( Domain / address-literal )
   address-literal  = "[" ( IPv4-address-literal /
                            IPv6-address-literal /
                            General-address-literal ) "]"
                      ; See Section 4.1.3
   Domain           = sub-domain *("." sub-domain)
   sub-domain       = Let-dig [Ldh-str]
   Let-dig          = ALPHA / DIGIT
   Ldh-str          = *( ALPHA / DIGIT / "-" ) Let-dig

Section 4.1.3 states
   IPv4-address-literal     = Snum 3("." Snum)
   IPv6-address-literal     = "IPv6:" IPv6-addr
   General-address-literal  = Standardized-tag ":" 1*dcontent
   Standardized-tag         = Ldh-str
                              ; Standardized-tag MUST be specified in a
                              ; Standards-Track RFC and registered with IANA

To confirm, I also checked the IANA registry established, which is
https://www.iana.org/assignments/address-literal-tags/address-literal-tags.xhtml

The only address literal defined is IPv6.

Could you indicate where you believe RFC 5280 supports the conclusion that
a "bang!path" is permitted and relevant to Mozilla products?
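A sketch of the quoted Mailbox grammar makes the point mechanically. This is a simplification, an assumption of the sketch: quoted-string local parts and full IPv6 literal syntax are omitted:

```python
import re

# Sketch of the RFC 5321 "Mailbox" grammar quoted above, showing why a
# bang!path cannot appear after the "@". Quoted-string local parts and
# full IPv6 syntax are simplified -- assumptions of this sketch.
LET_DIG = r"[A-Za-z0-9]"
LDH_STR = rf"(?:[A-Za-z0-9-]*{LET_DIG})"
SUB_DOMAIN = rf"{LET_DIG}{LDH_STR}?"
DOMAIN = rf"{SUB_DOMAIN}(?:\.{SUB_DOMAIN})*"
SNUM = r"(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])"
IPV4 = rf"{SNUM}(?:\.{SNUM}){{3}}"
ADDR_LITERAL = rf"\[(?:{IPV4}|IPv6:[0-9A-Fa-f:.]+)\]"
# Dot-string local part; note atext *does* allow "!", so a "!" may
# legally appear before the "@" -- just never in the Domain after it.
LOCAL = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~.-]+"
MAILBOX = re.compile(rf"{LOCAL}@(?:{DOMAIN}|{ADDR_LITERAL})")

def is_mailbox(s: str) -> bool:
    return MAILBOX.fullmatch(s) is not None

print(is_mailbox("user@example.com"))   # True
print(is_mailbox("user@[192.0.2.1]"))   # True
print(is_mailbox("user@dest!path"))     # False: no bang after the @
print(is_mailbox("gateway!host!user"))  # False: no @ at all
```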


Re: Symantec Conclusions and Next Steps

2017-04-21 Thread Ryan Sleevi via dev-security-policy
On Fri, Apr 21, 2017 at 6:16 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I've updated the Issues list:
> https://wiki.mozilla.org/CA:Symantec_Issues
> with the latest information. 3 issues have been marked as STRUCK due to
> lack of evidence of anything actually being wrong - including,
> importantly, the suggestion that they have unaudited unconstrained
> intermediates (further audits have been published).
>

Gerv,

I would encourage you to talk to Kathleen before considering that matter
resolved, because it is different than the advice and requirements that
have been given to other CAs, and to the work required of them.

For example, as you know, Mozilla required that the Belgian subordinates
previously under the Verizon brands, now under DigiCert, undergo a BR
audit to attest that no SSL certificates have been issued. This is not the
only CA, but it was merely the most recent for which such a requirement was
made - of both the sub-CA and the parent CA. The conclusion to strike this
would thus be an inconsistent application of Mozilla policy. I believe
you're on some of those threads.

The audits provided are also not consistent with the Mozilla Root Program
requirements, which define technical capability of issuance and the
appropriate audit standards. Specifically, section 5.3 of the policy
appears to provide unambiguous clarification that the audit scheme used for
these sub-CAs, and their sub-CAs, is not consistent with Mozilla policy,
and this non-consistency has been made clear to other CAs with a
requirement for remediation or revocation.


Re: CA Validation quality is failing

2017-04-19 Thread Ryan Sleevi via dev-security-policy
On Wed, Apr 19, 2017 at 3:47 PM, Mike vd Ent via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ryan,
>
> My answers on the particular issues are stated inline.
> But the thing I want to address is how could (in this case DigiCert)
> validate such data and issue certificates? I am investigating more of them
> and am afraid even linked company names or registration numbers could be
> false. Shouldn't those certificates be revoked?
>

You are correct that it appears these certificates should not have issued.
Hopefully Jeremy and Ben from DigiCert can comment on this thread (
https://groups.google.com/d/msg/mozilla.dev.security.policy/DgeLqKMzIds/ig8UmHT2DwAJ
for the archive) with details about the issues and the steps taken.


Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

2017-04-20 Thread Ryan Sleevi via dev-security-policy
+1 to what sounds like a perfectly reasonable position


Re: CA Validation quality is failing

2017-04-20 Thread Ryan Sleevi via dev-security-policy
On Thu, Apr 20, 2017 at 6:42 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> One thing:
>
> Could this be a result of the common (among CAs) bug of requiring entry
> of a US/Canada State/Province regardless of country, forcing applicants
> to fill in random data in that field?


That is not common among CAs, because it's not how certificate information
is validated. Perhaps it would be best if you just waited for Jeremy to
respond, rather than attempting to speculate about the system. I appreciate
the eagerness to find answers, but those sorts of speculation don't really
help much.


Re: Email sub-CAs

2017-04-13 Thread Ryan Sleevi via dev-security-policy
On Thu, Apr 13, 2017 at 10:48 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> > Section 3.1.2.1 specifies that any CA capable of issuing secure email
> > certificates must have a "WebTrust for CAs" audit (or corresponding
> > ETSI audit).  This is a huge change from 3.2 and I wonder if all CAs
> > understand this.  Even the Blog about this version does not highlight
> > this substantial change:
> > https://blog.mozilla.org/security/2017/04/04/mozilla-
> releases-version-2-4-ca-certificate-policy/
>
> I didn't realise it _was_ a substantial change. Are you saying that you
> used to think it was fine for email-only sub-CAs to have no audits at
> all? Is this because you considered all such CAs to be TCSCs (by the
> Mozilla definition)?
>
> Even if we didn't require it in our policy, I'm very surprised that
> no-one else does. Which other root store policies have requirements on
> email-only sub-CAs?
>

https://social.technet.microsoft.com/wiki/contents/articles/31635.microsoft-trusted-root-certificate-program-audit-requirements.aspx
(aka http://aka.ms/auditreqs)

The S/MIME trust bit requires either "WebTrust Principles and Criteria for
Certification Authorities - WebTrust for CAs 2.0" or the combination of the
following: "ETSI TS 102 042 V2.4.1 or later (LCP, NCP, NCP+ policies) -
Electronic Signatures and Infrastructures (ESI); Policy requirements for
certification authorities issuing public key certificates" and "ETSI TS 101
456 V1.4.3 or later - Electronic Signatures and Infrastructure (ESI);
Policy requirements for certification authorities issuing qualified
certificates"




>
> > Obviously there are a lot of technically constrained CAs issued to
> > organizations to run their own CAs for issuing secure email and
> > client auth certificates.  In order for them to continue operations
> > they now every organization needs to be publicly reported and audited
> > (a new requirement for 2.4.1 as far as I can tell), is that right?
>
> This is issue #36 :-)
> https://github.com/mozilla/pkipolicy/issues/36
>
> Do the CAs you are thinking of in this category have name constraints,
> or not (either actually in the cert, or via business controls)?
>
> > When did (does) this take effect?   Is this for new CAs, existing or
> > both?   When would the Audit Period for these CAs need to begin?
> >
> > This is a side question, but does the Mozilla policy require that
> > these CAs meet the Network Security Requirements?
>
> https://github.com/mozilla/pkipolicy/issues/70 :-) Not at the moment.
>
> > Section 5.3.2 says that all CAs of the type I'm discussing must be in
> > the CCADB.  What's the timeline for CAs to upload them?
>
> Well, let's figure out what the right thing to do is first. If it turns
> out we've created new normative requirements accidentally, the first
> thing to do is to decide whether that's what we meant. Only then will we
> set some sort of sane implementation timeline.
>
> Gerv


Re: Certificate issues

2017-04-18 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 18, 2017 at 12:09 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi everyone,
>
>
>
> On Friday at 1:00 pm, we accidentally introduced a bug into our issuance
> system that resulted in five serverAuth code-signing certificates that did
> not comply with the Baseline Requirements.  The change modified a handful
> of
> code signing certificates into a pseudo- SSL profile. Because they were
> intended to be code signing certificates, the certificates issued off a
> code-signing intermediate (with code-signing as the sole EKU). The
> certificates contain a servauth EKU despite the intermediate's EKU
> restriction. The certificates also lack a domain name. Instead, the CN and
> dNSName include the code signing applicant's name.  Because the certs lack
> a
> domain name and there is an EKU mismatch between the issuer and end entity
> certs, the certs can't be misused.
>
>
>
> Our systems detected the issue shortly after the change. We corrected the
> code, and revoked the certificates. We already scanned our entire
> certificate database to ensure these are only the certificates affected by
> the bug.
>
>
>
> The certificates in question are:
>
> * 02CD2F16F3CA4FCC7378C917FFD5F6A0
>
> * 09A88902AF0698841167E814DB8B3FB8
>
> * 0D7C350D52821BFD2326270B9215DCE5
>
> * 0356D3A74CFA29BB5E65569E0532F134
>
> * 089FBE93D335ADB8BDFCDCF492083B68
>
>
>
> The bug was introduced, ironically, in code we deployed to detect potential
> errors in cert profiles. This error caused the specified code signing
> certificates to think they needed dNSNames and serverAuth. Let me know if
> you have questions.
>
>
>
> Thanks,
>
> Jeremy
>

Thanks for posting this, Jeremy.

Are these certificates logged to Certificate Transparency? While not
wanting to suggest I'm doubting you, being able to demonstrate that all
intermediates they chain to are restricted from the serverAuth EKU is
useful.
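The "restricted from the serverAuth EKU" check can be sketched mechanically. This is a minimal sketch assuming the intermediate's EKU OIDs have already been extracted (e.g. from a CT log entry), and it relies on the de-facto client behavior — not an RFC 5280 requirement — that an intermediate's EKU extension constrains the EKUs of the certificates it issues:

```python
# id-kp OIDs from RFC 5280 section 4.2.1.12
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"   # id-kp-serverAuth
ANY_EKU = "2.5.29.37.0"             # anyExtendedKeyUsage

def intermediate_excludes_server_auth(eku_oids):
    """Return True if an intermediate's EKU extension technically
    constrains its subordinates away from serverAuth, under the common
    client interpretation that an intermediate's EKU limits the EKUs
    of the certificates it issues.

    eku_oids: list of dotted OID strings from the EKU extension,
              or None if the extension is absent.
    """
    if eku_oids is None:
        # No EKU extension on the intermediate: no technical constraint
        return False
    return SERVER_AUTH not in eku_oids and ANY_EKU not in eku_oids

# A code-signing-only intermediate (id-kp-codeSigning) is constrained:
print(intermediate_excludes_server_auth(["1.3.6.1.5.5.7.3.3"]))  # True
```

The function names and list-of-strings representation are illustrative; in practice the OIDs would come from a parsed certificate.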

I realize that's asking you to go above and beyond what you've disclosed so
far. I think if/once we can add clarity to the Baseline Requirements
regarding the scope, it would likely be clearer that these would be out of
scope of the Baseline Requirements, and thus any such disclosure only be
relative to root programs that recognize those paths as code-signing
capable.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certificate issues

2017-04-18 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 18, 2017 at 1:32 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I believe the point was to check the prospective contents of the
> TBSCertificate *before* CT logging (noting that Ryan Sleevi has been
> violently insisting that failing to do that shall be punished as
> harshly as actual misissuance) and *before* certificate signing.
>

While I appreciate the explicit callout as much as anyone, I think it's a
mischaracterization to state "violently". Have I suggested actual violence?

Whether you personally agree with it or not, I should note
https://wiki.mozilla.org/CA:Symantec_Issues#Issue_J:_SHA-1_Issuance_After_Deadline.2C_Again_.28February_2016.29

"(The CT RFC states that issuance of a pre-certificate is considered
equivalent to issuance of the certificate, and so Mozilla considers that
pre-certificate misissuance is misissuance.)"


> Thus the checks would have to occur before signing, but it would still
> be useful (architecturally) to run the checks without the ability to
> change the request (other than to reject it with an error message).
> Such separation will however have non-zero cost as the prospective
> TBSCertificate or its description needs to be passed between additional
> processes.
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA Validation quality is failing

2017-04-19 Thread Ryan Sleevi via dev-security-policy
On Wed, Apr 19, 2017 at 6:41 PM, Peter Gutmann via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Kurt Roeckx via dev-security-policy 
> writes:
>
> >Both the localityName and stateOrProvinceName are Almere, while the
> province
> >is Flevoland.
>
> How much checking is a CA expected to do here?  I know that OV and DV certs
> are just "someone at this site responded to email" or whatever,


This is not correct. This can be easily answered by
https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.2.pdf

Section 3 governs validation; Section 7 governs the profile of how to use
that validated information.


> but for an
> EV cert how much further does the CA actually have to go?  When e-Szignó
> Hitelesítés-Szolgáltató in Hungary certifies Autolac Car Services, Av Los
> Frutales 487 urb., Lima, Peru, are they expected to verify that it's really
> in Av Los Frutales and not Los Tolladores, or do they just go ahead and
> issue the cert?  Can someone point to the bit of the BR that says that this
> is obviously right or wrong?
>

For an EV cert, you look in
https://cabforum.org/wp-content/uploads/EV-V1_6_1.pdf
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA Validation quality is failing

2017-04-19 Thread Ryan Sleevi via dev-security-policy
On Wed, Apr 19, 2017 at 7:53 PM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> (It was a code sign certificate, but I expect if it's labeled EV
> that the same things apply.)
>

Not necessarily. A separate set of guidelines cover those -
https://cabforum.org/ev-code-signing-certificate-guidelines/

Neither Mozilla nor Google actively participate in the maintenance of those
documents.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

2017-04-23 Thread Ryan Sleevi via dev-security-policy
On Sun, Apr 23, 2017 at 7:41 AM, Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I was thinking of things like the GoDaddy incident reported in January
> where they had mistakenly been accepting HTTP 404s to validate a domain or
> the 2016 Comodo "re-dressing" attack where a bad guy could arrange for your
> contact to get emails from Comodo saying they need to click a button to
> prevent an SSL certificate being issued, but actually clicking will cause
> it to be issued to the attacker...
>
> In such cases bad guys can get a wildcard rather than validation just for
> one affected name, and that makes their life much easier.
>

Are you talking per-certificate? Because the validation method used for
the domain namespace can be applied to the subdomains.


> Going further back DigiNotar was made worse by the certificate being
> issued for *.google.com, not to say it wasn't bad enough to have bad guys
> essentially issuing whatever they wanted from a trusted CA.
>

Right, that's the high-order bit: Wildcards would not have changed that
situation at all. They also did *.*.com, so it's not like that's a strike
against wildcards.

We have to remember that attacks target the weakest link, and that link
isn't wildcards, under any of the present or (unfortunately) proposed
validation methods.


> Also whenever we see people blaming the issuer for phishing sites
> protected by SSL, a wildcard would of course let its subscriber create any
> number of phishing sites, without any oversight of the names used prior to
> issuance. I happen to think that's fine, but it wouldn't even be a factor
> without wildcards.


Well, again, that's misstating that there is any oversight today. There
isn't - nothing formalized or normalized, it's ad-hoc, CA defined
procedures. Considering that CAs are deciding that violating the BRs by
doing things like cross-signing unaudited sub-CAs because they determined
"there wouldn't be any risk" because of contractual prohibitions, I hope we
can see that the argument that CAs are technically capable and cognizant,
to the same level, across the industry, is uh... wishful thinking :)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Include Renewed Kamu SM root certificate

2017-03-09 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 9, 2017 at 12:26 PM, tugba onder via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Here, the part that needs to be taken care is "validate using at least one
> of the methods listed". Although we mentioned it in our previous response,
> I guess you missed it; we do not make verification just with respect to
> 3.2.2.4.6. For the further satisfaction of 3.2.2.4, we first apply
> 3.2.2.4.1, then 3.2.2.4.3 or 3.2.2.4.4 and then 3.2.2.4.5. Therefore, even
> if we do not implement 3.2.2.4.6 at all, we satisfy the condition "validate
> using at least one of the methods listed" in 3.2.2.4.
>

You're right, I did miss it / misunderstand. It's the first case I've heard
of CAs applying multiple checks in an additive fashion; I've only ever
heard of multiple layers being applied :)


> While we are implementing 3.2.2.4.6, we generate the "Required Website
> Content" concept described in ballot 169, including only the information
> that uniquely identifies the subscriber without a random value or request
> token. This practice comes from item 6 of section 3.2.2.4 of BR v1.3.7. The
> important thing that should be noted here is, the use of random value or
> request token is coming with ballot 169. The effective date for ballot 169
> was 1 March 2017, and the date on which we have received our audit report
> was December 2016, before the effective date.
>

Right, this was less a concern for misissuance, and more a concern for what
we've seen a number of CAs do - which is fail to stay up to date relative
to the changes. Your description of your validation after March 1 was
inconsistent with 3.2.2.4.6, which is why I flagged it. If you've already
validated the domain using a different form permitted, and 3.2.2.4.6 is
just a secondary layer of validation, then I agree, it's no concern.

It's only if you use the process you describe as the primary validation -
if so, it must conform to 3.2.2.4.6, or you must use some other form of
primary validation.


> If you consider any other inconsistencies, please inform us, we will
> appreciate it.


My request was one of just taking a few days / a week to re-examine what
the current BRs are, using your knowledge of your policies and practices,
and make sure that all methods are consistent. For example, the 64-bits of
entropy, the aligned-with-3.2.2.4.6 method of domain validation, etc. That
your auditor did not flag these implies that your auditor did not do that
level of analysis, but that's also not surprising given the role/function
of auditors (some auditors do this as part of their engagements, some
auditors do not, and generally both are seen as complying with the
necessary level of professional duty; just the ones that do are better
auditors, and the ones that don't may miss stuff that finds them removed as
trusted auditors in the future)
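The "64-bits of entropy" requirement mentioned above is straightforward to satisfy with a CSPRNG. A minimal sketch of generating a BR Random Value for domain validation — the 16-byte default is an arbitrary safety margin, not a figure from the BRs:

```python
import secrets

MIN_ENTROPY_BITS = 64  # BR 3.2.2.4 (post-ballot-169) Random Value minimum

def new_random_value(n_bytes: int = 16) -> str:
    """Generate a Random Value for domain validation with at least
    64 bits of entropy, drawn from a CSPRNG."""
    if n_bytes * 8 < MIN_ENTROPY_BITS:
        raise ValueError("Random Value must carry at least 64 bits of entropy")
    return secrets.token_urlsafe(n_bytes)

value = new_random_value()  # e.g. embedded in the Required Website Content
```

The function name is hypothetical; the point is that the value must come from a cryptographically secure source and carry at least 64 bits of entropy, which `secrets` guarantees and `random` does not.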

Because we've seen some CAs argue that "You didn't explicitly say we had to
follow X in the BRs", I wanted to avoid that situation, by just making sure
Kamu SM warrants that "We've read the BRs 1.4.2, we've examined our
policies and practices, we believe they're consistent and apply" (or "We
identified items X, Y, Z that we are fixing by doing A, B, C")
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Next Steps

2017-03-09 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 9, 2017 at 1:34 PM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In the case of CrossCert, where we have evidence of failure to properly
> document their work, we are NOT relying on their previous work and have
> begun fully revalidating all active certificates. In the cases of the other
> 3 RAs, our focus is reviewing all of the work previously done to verify
> that it can, in fact, be relied upon and/or determine where full
> revalidation, without relying on the prior work of the RA, is warranted, if
> at all.
>

Steve,

While I appreciate your reply, I think it highlights precisely the concern
about whether or not Symantec is qualified and/or should be trusted to make
this determination, given that Symantec is in possession of documented
evidence from one of their other RA partners about a failure to properly
document their work and to ensure the authenticity of what was documented.

Given your reply above, I think it's reasonable for readers to conclude
that Symantec's Compliance Team, despite having been alerted to these
issues on February 8, and having been aware of them for far longer, has
decided that they are not significant. I'm not sure how such a conclusion
is consistent with the information provided, and eagerly await any
explanation Symantec may offer.

Further, you have acknowledged that at least one auditor lacked sufficient
skill and licensing to perform the audit. It is also clear that one or more
of these RA partners was not audited with respect to "WebTrust Principles
and Criteria for Certification Authorities - SSL Baseline with Network
Security", and as such, lacks effective demonstration of adherence to the
security-relevant Principles and Criteria contained therein, only having
produced audits to the effect of "WebTrust Principles and Criteria for
Certification Authorities".

As demonstrated by the historical audits, the issues presented span
multiple years, so even where remediation plans may have been effected for
one or more of these delegated third parties, such plans do not
retroactively 'correct' any misissuance or bad data logged in such systems.

Finally, I am uncertain how any of Symantec's proposal is consistent with
its CP/CPS, which incorporates the Baseline Requirements. In particular,
Symantec has now had six weeks, and still has failed to abide by the terms
of Section 4.9.1.1 regarding these 30,000 certificates.

Regardless of the next steps Symantec may take, I think it's reasonable to
suggest that these are all extremely important for members of the community
to carefully contemplate, and all of them rest specifically with actions
and statements made by Symantec since this investigation began, rather than
the RA partners.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert BR violation

2017-03-09 Thread Ryan Sleevi via dev-security-policy
(Wearing an individual hat)

On Thu, Mar 9, 2017 at 4:18 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Although we have a policy
> against using live certificates for testing, the policy was not followed in
> this case.


Can you share why? Can you share what steps you'll be taking to make sure
policies are followed in the future? I think we've seen some pretty stark
examples about what can happen when a CA doesn't follow its policies for
test certificates - from CNNIC to Symantec.


> However, I think this discussion raises some very interesting points about
> real world scenarios and RFC 5280 that should be addressed.  DigiCert
> actually has three items that routinely show up on CABLint:
> 1)  Use of TeletexString in strings (although this only occurs in
> re-issues/duplicates of previously issued certificates)
>

Is this in the issuer field? Or in the subject information? I can
understand if your issuer cert has this issue, but I don't think there's
any good reason for this for the subject information.


> 2)  Too long of fields, primarily the O and OU fields
> 3)  Use of underscore characters in certs


>  We've had an open item to fix these issues for a while, but haven't
> prioritized them because:
> a)  From a technical standpoint, the WebPKI supports them,
> b)  The inclusion of longer names reflects the real world where company
> names are often longer than 64 char, especially in Europe and Asia
> (translating international names to puny-code rarely results in a nice
> short
> name),
> c)  We haven't felt that there are sufficiently significant risks
> associated with the issues to spend resources addressing them considering a
> and b compared to higher priorities (like our current project of requiring
> only ballot 169 validation methods), and
> d)  Lots of CAs have the same or similar issues under RFC 5280
> according
> to CABLint, and those issues don't seem to be garnering a lot of attention
> (perhaps because of higher risk issues taking priority).
>

I gotta admit, this sounds pretty disheartening.

"We know we have issues, we've known about them for a while, but we've kept
doing it, because we don't think it's a big deal, and everyone else is
doing it".

The BRs, in part, exist to avoid that judgement call, because we see time
and time again where CAs are making that judgement call and it's not ending
well. If you don't think it belongs, then why not propose BR changes? If
you don't think it's important, then why not propose root policy changes?

I appreciate the effort towards only ballot 169 validation methods, but how
we, the relying parties, supposed to know that DigiCert won't, say,
deprioritize following the BRs on that because y'all decide it's not a big
security issue, and instead want to focus on a new product offering, since
(besides whatever revenue benefit it might have), it gets more sites on
HTTPS?

As for other CAs, shouldn't you be making sure your house is in order
first? :) But also, if there are other issues, shouldn't you be pushing for
greater disclosure and transparency? We constantly see this correlation
between smoke and fire - and if you're seeing smoke, don't you think it
should be raised?

I appreciate the principled stance you mention, but I'm sure you can
realize the systemic and endemic harm that comes from "Trust us to evaluate
whether compliance is right or not" - we know that's absolutely a failed
mindset from the past decade of failures. Why isn't the principle "Be above
reproach" - which includes improving security as a natural consequence?


> In fact, I think security is improved by providing these
> certificates because these customers/domains would remain unsecured without
> certificates or be forced to truncate/omit important information. I believe
> most CAs have reached the same conclusion after considering the large
> number
> of issues reported through CABLint.
>

"We should be able to misissue certs, because at least people are on HTTPS"
- this is a terrible argument, and I have a tremendous amount of respect
for you, but I'm shocked to hear you make it, given your knowledge of the
industry. Misissuance of any form is a terrible practice, regardless of the
reasons, precisely because it starts us into the subjective realm.


> The discussion also raises an interesting question of when issues become
> significant enough they need to be addressed on the Mozilla list or require
> revocation. For example, one of our cross-signed partners issued a large
> number of certificates that lacked the proper OID. Should each of these
> certs warrant a discussion and separate revocation decision?


Discuss the problem - not the certificates.

Discuss what you're doing to address the problem. What caused the issue.
How many certificates did it affect? What steps are you taking? When will
those be complete?


> The browsers
> don't do anything with this information so I'm unsure whether them 

Re: Symantec: Next Steps

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 10:30 AM, Gervase Markham  wrote:

> On 07/03/17 20:37, Ryan Sleevi wrote:
> > To make it simpler, wouldn't be a Policy Proposal be to prohibit
> Delegated
> > Third Parties from participating in the issuance process? This would be
> > effectively the same, in as much as the only capability to allow a
> > third-party to participate in issuance is to operate a subordinate CA.
>
> Is this the same as banning the concept of DTPs?
>
> I note, reading the BRs, that there's no process for root programs to
> get any access to, or validate, the audit documentation for DTPs. That
> doesn't sound great. Making them sub-CAs would solve that?
>

That is precisely the goal. We could define a set of process and procedures
specific to DTPs, which is effectively duplicative with the handling of
subordinate CAs, or we could strive to align the two both conceptually and
materially, since, as you note below, there's a number of similarities in
the risk profile.

The concern with the approach of both DTPs and subCAs is that it's very
easy for nuanced and subtle distinctions to be introduced, and as such, it
seems better to avoid that when possible by aligning on the majority-common
portion.



> > Similarly, do you believe Symantec had an obligation to ensure the proper
> > licensing status of auditors, prior to accepting such audits?
>
> No. This may surprise you but, for better or worse, the Mozilla
> requirements override those of the BRs (see the Audit section of policy
> 2.4)


Note: It does not appear you've updated
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
- do you plan to?

https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/
still links to it, as does https://wiki.mozilla.org/CA:Overview - so I
suspect there's still a substantial bit of cleanup work to do here ;)

(Plus the bugs introduced in 2.4 that were missed until the 2.4.1
discussion, such as the scope, which isn't present in 2.3)



> and those do not require official licensing of auditors.
> Historically, this was because we wanted to leave room for CACert. What
> they actually say is that they give definitions of a "competent party"
> and "independent party", and then say:
>

I'm surprised by that reading, because Item 3 of that section states

By "competent party" we mean a person or other entity who is authorized to
perform audits according to the stated criteria (e.g., by the organization
responsible for the criteria or by a relevant government agency) or for
whom there is sufficient public information available to determine that the
party is competent to judge the CA’s conformance to the stated criteria.

In the absence of a proper license, such parties are not "authorized to
perform audits according to the stated criteria", so the only question is
whether "there is sufficient public information available to determine that
the party is competent to judge the CA's conformance to the stated
criteria".

I recognize that Item 2 "replaces" the criteria for Section 8.2, but such a
replacement is not reflected within the audit report produced (when
complying with the BRs) with respect to the issuing CA's oversight of the
DTP - that is, you might reasonably expect a qualification, but for Mozilla
to ignore said qualification, consistent with Item 2 of "Audit
Requirements".

"The burden is on the CA to prove that it has met the above
> requirements. However the CA may request a preliminary determination
> from us regarding the acceptability of the criteria and/or the competent
> independent party or parties by which it proposes to meet the
> requirements of this policy."
>
> I think a reasonable person might interpret this to mean that they
> needed to pick auditors who fulfilled the requirements in our policy,
> but don't need to _prove_ it unless asked. And they are not obliged to
> seek our determination. And I think that if we did ask Symantec to prove
> that the various bits of E met the criteria in the policy at the time,
> I think they could probably do that.
>

I find this an interesting and surprising interpretation, because I long
believed the intent and letter of Mozilla policy is that Mozilla required
such determination in the cases of accepting subordinate material, and the
burden of proof rests with them when presenting such material to Mozilla
during inclusion.


> Yes, I would expect externally-operated sub CAs to have the correct
> audits from a Mozilla-qualified auditor.
>

Just to be clear: Given the definitions above, you believe it's acceptable
for sub-CAs to be issued to parties on the basis of the CA's judgement as
to whether there is "sufficient public information available to determine
that the party is competent to judge the CA's conformance to the stated
criteria", and that so long as they do so, it does not represent any form
of violation of Mozilla Policy, even if the CA makes an error in that
judgement?

I can understand 

Re: Symantec: Next Steps

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 8:46 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Yes, I agree they should be functionally equivalent, in the sense that
> all aspects of the operation and issuance are validated, and that one
> entity is ultimately responsible for the actions of the others.
>
> The distinction I am making is that the entity named as ultimately
> responsible ("the CA") needs an audit report that covers all the
> requirements with some requirements possibly audited in the form of
> auditing the presence of valid audit report from the other entities
> involved.
>

Except that, from discussions with a number of WebTrust auditors, there is
an issue accepting such evidence. So the scenario you describe further is
not what actually happens in practice - and this is part of the motivation
for the Policy suggestion I provided, so that theory and practice can align.

For example, an auditor will not necessarily examine the audits of other
parties - this is true whether the other party is, for example, a
datacenter operator (which relates to physical security principles), a
"Cloud HSM" provider (which relates to key security principles), or what
we've identified as a Delegated Third Party.

If the function is not at all performed by the CA, then as Peter has noted,
the auditor will not report on it - and as a consequence, be unable to
produce a seal.
If the function is _partially_ performed by the CA, then the auditor will
report to the extent that function is provided by the CA.

So the disconnect here is your assertion that auditors are examining these
reports - whether they be sub-CA or DTPs. The extent of the reporting an
auditor performs during such an engagement is to report on the controls
relative to the principles - e.g. does the CA have a documented process to
review such audits, does the auditor have an opinion that such controls
provide sufficient evidence to the criteria and principles, and were they,
to the extent for the period in question, performed as such.

So we end up in a situation where such audits are not required to be
disclosed (at present), such audits may not conform to the expected
standards (and the audits Symantec has provided amply demonstrate this),
and for which the auditor of the 'issuing CA' may provide a clean opinion,
because their opinion was scoped to the specific activities of those
provided by Symantec Corporation (notably, in its omission, excluding that
of delegated third parties). This does not seem a desirable outcome,
particularly because it conflicts with Mozilla's many improvements towards
transparency.

Each of the Policy proposals Gerv has mentioned ends up in this scenario of
insufficiently controlling for and disclosing the concerns related to
issuance.

So now we circle back to the provision of delegated third party services by
reforming it such that it's treated as an externally operated sub-CA. As
Peter has noted, the extent of such audits would need to include the full
activities of that sub-CA in some form - you don't get to carve that up. In
practice, I'm suggesting that the "Issuing CA", during their annual audit
cycle, would have all the relevant controls and policies examined for that
sub-CA as part of the audit engagement and scope, and would perform some
form of 'site visit' to examine the set of controls and procedures relative
to the function they provide. This is, I assert, functionally similar to
the site audits a number of auditors already perform with respect to
third-party datacenter operations (fairly common) or more complex cases
such as managed key material (rare, but done).

It has the benefit, however, of aligning the practice of what an audit
opinion covers (e.g. there's no carve-out for the DTP operations), when the
audit is disclosed (publicly), and the technical capability for distinctive
issuance. I further suggest that anything less is to undermine the goal and
intent of Mozilla policy, which is quite reasonable - know who can issue
certs and what their policies are.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Next Steps

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 9:23 AM, Peter Bowen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > This is why I'm suggesting, from an audit scope, they're functionally
> > equivalent approach, except one avoids the whole complexity of
> > identifying whether or not a DTP is something different than a sub-CA,
> > since the _intent_ is true in both, which is that 100% of the
> > capabilities related to issuance are appropriately audited - either by
> > the DTP/sub-CA or by the issuing CA/managed CA provider
> >
> > Does this make it clearer the point I was trying to make, which is that
> > they're functionally equivalent - due to the fact that both DTPs and
> > sub-CAs have the issue of multi-party audit scopes?
>
> I agree that you suggest an approach that is probably functionally
> equivalent, but what you describe is not how WebTrust audits work.
>

Peter, does my recent clarification help align this? I think we are in
violent agreement with respect to sub-CAs that you don't get to "pick and
choose" the principles and criteria, but for the specific case of DTPs and
their capabilities, was trying to describe how it could fit within the
'site visit' examination, due to the inability to rely on / use third-party
audits as evidence for the basis of opinion forming.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


DigiCert BR violation

2017-03-08 Thread Ryan Sleevi via dev-security-policy
It appears that DigiCert has violated the Baseline Requirements, as recently 
notified to the CA/Browser Forum. 

The certificate at https://crt.sh/?id=98120546 does not comply with RFC 5280.

RFC 5280 defines the upper-bound of the commonName field as 64 characters, 
specifically

ub-common-name INTEGER ::= 64
-- Naming attributes of type X520CommonName:
--   X520CommonName ::= DirectoryName (SIZE (1..ub-common-name))
--
-- Expanded to avoid parameterized type:
X520CommonName ::= CHOICE {
  teletexString TeletexString   (SIZE (1..ub-common-name)),
  printableString   PrintableString (SIZE (1..ub-common-name)),
  universalString   UniversalString (SIZE (1..ub-common-name)),
  utf8StringUTF8String  (SIZE (1..ub-common-name)),
  bmpString BMPString   (SIZE (1..ub-common-name)) }

The commonName encoded in this certificate is 67 characters
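The check that would have caught this is a simple length comparison against the RFC 5280 upper bound. A minimal pre-issuance lint sketch, assuming the commonName has already been decoded to a string (the function name and 67-character example are illustrative, not any CA's actual tooling):

```python
UB_COMMON_NAME = 64  # ub-common-name upper bound from RFC 5280 Appendix A

def lint_common_name(cn: str) -> list:
    """Return RFC 5280 conformance errors for a decoded commonName value."""
    errors = []
    if len(cn) < 1 or len(cn) > UB_COMMON_NAME:
        errors.append(
            f"commonName is {len(cn)} characters; RFC 5280 bounds it to "
            f"1..{UB_COMMON_NAME}"
        )
    return errors

print(lint_common_name("x" * 67))       # flags the 67-character CN
print(lint_common_name("example.com"))  # []
```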
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-08 Thread Ryan Sleevi via dev-security-policy
Hi Richard,

That's not how Certificate Policy OIDs work - either in the specifications
or in the Baseline Requirements. I'm also not aware of any program
requiring what you describe.

Because of this, it's unclear to me, and I suspect many other readers, why
you believe this is the case, or if you meant that it SHOULD be the case
(for example, developing a new policy requirement), why you believe this.

Perhaps you could share more details about your reasoning?

On Wed, Mar 8, 2017 at 9:15 PM Richard Wang via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As I understand it, an EV SSL certificate has two policy OIDs: one is the
> CABF EV OID, the other is the CA's own EV OID. So a root key transfer does
> not involve an EV OID transfer; a CA can't transfer its own EV OID to
> another CA except when the CA is fully acquired.
>
> So the policy can make clear that a root key transfer does not transfer the
> EV OID: the recipient must use its own EV policy OID for its EV SSL
> certificates, and cannot use the transferor's EV OID.
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-bounces+richard=
> wosign@lists.mozilla.org] On Behalf Of Gervase Markham via
> dev-security-policy
> Sent: Thursday, March 9, 2017 12:21 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Google Trust Services roots
>
> Having gained a good understanding of Peter and Ryan's positions, I think
> I am now in a position to evaluate Peter's helpful policy suggestions.
>
> Whether or not we decide to make updates, as Kathleen pronounced herself
> satisfied at the time with Google's presented documentation and migration
> plan, it would be unreasonable for us to retroactively censure Google for
> following that plan.
>
> On 09/02/17 22:55, Peter Bowen wrote:
> > Policy Suggestion A) When transferring a root that is EV enabled, it
> > should be clearly stated whether the recipient of the root is also
> > receiving the EV policy OID(s).
>
> I agree with this suggestion; we should update
> https://wiki.mozilla.org/CA:RootTransferPolicy , and eventually
> incorporate it into the main policy when we fix
> https://github.com/mozilla/pkipolicy/issues/57 .
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: Google Trust Services roots

2017-03-08 Thread Ryan Sleevi via dev-security-policy
Well, you still said the same thing, and I understood what you said, but
not why you said it or why you believe it. That's why I was asking for new
details.

Certificate Policy OIDs don't say who the certificate belongs to or who
issued the certificate. They describe the policies relative to how the
certificate was issued and validated. This is much clearer if you read the
relevant ETSI TS/EN series of docs related to Certificate Policies.

To your point about identifying the issuer, I may be misunderstanding your
point, but it sounds like you're just confused about how browsers work.
Browsers don't look up the EV OID to determine who the issuer is, so if
you're concerned that would present a problem, it doesn't.

Instead, browsers look for *any* EV enabled OID in the leaf certificate,
then attempt to build/verify that a chain can be built to one or more root
certificates "enabled" for that OID. If they can, the leaf is called EV,
and the browser determines who issued it by looking at the root.

So if Symantec were to issue such a certificate using GlobalSign's EV OID,
and Symantec's root was enabled for that OID, and it validated according to
RFC5280 for that OID, then the certificate would appear as a
Symantec-issued (because Symantec root) EV cert.

Of course, I have oversimplified this for you - the actual UI browsers tend
to take is not from the root, but from the issuing intermediate, or
metadata external to the root, so it's also not an issue for the root to
say Symantec, ValiCert, Equifax, Norton, or something else - because that's
ignored when better data is available, and it always is, if the CA is
responsive to root program requirements.
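
The chain-based EV determination described above can be sketched as follows (a deliberately simplified model; the root names and the OID-to-root table are illustrative assumptions mirroring the Symantec/GlobalSign example, not any browser's actual data):

```python
# Illustrative table: which policy OIDs each root is "enabled" for.
# Hypothetical entries, following the example in the text where
# Symantec's root is enabled for GlobalSign's EV OID.
EV_ENABLED_ROOTS = {
    "Symantec Root": {"2.16.840.1.113733.1.7.23.6", "1.3.6.1.4.1.4146.1.1"},
    "GlobalSign Root": {"1.3.6.1.4.1.4146.1.1"},
}

def ev_display_issuer(leaf_policy_oids, chain_root):
    """Return the operator shown in EV UI, or None if the leaf is not EV."""
    enabled = EV_ENABLED_ROOTS.get(chain_root, set())
    # EV requires ANY leaf policy OID to be enabled for the root the chain
    # terminates at; the UI attribution then comes from the root (or from
    # metadata about the issuing intermediate), not from the OID itself.
    return chain_root if enabled & set(leaf_policy_oids) else None
```

Under this model, a leaf carrying GlobalSign's EV OID that chains to a Symantec root enabled for that OID would be displayed as Symantec-issued EV.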


On Wed, Mar 8, 2017 at 10:31 PM Richard Wang  wrote:

> Maybe I didn't say it clearly.
>
>
>
> An EV SSL certificate has two policy OIDs, one is the CABF EV OID and the
> other is the CA's own EV OID, right?
>
> Check the EV SSL for www.symantec.com, the CABF EV OID is 2.23.140.1.1,
> and the Symantec EV OID is 2.16.840.1.113733.1.7.23.6
>
> And checking the www.globalsign.com EV SSL certificate, there is no CABF EV
> OID; the GlobalSign EV OID is 1.3.6.1.4.1.4146.1.1
>
>
>
> What I mean is that the GlobalSign EV OID 1.3.6.1.4.1.4146.1.1 belongs to
> GlobalSign, so the browser can identify the EV SSL issuer as GlobalSign.
> Google therefore can't use this EV OID for its own EV SSL certificates;
> Google must use its own EV OID.
>
>
>
> So there is no EV OID transfer issue for a root key transfer.
>
>
>
>
>
> Best Regards,
>
>
>
> Richard
>
>
>
> *From:* Ryan Sleevi [mailto:r...@sleevi.com]
> *Sent:* Thursday, March 9, 2017 11:14 AM
> *To:* Gervase Markham ; Richard Wang ;
> mozilla-dev-security-pol...@lists.mozilla.org
>
>
> *Subject:* Re: Google Trust Services roots
>
>
>
> Hi Richard,
>
>
>
> That's not how Certificate Policy OIDs work - either in the specifications
> or in the Baseline Requirements. I'm also not aware of any program
> requiring what you describe.
>
>
>
> Because of this, it's unclear to me, and I suspect many other readers, why
> you believe this is the case, or if you meant that it SHOULD be the case
> (for example, developing a new policy requirement), why you believe this.
>
>
>
> Perhaps you could share more details about your reasoning?
>
>
>
> On Wed, Mar 8, 2017 at 9:15 PM Richard Wang via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> As I understand it, an EV SSL certificate has two policy OIDs: one is the
> CABF EV OID, the other is the CA's own EV OID. So a root key transfer does
> not involve an EV OID transfer; a CA can't transfer its own EV OID to
> another CA except when the CA is fully acquired.
>
> So the policy can make clear that a root key transfer does not transfer the
> EV OID: the recipient must use its own EV policy OID for its EV SSL
> certificates, and cannot use the transferor's EV OID.
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-bounces+richard=
> wosign@lists.mozilla.org] On Behalf Of Gervase Markham via
> dev-security-policy
> Sent: Thursday, March 9, 2017 12:21 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Google Trust Services roots
>
> Having gained a good understanding of Peter and Ryan's positions, I think
> I am now in a position to evaluate Peter's helpful policy suggestions.
>
> Whether or not we decide to make updates, as Kathleen pronounced herself
> satisfied at the time with Google's presented documentation and migration
> plan, it would be unreasonable for us to retroactively censure Google for
> following that plan.
>
> On 09/02/17 22:55, Peter Bowen wrote:
> > Policy Suggestion A) When transferring a root that is EV enabled, it
> > should be clearly stated whether the recipient of the root is also
> > receiving the EV policy OID(s).
>
> I agree with this suggestion; we should update
> https://wiki.mozilla.org/CA:RootTransferPolicy , and eventually
> incorporate it into the main policy when we fix
> 

Re: Google Trust Services roots

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 1:02 PM, Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> There are some limitations relative to where this domain information is
> used, for example
>  in the case of an EV certificate, if Google were to request Microsoft
> use this capability the
> EV badge would say verified by Google. This is because they display the
> root name for the
> EV badge. However, it is the subordinate CA, in accordance with its CP/CPS,
> that is responsible for vetting; as such, the name displayed in this case
> should be GlobalSign.
>
> Despite these limitations, it may make sense in the case of Firefox to
> maintain a similar capability.


Outside of EV, can you articulate why (preferably in a dedicated thread)?

There have been requests over the years from a variety of CAs for this.
Each time, they've been rejected. If there's new information at hand, or a
better understanding of the landscape since then, it would be good to
articulate why, specifically for Mozilla products :)


Re: Symantec: Next Steps

2017-03-07 Thread Ryan Sleevi via dev-security-policy
On Tue, Mar 7, 2017 at 6:37 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Policy Proposal 1: require all CAs to arrange it so that certs validated
> by an RA are issued from one or more intermediates dedicated solely to
> that RA, with such intermediates clearly labelled with the name of the
> RA in the Subject.
>
> If we enact Policy Proposal 1, that allows RAs to be cut off, and also
> provides a natural point for the CP/CPS and audits of the RA to be
> monitored in the CCADB, because they would be attached by the CA to the
> issuing intermediate for that RA.
>
> Symantec's oversight of their RAs was clearly inadequate; various forms
> of misissuance were not detected.
>
>
To make it simpler, wouldn't a better Policy Proposal be to prohibit Delegated
Third Parties from participating in the issuance process? This would be
effectively the same, inasmuch as the only remaining way to allow a
third party to participate in issuance would be to operate a subordinate CA.

I think it's procedurally identical to Policy Proposal 1, but it clarifies
more explicitly that RAs are forbidden, and that all participants in the
issuance ecosystem have a specific set of obligations.


>
>
> Failures
> 
>
> As noted by module peer Ryan Sleevi, this is not the first time Symantec
> has had difficulties with misissued "test" certificates. It is
> disappointing that investigations related to the last incident did not
> turn up the problems which have now been discovered. Various forms of
> investigation and remediation were not, apparently, applied to
> Symantec's RA network in the same way they were supposedly applied at
> Symantec.
>
> It seems to me that Symantec's claim is of lack of knowledge - that they
> contracted and trained CrossCert to do the right things, and the
> auditors said that they were, and they had no evidence that they were
> not, and so they assumed everything was fine. The question is whether
> that lack of knowledge amounts to negligence.
>
> Comments on this topic, with careful justification, are invited.
>
> [The alleged audit failures, as opposed to alleged failures by Symantec,
> will be discussed in a separate process.]
>

Gerv,

Have you examined the most recent set of audits? Do you, in your capacity
as CA Certificate policy peer, believe the audits were correct for their
capability and role? Note that several of them were "WebTrust for CAs" -
not "WebTrust for CAs - SSL BR and Network Security". Do you believe that
complies with the letter of the Baseline Requirements?

Similarly, do you believe Symantec had an obligation to ensure the proper
licensing status of auditors, prior to accepting such audits?

I think these may represent important questions for Mozilla to determine,
in order to evaluate the fullness of the claim you have summarized, and I
think would equally apply if we were discussing externally-operated
subordinate CAs, correct?

Considering the capability afforded to these RAs - full certificate
issuance through independent domain validation - I'm curious whether you
believe this materially represents a practical distinction from the
issuance of an unconstrained subordinate CA, and how responsible the
issuing CA is for overseeing those operations.

How would Mozilla respond if in every case of "RA", it was replaced with
"Sub-CA"? That seems to be the guiding principle here, since they're
functionally indistinguishable in this case, except the RA brought with it
even greater risk, and lacked sufficient audit controls or technical
mitigations to prevent unauthorized access or ensure adequate logging.


Re: Google Trust Services roots

2017-03-09 Thread Ryan Sleevi via dev-security-policy
Yes, it means the two companies used the same policy for issuance - as
identified by that policy. Did you read the ETSI materials I suggested you
do? Perhaps this would make it easier for you.

I don't think encouraging a CA to misissue is productive - if you read other
people's replies, you will see that Ryan identified it as misissuance (though
not for the reasons you note). Misissuance is very bad, as you hopefully
know.

If two certificates, from different organizations, have the same policy
OID, it means they were issued in whatever manner necessary to comply with
that OID at the time they were issued. And that's perfectly ok and not at
all prohibited.

If you're worried that GlobalSign's policy might describe GlobalSign-only
things, then you're forgetting GlobalSign can update their policy at any
time. Just like we use the same CABF EV OID despite the policies for EV
changing every time we update the EVG, at any point GlobalSign could
indicate their EV OID "just" means following the EVGs, which any
organization that is trusted to issue certificates can do at any time.

On Thu, Mar 9, 2017 at 1:14 AM Richard Wang via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The reason we set up one EV OID for all roots is that we use the same
> policy for all EV SSL certificates, no matter which root issues them. The
> policy OID is a unique ID.
>
> If Google uses the GlobalSign EV OID, and GlobalSign also uses this EV OID,
> does this mean two companies use the same policy?
>
> It would be better to run a test: Google issues an EV SSL certificate from
> this acquired root using the GlobalSign EV OID, then checks every browser's
> UI display to see whether that info would confuse browser users.
>
>
> Best Regards,
>
> Richard
>
> -Original Message-
> From: Peter Bowen [mailto:pzbo...@gmail.com]
> Sent: Thursday, March 9, 2017 1:11 PM
> To: Richard Wang 
> Cc: Ryan Sleevi ; Gervase Markham ;
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Google Trust Services roots
>
> Richard,
>
> I'm afraid a few things are confused here.
>
> First, a single CA Operator may have multiple roots in the browser trust
> list.  Each root may list one or more certificate policies that map to the
> EV policy.  Multiple roots that follow the same policy may use the same
> policy IDs and different roots from the same operator may use different
> policies.
>
> For example, I see the following in the Microsoft trust list:
>
> CN=CA 沃通根证书,O=WoSign CA Limited,C=CN
> CN=Class 1 Primary CA,O=Certplus,C=FR
> CN=Certification Authority of WoSign,O=WoSign CA Limited,C=CN
> CN=CA WoSign ECC Root,O=WoSign CA Limited,C=CN
> CN=Certification Authority of WoSign G2,O=WoSign CA Limited,C=CN
>
> Each of these has one EV-mapped policy: 1.3.6.1.4.1.36305.2
>
> CN=AffirmTrust Commercial,O=AffirmTrust,C=US has policy
> 1.3.6.1.4.1.34697.2.1 mapped to EV
> CN=AffirmTrust Networking,O=AffirmTrust,C=US has policy
> 1.3.6.1.4.1.34697.2.2 mapped to EV
> CN=AffirmTrust Premium,O=AffirmTrust,C=US has policy
> 1.3.6.1.4.1.34697.2.3 mapped to EV
> CN=AffirmTrust Premium ECC,O=AffirmTrust,C=US has policy
> 1.3.6.1.4.1.34697.2.4 mapped to EV
> All of these are from the same company but each has their own policy
> identifier.
>
> The information on "Identified by [CA name]" in Microsoft's browsers
> comes from the "Friendly Name" field in the trust list. For example, the
> friendly name of CN=Class 1 Primary CA,O=Certplus,C=FR is "WoSign 1999".
>
> For something like the AffirmTrust example, they could easily sell one
> root along with the exclusive right to use that root's EV OID without
> impacting their other OIDs.
>
> Does that make sense?
>
> Thanks,
> Peter
>
> On Wed, Mar 8, 2017 at 8:44 PM, Richard Wang via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > I don't think so; please check this page:
> > https://cabforum.org/object-registry/ which lists most CAs' EV OIDs, and
> > all browsers ask for the CA's own EV OID when applying for inclusion and
> > EV enablement. So, as I understand it, the browser's display of the EV
> > green bar and of "Identified by [CA name]" is based on this CA's EV OID.
> >
> > I don't think Symantec has any reason to use the GlobalSign EV OID in its
> > EV SSL certificates; why wouldn't Symantec use its own EV OID? If Symantec
> > issued an EV SSL certificate using GlobalSign's EV OID, I think the IE
> > browser would display it as identified by GlobalSign, not by Symantec.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: Symantec: Next Steps

2017-03-09 Thread Ryan Sleevi via dev-security-policy
On Thu, Mar 9, 2017 at 6:48 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> That seems to make sense to me. Given that the BRs have the concept of a
> DTP, how can we best align the two in practice? Does requiring every RA
> to have its own subCA do that?
>

(Wearing Google hat only for this statement)
Have you considered having this discussion in the CA/Browser Forum? Google
had planned to discuss this very topic at our upcoming F2F about how to
address this, and would be very interested in collaborating with Mozilla on
this. I mentioned this recently to Kathleen at the WebTrust TF meetings,
but apologies for not mentioning to you as well.


> > I recognize that Item 2 "replaces" the criteria for Section 8.2, but
> such a
> > replacement is neither reflected within the audit report produced (when
> > complying with the BRs) with respect to the issuing CA's oversight of the
> > DTP - that is, you might reasonably expect a qualification, but for
> Mozilla
> > to ignore said qualification, consistent with Item 2 of "Audit
> > Requirements".
>
> Can an audit be qualified (in the audit sense) by virtue of the person
> _doing_ the audit not being formally qualified (in the other sense!) to
> use those criteria? I would expect audit qualifications to relate to the
> audit subject, not the auditor.
>

(Back to non-Google hat)
You've misunderstood. An auditor performing an audit is not going to
"self-qualify" because they aren't licensed. HOWEVER, an Auditor examining
the Principles/Criteria of an Issuing CA is going to examine the controls
of that CA relative to the operation of DTPs and sub-CAs, and those
Principles/Criteria are based on the Baseline Requirements. If the "sub"
auditor is not properly licensed - even if that "sub" auditor meets the
definition of Mozilla's "replacement" 8.2 - then the issuing CA should
reasonably be expected to receive a qualification that its controls are
insufficient for the criteria of the Baseline Requirements (which do not
have the replacement 8.2).

Does that make more sense? In the Sub-CA case, this is "Principle 2: SSL
Service Integrity", Criteria 8.2, and for DTPs, Criteria 8.4


> >> Yes, I would expect externally-operated sub CAs to have the correct
> >> audits from a Mozilla-qualified auditor.
> >
> > Just to be clear: Given the definitions above, you believe it's
> acceptable
> > for sub-CAs to be issued to parties on the basis of the CA's judgement as
> > to whether there is "sufficient public information available to determine
> > that the party is competent to judge the CA's conformance to the stated
> > criteria", and that so long as they do so, it does not represent any form
> > of violation of Mozilla Policy, even if the CA makes an error in that
> > judgement?
>
> No, because in the case of a sub-CA, we require audits. And when we
> receive them, if they were done by unqualified parties, the CA would
> need to flag that, and we would make a judgement about that party's
> suitability at the time. The issue here arises that, because of the way
> things are set up, these RA's audits were not submitted to Mozilla, and
> so Symantec didn't have to resolve the Schrodinger's Cat of
> (qualified|not qualified and need us to make a judgement).
>
> Having danced enough angels for sufficiently long on the head of this
> pin, though, it's clear we should fix this. I propose we switch our
> auditor requirements to requiring qualified auditors, and saying that
> exceptions can be applied for in writing to Mozilla in advance of the
> audit starting, in which case Mozilla will make its own determination as
> to the suitability of the suggested party or parties. This would involve
> removing bullets 3-6 in the Audit section of 2.4, and rewording bullet 2
> to say something like the above.
>

I'm not sure that we can or should so easily dismiss this with a suggestion
that we're dancing on the head of a pin here. I don't understand why you
believe the act of "Mozilla requiring disclosure of the audits" is relevant.
Can you help me understand where, in the policy, that's required?

I highlight this because the question of "qualified or not qualified", for
RAs (which are not disclosed), is one where the CA accepts a liability of
the decision if they do not seek Mozilla's guidance. For the question of
appropriately WebTrust licensed, this has an objective basis for which
compliance with Mozilla can be demonstrated at the time the audit was
accepted. However, when entering into the "CA's discretion" side - judging
the availability of sufficient public information - any CA that fails to
obtain Mozilla's opinion a priori bears the liability and responsibility if
it gets that judgement wrong.

I agree that removing the conflicting definition of qualified auditor is
likely a suitable outcome, and a most welcome improvement, but I do think
we owe it to the community to provide a greater degree of clarity than
currently provided by this thread.

Re: Include Renewed Kamu SM root certificate

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 9:56 AM, tugba onder via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> 3.2.2.4.6: The applicant representative is requested to change a web page
> hosted on the domain for which the certificate is requested. That change is
> done by serving the file which we sent for this purpose. This method means
> request-upon-change for us, but until the next audit, we plan to use the
> request token method indicated in CAB BR section 3.2.2.4.6.
>
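
As an aside, the comparison step of the request-token method can be sketched as below (a minimal illustration only: the token value and the whitespace trimming are assumptions, and the fetch of the agreed-upon URL is elided):

```python
import hmac

def token_matches(served_content: str, request_token: str) -> bool:
    # Compare the content the applicant serves at the agreed-upon location
    # with the Request Token issued by the CA. compare_digest performs a
    # constant-time comparison, avoiding a timing side channel.
    return hmac.compare_digest(served_content.strip(), request_token)
```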

Right, but the reason I highlighted this is that the audit noted
conformance to v1.4.1, but the process you described wasn't consistent with
v1.4.1. It's understandable that the auditable controls for 1.4.1 have not
been developed, so I'm not particularly surprised that this wouldn't have
been called out in the audit, but it did highlight a divergence between the
statement as to how you validate domains and the stated compliance.

To me, it signaled that there may be other places where compliance is
asserted to v1.4.1 but, in the absence of audited criteria covering the
changes in v1.4.1, the changes may not have actually been implemented. The
serial number is another example of that - where the practice and statement
diverged.

Here's another example: Section 2.2 of the Baseline Requirements requires
that the CA SHALL publicly give effect to these Requirements and represent
that it will adhere to the latest published version. (and then describes an
illustrative examples of fulfilling that obligation)

Rather than including that clause, Section 1 of your CPS states "Kamu SM
conforms to the updated versions of the standard ... and CA/Browser Forum
Baseline Requirements (BR) for the Issuance and Management of Publicly
Trusted Certificates". This is all perfectly fine and compliant with
Section 2.2 - you've made the statement and represented adherence.

However, the matters of both serial numbers and domain validation (as
described) are examples of non-adherence to that Section 1 of your CPS,
because the procedures used weren't consistent with Kamu SM conforming to
the updated version of the standard.

So that's why I suggested that you carefully examine the updated version
for any other divergences. For example, the Mozilla community would not
have been aware of the non-compliance to 3.2.2.4.6 had you not shared
details, which is why Andrew originally requested them. There's the
possibility of other areas of non-compliance, hence the similar request to
fully examine the Baseline Requirements and double check to make sure all
policies/processes are consistent - since Section 1 of your CPS says they
will be, but it was determined they weren't.

Once you've done that examination and identified any other issues, I was
suggesting sharing those. That way, the community can know that we're
"starting" from a good and compliant state, and then moving forward. It
also avoids any issues where, if three years down the road we find
something was overlooked, there's no way to excuse that - as there was the
opportunity now to examine and comply.

As it stands right now, Section 3.2.2 is in conflict with Section 1. I
think that needs to be fixed.



> Prior to CA/B BR v1.3.7, certificate serial numbers were required to
> contain at least 20 bits of entropy. We were satisfying this condition by
> adding 32 bits of entropy to the serial number. We should have implemented
> the 64-bit entropy requirement beginning with v1.3.7, which went into
> effect on September 30th, but the system was left adding 32 bits of
> entropy. As a result of Andrew's warnings, we have quickly deployed a
> 64-bit random generator implementation and updated the test web page
> certificate to ensure this.
> There is no active certificate that we have issued since the process of our
> new root application has not been completed. Certificates that will issue
> after our application process is completed will provide this feature.
>

Similarly, at the time audit report was produced, Section 10.3 ("End Entity
SSL Certificate Template") was not consistent with Section 1 (current BRs).

With your current update, this is resolved, although the matter still
remains for Section 3.2.2 above.
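
The 64-bit serial entropy requirement under discussion can be sketched as follows (a minimal illustration using a CSPRNG; zero is rejected because certificate serial numbers must be positive and non-zero):

```python
import secrets

def new_certificate_serial(entropy_bits: int = 64) -> int:
    # Draw at least `entropy_bits` of output from a CSPRNG; reject zero so
    # the resulting serial number INTEGER is positive and non-zero.
    while True:
        serial = secrets.randbits(entropy_bits)
        if serial:
            return serial
```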

Further, given these, I'm suggesting it would be good to review your
policies and practices for consistency with/adherence to Section 1 (or,
more aptly, to the Baseline Requirements), share if there are any further
inconsistencies identified, and then continue with the discussions :)


Re: Symantec: Next Steps

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 12:57 AM, Peter Bowen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> If the DTP is only performing the functions that Jakob lists, then
> they only need an auditor's opinion covering those functions. In fact
> there is no way for an auditor to audit functions that don't exist.
> For example, consider the WebTrust for CA criteria called "Subordinate
> CA Certificate Life Cycle Management".  If the only CA in scope for
> the audit does not issue Subordinate CA Certificates, then that
> criteria is not applicable.  Depending on the auditor, it might be
> that the CA needs to write in some policy (public or private) "the CA
> does not issue Subordinate CA Certificates."
>
> Many auditors vary how much they charge for their work based on the
> expected effort required to complete the work.  I believe Jakob's point
> is that an audit where all the criteria are just "we do not do X" is
> very quick -- for example a DTP that does not have a HSM and does not
> digitally sign things is going to be a much cheaper audit than one
> that does have a HSM and signs things under multi-person control.


So I agree with this - namely, that a DTP audit does not include the
Principles and/or Criteria relevant to the operational aspects the DTP
doesn't control, because the auditor does not form an opinion about the
third party's operation. To continue with your example: if the issuing CA
handles the HSM, and is already audited as such, then the DTP's auditor
will not opine on another auditor's work.

So the scope of a DTP audit will be limited to the functions provided by
the DTP.

But the same is true for an externally operated sub-CA, for which the
majority of services are provided for by the "issuing" CA, and the DTP
performs the validation functions for this sub-CA.

This is why I'm suggesting that, from an audit-scope perspective, they're
functionally equivalent approaches, except that one avoids the whole
complexity of determining whether or not a DTP is something different than a
sub-CA, since the _intent_ is the same in both: that 100% of the
capabilities related to issuance are appropriately audited - either by the
DTP/sub-CA or by the issuing CA/managed CA provider.

Does this make it clearer the point I was trying to make, which is that
they're functionally equivalent - due to the fact that both DTPs and
sub-CAs have the issue of multi-party audit scopes?


Re: Symantec: Next Steps

2017-03-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 8, 2017 at 1:29 AM, Santhan Raj via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Ryan,
>
> Section 8.4 (cited below), as worded today, does not mandate a DTP to go
> through an audit. Rather, it requires the CA to perform additional
> out-of-band checks or perform the domain/IPAddress validation (3.2.2.4 &
> 3.2.2.5) by itself, when the DTP is not audited as per 8.4 (btw BR
> incorrectly refers to section 8.1 for audit schemes).
>
> It allows (or doesn't prohibit) the DTP to perform other validation checks
> in 3.2.2 (while the CA performs 3.2.2.4/5) without going through an
> WebTrust/ETSI audit, and a CA may choose to perform an internal audit of
> the DTP's process vs forcing them through a WebTrust/ETSI audit.
>
> There are other checks the CA must perform, but as far as I can tell there
> isn't any requirement that states a "DTP MUST go through an audit" in the
> BRs.
>
> "If a Delegated Third Party is not currently audited in accordance with
> Section 8 and is not an Enterprise RA, then prior to certificate issuance
> the CA SHALL ensure that the domain control validation process required
> under Section 3.2.2.4 or IP address verification under 3.2.2.5 has been
> properly performed by the Delegated Third Party by either (1) using an
> out-of-band mechanism involving at least one human who is acting either on
> behalf of the CA or on behalf of the Delegated Third Party to confirm the
> authenticity of the certificate request or the information supporting the
> certificate request or (2) performing the domain control validation process
> itself.


I think we may read this differently, Santhan.

Either the issuing CA must itself verify the information present in the
request - in which case the DTP effectively acts as an information
aggregator, and the CA is performing the verification function - or, if the
DTP's validation of the information is to be trusted, then the DTP MUST
undergo an audit.


Re: DigiCert BR violation

2017-03-13 Thread Ryan Sleevi via dev-security-policy
On Monday, March 13, 2017 at 5:12:39 PM UTC-4, Jeremy Rowley wrote:
> I don't disagree that Teletex strings shouldn't be used, and we no longer
> include them in new certificate requests or renewals.  However, we do
> include Teletex strings in certificates that originally had Teletex strings
> but are being re-keyed.  Teletex inclusion wasn't intentional and should
> shortly be fixed.

Are you saying that there are one or more clients that require DigiCert to
support Teletex strings?

Do you have an ETA on the other issues?


Re: Mozilla Root Store Policy 2.4.1

2017-03-06 Thread Ryan Sleevi via dev-security-policy
Hi Gerv,

I'm assuming that, as with previous discussions, you'd like to keep the
discussion on the list.

Overall: I would suggest every "should" be replaced with either a "must" or
a "shall", RFC 2119-style, to avoid any "best practice" vs. "required
mandate" confusion.

1.1 Scope
  Item 2:
Bullet 1: This would allow the anyEKU to be considered 'out of scope'.
Is that intentional? (notwithstanding Section 5.3.1)
Bullet 2: This potentially leads to confusion as to what it means to
'not allow' such types, given that nameConstraints only apply to the type
for which they're present. That is, the absence of an iPAddress
nameConstraint means there's no restriction, while the presence has to be
constructed in a way to exclude all IP addresses in the excludedSubtrees.
Similarly, as captured during the SRVName discussions in the CA/Browser
Forum, there's uncertainty as to how to capture such an exclusion with an
SRVName nameConstraint.

  I don't know how best to suggest rephrasing this, other than I think the
scope may need to forward-reference a subsequent section that defines the
technical means for that scope. I suspect you were trying to avoid this,
but I think that to avoid ambiguity as to what the scope is, you'll want to
ensure a precise technical definition is linked to the prosaic goal.

  Item 3: Similar to above, this allows excluding the anyEKU from scope
(notwithstanding Section 5.3.1)
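To make the iPAddress point under Bullet 2 concrete, here is a minimal stdlib sketch (Python; the helper name is hypothetical) of why the exclusion must be constructed explicitly: the excludedSubtrees must cover the entire IPv4 and IPv6 address space, since an absent iPAddress constraint imposes no restriction at all. A real constraint would be DER-encoded in an X.509 nameConstraints extension; this only illustrates the address math.

```python
import ipaddress

# Sketch of the excludedSubtrees needed to "not allow" iPAddress names:
# both address families must be excluded in full, because the absence of
# an iPAddress nameConstraint means no restriction at all.
excluded_subtrees = [
    ipaddress.ip_network("0.0.0.0/0"),  # all of IPv4
    ipaddress.ip_network("::/0"),       # all of IPv6
]

def ip_is_excluded(ip: str) -> bool:
    """True if the address falls within some excluded subtree."""
    return any(ipaddress.ip_address(ip) in net for net in excluded_subtrees)

print(ip_is_excluded("192.0.2.1"))    # → True
print(ip_is_excluded("2001:db8::1"))  # → True
```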

3
  Item 2: I realize the intent is to match the current wording, but it may
be worth considering clarifying here, in the event of an RA who performs
validation/verification functions, but does not press the so-called "Big
Red Button" to issue the cert. Imagine a process where CA receives a
request, RA does domain validation (... incorrectly), RA tells Subscriber,
Subscriber then asks CA to issue, CA issues now that RA has fulfilled the
DCV - this is absolutely something for which multi-factor authentication is
intended, in the current Mozilla policy, but which ambiguity regarding
"directly" leads to uncertainty. Perhaps this is for a separate policy
modification (and I'll let you track that work as appropriate), but perhaps
"all accounts capable of causing certificate issuance or performing
validation functions" can clarify this?
  Item 3: "verify certificate signing requests" may also lead to ambiguity
as to whether this applies only to CSRs (which are but one way of
manifesting a certificate request) or you mean "requests for certificates"
or simply "certificate request" (omitting signing to avoid the CSR
ambiguity)

3
  - Your markdown formatting for a-c is off :)

3
  - As you reformat this, perhaps it's worth borrowing the Microsoft
approach of mapping trust bits to criteria

4.1.2
  - You link to the "Baseline Requirements" document, but don't define what
a BR audit is. While 4.1.1 lists audit criteria, this ambiguity may be
undesirable. As with my immediately preceding section, it may be worth
mapping "trust bits" to "accepted audits", e.g. "For CA certificates which
have the SSL trust bit set, we expect the following audits ..."
   - Similarly, when two audit schemes are interchangeable, it may be worth
clarifying. For example, would Mozilla accept an ETSI TS 102 042 audit to
the DVCP profile along with a WebTrust for CAs - 2.0 audit? My hope would
be 'no', but the proposal leaves this ambiguous. https://aka.ms/auditreqs
gives a clearer idea of what I'm thinking.

4.2
  - Another a/b/c markdown formatting snafu

5.2
  - There's a thread in CA/Browser Forum regarding what an "ASN.1 DER
encoding error" is. Given 5280/X.509 describe the signature as over the DER
encoding (but that the certificate doesn't necessarily match - and see the
IETF discussion), perhaps it's worth clarifying that CAs must not _sign_
such certificates.
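As a hedged, generic illustration of the kind of encoding error at issue (not drawn from any certificate discussed here): BER permits multiple encodings of the same value, while DER mandates exactly one canonical form, so a signer must emit that form. BOOLEAN is the classic case.

```python
# Generic BER-vs-DER divergence (not a parser for real certificates):
# BER accepts any non-zero octet for BOOLEAN TRUE, while DER requires
# exactly 0xFF. A CA signing bytes in the BER form below would be
# signing a structure with an ASN.1 DER encoding error.
DER_TRUE = bytes([0x01, 0x01, 0xFF])  # tag BOOLEAN, length 1, value 0xFF
BER_TRUE = bytes([0x01, 0x01, 0x01])  # valid BER, but not valid DER

def is_der_boolean(encoded: bytes) -> bool:
    """A DER BOOLEAN must be exactly 3 bytes with value 0x00 or 0xFF."""
    return encoded in (bytes([0x01, 0x01, 0x00]), bytes([0x01, 0x01, 0xFF]))

print(is_der_boolean(DER_TRUE))  # → True
print(is_der_boolean(BER_TRUE))  # → False
```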

6
  - Is this a subset, superset, or replacement for the Baseline
Requirements?


That's a quick scan of more than enough feedback, and I figure if we start
from there, I can review the subsequent sections if/as you make
modifications.

On Mon, Mar 6, 2017 at 10:10 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The next stage in the improvement of the Mozilla Root Store Policy is
> version 2.4.1. This is version 2.4, but rearranged significantly to have
> a more topic-based ordering and structure to it. I have also made
> editorial changes to clean up and clarify language, and improved the
> Markdown markup.
>
> *There is no intent to change any normative requirement in this update.*
>
> Therefore, I would appreciate people reviewing it to make sure I have
> not accidentally done so. You can find the draft here:
>
> https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md
>
> Version 2.4, the current version, is here:
> https://github.com/mozilla/pkipolicy/blob/2.4/rootstore/policy.md
>
> The diff isn't particularly useful, but here it is:
> https://github.com/mozilla/pkipolicy/compare/2.4...master
>
> To assist with that review, 

Re: Mozilla Root Store Policy 2.4.1

2017-03-07 Thread Ryan Sleevi via dev-security-policy
On Tue, Mar 7, 2017 at 5:09 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > 1.1 Scope
> >   Item 2:
> > Bullet 1: This would allow the anyEKU to be considered 'out of
> scope'.
> > Is that intentional? (notwithstanding Section 5.3.1)
> > Bullet 2: This potentially leads to confusion as to what it means to
> > 'not allow' such types, given that nameConstraints only apply to the type
> > for which they're present. That is, the absence of an iPAddress
> > nameConstraint means there's no restriction, while the presence has to be
> > constructed in a way to exclude all IP addresses in the excludedSubtrees.
> > Similarly, as captured during the SRVName discussions in the CA/Browser
> > Forum, there's uncertainty as to how to capture such an exclusion with an
> > SRVName nameConstraint.
> >
> >   I don't know how best to suggest rephrasing this, other than I think
> the
> > scope may need to forward-reference a subsequent section that defines the
> > technical means for that scope. I suspect you were trying to avoid this,
> > but I think that to avoid ambiguity as to what the scope is, you'll want
> to
> > ensure a precise technical definition is linked to the prosaic goal.
> >
> >   Item 3: Similar to above, this allows excluding the anyEKU from scope
> > (notwithstanding Section 5.3.1)
>
> Are these issues also present in 2.4?
>

Ish? I can't quite decide whether or not they are, which is why I raised it.

For example, Inclusion, Item 9 describes what it takes for something to be
technically constrained, which explicitly excludes anyExtendedKeyUsage and
then further refines the definition (with a forward declaration to the BRs)
for id-kp-serverAuth.

So overall, I can't see an explicit prohibition on anyExtendedKeyUsage
within the existing Mozilla Policy, and all requirements (particularly
audits) flow down.


> 3
> >   - As you reformat this, perhaps it's worth borrowing the Microsoft
> > approach of mapping trust bits to criteria
>
> Can you link to an example?
>

I did in my 4.1.2 notes - but http://aka.ms/auditreqs and more specifically
https://social.technet.microsoft.com/wiki/contents/articles/31635.microsoft-trusted-root-certificate-program-audit-requirements.aspx#Conventional_CA_Audit_Standards


I think 4.1.2 is the appropriate place for such a mapping, but I
highlighted it because Section 3.3 leaves some confusion relative to 4.1.2,
so perhaps it may be worth

Small 3.3 nit: Replace "Below" with "The following list"? "Below" leaves
it uncertain if 'every conflict in Section 3.3 + onwards is intentional' ;)

>
> > 4.1.2
> >   - You link to the "Baseline Requirements" document, but don't define
> what
> > a BR audit is. While 4.1.1 lists audit criteria, this ambiguity may be
> > undesirable. As with my immediately preceeding section, it may be worth
> > mapping "trust bits" to "accepted audits", e.g. "For CA certificates
> which
> > have the SSL trust bit set, we expect the following audits ..."
> >- Similarly, when two audit schemes are interchangable, it may be
> worth
> > clarifying. For example, would Mozilla accept an ETSI TS 102 042 audit to
> > the DVCP profile along with a WebTrust for cAs - 2.0 audit? My hope would
> > be 'no', but the proposal leaves this ambiguous.
> https://aka.ms/auditreqs
> > gives a clearer idea of what I'm thinking.
>
> I've added lists of acceptable criteria beside each audit requirement.
>
> Should we simply say that a given root (and I say root, as opposed to
> 'CA') has to be covered by all-WebTrust or all-ETSI auditing?
>

I think your new wording is still fairly unclear, and it took quite a bit
of time to parse.

For example, 4.1.1 (7) leaves it ambiguous what is "appropriate for the
trust bit(s) being applied for". 4.1.1 (4) suggests QCP is appropriate for
TLS (it isn't; it is, however, accepted for email).

Your new wording still suggests a mix and match approach, so I'd suggest:

4.1.2 Required Audits

(Do all sub-CAs need to use the same scheme as the parent CA? I would
presume yes, but not clear)

4.1.2.1 WebTrust

If being audited to the criteria developed by the WebTrust Task Force of
AICPA (or is it just CPA Canada? I think it's still AICPA), the following
audits are required:

* For the SSL trust bit, a CA and all subordinate CAs technically capable
of issuing server certificates [ref] must have all of the following:
  * WebTrust for CAs - v2.0
  * WebTrust for CAs - SSL Baseline with Network Security - v2.0
  * If applying for EV recognition, a WebTrust for CAs - EV SSL v.1.4.5+
* For the email trust bit, a CA and all subordinate CAs technically capable
of issuing email certificates [ref] must have all of the following:
  * WebTrust for CAs - v2.0

4.1.2.2 ETSI

If being audited ...

* For the SSL trust bit, a CA and all subordinate CAs ... must have all of
the following:
  * ETSI TS 102 042 v.2.3.1 DVCP, OVCP, PTC-BR  [note: This will shortly be
disallowed and replaced with 

Re: Maximum validity of pre-BR certificates

2017-03-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Mar 4, 2017 at 4:20 PM, Daniel Cater via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Saturday, 4 March 2017 21:21:41 UTC, Jeremy Rowley  wrote:
> > Common practice amongst certain CAs. There were several CAs that have
> always opposed cert validity periods longer than three years. This
> opposition led to reducing the validity period first to 60 months and then
> to 39 months.
>
> The reason I brought this up is that I found this certificate in the wild
> with a validity of almost 124 months (10 years and 4 months):
> https://crt.sh/?id=710954&opt=cablint,x509lint
>
> I read the cablint warning and wondered if the certificate was in breach
> of any pre-BR policies at the time that it was issued, but I assume not.
>
> Note that the certificate is live and trusted by browsers that haven't yet
> blocked SHA-1 certificates: https://newleaderscouncil.org/


Even if SHA-1 were still enabled, Chrome would block such certificates.

Currently, Chrome sets the absolute upper maximum at 10 years for pre-BR
certificates, 5 years from the BR effective date, and 3 years after the
sunset. My hope for Chrome 59 is to change that to 3 years across the
board, with further reductions thereafter.
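The tiering described here can be sketched as follows; the 10/5/3-year figures are from this message, but the cutoff dates and function name are assumptions (roughly the BR effective date and the 39-month sunset), not a transcription of Chrome's actual implementation.

```python
from datetime import date

# Assumed cutoffs, for illustration only: BRs effective mid-2012, and the
# 39-month sunset in early 2015. Chrome's real code may differ.
BR_EFFECTIVE = date(2012, 7, 1)
SUNSET_39_MONTHS = date(2015, 4, 1)

def max_validity_years(not_before: date) -> int:
    """Upper bound on certificate lifetime, per the tiers described above."""
    if not_before < BR_EFFECTIVE:
        return 10  # pre-BR certificates
    if not_before < SUNSET_39_MONTHS:
        return 5   # issued under the BRs, before the sunset
    return 3       # after the sunset

print(max_validity_years(date(2011, 1, 1)))  # → 10
print(max_validity_years(date(2016, 1, 1)))  # → 3
```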


Re: Incapsula via GlobalSign issued[ing] a certificate for non-existing domain (testslsslfeb20.me)

2017-03-02 Thread Ryan Sleevi via dev-security-policy
Hi Jakob,


On Thu, Mar 2, 2017 at 9:14 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I read his previous answer as saying that the system will in no case
> extend the validity of a validation beyond the duration of the
> certificate in which it was originally listed (that duration being
> clearly visible in the certificates in question).
>

For the avoidance of doubt or confusion, I suspect it would be best for
Doug to be able to answer the questions posed to Doug :)


> The only corner cases seemingly not answered are these:
>
> Does GlobalSign allow (for this product) that initial inclusion of a SAN
> within a subscription period to be accepted based on a previous
> validation occurring more than 39 months before the last permitted
> certificate reissuance with added/removed SANs?
>

I'm having trouble understanding what you're asking here. While I may be
the only one confused, perhaps you can reword this question?


> Does GlobalSign allow other certificate products that can be freely
> reissued within their validity period to be based on validation data
> that could exceed the 39 month age limit before the certificate and its
> reissuance option expires?
>

This is a similar question which I personally find confusingly worded, so
perhaps you can expand.


> Conversely there are questions about what the BRs requires in such
> corner cases:
>
> Do the BRs require the 39 month age limit to be satisfied when a
> certificate is reissued with unchanged subject data and expiry date,
> (but with new serial and public key), thus expiring less than the BR
> permitted maximum validity duration after an original issuance date
> within that 39 month limit?
>

The Baseline Requirements do not define "reissue". Every certificate is a
new issuance. There is no such thing as a "reissue", even if two
certificates are markedly similar in various aspects.

The Baseline Requirements allow you to validate at T=0, issue at T=38 for
L=39, where T means 'time' (and 38 just means 'one second before 39
months') and L means lifetime.

However, if a new certificate is issued - with new serial and public key,
at T=40, the Baseline Requirements require this information be revalidated.
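In month terms, that rule can be sketched like this (hypothetical helper; the 39-month figure is from the BRs of that era, and reuse is judged at each issuance because the BRs define no "reissue"):

```python
# Hypothetical sketch of the reuse arithmetic discussed above, in whole
# months. The 39-month limit is from the Baseline Requirements of the
# time; the function name is illustrative, not a BR term.
MAX_VALIDATION_AGE_MONTHS = 39

def may_reuse_validation(validated_at_month: int, issued_at_month: int) -> bool:
    """Validation data may support a new issuance only while it is still
    younger than 39 months at the moment of issuance."""
    return issued_at_month - validated_at_month < MAX_VALIDATION_AGE_MONTHS

print(may_reuse_validation(0, 38))  # → True: issuance just inside the window
print(may_reuse_validation(0, 40))  # → False: even a "reissue" needs revalidation
```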


> That's a bit harsh on the subscriber (for a simple failure to notify),
> but probably within the legal requirements of the BRs.


Why is it harsh? CAs are required to revoke such certificates. The
Subscriber Agreement is simply one way of describing the revocation
requirements. GlobalSign is equally obligated to revoke under
4.9.1.1, Item 6, which states

"6. The CA is made aware of any circumstance indicating that use of a
Fully-Qualified Domain Name or IP address in the Certificate is no longer
legally permitted (e.g. a court or arbitrator has revoked a Domain Name
Registrant's right to use the Domain Name, a relevant licensing or services
agreement between the Domain Name Registrant and the Applicant has
terminated, or the Domain Name Registrant has failed to renew the Domain
Name);"

