Re: An alternate perspective on Symantec

2017-06-06 Thread userwithuid via dev-security-policy
Inspired by David's message, 2 suggestions for the Symantec plan:

1. Mozilla - and ideally Google as well - should clearly and explicitly 
communicate in the official statement on this that the "new" Symantec will 
still be strictly monitored even after the current remediation plan has been 
implemented. Their issue history still very much counts, potentially resulting 
in much harsher responses to future policy violations than would be the case 
for first-time offenders/other CAs. This is to counter the potential 
misconception (aka marketing) that everything is totally fine now.

Having Symantec inform their subscribers, as David mentions, is a great 
idea. Specifically, I think Symantec should be required to make their 
subscribers aware of the Mozilla-written statement regarding the future of 
their CA soon (<= 1 month?) after its release. This is to prevent too many 
subscribers from getting caught by surprise in the future (see StartCom), to 
give them a chance to see more than one side, CA view _and_ Mozilla view, and 
to ensure they know they are Symantec subscribers in the first place (RapidSSL 
cert chaining to a GeoTrust root bought from some reseller = Symantec? yeah, 
totally obvious...).
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec response to Google proposal

2017-06-06 Thread userwithuid via dev-security-policy
On Tuesday, June 6, 2017 at 2:03:29 PM UTC, Gervase Markham wrote:
>
> 1) Scope of Distrust
> 
> Google proposal: existing CT-logged certificates issued after 1st June
> 2016 would continue to be trusted until expiry.
> Symantec proposal: all CT-logged certificates should continue to be
> trusted until expiry.
> Rationale for change: if transparency is enough to engender trust, that
> principle should be applied consistently. This also significantly
> reduces the revalidation burden.

As mentioned in the other Symantec thread, right now Firefox doesn't do CT so 
notBefore >=2016-06 is the non-CT way of at least partially distrusting the 
old/unknown PKI soon-ish. I don't think it's a good idea to just broaden this 
to 2015-01 unless we know we can do CT by 2018-02. (Not sure if we'd be able to 
defend 2016-06 alone if Google agrees to do 2015-01 though)
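As an illustration of the date-based partial distrust discussed above, the trust decision can be modeled as a small policy function. This is a hypothetical sketch of the proposal's logic, not Firefox's or Chrome's actual implementation; the function name and cutoff handling are assumptions:

```python
from datetime import datetime, timezone

# Cutoff from the Google proposal: CT-logged certificates with a
# notBefore on or after this date keep their trust until expiry.
CUTOFF = datetime(2016, 6, 1, tzinfo=timezone.utc)

def legacy_cert_trusted(not_before: datetime, ct_logged: bool) -> bool:
    """Date-based stand-in for CT enforcement: for a client that does
    not yet check CT itself, notBefore >= 2016-06-01 plus the logging
    requirement is the only lever for partially distrusting the
    older/unknown parts of the PKI."""
    return ct_logged and not_before >= CUTOFF
```

Under this model a logged 2017 certificate stays trusted, while anything older than the cutoff (or unlogged) does not; broadening the window to 2015-01 would simply mean moving `CUTOFF` back, which is exactly the concession being debated.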

Then again, also in the other thread, you said "Mozilla would wish" the old PKI 
to be distrusted "sooner than November 2020" and you "expect it to be some time 
in 2018". Which I found to be a very bold proposition. Has Symantec commented 
on that yet? If not, can you make them? :-) In the event that we actually get 
2018, allowing some older certs for a few more months might be worth conceding. 
A little less technically enforceable risk reduction from 2018-02 to 2018-?? in 
exchange for the "real deal" sooner than expected sounds good.


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-06 Thread Jakob Bohm via dev-security-policy

On 06/06/2017 22:08, Ryan Sleevi wrote:
> On Tue, Jun 6, 2017 at 2:28 PM, Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> I am saying that setting an administrative policy for inclusion in a
>> root program is not the place to do technical reviews of security
>> protocols.
>
> Of course it is. It is the only one that has reliably worked in the history
> of the Web PKI. I would think that would be abundantly evident over the
> past five years.

I have yet to see (though I have studied ancient archives) the root
program and/or the CAB/F doing actual review of technical security
protocols and data formats.

>> And I proceeded to list places that *do* perform such peer
>> review at the highest level of competency, but had to note that the list
>> would be too long to enumerate in a stable root program policy.
>
> Except none of them are, as evidenced by what they've turned out. The only
> place where Mozilla users are considered, en masse, is in Mozilla policy.
> It is the one and only place Mozilla can ensure its needs are appropriately
> and adequately reflected.

>> SDO?  Unfamiliar with that TLA.
>
> Standards defining organization.

Ah, like the very examples I gave of competent protocol review
organizations that should do this.

>> And why should Mozilla (and every other root program) be consulted to
>> unanimously preapprove such technical work?  This will create a massive
>> roadblock for progress.  I really see no reason to create another FIPS
>> 140 style bureaucracy of meaningless rule enforcement (not to be
>> confused with the actual security tests that are also part of FIPS 140
>> validation).
>
> This is perhaps the disconnect. It's not meaningless. A significant amount
> of the progress made in the past five years in the Web PKI has come from
> one of two things:
> 1) Mozilla or Google forbidding something
> 2) Mozilla or Google requiring something

Yes, but there is a fundamental difference between Mozilla/Google
enforcing best practices and Mozilla/Google arbitrarily banning
progress.

> The core of your argument seems to be that you don't believe Mozilla can
> update its policy in a timely fashion (to which this list provides ample
> counter-evidence), or that the Mozilla community should not be
> consulted about what is appropriate for the Mozilla community (which is, on
> its face, incorrect).

No, I am saying that the root program is the wrong place to do technical
review and acceptance/rejection of additional CA features that might
improve security with non-Mozilla code, with the potential that at some
future point in time Mozilla might decide to start including such
facilities.

For example, the Mozilla root program was not the right place to discuss
whether CAs should be allowed to do CT logging at a time when only Google
code was actually using that.

The right place was Google submitting the CT system to a standards
organization (in this case the IETF), and, once any glaring security
holes had been reviewed out, beginning to have some CAs actually do this,
before the draft RFC could have the implementations justifying
publication as a standards track RFC.  Which is, I believe, exactly what
happened.  The Mozilla root policy did not need to change to allow this
work to be done.

One litmus test for a good policy would be: "If this policy had existed
before CT, and Mozilla was not involved with CT at all, would this
policy have interfered with the introduction of CT by Google?"

>>> Look, you could easily come up with a dozen examples of improved
>>> validation methods - but just because they exist doesn't mean keeping
>>> the "any other method" is good. And, for what it's worth, of those that
>>> did shake out of the discussions, many of them _were_ insecure at
>>> first, and evolved through community discussion.
>>
>> Interestingly, the list of revocation checking methods supported by
>> Chrome (and proposed to be supported by future Firefox versions) is
>> essentially _empty_ now.  Which is completely insecure.
>
> Not really "interestingly", because it's not a response to the substance of
> the point, but in fact goes to an unrelated (and technically incorrect)
> tangent.
>
> Rather than engage with you on that derailment, do you agree with the
> easily-supported point (by virtue of the CABF Validation WG's archives)
> that CAs proposed the use of insecure methods for domain validation, and
> those were refined in time to be more appropriately secure? That's
> something easily supported.

I am not at all talking about "domain validation" and the restrictions
that had to be imposed to stop bad CA practices.

I am talking about allowing non-Mozilla folk, working with competent
standards defining organizations, to create additional security
measures requiring signatures from involved CAs.

>> Within *this thread* proposed policy language would have banned that.
>>
>> And neither I, nor any other participant seemed to realize this specific
>> omission until my post this morning.
>
> Yes, and? You're showing exactly the value of community review - and where
> it would be better to make a mistake that prevents something benign, rather
> than allows something dangerous, given the pattern and practice we've seen
> over the past decades.

Re: An alternate perspective on Symantec

2017-06-06 Thread David E. Ross via dev-security-policy
On 6/6/2017 12:10 PM, Peter Kurrasch wrote:
> Over the past months there has been much consternation over Symantec and
> the idea of "too big to fail". That is a reasonable idea but makes
> difficult the discussion of remedies for Symantec's past behavior: How
> does one impose a meaningful sanction without causing Symantec to fail
> outright since the impact would be catastrophic?
> 
> I'd like to offer an alternate perspective on the situation in the hope
> that it might simplify the discussions of sanctions. The central point
> is this: Symantec is too big and too complicated to function properly.
> 
> Consider:
> 
> * Symantec has demonstrated an inability to exercise sufficient
> oversight and control over the totality of their PKI systems.
> Undoubtedly there are parts which have been and continue to be well-run,
> but there are parts for which management has been unacceptably poor.
> 
> * No cases have been identified of a breach or other compromise of
> Symantec's PKI technology/infrastructure, nor of the infrastructure of a
> subordinate PKI organization for which Symantec is responsible. The
> possibility does exist, however, that compromises have occurred but
> might never be known because of management lapses.
> 
> * Many of Symantec's customers play a critical role in the global
> economy and rely on the so-called "ubiquitous roots" to provide their
> services. Any disruption in those services can have global impacts.
> Symantec, therefore, plays a significant role in the global economy but
> only insofar as it is the gatekeeper to the "ubiquitous roots" upon
> which the global economy relies.
> 
> * ‎Symantec has demonstrated admirable commitment to its customers but
> appear less so when it comes to the policies, recommendations, and
> openness of the global PKI community. Whether this indicates a willful
> disregard for the community‎ or difficulty in incorporating these
> viewpoints into a large organization (or something else?) is unclear.
> 
> 
> From this standpoint, the focus of sanctions would be on Symantec's
> size. Obviously Mozilla is in no position to mandate the breakup of a
> company but Mozilla (and others) can mandate a reduced role as
> gatekeeper to the "ubiquitous roots". In fact, Symantec has already
> agreed to do just that.
> 
> In addition, this viewpoint would discourage increasing Symantec's size
> or adding to the complexity of their operations. I question Symantec's
> ability to do either one successfully. Symantec is certainly welcome to
> become bigger and more complex if that's what they should choose, but
> not as a result of some external mandate.
> 
> 
> Comments and corrections are welcome.

I was going to suggest that indeed Symantec be put out of business by
refusing to add any new roots to NSS and refusing to update any expiring
roots.  However, that would be a weak response since only one root
expires in the current decade, in a little less than two years from now.
Many of the Symantec-branded roots do not expire until 6-8 years from
now.  Most of Symantec's Verisign-branded roots will not expire in my
lifetime (and my family has extraordinary longevity).

I would suggest that Symantec be placed on a form of probation.  The
terms of probation would be that any further negligence or other
unacceptable operations within the next nn [some small number] years
would cause all Symantec controlled roots to be promptly deprecated
(e.g., removed from NSS, left in NSS but marked invalid).  The terms of
probation would specify clear, detailed, objective meanings of
negligence and other unacceptable operations sufficient to withstand
legal challenges if deprecation is indeed imposed.

A notice of this probation must be made broadly public.  Before public
release, however, hard copies of the notice should be sent by postal mail
both to the CEO of Symantec and to the top management of Symantec's
outside auditors, signed by both Mitchell Baker and Chris Beard.  That
mailed notice should direct Symantec to notify promptly all holders of
subscriber certificates of the terms of the probation.  This would warn
potential users of concern over Symantec's operations.  This would also
give existing users time to consider renewing their expiring subscriber
certificates with other certification authorities.  The end result would
shrink Symantec.

-- 
David E. Ross




Re: On remedies for CAs behaving badly

2017-06-06 Thread Matthew Hardeman via dev-security-policy
On Monday, June 5, 2017 at 11:17:17 AM UTC-5, Ryan Sleevi wrote:

> While on paper the idea sounds quite good, it turns out to simply trade
> technical complexity for complexity of the non-technical sort. As such,
> it's best to focus on meaningful and actionable technical solutions.

Ryan,

I greatly appreciate the time you spent crafting a thoughtful response to my 
idea/question and am especially grateful for the great depth of relevant 
references you provided.

Having taken [some of] these into account, I find I fully concur with your 
assessment.


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-06 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 6, 2017 at 2:28 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I am saying that setting an administrative policy for inclusion in a
> root program is not the place to do technical reviews of security
> protocols.


Of course it is. It is the only one that has reliably worked in the history
of the Web PKI. I would think that would be abundantly evident over the
past five years.


> And I proceeded to list places that *do* perform such peer
> review at the highest level of competency, but had to note that the list
> would be too long to enumerate in a stable root program policy.
>

Except none of them are, as evidenced by what they've turned out. The only
place where Mozilla users are considered, en masse, is in Mozilla policy.
It is the one and only place Mozilla can ensure its needs are appropriately
and adequately reflected.


> SDO?  Unfamiliar with that TLA.
>

Standards defining organization.


> And why should Mozilla (and every other root program) be consulted to
> unanimously preapprove such technical work?  This will create a massive
> roadblock for progress.  I really see no reason to create another FIPS
> 140 style bureaucracy of meaningless rule enforcement (not to be
> confused with the actual security tests that are also part of FIPS 140
> validation).
>

This is perhaps the disconnect. It's not meaningless. A significant amount
of the progress made in the past five years in the Web PKI has come from
one of two things:
1) Mozilla or Google forbidding something
2) Mozilla or Google requiring something

The core of your argument seems to be that you don't believe Mozilla can
update its policy in a timely fashion (to which this list provides ample
counter-evidence), or that the Mozilla community should not be
consulted about what is appropriate for the Mozilla community (which is, on
its face, incorrect).

>> Look, you could easily come up with a dozen examples of improved validation
>> methods - but just because they exist doesn't mean keeping the "any other
>> method" is good. And, for what it's worth, of those that did shake out of
>> the discussions, many of them _were_ insecure at first, and evolved
>> through community discussion.
>
> Interestingly, the list of revocation checking methods supported by
> Chrome (and proposed to be supported by future Firefox versions) is
> essentially _empty_ now.  Which is completely insecure.
>

Not really "interestingly", because it's not a response to the substance of
the point, but in fact goes to an unrelated (and technically incorrect)
tangent.

Rather than engage with you on that derailment, do you agree with the
easily-supported point (by virtue of the CABF Validation WG's archives) that
CAs proposed the use of insecure methods for domain validation, and those were
refined in time to be more appropriately secure? That's something easily
supported.


> Within *this thread* proposed policy language would have banned that.


> And neither I, nor any other participant seemed to realize this specific
> omission until my post this morning.
>

Yes, and? You're showing exactly the value of community review - and where
it would be better to make a mistake that prevents something benign, rather
than allows something dangerous, given the pattern and practice we've seen
over the past decades.


>>> However the failure mode for "signing additional CA operational items"
>>> would be a lot less risky and a lot less reliant on CA competency.
>>
>> That is demonstrably not true. Just look at the CAs who have had issues
>> with their signing ceremonies. Or the signatures they've produced.
>>
>
> Did any of those involve erroneously signing non-certificates of a
> wholly inappropriate data type?
>

I'm not sure I fully understand or appreciate the point you're trying to
make, but I feel like you may have misunderstood mine.

We know that CAs have had issues with their signing ceremonies (e.g.
signing tbsCertificates that they should not have).
We know that CAs have had issues with integrating new technologies (e.g.
CAA misissuance).
We know that CAs have had considerable issues adhering to the relevant
standards (e.g. certlint, x509lint, Mozilla Problematic Practices).

Signing data is heavily reliant on CA competency, and that's in
unfortunately short supply, as the economics of the CA market make it easy
to fire all the engineers, while keeping the sales team, and outsourcing
the rest.


> I am not an AV vendor.
>
> Technical security systems work best with whitelists wherever possible.
>
> Human-to-human policy making works best with blacklists wherever
> possible.
>
> Root inclusion policies are human-to-human policies.
>

Root inclusion policies are the embodiment of technical security systems.
While there is a human aspect in determining the trustworthiness for
inclusion, it is the technical competency that is the core to that trust.
The two are deeply related, and the human aspect of the CA trust 

Re: New undisclosed intermediates

2017-06-06 Thread Matthew Hardeman via dev-security-policy
On Tuesday, June 6, 2017 at 4:14:00 AM UTC-5, Gervase Markham wrote:
> On 05/06/17 14:29, Alex Gaynor wrote:
> > As I've expressed before, I find it baffling that this still happens.
> 
> I am also disappointed. I have half a mind to keep track of how often
> this happens per CA, and impose a mandatory delay of 1 month per
> incident to that CA's next attempt to include a new root or get a trust
> bit or EV change in our store. :-)

I've wondered for quite some time why these circumstances aren't regarded as 
equivalent to mis-issuance.

I recognize that they likely are not mis-issuance of a certificate in any 
traditional sense.  Likely these are all intended and meant to be issued and 
proper validation and cause for the issuance can be shown.

However...  Isn't the point of the CCADB to document these SubCAs, track 
audits, and build up the whole trust framework and provide rational, documented 
support for confidence in the ability to trust certificates issued descendant 
of these CAs?

If so, allowing issuance of a SubCA without requiring disclosure provides 
opportunities for these CAs to facilitate improper certificate issuance without 
necessarily suffering the full consequence.  It also deprives the public of the 
opportunity to critically examine these "hidden" parts of the trust 
infrastructure.

On that basis, it would seem that "concealing" a SubCA for a significant period 
of time has the consequence of benefiting the Root CA program participant 
without a corresponding "time to pay the piper" when the SubCA is discovered.

Why not adjust the program requirements such that:

If we learn of a SubCA chaining to an included root via any mechanism other 
than the program participant directly disclosing said SubCA to us, and that 
SubCA has not previously been properly disclosed, this will be regarded as a 
serious security incident which may require remediation.  (In other words, in 
the absence of a prior disclosure, the program will just assume the worst: 
that an external SubCA without constraints of any sort was issued, without 
any audits, to your least favorite authoritarian regime.)
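The disclosure rule proposed above amounts to a set-membership check over the intermediates observed in the wild. A minimal sketch follows; the data model (fingerprint strings, a plain set standing in for the CCADB disclosure records) is entirely hypothetical:

```python
# SubCAs the root program has on file, keyed by some stable identifier
# such as a certificate fingerprint (assumed representation, not the
# actual CCADB schema).
disclosed_subcas = {
    "fp-geotrust-sub-1",   # example identifiers, not real fingerprints
    "fp-geotrust-sub-2",
}

def undisclosed_intermediates(chain_fingerprints):
    """Return the intermediates in an observed chain that were never
    disclosed to the program.  Per the proposal, any non-empty result
    would be treated as a serious security incident, as if an
    unconstrained, unaudited SubCA had been handed to an untrusted
    party."""
    # chain is leaf-first, root-last; intermediates sit between.
    intermediates = chain_fingerprints[1:-1]
    return [fp for fp in intermediates if fp not in disclosed_subcas]
```

The point of the sketch is how mechanical the check is: because detection is this cheap for observers (e.g. anyone crawling CT logs), the only open question is the severity assigned to a hit, which is what the proposal addresses.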

I think if any CA publicly said that this would be a substantive burden upon 
them that said CA should probably be subject to far greater scrutiny, as that 
would be evidence of poor procedural or organizational structure.


Re: Symantec response to Google proposal

2017-06-06 Thread Matthew Hardeman via dev-security-policy
On Tuesday, June 6, 2017 at 9:03:29 AM UTC-5, Gervase Markham wrote:

> I'm slightly surprised to see no engagement here. Perhaps it would
> help to break it down. Symantec's specific requests for modification are
> as follows (my interpretation):
> 
> 1) Scope of Distrust
> 
> Google proposal: existing CT-logged certificates issued after 1st June
> 2016 would continue to be trusted until expiry.
> Symantec proposal: all CT-logged certificates should continue to be
> trusted until expiry.
> Rationale for change: if transparency is enough to engender trust, that
> principle should be applied consistently. This also significantly
> reduces the revalidation burden.

At this point, it seems reasonable to trust the current certificates that were 
properly CT logged and include proper SCTs, at least up until we no longer 
trust new certificates issued from the current infrastructure.

> 
> 2) Timeline
> 
> Google proposal: a set of dates by which certain milestones must be
> achieved.
> Symantec proposal: no specific alternative dates (more info by the end
> of June), but the Google dates are too aggressive.
> Rationale: need to establish business relationships; capacity issues at
> Managed CAs; international requirements further complicate things; the
> revalidation burden is very large; writing solid code takes time.
> 

It is believable that getting the new issuance infrastructure up and running 
could be a significant burden.  The question may really become "Is it 
acceptable that there be a (possibly significant) period during which Symantec 
can issue no new / renewed certificates?"  I note that Symantec even sets 
out that there is some material question as to whether there is even another 
participant within the qualified marketplace that could be prepared in a timely 
fashion to serve as the Managed CA.

> 3) SubCA Audit Type
> 
> Google proposal: SubCAs are audited with the standard audits.
> Symantec proposal: treat SubCAs as Delegated Third Parties and so give
> them BR section 8 audits (an audit by the CA not an auditor; 3% sampling).
> Rationale: none given.

If we mean non-constrained SubCAs, those certainly should be subject to full 
WebTrust audit, whether in the scope of Symantec's audits or separately audited 
for the SubCA organization.  Why would Symantec get a pass otherwise?

> 
> 4) Validation Task Ownership
> 
> Google proposal: Symantec and its affiliates must not participate in any
> of the information verification roles permitted under the Baseline
> Requirements. They may, however, collect and aggregate information.
> Symantec proposal: Symantec currently uses a 2-step process - validation
> and review. Symantec should be allowed to do the first, with the SubCA
> doing the second (with 100% review, not sampling).
> Rationale: reducing the burden on the SubCA, reducing the time for them
> to ramp up, and (presumably) to allow the Symantec personnel to continue
> to have jobs.

I think this question should be left to the Managed CA, with the Managed CA's 
understanding and acknowledgement that their own roots and trust will be held 
responsible for any mis-issuances detected.  Let the Managed CA's own self 
interest set this where it actually needs to be.

> 
> 5) Use of DTPs by SubCA
> 
> Google proposal: SubCAs may not use Delegated Third Parties in the
> validation process for domain names or IP addresses.
> Symantec proposal: SubCAs should be allowed to continue to use them in
> situations where they already do.
> Rationale: SubCAs should not be required to rejig their processes to
> work with Symantec.

If it were on behalf of anyone else, the other CA would have no new 
requirements.  Why change that?  Make any new / extra burdens fall upon 
Symantec, not the other CA partner.

> 
> 6) SubCA Audit Timing
> 
> Google proposal: SubCAs are audited at 3 month intervals in the 1st
> year, 6 months intervals in the 2nd year, and then yearly.
> Symantec proposal: after the initial audit, only yearly audits should be
> required.
> Rationale: Because SubCAs are established CAs, once an audit has been
> done to validate the transition, the subsequent audit schedule should be
> the standard yearly one, not the high-frequency 3/6 month one proposed.
> 

Upon transition back to Symantec control, enforce a more frequent audit 
schedule at that point, based upon their prior misdeeds.
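The graduated schedule in the Google proposal is simple enough to state as code. A sketch, with the interval boundaries taken from the proposal as quoted above and the month-based rounding an assumption of this illustration:

```python
def audit_interval_months(months_since_transition: int) -> int:
    """Audit cadence under the graduated schedule: every 3 months in
    the first year, every 6 months in the second year, then the
    standard yearly audit thereafter."""
    if months_since_transition < 12:
        return 3
    if months_since_transition < 24:
        return 6
    return 12
```

Symantec's counter-proposal collapses this to `return 12` after the initial transition audit, which is exactly the heightened-scrutiny period being negotiated away.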

> 7) Detailed Audits
> 
> Google proposal: Symantec may be requested to provide "SOC2" (more
> detailed) audits of their new infrastructure prior to it being ruled
> acceptable for use.
> Symantec proposal: such audits should be provided only under NDA.
> Rationale: they include detailed information of a sensitive nature.

To the extent that the audit includes supporting documentation as to facilities 
physical security details, etc, etc, I can see support for not distributing 
that widely.  Is there any reason to believe that this type of audit will 
reveal anything beyond whether or not they are fully compliant without 

An alternate perspective on Symantec

2017-06-06 Thread Peter Kurrasch via dev-security-policy
Over the past months there has been much consternation over Symantec and the 
idea of "too big to fail". That is a reasonable idea but makes difficult the 
discussion of remedies for Symantec's past behavior: How does one impose a 
meaningful sanction without causing Symantec to fail outright since the impact 
would be catastrophic?

I'd like to offer an alternate perspective on the situation in the hope that 
it might simplify the discussions of sanctions. The central point is this: 
Symantec is too big and too complicated to function properly.

Consider:

* Symantec has demonstrated an inability to exercise sufficient oversight and 
control over the totality of their PKI systems. Undoubtedly there are parts 
which have been and continue to be well-run, but there are parts for which 
management has been unacceptably poor.

* No cases have been identified of a breach or other compromise of Symantec's 
PKI technology/infrastructure, nor of the infrastructure of a subordinate PKI 
organization for which Symantec is responsible. The possibility does exist, 
however, that compromises have occurred but might never be known because of 
management lapses.

* Many of Symantec's customers play a critical role in the global economy and 
rely on the so-called "ubiquitous roots" to provide their services. Any 
disruption in those services can have global impacts. Symantec, therefore, 
plays a significant role in the global economy but only insofar as it is the 
gatekeeper to the "ubiquitous roots" upon which the global economy relies.

* Symantec has demonstrated admirable commitment to its customers but appears 
less so when it comes to the policies, recommendations, and openness of the 
global PKI community. Whether this indicates a willful disregard for the 
community or difficulty in incorporating these viewpoints into a large 
organization (or something else?) is unclear.

From this standpoint, the focus of sanctions would be on Symantec's size. 
Obviously Mozilla is in no position to mandate the breakup of a company but 
Mozilla (and others) can mandate a reduced role as gatekeeper to the 
"ubiquitous roots". In fact, Symantec has already agreed to do just that.

In addition, this viewpoint would discourage increasing Symantec's size or 
adding to the complexity of their operations. I question Symantec's ability 
to do either one successfully. Symantec is certainly welcome to become bigger 
and more complex if that's what they should choose, but not as a result of 
some external mandate.

Comments and corrections are welcome.


Re: Symantec response to Google proposal

2017-06-06 Thread Jakob Bohm via dev-security-policy

On 06/06/2017 16:02, Gervase Markham wrote:

On 02/06/17 15:53, Gervase Markham wrote:

https://www.symantec.com/connect/blogs/symantec-s-response-google-s-subca-proposal


I'm slightly surprised to see no engagement here. Perhaps it would be
help to break it down. Symantec's specific requests for modification are
as follows (my interpretation):



Thanks for actually putting the information in the newsgroup, not in
linked documents.  Makes responding much easier.


1) Scope of Distrust

Google proposal: existing CT-logged certificates issued after 1st June
2016 would continue to be trusted until expiry.
Symantec proposal: all CT-logged certificates should continue to be
trusted until expiry.
Rationale for change: if transparency is enough to engender trust, that
principle should be applied consistently. This also significantly
reduces the revalidation burden.



I think the period where trust is added because of CT logging doesn't 
need to be limited.


There may be specific *other* dates before/after which Symantec
validation processes cannot be trusted, but a specific reason other than
availability of CT logs should be given for such dates.


2) Timeline

Google proposal: a set of dates by which certain milestones must be
achieved.
Symantec proposal: no specific alternative dates (more info by the end
of June), but the Google dates are too aggressive.
Rationale: need to establish business relationships; capacity issues at
Managed CAs; international requirements further complicate things; the
revalidation burden is very large; writing solid code takes time.



Given how long this has dragged out, 3rd party CA negotiations may need
a slight extension, but not forever.  3rd party CAs approached for this 
job may negotiate harder due to the deadline imposed on Symantec, but 
that's a consequence of Symantec's actions, which Symantec must simply 
suffer.  It should be possible for Symantec to negotiate with other 3rd 
party CAs to get a better deal later in the transitional (outsourced) 
period, which would imply Mozilla and Chrome accepting new "Managed 
SubCAs" being stood up as a consequence.



3) SubCA Audit Type

Google proposal: SubCAs are audited with the standard audits.
Symantec proposal: treat SubCAs as Delegated Third Parties and so give
them BR section 8 audits (an audit by the CA not an auditor; 3% sampling).
Rationale: none given.



Full audit should be required.


4) Validation Task Ownership

Google proposal: Symantec and its affiliates must not participate in any
of the information verification roles permitted under the Baseline
Requirements. They may, however, collect and aggregate information.
Symantec proposal: Symantec currently uses a 2-step process - validation
and review. Symantec should be allowed to do the first, with the SubCA
doing the second (with 100% review, not sampling).
Rationale: reducing the burden on the SubCA, reducing the time for them
to ramp up, and (presumably) to allow the Symantec personnel to continue
to have jobs.



If Symantec retains their ability to issue non-TLS certs in-house, the
excess validation team man-hours should be used to improve the
thoroughness of the validation of those other certificate types, in
preparation for using such better validation practices in the
post-transition new PKI.  Other important uses for those excess
man-hours are security training, participating in the design and
beta-testing of the systems for the new PKI, and perhaps some paid vacations.

It would be bad long-term strategy for Symantec to fire specially
trained personnel that they will need again after rebuilding their
"factory".  Paying people to just remain available during factory
downtime is a cost that any business risks, and Symantec will just have
to eat that cost.


5) Use of DTPs by SubCA

Google proposal: SubCAs may not use Delegated Third Parties in the
validation process for domain names or IP addresses.
Symantec proposal: SubCAs should be allowed to continue to use them in
situations where they already do.
Rationale: SubCAs should not be required to rejig their processes to
work with Symantec.



Maybe the 3rd party SubCAs should be allowed to still use RAs *only to
the extent* they do so for their own already included roots.

For example, they may use RAs to check local document types for OV and
EV certs, if they already do so.

They should not be allowed to use Symantec or any of the (former)
Symantec RAs as RAs for the "Managed SubCA" work.

They should be allowed to still use "Enterprise RAs" as defined in the
BRs.



6) SubCA Audit Timing

Google proposal: SubCAs are audited at 3 month intervals in the 1st
year, 6 months intervals in the 2nd year, and then yearly.
Symantec proposal: after the initial audit, only yearly audits should be
required.
Rationale: Because SubCAs are established CAs, once an audit has been
done to validate the transition, the subsequent audit schedule should be
the standard yearly one, not the high-frequency 3/6 month one proposed.


Re: Symantec response to Google proposal

2017-06-06 Thread Matthew Hardeman via dev-security-policy
I broadly echo many of the comments and thoughts of Martin Heaps earlier in 
this thread.

Much of Symantec's response is disheartening, especially in the "inaccuracies": 
(the apparent dichotomy between how they have acted and their statement that 
they only employ the best people implementing best practice to ensure 
compliance, etc.)

There is one aspect, however, which I feel needs the greatest amount of 
attention:

Symantec has in multiple aspects raised what I believe to be reasonable 
concerns and doubts regarding the practicality of implementation of the 
proposed out-of-house managed CA transition in a timely fashion.

Symantec has made numerous claims as to necessary qualifications, necessary 
up-scaling, necessary integrations, etc.

Ultimately, I do think that the question which arises is:

Can an already third-party work with Symantec to stand up new infrastructure 
and processes and staffing and integration in a sufficiently timely manner to 
be relevant to this discussion?

If it takes so long to stand this up that Symantec could alternatively stand up 
a new, distinct root CA infrastructure and get that included faster, does
it even become relevant to migrate to a managed CA model for a period of time?

How much critical analysis of the potential marketplace and realities of 
achieving such a relationship with another CA and qualification that there 
exist a market of CAs who could timely handle the load, etc. can reasonably be 
performed by the browser programs and/or the larger relying party community?

Is what has been demanded of Symantec reasonable?  Moreover...  What if the 
requested remedy is actually infeasible?  Where does that leave us and where 
does that leave Symantec?  If a managed CA running their issuance for a time is 
demonstrably infeasible in a relevant time frame, what's the fallback position?

Matt
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-06 Thread Jakob Bohm via dev-security-policy

On 06/06/2017 07:45, Ryan Sleevi wrote:

On Mon, Jun 5, 2017 at 6:21 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


If you read the paper, it contains a proposal for the CAs to countersign
the computed super-crl to confirm that all entries for that CA match the
actual revocations and non-revocations recorded by that CA.  This is not
currently deployed, but is an example of something that CAs could safely
do using their private key, provided sufficient design competence by the
central super-crl team.
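Such a countersignature would only be safe if the CA first verifies that the aggregate exactly reflects its own records. A minimal sketch of that consistency check (the data model and names are hypothetical, not taken from the paper):

```python
def supercrl_discrepancies(supercrl_serials: set, ca_revoked_serials: set):
    """Compare the aggregator's view of one CA's revocations against the
    CA's own records.  The CA should countersign its slice of the
    super-CRL only when both result sets are empty."""
    missing = ca_revoked_serials - supercrl_serials    # revoked, but absent from the aggregate
    spurious = supercrl_serials - ca_revoked_serials   # listed, but never actually revoked
    return missing, spurious
```

Any non-empty result means the super-CRL misstates this CA's revocation state, and signing it would attest to false entries.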



I did read the paper - and provide feedback on it.

And that presumption that you're making here is exactly the reason why you
need a whitelist, not a blacklist. "provided sufficient design competence"
does not come for free - it comes with thoughtful peer review and community
feedback. Which can be provided in the aspect of policy.



I am saying that setting an administrative policy for inclusion in a
root program is not the place to do technical reviews of security
protocols.  And I proceeded to list places that *do* perform such peer
review at the highest level of competency, but had to note that the list
would be too long to enumerate in a stable root program policy.




Another good example could be signing a "certificate white-list"
containing all issued but not revoked serial numbers.  Again, someone
(not a random CA) should provide a well-thought-out data format
specification that cannot be maliciously confused with any of the
current data types.



Or a bad example. And that's the point - you want sufficient technical
review (e.g. an SDO ideally, but minimally m.d.s.p review).


SDO?  Unfamiliar with that TLA.

And why should Mozilla (and every other root program) be consulted to
unanimously preapprove such technical work?  This will create a massive
roadblock for progress.  I really see no reason to create another FIPS
140 style bureaucracy of meaningless rule enforcement (not to be
confused with the actual security tests that are also part of FIPS 140
validation).



Look, you could easily come up with a dozen examples of improved validation
methods - but just because they exist doesn't mean keeping the "any other
method" is good. And, for what it's worth, of those that did shake out of
the discussions, many of them _were_ insecure at first, and evolved through
community discussion.



Interestingly, the list of revocation checking methods supported by
Chrome (and proposed to be supported by future Firefox versions) is
essentially _empty_ now.  Which is completely insecure.




Here's one item no-one listed so far (just to demonstrate our collective
lack of imagination):



This doesn't need imagination - it needs solid review. No one is
disagreeing with you that there can't be improvements. But let's start with
the actual concrete matters at hand, appropriately reviewed by the
Mozilla-using community that serves a purpose consistent with the mission,
or doesn't pose risks to users.



Within *this thread* proposed policy language would have banned that.

And neither I, nor any other participant seemed to realize this specific
omission until my post this morning.




However the failure mode for "signing additional CA operational items"
would be a lot less risky and a lot less reliant on CA competency.



That is demonstrably not true. Just look at the CAs who have had issues
with their signing ceremonies. Or the signatures they've produced.


Did any of those involve erroneously signing non-certificates of a
wholly inappropriate data type?





It is restrictions for restrictions' sake, which is always bad policy
making.



No it's not. You would have to reach very hard to find a single security
engineer would argue that a blacklist is better than a whitelist for
security. It's not - you validate your inputs, you don't just reject the
badness you can identify. Unless you're an AV vendor, which would explain
why so few security engineers work at AV vendors.


I am not an AV vendor.

Technical security systems work best with whitelists wherever possible.

Human-to-human policy making works best with blacklists wherever
possible.

Root inclusion policies are human-to-human policies.





If necessary, one could define a short list of technical characteristics
that would make a signed item non-confusable with a certificate.  For
example, it could be a PKCS#7 structure, or any DER structure whose
first element is a published specification OID nested in one or more
layers of SEQUENCE or SET tags, perhaps more safe alternatives could be
added to this.
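One way to read that rule in terms of raw DER bytes: an X.509 certificate is a SEQUENCE whose first inner element is another SEQUENCE (the tbsCertificate), whereas the proposed non-confusable format would begin with an OBJECT IDENTIFIER. A rough sketch of the discriminator (simplified length handling; an illustration, not a vetted parser):

```python
def cert_shaped(der: bytes) -> bool:
    """True if a DER blob has the outer shape of an X.509 certificate:
    a SEQUENCE (tag 0x30) whose first inner element is also a SEQUENCE.
    A structure whose first inner element is an OID (tag 0x06), as
    proposed above, returns False."""
    if len(der) < 2 or der[0] != 0x30:
        return False                      # not even an outer SEQUENCE
    i = 1
    if der[i] & 0x80:                     # long-form length: skip its octets
        i += 1 + (der[i] & 0x7F)
    else:                                 # short-form length: one octet
        i += 1
    return i < len(der) and der[i] == 0x30
```

The point of the rule is that no such blob could be mistaken for a certificate by a correct DER parser, regardless of what the CA key signed.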



You could try to construct such a definition - but that's a needless
technical complexity with considerable ambiguity for a hypothetical
situation that you are the only one advocating for, and using an approach
that has repeatedly led to misinterpretations and security failures.



Indeed, and I was trying not to until forced by posts rejecting simply
saying that if it looks like a certificate, it counts as a certificate.

Re: Symantec response to Google proposal

2017-06-06 Thread Gervase Markham via dev-security-policy
Here are some thoughts from me:

On 06/06/17 15:02, Gervase Markham wrote:
> 1) Scope of Distrust

I have sought more information from Google on this.

> 2) Timeline

I think the question here is, what is our position, and on what basis do
we decide it? If we want to impose an aggressive but achievable
timeline, how do we determine what that is? Who do we ask for a second
opinion? How do we evaluate statements from Symantec?

> 3) SubCA Audit Type

This would be very difficult to agree to without good rationale; section
8 audits are very weak things compared to the normal ones.

> 4) Validation Task Ownership

I have sought more information from Google on this.

> 5) Use of DTPs by SubCA
> 
> Google proposal: SubCAs may not use Delegated Third Parties in the
> validation process for domain names or IP addresses.
> Symantec proposal: SubCAs should be allowed to continue to use them in
> situations where they already do.

Our research in the last CA Communication suggests that only two small
CAs do any form of delegation of domain name or IP address ownership
validation. Therefore, it's not clear why Symantec would need this
ability, and my sense is to say No.

> 6) SubCA Audit Timing

I have sought more information from Google on this.

> 7) Detailed Audits
> 
> Google proposal: Symantec may be requested to provide "SOC2" (more
> detailed) audits of their new infrastructure prior to it being ruled
> acceptable for use.
> Symantec proposal: such audits should be provided only under NDA.
> Rationale: they include detailed information of a sensitive nature.

If these audits are to be useful to Mozilla, we need to be able to make
them available to people of our choosing. They can be behind a login
system, if we are able to give out access credentials as we choose. But
an NDA is not acceptable.

Gerv


RE: New undisclosed intermediates

2017-06-06 Thread Inigo Barreira via dev-security-policy
Hello all,

I also did it but it's not reflected.
In my case was also my fault because I was disclosing a different one.

Best regards

Iñigo Barreira
CEO
StartCom CA Limited

-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+inigo=startcomca@lists.mozilla.org]
On Behalf Of Stephen Davidson via dev-security-policy
Sent: martes, 6 de junio de 2017 15:59
To: Alex Gaynor ; MozPol

Subject: RE: New undisclosed intermediates

Hello:

I acknowledge that QuoVadis Grid ICA2 was missing from the CCADB.  The
omission was human error (my own) when entering a group of issuing CAs into
SalesForce.  Going forward, when new ICAs are created, the CCADB disclosure is
part of our process.

For the sake of clarity, that ICA is disclosed in our Repository and
included in our WebTrust audit reports.

Regards, Stephen
QuoVadis


-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+s.davidson=quovadisglobal@lists.mozilla.org]
On Behalf Of Alex Gaynor via dev-security-policy
Sent: Monday, June 5, 2017 10:30 AM
To: MozPol 
Subject: New undisclosed intermediates

Happy Monday!

Another week, another set of intermediate certs that have shown up in CT
without having been properly disclosed:
https://crt.sh/mozilla-disclosures#undisclosed

There are four intermediates here, and with the exception of the StartCom
one, they were all issued more than a year ago.

As I've expressed before, I find it baffling that this still happens. To
approach this more productively, I'd be very appreciative if someone from a
CA could describe how they approach disclosing intermediates, where it fits
into their process, how they track progress, etc.

Cheers,
Alex


Re: Symantec response to Google proposal

2017-06-06 Thread Alex Gaynor via dev-security-policy
On Tue, Jun 6, 2017 at 10:02 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 02/06/17 15:53, Gervase Markham wrote:
> > https://www.symantec.com/connect/blogs/symantec-s-
> response-google-s-subca-proposal
>
> I'm slightly surprised to see no engagement here. Perhaps it would be
> help to break it down. Symantec's specific requests for modification are
> as follows (my interpretation):
>

I suspect many of us are a bit exhausted by the discussion :-).
Particularly since at this point it feels like there's some divergence
between our goals of protecting security and not breaking the web. It's
clear that Symantec's actions for a much smaller CA would have resulted in
complete distrust (perhaps over time). That's not being pursued because
Symantec is too big to fail.

I'll try to engage on some specific points.


>
> 1) Scope of Distrust
>
> Google proposal: existing CT-logged certificates issued after 1st June
> 2016 would continue to be trusted until expiry.
> Symantec proposal: all CT-logged certificates should continue to be
> trusted until expiry.
> Rationale for change: if transparency is enough to engender trust, that
> principle should be applied consistently. This also significantly
> reduces the revalidation burden.
>

Transparency is not a magic solution to trust, though it is helpful. We
know that it primarily acts as a canary-in-the-coalmine, not perfect
coverage. Distrusting older certs issued under a less strict validation
regime makes sense.


>
> 2) Timeline
>
> Google proposal: a set of dates by which certain milestones must be
> achieved.
> Symantec proposal: no specific alternative dates (more info by the end
> of June), but the Google dates are too aggressive.
> Rationale: need to establish business relationships; capacity issues at
> Managed CAs; international requirements further complicate things; the
> revalidation burden is very large; writing solid code takes time.
>
>
I'm concerned that a lack of commitment to any particular date reflects a
lack of urgency for this process. Fundamentally, the legacy Symantec PKI
reflects risk for relying parties. Taking additional months or years to
move away from it is all time that RPs bear that risk.


> 3) SubCA Audit Type
>
> Google proposal: SubCAs are audited with the standard audits.
> Symantec proposal: treat SubCAs as Delegated Third Parties and so give
> them BR section 8 audits (an audit by the CA not an auditor; 3% sampling).
> Rationale: none given.
>
> 4) Validation Task Ownership
>
> Google proposal: Symantec and its affiliates must not participate in any
> of the information verification roles permitted under the Baseline
> Requirements. They may, however, collect and aggregate information.
> Symantec proposal: Symantec currently uses a 2-step process - validation
> and review. Symantec should be allowed to do the first, with the SubCA
> doing the second (with 100% review, not sampling).
> Rationale: reducing the burden on the SubCA, reducing the time for them
> to ramp up, and (presumably) to allow the Symantec personnel to continue
> to have jobs.
>
> 5) Use of DTPs by SubCA
>
> Google proposal: SubCAs may not use Delegated Third Parties in the
> validation process for domain names or IP addresses.
> Symantec proposal: SubCAs should be allowed to continue to use them in
> situations where they already do.
> Rationale: SubCAs should not be required to rejig their processes to
> work with Symantec.
>
> 6) SubCA Audit Timing
>
> Google proposal: SubCAs are audited at 3 month intervals in the 1st
> year, 6 months intervals in the 2nd year, and then yearly.
> Symantec proposal: after the initial audit, only yearly audits should be
> required.
> Rationale: Because SubCAs are established CAs, once an audit has been
> done to validate the transition, the subsequent audit schedule should be
> the standard yearly one, not the high-frequency 3/6 month one proposed.
>
> 7) Detailed Audits
>
> Google proposal: Symantec may be requested to provide "SOC2" (more
> detailed) audits of their new infrastructure prior to it being ruled
> acceptable for use.
> Symantec proposal: such audits should be provided only under NDA.
> Rationale: they include detailed information of a sensitive nature.
>

> Gerv

Alex


RE: New undisclosed intermediates

2017-06-06 Thread Stephen Davidson via dev-security-policy
Hello:

I acknowledge that QuoVadis Grid ICA2 was missing from the CCADB.  The
omission was human error (my own) when entering a group of issuing CAs into
SalesForce.  Going forward, when new ICAs are created, the CCADB disclosure is
part of our process.

For the sake of clarity, that ICA is disclosed in our Repository and
included in our WebTrust audit reports.

Regards, Stephen
QuoVadis


-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+s.davidson=quovadisglobal@lists.mozilla.org]
On Behalf Of Alex Gaynor via dev-security-policy
Sent: Monday, June 5, 2017 10:30 AM
To: MozPol 
Subject: New undisclosed intermediates

Happy Monday!

Another week, another set of intermediate certs that have shown up in CT
without having been properly disclosed:
https://crt.sh/mozilla-disclosures#undisclosed

There are four intermediates here, and with the exception of the StartCom
one, they were all issued more than a year ago.

As I've expressed before, I find it baffling that this still happens. To
approach this more productively, I'd be very appreciative if someone from a
CA could describe how they approach disclosing intermediates, where it fits
into their process, how they track progress, etc.

Cheers,
Alex


Re: New undisclosed intermediates

2017-06-06 Thread Rob Stradling via dev-security-policy

On 06/06/17 14:22, Alex Gaynor via dev-security-policy wrote:

On Tue, Jun 6, 2017 at 9:05 AM, Ryan Sleevi via dev-security-policy <



Alex, do you have the specific list of CAs at the time of your posting?



Yes, it was:

* QuoVadis
* AC Camerfirma, S.A.
* Chunghwa Telecom Corporation
* Start Commercial (StartCom) Ltd.

QuoVadis disclosed their intermediate within a few hours of my email, the
others still have not.


"QuoVadis Grid ICA G2"
https://crt.sh/?q=74CE8C1631EF9F38E7A4197DA3F5474DBC34F001F2967C25B5999562BCC8C9D4

First seen by Censys just over a month ago:
https://censys.io/certificates/74ce8c1631ef9f38e7a4197da3f5474dbc34f001f2967c25b5999562bcc8c9d4
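The hex string in both URLs is the SHA-256 hash of the certificate's DER encoding, which is simple to reproduce (a sketch; `der_bytes` stands in for the raw certificate you want to look up):

```python
import hashlib

def cert_fingerprint_sha256(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding, the lookup
    key used in crt.sh and Censys URLs (crt.sh accepts it in either
    case; lowercase hex is returned here)."""
    return hashlib.sha256(der_bytes).hexdigest()
```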

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online



Re: New undisclosed intermediates

2017-06-06 Thread Alex Gaynor via dev-security-policy
On Tue, Jun 6, 2017 at 9:05 AM, Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, Jun 6, 2017 at 5:13 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > On 05/06/17 14:29, Alex Gaynor wrote:
> > > As I've expressed before, I find it baffling that this still happens.
> >
> > I am also disappointed. I have half a mind to keep track of how often
> > this happens per CA, and impose a mandatory delay of 1 month per
> > incident to that CA's next attempt to include a new root or get a trust
> > bit or EV change in our store. :-)
> >
>
> A potential downside to that is that it favors incumbents, who often can
> continue to utilize existing root certificates, while new entrants would
> face a barrier to entry.
>
> That said, it absolutely should be getting tracked, per CA, as incident
> reports in Bugzilla and provided to the community.
>
> Alex, do you have the specific list of CAs at the time of your posting?
>
>
Yes, it was:

* QuoVadis
* AC Camerfirma, S.A.
* Chunghwa Telecom Corporation
* Start Commercial (StartCom) Ltd.

QuoVadis disclosed their intermediate within a few hours of my email, the
others still have not.


>
> > Aside from taking a note of how often this happens and it perhaps
> > appearing in a future CA investigation as part of evidence of
> > incompetence, does anyone else have ideas about how we can further
> > incentivise CA compliance with a requirement which was promulgated some
> > time ago, for which all the deadlines have passed, and which should be a
> > simple matter of paperwork?
> >
>
> Short of disabling trust bits on a 'go forward' basis (e.g. no new issuance
> after date X), most of the ideas favor existing legacy CAs at the expense
> of newer CAs.
>
> This is why I suggested the broader proposal of only adding root CAs with a
> defined 'shutdown' period (e.g. after 3-5 years), and requiring the
> frequent rotation of included root certificates. This ensures that the
> ability to distrust certificates on a go-forward basis is built into the
> ecosystem as the steady state, such that non-compliance, stalling tactics,
> incomplete disclosures, incomplete remediations, lack of addressing
> community questions and feedback, etc can all be appropriately addressed as
> the default state, with the only CAs continuing participation being those
> that are active and engaged with the community and the issues.

Alex


Re: New undisclosed intermediates

2017-06-06 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 6, 2017 at 5:13 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 05/06/17 14:29, Alex Gaynor wrote:
> > As I've expressed before, I find it baffling that this still happens.
>
> I am also disappointed. I have half a mind to keep track of how often
> this happens per CA, and impose a mandatory delay of 1 month per
> incident to that CA's next attempt to include a new root or get a trust
> bit or EV change in our store. :-)
>

A potential downside to that is that it favors incumbents, who often can
continue to utilize existing root certificates, while new entrants would
face a barrier to entry.

That said, it absolutely should be getting tracked, per CA, as incident
reports in Bugzilla and provided to the community.

Alex, do you have the specific list of CAs at the time of your posting?


> Aside from taking a note of how often this happens and it perhaps
> appearing in a future CA investigation as part of evidence of
> incompetence, does anyone else have ideas about how we can further
> incentivise CA compliance with a requirement which was promulgated some
> time ago, for which all the deadlines have passed, and which should be a
> simple matter of paperwork?
>

Short of disabling trust bits on a 'go forward' basis (e.g. no new issuance
after date X), most of the ideas favor existing legacy CAs at the expense
of newer CAs.

This is why I suggested the broader proposal of only adding root CAs with a
defined 'shutdown' period (e.g. after 3-5 years), and requiring the
frequent rotation of included root certificates. This ensures that the
ability to distrust certificates on a go-forward basis is built into the
ecosystem as the steady state, such that non-compliance, stalling tactics,
incomplete disclosures, incomplete remediations, lack of addressing
community questions and feedback, etc can all be appropriately addressed as
the default state, with the only CAs continuing participation being those
that are active and engaged with the community and the issues.


Re: New undisclosed intermediates

2017-06-06 Thread Matt Palmer via dev-security-policy
On Tue, Jun 06, 2017 at 10:13:20AM +0100, Gervase Markham via 
dev-security-policy wrote:
> Aside from taking a note of how often this happens and it perhaps
> appearing in a future CA investigation as part of evidence of
> incompetence, does anyone else have ideas about how we can further
> incentivise CA compliance with a requirement which was promulgated some
> time ago, for which all the deadlines have passed, and which should be a
> simple matter of paperwork?

"If we find 'em, rather than you telling us about them, they go in OneCRL
as soon as we come across them"?  It'll upset a few site operators because their
sites won't work, and the CA will have to work to fix, but hopefully not
enough certs will be issued before the intermediate surfaces to cause
sufficiently widespread pain.

Alternately, flag roots that have had submarine intermediates surface
before, and switch them to an intermediates whitelist approach.  That'll
cause some degree of pain and suffering for those CAs that can't manage to
remember to tell the CCADB when they issue, by delaying the utility of any
future intermediates until some time after they've finally got around to
submitting them (when the whitelist gets updated, whether that's via a new
release or otherwise).

- Matt

-- 
aren't they getting rarer than amigas now?  just without all that fuzzy
"good times" nostalgia?
-- Ron Lee, in #debian-devel, on Itanic



Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-06 Thread Gervase Markham via dev-security-policy
On 04/06/17 03:03, Matt Palmer wrote:
> For whatever it is worth, I am a fan of this way of defining "misissuance".

So you think we should use the word "misissuance" for all forms of
imperfect issuance, and then have a gradated reaction depending on the
type and circumstances, rather than use the word "misissuance" for a
security problem, and another word (e.g. "misconstructed") for the other
ones?

Gerv


Re: New undisclosed intermediates

2017-06-06 Thread Gervase Markham via dev-security-policy
On 05/06/17 14:29, Alex Gaynor wrote:
> As I've expressed before, I find it baffling that this still happens.

I am also disappointed. I have half a mind to keep track of how often
this happens per CA, and impose a mandatory delay of 1 month per
incident to that CA's next attempt to include a new root or get a trust
bit or EV change in our store. :-)

Aside from taking a note of how often this happens and it perhaps
appearing in a future CA investigation as part of evidence of
incompetence, does anyone else have ideas about how we can further
incentivise CA compliance with a requirement which was promulgated some
time ago, for which all the deadlines have passed, and which should be a
simple matter of paperwork?

> To
> approach this more productively, I'd be very appreciative if someone from a
> CA could describe how they approach disclosing intermediates, where it fits
> into their process, how they track progress, etc.

Well, I suspect the processes are different per-CA, and if you get such
an explanation, it'll be from a CA which doesn't make this sort of
mistake :-)

Also, different CAs have different PKI complexities. While the deadline
we imposed on them for getting things in order has passed, I would be a
bit less grumpy about DigiCert discovering a 'new' old intermediate in
their Verizon-inherited mess that they didn't know about before, than if
some small CA with a simple PKI doesn't disclose one they issued a
couple of months ago.

Gerv


Re: On remedies for CAs behaving badly

2017-06-06 Thread Gervase Markham via dev-security-policy
On 05/06/17 16:52, Matthew Hardeman wrote:
> Has there ever been an effort by the root programs to directly assess
> monetary penalties to the CAs -- never for inclusion -- but rather as
> part of a remediation program?

Another fact to bear in mind when discussing this is that, for a number
of reasons, and unlike e.g. Microsoft, Mozilla has no formal contract
with the CAs in its program. The relevance of that to this idea should
be reasonably obvious. :-)

Gerv


Re: Policy 2.5 Proposal: Make it clear that Mozilla policy has wider scope than the BRs

2017-06-06 Thread Gervase Markham via dev-security-policy
On 02/06/17 17:07, Peter Bowen wrote:
> Should Mozilla include a clear definition of "SSL certificates" in the
> policy?  And should it be based on technical attributes rather than
> intent of the issuer?

Absolutely Yes to your second sentence :-). We do have a clear
definition of what's in scope; however, we don't subclassify
specifically into "SSL" and "email" except by implication from the EKU.
And that leaves the question of what to do with anyEKU.

Gerv


Re: Policy 2.5 Proposal: Make it clear that Mozilla policy has wider scope than the BRs

2017-06-06 Thread Gervase Markham via dev-security-policy
On 02/06/17 17:24, Kurt Roeckx wrote:
> On Fri, Jun 02, 2017 at 04:50:44PM +0100, Gervase Markham wrote:
>> On 02/06/17 12:24, Kurt Roeckx wrote:
>>> Should that be "all certificates" instead of "all SSL certificates"?
>>
>> No; the Baseline Requirements apply only to SSL certificates.
> 
> Then I don't understand what you're trying to do. If the BR
> already apply to all SSL certificates, 

No. The Baseline Requirements state that they apply to _some_ SSL
certificates. Exactly which ones is not clear because the BRs use
language of intent. From section 1.1: "These Requirements only address
Certificates intended to be used for authenticating servers accessible
through the Internet."

Mozilla does not believe the language of intent is useful, and wants to
use language of capability to define scope. Therefore, we have our own
scope statement for our policy, and now want to make it clear that
there's no such thing as an SSL certificate which falls under the
Mozilla policy but does not fall under the BRs, despite the differing
and unclear scope statement in the BRs.

Gerv


Re: Policy 2.5 Proposal: Clarify requirement for multi-factor auth

2017-06-06 Thread Gervase Markham via dev-security-policy
On 02/06/17 12:29, Ryan Sleevi wrote:
> 2) "performing RA or DTP functions"

I'll go with that :-)

Gerv