Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

2017-05-22 Thread Peter Kurrasch via dev-security-policy
I think the term "industry best practices" is too nebulous. For example, if I 
patch some of my systems but not all of them I could still make a claim that I 
am following best practices even though my network has plenty of other holes in 
it.

I assume the desire is to hold CAs to account for the security of their 
networks and systems, is that correct? If so, I think we should have something 
with more meat to it. If not, the proposal as written is probably just fine 
(although, do you mean the CABF's "Network Security Requirements" spec or is 
there another guidelines doc?).

For consideration: Mozilla can--and perhaps should--require that all CAs 
adopt and document a cybersecurity risk management framework for their networks 
and systems (perhaps this is already mandated somewhere?). I would expect that 
the best-run CAs will already have something like this in place (or something 
better), but other CAs might not. There are pros and cons to such frameworks, 
but at a minimum adopting one demonstrates that a particular CA has at least 
considered the cybersecurity risks that are endemic to its business.


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Friday, May 19, 2017 7:56 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Policy 2.5 Proposal: Require all CAs to have appropriate network 
security

At the moment, the CAB Forum's Network Security guidelines are audited
as part of an SSL BR audit. This means that CAs or sub-CAs which only do
email don't technically have to meet them. However, they also have a
number of deficiencies, and the CAB Forum is looking at replacing them
with something better, ideally maintained by another organization. So
just mandating that everyone follow them doesn't seem like the best thing.

Nevertheless, I think it's valuable to make it clear in our policy that
all CAs are expected to follow best practices for network security. I
suggest this could be done by adding a bullet to section 2.1:

"CAs whose certificates are included in Mozilla's root program MUST:

* follow industry best practice for securing their networks, for example
by conforming to the CAB Forum Network Security Guidelines or a
successor document;"

This provides flexibility in exactly what is done, while making it
reasonably clear that leaving systems unpatched for 5 years would not be
acceptable.

This is: https://github.com/mozilla/pkipolicy/issues/70

---

This is a proposed update to Mozilla's root store policy for version
2.5. Please keep discussion in this group rather than on Github. Silence
is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 7:24:42 PM UTC-5, Ryan Sleevi wrote:

> https://groups.google.com/d/msg/mozilla.dev.security.policy/yS_L_OgI5qk/OhLX9iyZBAAJ
> specifically proposed
> 
> "For example, no requirement of audit by the enterprise holding the
> technically constrained intermediate, and no requirement for audit or
> disclosure of certificates issued by the enterprise from the technically
> constrained subordinate."
> 
> You're certainly correct that, under today's scheme, TCSCs' exemption from
> requirements under the Baseline Requirements simply requires Self-Audits
> (pursuant to Section 8.7). However, that does not mean that TCSCs must be
> on the same infrastructure as the issuing CA - simply that "the CA which
> signed the Subordinate CA SHALL monitor adherence to the CA's CP and the
> SubCA's CPS" and a sampling audit, by the issuing CA, of either one
> certificate or three percent of certificates issued.
> 
> That's a much weaker requirement than the one for other subCAs.
> 

It's true that I set forth a particular goal that I represented as part of my 
interest in seeing the bar for the issuance of a properly designed TCSC 
lowered.  I concur that the realization of that goal would mean many more 
distinct systems issuing publicly trusted certificates (albeit within a very 
narrowly defined window, as enforced by technical constraints).

I even concede that that alone does create a potential for compatibility issues 
should a need arise to make a global Web PKI-wide change to certificate 
issuance (say, for example, a sudden need to deprecate SHA-256 signatures in 
favor of NGHA-1 [ostensibly the Next Great Hash Algorithm Instance #1]).  For 
mitigation of that matter, I firmly believe that any research and development 
in the area of improved techniques for demonizing server administrators would 
be most beneficial.

> 
> > It seems this discussion is painting TCSCs with a broad brush.  I
> > don't see anything in this discussion that makes the TCSC relationship
> > any different from any other subordinate CA.  Both can be operated
> > either by the same organization that operates the root CA or an
> > unrelated organization.  The Apple and Google subordinate CAs are
> > clearly not TCSCs but raise the same concerns.  If there were 10,000
> > subordinates all with WebTrust audits, you would have the exact same
> > problem.
> >
> 
> Indeed, although the realities and costs of that make it impractical - as
> do the risks exposed to CAs (as recently seen) in engaging in such
> relationships without sufficient and appropriate oversight.

If I understand correctly, then, the issue is that you wish to minimize the 
growth of distinct issuing systems wherever they may occur in the PKI 
hierarchy, not TCSCs in particular.

> 
> But I'm responding in the context of the desired goal, and not simply
> today's reality, since it is the goal that is far more concerning.

If I understand correctly, your position is that full disclosure and indexing 
of the TCSCs is to be desired principally because the extra effort of doing so 
may discourage their prevalence in deployment?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 7:58 PM, Peter Bowen wrote:
>
> Why do you need to add 10,000 communication points?  A TCSC is, by
> definition, a subordinate CA.  The WebPKI is not a single PKI; it is a
> set of parallel PKIs which do not share a common anchor.  The browser
> to CA relationship is between the browser vendor and each root CA.
> This is O(root CA operator) not even O(every root CA).  If a root CA
> issues 10,000 subordinate CAs, then they better have a compliance plan
> in place to have assurance that all of them will do the necessary
> things.
>

https://groups.google.com/d/msg/mozilla.dev.security.policy/yS_L_OgI5qk/OhLX9iyZBAAJ
specifically proposed

"For example, no requirement of audit by the enterprise holding the
technically constrained intermediate, and no requirement for audit or
disclosure of certificates issued by the enterprise from the technically
constrained subordinate."

You're certainly correct that, under today's scheme, TCSCs' exemption from
requirements under the Baseline Requirements simply requires Self-Audits
(pursuant to Section 8.7). However, that does not mean that TCSCs must be
on the same infrastructure as the issuing CA - simply that "the CA which
signed the Subordinate CA SHALL monitor adherence to the CA's CP and the
SubCA's CPS" and a sampling audit, by the issuing CA, of either one
certificate or three percent of certificates issued.

That's a much weaker requirement than the one for other subCAs.
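
For concreteness, a minimal sketch of that sampling floor as I read the
quoted language - the greater of one certificate or three percent of the
certificates issued since the last sample (Python; the issuance volumes
below are hypothetical):

import math

def self_audit_sample_size(certs_issued_since_last_sample):
    # Section 8.7 floor as quoted above: the greater of one certificate
    # or three percent of the certificates issued in the period.
    return max(1, math.ceil(0.03 * certs_issued_since_last_sample))

for issued in (10, 100, 5000):                     # hypothetical volumes
    print(issued, self_audit_sample_size(issued))  # -> 1, 3, 150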


> It seems this discussion is painting TCSCs with a broad brush.  I
> don't see anything in this discussion that makes the TCSC relationship
> any different from any other subordinate CA.  Both can be operated
> either by the same organization that operates the root CA or an
> unrelated organization.  The Apple and Google subordinate CAs are
> clearly not TCSCs but raise the same concerns.  If there were 10,000
> subordinates all with WebTrust audits, you would have the exact same
> problem.
>

Indeed, although the realities and costs of that make it impractical - as
do the risks exposed to CAs (as recently seen) in engaging in such
relationships without sufficient and appropriate oversight.

But I'm responding in the context of the desired goal, and not simply
today's reality, since it is the goal that is far more concerning.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 12:21 PM, Ryan Sleevi via dev-security-policy wrote:
> Consider, on one extreme, if every one of the Top 1M sites used TCSCs to
> issue their leaves. A policy, such as deprecating SHA-1, would be
> substantially harder, as now there's a communication overhead of O(1M +
> every root CA) rather than O(# of root store CAs).

Why do you need to add 10,000 communication points?  A TCSC is, by
definition, a subordinate CA.  The WebPKI is not a single PKI; it is a
set of parallel PKIs which do not share a common anchor.  The browser
to CA relationship is between the browser vendor and each root CA.
This is O(root CA operator) not even O(every root CA).  If a root CA
issues 10,000 subordinate CAs, then they better have a compliance plan
in place to have assurance that all of them will do the necessary
things.

> It may be that the benefits of TCSCs are worth such risk - after all, the
> Web Platform and the evolution of its related specs (URL, Fetch, HTML)
> deals with this problem routinely. But it's also worth noting the
> incredible difficulty and friction of deprecating insecure, dangerous APIs
> - and the difficulty in SHA-1 (or commonNames) for "enterprise" PKIs - and
> as such, may represent a significant slowdown in progress, and a
> corresponding significant increase in user-exposed risk.
>
> This is why it may be more useful to take a principled approach, and to, on
> a case by case basis, evaluate the risk of reducing requirements for TCSCs
> (which are already required to abide by the BRs, and simply exempted from
> auditing requirements - and this is independent of any Mozilla
> dispensations), both in the short-term and in the "If every site used this"
> long-term.

It seems this discussion is painting TCSCs with a broad brush.  I
don't see anything in this discussion that makes the TCSC relationship
any different from any other subordinate CA.  Both can be operated
either by the same organization that operates the root CA or an
unrelated organization.  The Apple and Google subordinate CAs are
clearly not TCSCs but raise the same concerns.  If there were 10,000
subordinates all with WebTrust audits, you would have the exact same
problem.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 5:35 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> It is within the above that I can see the real problem that makes broader
> use of TCSCs problematic.  If the browser community does not effectively
> move in the fashion of a single actor with a breaking change when necessary
> for addressing a security concern, I would agree that frankly anything
> which adds additional field deployment scenarios into the ecosystem will
> only make things worse.
>
> On the other hand, perhaps the lesson to be learned there is that better
> consensus as to scheduling of the impact of breaking changes should be
> negotiated amongst the browser participants and handed down in one voice to
> the Root CAs and onward to the web community.
>
> The user can't blame Chrome if Safari and Firefox break for the same use
> case in the quite near term.  When there is no one left to blame for a broken
> website BUT the broken website, the blame will be levied where it is
> deserved.
>

There are both technical and non-technical hurdles that prevent that from
being meaningfully accomplished. On the non-technical front, the nature of
relationships with third-party entities (CAs) makes it complex to act in a
coordinated fashion, for what are hopefully obvious reasons.

On a pragmatic, technical front, the asymmetric release cycles prevent
there from ever being a true Flag Day, and as such, means there's always a
first mover penalty, and there's always jockeying to avoid the pain of that
first-mover penalty. I'm not sure whether to draw the parallel to the
prisoner's dilemma, but it's worth pointing out that Microsoft was the
first to announce a SHA-1 date, and the last to implement it (having only
recently shipped it, after other browsers worked through the issues - and
the user pain).

To achieve parity, one would need to implement a concerted flag day, much
like World IPv6 Day. Unfortunately, such flag days inherently mean there is
a limit to the degree of testing and assessing breakage, and any bugs -
bugs that would cause the change to be reverted or fixed - cannot be fixed
ahead of time.

These are issues that the browser community is unable to solve. Not
unwilling, but on a purely technical front, unable to achieve while also
serving their goals of shipping reliable products to users.


> One particular hesitation that I have in fully accepting your position is
> that it would seem that your position would recommend a Web PKI with a very
> few concentrated actors working subject to the best practices and with
> minimal differentiation.  (Say, for example, a LetsEncrypt and 3 distinct
> competitors, diverse in geography and management but homogeneous as to intent
> and operational practice.)  The 4 CAs could quickly be communicated with
> and could adapt to the needs of the community.  Extrapolating from
> LetsEncrypt's performance also suggests it would be technologically
> feasible for just a few entities to pull this off, too.  Yet, I don't see a
> call for that.  Where's the balance in between and how does one arrive at
> that?


The Web PKI already has virtually zero differentiation. That's a foregone
conclusion, by virtue of compliance to the Baseline Requirements. That is,
the only real differentiation is on ubiquity of roots, probability of
removal (due to misissuance), and price.

That said, despite this, it should be clear, for very obvious reasons much like
the above, why the obvious conclusion is not one that is actively pursued.

Would such a system be better on a security front? In many ways, yes. The
distributed nature of the Web PKI was not, as some might claim, an
intentional design goal for security, but was done more so for non-technical
reasons, such as perceived liability. Having 5 entities with keys to the
Internet is, unquestionably, better than having 5000 entities with the keys
to the Internet. To date, most systems have maintained an unbounded root
store (modulo per-company limits), because there has not been a desire to
include technical differentiation. One could just as easily see a goal
that, in the furtherance of Internet security, limits the root store to
10 CAs, all implementing a common issuance API and objectively measured in
terms of things like performance, availability, and systemic security.
However, as you can see from just the inclusion reviews as it stands,
that's a time-consuming and difficult task, and for most root stores, the
staffing that vendors dedicate averages around 1.5 - 2 people
for the entire company, which is far less than needed to implement such
changes.

But to the original point - can browsers unilaterally cut off (potentially
large) swaths of the Internet? No. And a TCSC profile that yields 10,000
of them can easily mean exactly that. If it were otherwise
possible, we would have HTTPS-by-default by now - but as you can see from
those discussions, or the discussions of disabling plugins (which 

Sandbox: Mozilla: Audit Reminder

2017-05-22 Thread Kathleen Wilson via dev-security-policy
CAs,

I was testing some changes in my CCADB Sandbox, and accidentally sent out an 
audit reminder email from it. So, if you get an email with the subject "Sandbox: 
Mozilla: Audit Reminder" you can ignore it. It's likely a duplicate of the 
email you received last Tuesday.

I apologize for the spam.

Kathleen
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 3:50:30 PM UTC-5, Ryan Sleevi wrote:

> Right, but I reject that :)
> 

I hope to better understand your position.  In transitioning from a long-time 
lurker to actively commenting on this list, it is my hope to contribute what 
I usefully can, bow out gracefully when I cannot, but above all to learn 
at least as much as I contribute.

> > Having said that, I think that future compatibility concerns in the face
> > of the potential of more TCSCs being deployed can be headed off by taking a
> > firm stance toward the necessity of those entities reliant on TCSCs keeping
> > their infrastructure and practices up to date.
> >
> > Deployment in this mode should probably be regarded as "This is for the
> > advanced class.  If that isn't you and/or you encounter problems, go back
> > to working with a CA for your EE certificates."
> >
> 
> A firm stance to use users as hostages in negotiations? Browsers undertake
> that with fear and trembling - because as much as you can say it, at the
> end of the day the user is going to blame the most recent one to change -
> which will consistently be the browser.

It is within the above that I can see the real problem that makes broader use 
of TCSCs problematic.  If the browser community does not effectively move in 
the fashion of a single actor with a breaking change when necessary for 
addressing a security concern, I would agree that frankly anything which adds 
additional field deployment scenarios into the ecosystem will only make things 
worse.

On the other hand, perhaps the lesson to be learned there is that better 
consensus as to scheduling of the impact of breaking changes should be negotiated 
amongst the browser participants and handed down in one voice to the Root CAs 
and onward to the web community.

The user can't blame Chrome if Safari and Firefox break for the same use case 
in the quite near term.  When there is no one left to blame for a broken website 
BUT the broken website, the blame will be levied where it is deserved.

One particular hesitation that I have in fully accepting your position is that 
it would seem that your position would recommend a Web PKI with a very few 
concentrated actors working subject to the best practices and with minimal 
differentiation.  (Say, for example, a LetsEncrypt and 3 distinct competitors 
diverse in geography and management but homogeneous as to intent and operational 
practice.)  The 4 CAs could quickly be communicated with and could adapt to the 
needs of the community.  Extrapolating from LetsEncrypt's performance also 
suggests it would be technologically feasible for just a few entities to pull 
this off, too.  Yet, I don't see a call for that.  Where's the balance in 
between and how does one arrive at that?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 4:34 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I'm not certain that I accept the premise that TCSCs fundamentally or
> substantively change that dynamic.  Particularly if the validity period of
> the TCSC is limited in much the same manner as an EE certificate, it would
> seem that there's a sufficiently limited time window for changing any needed
> aspect of the restrictions in a TCSC.
>

Right, but I reject that :)

The window of change is not simply the validity window of the extant
certificates. That is, you cannot, at T=1, necessitate some change from
T=0. The approach that has been used has been to phase in - e.g. after
T=[some point in the future], all publicly trusted certificates need to
change.

The success or failure of the ability to make those changes has been gated
upon the diversity of the ecosystem. That is, technically diverse CAs -
those that operate on diverse infrastructures, perhaps over the course of
several years of acquisitions - have been the most difficult to adapt to
changing requirements. Those with homogenous infrastructures have
demonstrated their willingness and ability to change in a timely fashion.

When you extend that to TCSCs, it simply doesn't scale. Changing an
algorithm like SHA-1, on the extreme side, is no longer a centralized
problem, but a decentralized one. That's the benefit being argued for
TCSCs, but it's also the extreme detriment. On the other side, things that
should be 'simple' changes - like the proper encoding of a certificate,
which apparently CAs find remarkably hard, are now matters of coordinating
between ISVs implementing, organizations deploying, testing, etc. In
today's world, this is a problem, but not an overwhelming one. In a world
of TCSCs, it certainly is.


>
> >
> > Consider, on one extreme, if every one of the Top 1M sites used TCSCs to
> > issue their leaves. A policy, such as deprecating SHA-1, would be
> > substantially harder, as now there's a communication overhead of O(1M +
> > every root CA) rather than O(# of root store CAs).
>
> I definitely concede that there would arise risks in having more TCSCs in
> deployment, with respect specifically to compatibility, if and only if an
> expectation of lax timelines and enforcement were required.
>
> I think the key issue which held back the SHA-256 migration was not CA
> readiness as much as server administrator and consuming application
> (generally proprietary non-browser stuff on dual-purpose (web + proprietary
> interface) shared endpoints) pushback.
>

Unfortunately, this is not an accurate summary :) The SHA-256 migration was
very much held back by CA readiness, more so than server administrator
unreadiness. Many CAs were simply not capable of issuing SHA-256
certificates as recently as late 2014/early 2015, either directly or
through their APIs. And we're talking LARGE CAs.


> I think part of the right balance would be to presume that those who are
> advanced enough to use TCSCs will do what maintenance is necessary to
> continue to comply with technically enforced browser requirements, assuming
> there is reasonable notice of those changes.  Ultimately, it should also be
> a burden of the sponsoring CA to communicate to their TCSCs about impending
> changes.
>

There is no such evidence to support such a presumption, and ample evidence
(as from the BRs) to suggest it doesn't work.

While I agree it should be the burden of the sponsoring CA, the
externalities are entirely incorrect to ensure that happens. That is, if a
CA fails to communicate, much like SHA-1 saw, then it becomes an issue with
a site operator/TCSC operator being ill-prepared for a browser change, and
the result is a broken site in the browser, which then further increases
warning fatigue on the user.

In an ideal world, it'd work like you describe. The past decade of CA
changes has shown the world is anything but ideal :)


> Having said that, I think that future compatibility concerns in the face
> of the potential of more TCSCs being deployed can be headed off by taking a
> firm stance toward the necessity of those entities reliant on TCSCs keeping
> their infrastructure and practices up to date.
>
> Deployment in this mode should probably be regarded as "This is for the
> advanced class.  If that isn't you and/or you encounter problems, go back
> to working with a CA for your EE certificates."
>

A firm stance to use users as hostages in negotiations? Browsers undertake
that with fear and trembling - because as much as you can say it, at the
end of the day the user is going to blame the most recent one to change -
which will consistently be the browser.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
> 
> Right now the list excludes anything with a certain set of name
> constraints and anything that has EKU constraints outside the in-scope
> set.  I'm suggesting that the first "layer" of CA certs always should
> be disclosed.
> 

I understand now.  In that, I fully concur.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 1:02 PM, Matthew Hardeman via
dev-security-policy wrote:
> On Monday, May 22, 2017 at 2:43:14 PM UTC-5, Peter Bowen wrote:
>
>>
>> I would say that any CA-certificate signed by a CA that does not have
>> name constraints and not constrained to things outside the set
>> {id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
>> This would mean that the top level of all constrained hierarchies is
>> disclosed but subordinate CAs further down the tree and EE certs are
>> not.  I think that this is a reasonable trade off of privacy vs
>> disclosure.
>
> I would agree that those you've identified as "should be disclosed" 
> definitely should be disclosed.  I am concerned, however, that SOME of the 
> remaining certificates beyond those should probably also be disclosed.  For 
> safety's sake, it may be better to start with an assumption that all CA and 
> SubCA certificates require full disclosure to CCADB and then define 
> particular specific rule sets for those which don't require that level.

Right now the list excludes anything with a certain set of name
constraints and anything that has EKU constraints outside the in-scope
set.  I'm suggesting that the first "layer" of CA certs always should
be disclosed.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 2:21:41 PM UTC-5, Ryan Sleevi wrote:
> > > Regarding specifically the risk of the holder of a technically
> > > constrained subCA issuing a certificate with an SHA-1 signature or
> > > other improper signature / algorithm, my belief at this time is that
> > > with respect to the web PKI, we should be able to rely upon the
> > > modern client software to exclude these certificates from
> > > functioning.  My understanding was that IE / Edge was the last
> > > holdout on that front but that it now distrusts SHA-1 signatures.
> >
> > So your proposal is that technical requirements should be enforced
> > in-product rather than in-policy, and so effectively there's no need for
> > policy for the EE certs under a TCSC.
> >
> > This is not an unreasonable position.
> >
> 
> I think it may be based on an incomplete understanding of the evolution of
> the Web PKI. While it's certainly correct that we've been able to
> technically mitigate many risks, it's not been without issue. The historic
> path to deprecation has been on the basis of establishing some form of
> sunset date or requirements change, either within the CA/Browser Forum or
> through policy, with the understanding and appreciation that, on or after
> that sunset date, it can be technically enforced without any breakage (save
> for misissued certificates).

I grant that there are absolutely limitations to my knowledge, but I feel 
that I have a fair grasp of the history and of the evolution.

> 
> TCSCs substantially change that dynamic, in a way that I believe would be
> detrimental towards further evolution. This is already a concern when
> thinking about requirements such as Certificate Transparency - despite the
> majority of commercial CAs (and thus, equally, commercially-managed managed
> CAs) - TCSCs that are in existence may be ill-prepared to handle such
> transition. We saw this itself with the imposition of the Baseline
> Requirements, which thankfully saw many enterprise-managed CAs become
> commercially-managed CAs, due to their inability to abide by the
> requirements, so we can reasonably conclude that future requirements will
> also be challenging for enterprise-managed CAs, which TCSCs effectively are.

I'm not certain that I accept the premise that TCSCs fundamentally or 
substantively change that dynamic.  Particularly if the validity period of the 
TCSC is limited in much the same manner as an EE certificate, it would seem 
that there's a sufficiently limited time window to changing any needed aspect 
of the restrictions in a TCSC.

> 
> Consider, on one extreme, if every one of the Top 1M sites used TCSCs to
> issue their leaves. A policy, such as deprecating SHA-1, would be
> substantially harder, as now there's a communication overhead of O(1M +
> every root CA) rather than O(# of root store CAs).

I definitely concede that there would arise risks in having more TCSCs in 
deployment, with respect specifically to compatibility, if and only if an 
expectation of lax timelines and enforcement were required.

I think the key issue which held back the SHA-256 migration was not CA 
readiness as much as server administrator and consuming application (generally 
proprietary non-browser stuff on dual-purpose (web + proprietary interface) 
shared endpoints) pushback.

Indeed, many mis-issuances which occurred (WoSign, StartCom, Symantec) seem to 
have been attempts to improperly satisfy end-customer demand for certificates 
in those kinds of use cases.

> 
> It may be that the benefits of TCSCs are worth such risk - after all, the
> Web Platform and the evolution of its related specs (URL, Fetch, HTML)
> deals with this problem routinely. But it's also worth noting the
> incredible difficulty and friction of deprecating insecure, dangerous APIs
> - and the difficulty in SHA-1 (or commonNames) for "enterprise" PKIs - and
> as such, may represent a significant slowdown in progress, and a
> corresponding significant increase in user-exposed risk.

I think part of the right balance would be to presume that those who are 
advanced enough to use TCSCs will do what maintenance is necessary to continue 
to comply with technically enforced browser requirements, assuming there is 
reasonable notice of those changes.  Ultimately, it should also be a burden of 
the sponsoring CA to communicate to their TCSCs about impending changes.

> 
> This is why it may be more useful to take a principled approach, and to, on
> a case by case basis, evaluate the risk of reducing requirements for TCSCs
> (which are already required to abide by the BRs, and simply exempted from
> auditing requirements - and this is independent of any Mozilla
> dispensations), both in the short-term and in the "If every site used this"
> long-term.

If individual case-basis assessment requires anything more than a "this subCA 
certificate meets rule specification XYZ123 and thus requires no CCADB 
publication and entities below this cert are not 

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 2:43:14 PM UTC-5, Peter Bowen wrote:

> 
> I would say that any CA-certificate signed by a CA that does not have
> name constraints and not constrained to things outside the set
> {id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
> This would mean that the top level of all constrained hierarchies is
> disclosed but subordinate CAs further down the tree and EE certs are
> not.  I think that this is a reasonable trade off of privacy vs
> disclosure.

I would agree that those you've identified as "should be disclosed" definitely 
should be disclosed.  I am concerned, however, that SOME of the remaining 
certificates beyond those should probably also be disclosed.  For safety's sake, 
it may be better to start with an assumption that all CA and SubCA certificates 
require full disclosure to CCADB and then define particular specific rule sets 
for those which don't require that level.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Fri, May 19, 2017 at 6:47 AM, Gervase Markham via
dev-security-policy wrote:
> We need to have a discussion about the appropriate scope for:
>
> 1) the applicability of Mozilla's root policy
> 2) required disclosure in the CCADB
>
> The two questions are related, with 2) obviously being a subset of 1).
> It's also possible we might decide that for some certificates, some
> subset of the Mozilla policy applies, but not all of it.
>
> I'm not even sure how best to frame this discussion, so let's have a go
> from this angle, and if it runs into the weeds, we can try again another
> way.
>
> The goal of scoping the Mozilla policy is, to my mind, to have Mozilla
> policy sufficiently broadly applicable that it covers all
> publicly-trusted certs and also doesn't leave unregulated sufficiently
> large number of untrusted certs inside publicly-trusted hierarchies that
> it will hold back forward progress on standards and security.
>
> The goal of CCADB disclosure is to see what's going on inside the WebPKI
> in sufficient detail that we don't miss important things. Yes, that's vague.
>
> Here follow a list of scenarios for certificate issuance. Which of these
> situations should be in full Mozilla policy scope, which should be in
> partial scope (if any), and which of those should require CCADB
> disclosure? Are there scenarios I've missed?

You seem to be assuming each of A-I has a path length constraint of
0, as your scenarios don't include CA-certs below each category.

> A) Unconstrained intermediate
>   AA) EE below
> B) Intermediate constrained to id-kp-serverAuth
>   BB) EE below
> C) Intermediate constrained to id-kp-emailProtection
>   CC) EE below
> D) Intermediate constrained to anyEKU
>   DD) EE below
> E) Intermediate usage-constrained some other way
>   EE) EE below
> F) Intermediate name-constrained (dnsName/ipAddress)
>   FF) EE below
> G) Intermediate name-constrained (rfc822Name)
>   GG) EE below
> H) Intermediate name-constrained (srvName)
>   HH) EE below
> I) Intermediate name-constrained some other way
>   II) EE below
>
> If a certificate were to only be partially in scope, one could imagine
> it being exempt from one or more of the following sections of the
> Mozilla policy:
>
> * BR Compliance (2.3)
> * Audit (3.1) and auditors (3.2)
> * CP and CPS (3.3)
> * CCADB (4)
> * Revocation (6)

I would say that any CA-certificate signed by a CA that does not have
name constraints and not constrained to things outside the set
{id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
This would mean that the top level of all constrained hierarchies is
disclosed but subordinate CAs further down the tree and EE certs are
not.  I think that this is a reasonable trade off of privacy vs
disclosure.
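
To make the boundary concrete, a rough sketch (Python with the
pyca/cryptography package; a hypothetical helper, assuming the constraint
test applies to the issuing CA's certificate, so that the first layer
under an unconstrained CA is always disclosed):

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

IN_SCOPE_EKUS = {
    ExtendedKeyUsageOID.SERVER_AUTH,
    ExtendedKeyUsageOID.EMAIL_PROTECTION,
    ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE,
}

def issuer_is_constrained(issuer_cert):
    # Name-constrained: CA certs below it fall out of disclosure scope.
    try:
        issuer_cert.extensions.get_extension_for_class(x509.NameConstraints)
        return True
    except x509.ExtensionNotFound:
        pass
    # EKU-constrained entirely to usages outside the in-scope set.
    try:
        ekus = issuer_cert.extensions.get_extension_for_class(
            x509.ExtendedKeyUsage).value
        return not any(oid in IN_SCOPE_EKUS for oid in ekus)
    except x509.ExtensionNotFound:
        return False  # no constraints at all

def must_be_disclosed(issuer_cert):
    # CA certs signed by an unconstrained CA are the "first layer"
    # and would always be disclosed under this rule.
    return not issuer_is_constrained(issuer_cert)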

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 2:14:57 PM UTC-5, Ryan Sleevi wrote:

> Another approach is to make an argument that such validations are already
> accounted for in the EV Guidelines, in which a certificate may be issued
> for 27 months, but for which the domain must be revalidated at 13 months.
> In this case, the TCSC might be issued for an 'extended' period (perhaps on
> an order of many-years), with the expectation that the CA will revoke the
> TCSC if the domain does not (periodically) revalidate.
> 
> Each of these approaches comes with its own tradeoffs in design,
> complexity, and risk, and argues more or less strongly for disclosure
> because of it.
> 
> My own take is that I would prefer to see TCSCs uniquely named (e.g.
> through the use of dnQualifier), limited to the validity period permitted
> of leaf certs (since they're, effectively, "ultra" wildcards), with some
> robust disclosure & revocation story. I'm concerned that extending the
> period of time would incentivize such certs, which introduce additional
> risks to the evolution of the Web PKI.

Hi, Ryan,

Thanks for the information regarding chain validity periods and their 
impact on RFC-defined behavior and in-field implementation behavior.  I 
suspected there would be some differences across implementations.

I am inclined to agree with your assessment as to the validity period.  The 
future in which I would envision more common use of TCSCs would still be a 
future that encourages leaf deployment automation and, ideally, quite limited 
with respect to the validity period (at least, I should say, with respect to 
TLS server certificates).

To any such extent that TCSCs might discourage server operators from 
establishing good certificate and key lifecycle management, or server operators 
might attempt to rely upon a TCSC as a way of generating leaf certificates 
with longer validity than the BRs allow, I would say that policy 
should probably prevent this as even a potential incentive.

I would be interested to learn more of your perspective on "robust disclosure & 
revocation story".  What constitutes a robust disclosure?  For example, does 
that imply a mandatory timely publication in CCADB?  With respect to a revoked 
TCSC, does that require formalized submission to the root programs for 
distribution in their respective centralized revocation distribution mechanisms 
(OneCRL, etc.)? Which remaining features of a TCSC provide capabilities which 
might be mitigated by this level of disclosure versus mere mandatory 
publication to CT?

Thanks,

Matt
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 12:50 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 22/05/17 16:43, Matthew Hardeman wrote:
> > Regarding specifically the risk of the holder of a technically
> > constrained subCA issuing a certificate with an SHA-1 signature or
> > other improper signature / algorithm, my belief at this time is that
> > with respect to the web PKI, we should be able to rely upon the
> > modern client software to exclude these certificates from
> > functioning.  My understanding was that IE / Edge was the last
> > holdout on that front but that it now distrusts SHA-1 signatures.
>
> So your proposal is that technical requirements should be enforced
> in-product rather than in-policy, and so effectively there's no need for
> policy for the EE certs under a TCSC.
>
> This is not an unreasonable position.
>

I think it may be based on an incomplete understanding of the evolution of
the Web PKI. While it's certainly correct that we've been able to
technically mitigate many risks, it's not been without issue. The historic
path to deprecation has been on the basis of establishing some form of
sunset date or requirements change, either within the CA/Browser Forum or
through policy, with the understanding and appreciation that, on or after
that sunset date, it can be technically enforced without any breakage (save
for misissued certificates).

TCSCs substantially change that dynamic, in a way that I believe would be
detrimental towards further evolution. This is already a concern when
thinking about requirements such as Certificate Transparency - despite the
readiness of the majority of commercial CAs (and thus, equally, of
commercially-managed CAs), TCSCs that are in existence may be ill-prepared
to handle such a transition. We saw this itself with the imposition of the Baseline
Requirements, which thankfully saw many enterprise-managed CAs become
commercially-managed CAs, due to their inability to abide by the
requirements, so we can reasonably conclude that future requirements will
also be challenging for enterprise-managed CAs, which TCSCs effectively are.

Consider, on one extreme, if every one of the Top 1M sites used TCSCs to
issue their leaves. A policy, such as deprecating SHA-1, would be
substantially harder, as now there's a communication overhead of O(1M +
every root CA) rather than O(# of root store CAs).

It may be that the benefits of TCSCs are worth such risk - after all, the
Web Platform and the evolution of its related specs (URL, Fetch, HTML)
deals with this problem routinely. But it's also worth noting the
incredible difficulty and friction of deprecating insecure, dangerous APIs
- and the difficulty in SHA-1 (or commonNames) for "enterprise" PKIs - and
as such, may represent a significant slowdown in progress, and a
corresponding significant increase in user-exposed risk.

This is why it may be more useful to take a principled approach, and to, on
a case by case basis, evaluate the risk of reducing requirements for TCSCs
(which are already required to abide by the BRs, and simply exempted from
auditing requirements - and this is independent of any Mozilla
dispensations), both in the short-term and in the "If every site used this"
long-term.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 1:41 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Monday, May 22, 2017 at 11:50:59 AM UTC-5, Gervase Markham wrote:
> > > How do the various validation routines in the field today validate a
> > > scenario in which a leaf certificate's validity period exceeds a
> > > validity period constraint upon the chosen trust path?  Is the
> > > certificate treated as trusted, but only to the extent that the
> > > present time is within the most restrictive view of the validity
> > > period in the chain, or is the certificate treated as invalid
> > > regardless for failure to fully conform to the technical policy
> > > constraints promulgated by the chain?
> >
> > Good question. I think the former, but Ryan Sleevi might have more info,
> > because I seem to remember him discussing this scenario and its compat
> > constraints recently.
> >
> > Either way, it's a bad idea, because the net effect is that your cert
> > suddenly stops working before the end date in it, and so you are likely
> > to be caught short.
>
> Here I would concur that it would be bad practice for precisely the reason
> you indicate.  I was mostly academically interested in the specifics of
> that topic.  I would agree that extending the certificate lifecycle to some
> period beyond the max EE validity period would alleviate the need.  Having
> said that, I can still envision workable scenarios and value cases for such
> technically constrained CA certificates even if it were deemed unacceptable
> to extend their validity period.
>

As Gerv notes, clients behave inconsistently with respect to this.

With respect to what is specified in RFC 5280, the critical requirement is
that all certificates in the chain be valid at the time of evaluation. This
allows, for example, the replacement of an intermediate certificate to
'extend' the lifetime of the leaf to its originally expressed value.

However, some clients require that the validity periods be 'nested'
appropriately - and, IIRC, at one time Mozilla NSS equally required this.
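
To make the two behaviors concrete, a small sketch (plain Python; the
dates below are hypothetical):

from datetime import datetime

def valid_at(chain, when):
    # chain: list of (not_before, not_after) pairs, leaf first, root last.
    # RFC 5280-style behavior: every certificate must be valid at
    # evaluation time, so the effective window is the intersection.
    return all(nb <= when <= na for nb, na in chain)

def strictly_nested(chain):
    # Stricter behavior some clients have required: each certificate's
    # validity must lie entirely within its issuer's validity.
    return all(i_nb <= c_nb and c_na <= i_na
               for (c_nb, c_na), (i_nb, i_na) in zip(chain, chain[1:]))

chain = [
    (datetime(2017, 1, 1), datetime(2019, 4, 1)),   # leaf
    (datetime(2016, 1, 1), datetime(2018, 1, 1)),   # TCSC
    (datetime(2010, 1, 1), datetime(2030, 1, 1)),   # root
]
print(valid_at(chain, datetime(2017, 6, 1)))   # True: all currently valid
print(valid_at(chain, datetime(2018, 6, 1)))   # False: TCSC expired first
print(strictly_nested(chain))                  # False: leaf outlives TCSC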

So the need exists to define some upper-bound for the TCSC relative to the
risk.

One approach is to make an argument that the upper-bound for a certificate
is bounded by the validity period of an equivalently issued leaf
certificate - that is, say, 825 days.
Another approach is to make an argument that since a CA can validate a
domain at T=0, issue a certificate at T=0 with a validity period of 825
days, then issue a certificate at T=824 with a validity period of 825 days,
the 'net' validity period of a domain validation is T=(825 days * 2) - 1
second.
(Here, I'm using 825 as shorthand for the cascading dates)
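
Spelled out with that shorthand (plain Python; the validation date is
hypothetical):

from datetime import datetime, timedelta

MAX_VALIDITY = timedelta(days=825)
t0 = datetime(2017, 5, 22)           # domain validated at T=0
# A cert issued one second before the validation turns 825 days old
# runs for another 825 days:
last_issue = t0 + MAX_VALIDITY - timedelta(seconds=1)
net_end = last_issue + MAX_VALIDITY
print(net_end - t0)                  # 1649 days, 23:59:59 = 2*825d - 1s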

Another approach is to make an argument that such validations are already
accounted for in the EV Guidelines, in which a certificate may be issued
for 27 months, but for which the domain must be revalidated at 13 months.
In this case, the TCSC might be issued for an 'extended' period (perhaps on
an order of many-years), with the expectation that the CA will revoke the
TCSC if the domain does not (periodically) revalidate.

Each of these approaches comes with its own tradeoffs in design,
complexity, and risk, and argues more or less strongly for disclosure
because of it.

My own take is that I would prefer to see TCSCs uniquely named (e.g.
through the use of dnQualifier), limited to the validity period permitted
of leaf certs (since they're, effectively, "ultra" wildcards), with some
robust disclosure & revocation story. I'm concerned that extending the
period of time would incentivize such certs, which introduce additional
risks to the evolution of the Web PKI.
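
As a sketch of the unique-naming idea (pyca/cryptography; the subject
below is entirely hypothetical):

from cryptography import x509
from cryptography.x509.oid import NameOID
import os

# A per-certificate dnQualifier keeps the TCSC's subject DN unique even
# across re-issuance to the same organization.
tcsc_subject = x509.Name([
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Corp"),
    x509.NameAttribute(NameOID.COMMON_NAME, u"Example Corp TCSC"),
    x509.NameAttribute(NameOID.DN_QUALIFIER, os.urandom(8).hex()),
])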
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 11:50:59 AM UTC-5, Gervase Markham wrote:

> So your proposal is that technical requirements should be enforced
> in-product rather than in-policy, and so effectively there's no need for
> policy for the EE certs under a TCSC.
> 
> This is not an unreasonable position.
> 

That is a correct assessment of my position.  If we are able to unambiguously 
enforce a policy matter by technological means -- and most especially where 
such technological means already exist and are deployed -- then we should be 
able to rely upon those technology constraints to relieve the administrative 
burden of auditing and enforcing compliance through business process.

> > How do the various validation routines in the field today validate a
> > scenario in which a leaf certificate's validity period exceeds a
> > validity period constraint upon the chosen trust path?  Is the
> > certificate treated as trusted, but only to the extent that the
> > present time is within the most restrictive view of the validity
> > period in the chain, or is the certificate treated as invalid
> > regardless for failure to fully conform to the technical policy
> > constraints promulgated by the chain?
> 
> Good question. I think the former, but Ryan Sleevi might have more info,
> because I seem to remember him discussing this scenario and its compat
> constraints recently.
> 
> Either way, it's a bad idea, because the net effect is that your cert
> suddenly stops working before the end date in it, and so you are likely
> to be caught short.

Here I would concur that it would be bad practice for precisely the reason you 
indicate.  I was mostly academically interested in the specifics of that topic. 
 I would agree that extending the certificate lifecycle to some period beyond 
the max EE validity period would alleviate the need.  Having said that, I can 
still envision workable scenarios and value cases for such technically 
constrained CA certificates even if it were deemed unacceptable to extend their 
validity period.

> 
> > I submit, then, that the real questions become further analysis and
> > feedback of the risk(s) followed by specification and guidance on
> > what specific constraints would form up the certificate profile which
> > would have the reduced CP/CPS, audit, and disclosure burdens.  As a
> > further exercise, it seems likely that to truly create a market in
> > which an offering of this nature from CAs would grow in prevalence,
> > someone would need to carry the torch to see such guidance (or at
> > least the relevant portions) make way into the baseline requirements
> > and other root programs.  Is that a reasonable assessment?
> 
> Well, it wouldn't necessarily need to make its way into other places.
> Although that's always nice.

Agreed.  I primarily made mention of the other rules, etc. because it occurs to 
me that the same standards for what might qualify for preferential / 
different treatment of technically constrained subCAs with respect to 
disclosure might also align neatly with issuance policy, as in, 
for example, your separate thread titled "Policy 2.5 Proposal: Fix definition 
of constraints for id-kp-emailProtection"

The question of audit & disclosure requirements pertaining to technically 
constrained subCAs seems to be ripe for discussion.  I note that Doug Beattie 
recently sought clarification regarding this question, on the matter of a 
name-constrained subCA with the emailProtection EKU only, several days ago in the 
thread "Next CA Communication"

Thanks,

Matt
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-22 Thread Jakob Bohm via dev-security-policy

On 22/05/2017 18:33, Gervase Markham wrote:

> On 19/05/17 21:04, Kathleen Wilson wrote:
>> - What validity periods should be allowed for SSL certs being issued
>> in the old PKI (until the new PKI is ready)?


> Symantec is required only to be issuing in the new PKI by 2017-08-08 -
> in around ten weeks' time. In the meantime, there is no restriction
> beyond the normal one on the length they can issue. This makes sense,
> because if certs issued yesterday will expire 39 months from yesterday,
> then certs issued in 10 weeks will only expire 10 weeks after that - not
> much difference.



Note that the plan (at least as I read it) involves two major phases:

1. The transition "Managed SubCAs": these will continue to chain to the
  old PKI during the transition, but it is possible for clients and root
  programs to limit the trust to those specific "Managed SubCAs" instead
  of the sprawling old certificate trees.  This does not involve CT
  checking in clients, just trust decisions.

2. The truly "new infrastructure", built properly to modern standards
  will not be ready until some time has passed, and will be a new root
  program applicant with new root CA certs.  Once those roots become
  accepted by multiple root programs (at least Google and Mozilla), the
  new root CAs can begin to issue via "new infrastructure" SubCAs that
  are signed by both "new root CAs" (for updated clients) and old root
  CAs (for old clients).


>> I prefer that this be on
>> the order of 13 months, and not on the order of 3 years, so that we
>> can hope to distrust the old PKI as soon as possible. I prefer to not
>> have to wait 3 years to stop trusting the old PKI for SSL, because a
>> bunch of 3-year SSL certs get issued this year.


> If we want to distrust the old PKI as soon as possible, then instead of
> trying to limit issuance period now, we should simply set a date after
> which we are doing this, and require Symantec to have moved all of their
> customers across to the new PKI by that time.
>
> Google are doing a phased distrust of old certs, but they have not set a
> date in their plan for total distrust of the old PKI. We should ask them
> what their plans are for that.



I understood certs issued by the old systems (except the listed Managed
SubCAs) will be trusted only if issued and CT logged between 2016-06-01
and 2017-08-08, and will be subject to the BR lifetime requirements for
such certs.  Thus no such certs will remain trusted after approximately
2020-08-08 plus the slack in the BRs.

Clients without SCT checking (NSS?) cannot check the presence of SCTs,
but can still check the limited range of notBefore dates.
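
A sketch of that fallback check (Python with the pyca/cryptography
package; the window endpoints are the dates mentioned above):

from datetime import datetime
from cryptography import x509

WINDOW_START = datetime(2016, 6, 1)
WINDOW_END = datetime(2017, 8, 8)

def issued_in_trusted_window(cert):
    # Clients that cannot verify SCTs can still require that a
    # legacy-root Symantec cert was issued inside the agreed window.
    return WINDOW_START <= cert.not_valid_before <= WINDOW_END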



>> - I'm not sold on the idea of requiring Symantec to use third-party
>> CAs to perform validation/issuance on Symantec's behalf. The most
>> serious concerns that I have with Symantec's old PKI are with their
>> third-party subCAs and third-party RAs. I don't have particular
>> concern about Symantec doing the validation/issuance in-house. So, I
>> think it would be better/safer for Symantec to staff up to do the
>> validation/re-validation in-house rather than using third parties. If
>> the concern is about regaining trust, then add auditing to this.


> Of course, if we don't require something but Google do (or vice versa)
> then Symantec will need to do it anyway. But I will investigate in
> discussions whether some scheme like this might be acceptable to both
> the other two sides and might lead to a quicker migration timetable to
> the new PKI.
>
> Gerv




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-22 Thread Jakob Bohm via dev-security-policy

Comments inline

On 20/05/2017 16:49, Michael Casadevall wrote:

> Comments inline.

> On 05/19/2017 05:10 PM, Jakob Bohm wrote:

>> Suggested trivial changes relative to the proposal for Mozilla use:
>>
>> 3. All non-expired Symantec issued certificates of any kind (including
>> SubCAs and revoked certificates) shall be CT logged as modified by #4
>> below.  All Symantec referenced OCSP responders shall return SCTs for
>> all such certificates, if possible even for revoked certificates.  This
>> also applies to expired certificates that were intended for use with
>> validity extending timestamping, such as the code signing certificate
>> issued to Mozilla Corporation with serial number 25 cc 37 35 e9 ec 1f
>> c9 71 67 0e 73 e3 69 c7 91.  Independent parties or root stores may at
>> their option use this data to generate public trust whitelists.
>>
>>    Necessity: Whitelists in various forms based on such CT log entries,
>> as well as the SCTs in OCSP responses can provide an alternative for
>> relying parties checking current certificates even if the cleanup at
>> Symantec reveals a catastrophic breach during the past 20+ years.
>>
>>    Proportionality: This should be relatively easy for the legitimate
>> certificates issued by Symantec, since the underlying data is still
>> used for OCSP response generation.



> Sanity check here, but I thought that OCSP-CT-Stapling required SCTs to
> be created at the time of issuance. Not sure if there's a way to
> backdate this requirement. If this is only intended for the new roots
> then just a point of clarification.


>> 4. All stated requirements shall also apply to S/MIME certificates.




> I *really* like this since it solves the problem of S/MIME + CT, but I
> think this has to get codified into a specification. My second thought
> here though is that there's no way to independently check if the CT logs
> correspond to reality unless you have the public certificate since the
> hashed fields would cause the signature to break.
>
> I'd love to see this go somewhere but probably needs a fair bit of
> thought and possible use of a different CT log vs. the primarily webPKI
> ones.



The ideas here are:

1. To establish a temporary ad-hoc solution that can be handled by
existing CT log software logging the redacted precertificates.  This is
so solving the Symantec problem won't have to wait for general
standardization, which has stalled on this issue.  A standardized form
would be more compact and involve at least one "CT Extension" attribute.

2. By definition, any redaction would prevent CT log watchers from
checking if the unredacted cert signatures are valid.  This is
unavoidable, but not a problem for any known good uses of CT logs.

3. The design is intended to ensure that any process seeing an actual
cert can check it against SCTs obtained in any way (e.g. present in
cert, present in OCSP response, direct CT query, ...) by forming at most
one candidate redacted form, using mostly code likely to be already
present in such processes.

4. The design is intended to prevent recovering redacted data by
dictionary attacks (= guess and check).  This means that for existing
certs without a strong nonce attribute, logging the signature over the
unredacted final cert is also out of the question; such old certs need
to be logged as precerts only.
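
To illustrate points 3 and 4, a toy sketch (plain Python, with
hypothetical labels) of why hash-based redaction without a strong nonce
is open to guess-and-check, while a salted form is not:

    import hashlib, secrets

    def redact(label: str, nonce: bytes = b"") -> str:
        # Candidate redacted form = hash of the hidden label (plus nonce).
        return hashlib.sha256(nonce + label.encode()).hexdigest()

    published = redact("internal-billing")  # old cert: no nonce available

    # Dictionary attack: hash likely labels until one digest matches.
    for guess in ["www", "mail", "vpn", "internal-billing"]:
        if redact(guess) == published:
            print("recovered:", guess)

    # With a strong per-certificate nonce, the attacker cannot form
    # candidate digests, but a verifier holding the actual cert (and
    # hence the nonce) can still form exactly one candidate and match
    # it against the logged entry.
    nonce = secrets.token_bytes(16)
    published_salted = redact("internal-billing", nonce)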




5. All stated requirements shall also apply to SubCA certificates other
than the specially blessed "Managed CA" SubCAs.  These shall never be
redacted.  As a special exception, the root programs may unanimously on
a one-by-one basis authorize the signing of additional Managed SubCAs
and/or new infrastructure cross certificates, subject to full
validation and signing ceremonies.  The root programs will authorize
enough new infrastructure cross signatures if and when they include the
roots of the new infrastructure.



I believe this was already covered by the PKI concerns that Symantec would
not be allowed to use third-party validation. Not sure if we can
realistically do a technical measure here since if we put a NotBefore
check in NSS, we have no easy way to change it in the future if it
becomes necessary for a one-off.



This would be an administrative requirement not checked by client
software directly, except that client software can check for the
presence of SCTs in any new SubCAs, and root programs can check the logs
for non-approved SubCA issuance.



6. All stated requirements except premature expiry and the prohibition
against later issuance shall apply to delegated OCSP signing, CRL
signing and other such revocation/validation related signatures made
by the existing Symantec CAs and SubCAs, but after the first deadline
(end of August), these shall only be used for the continued provision of
revocation information, and shall have corresponding EKUs.  This
corresponds to standard rules for dead CA certs, but adds CT logging of
any delegated revocation signing certificates.  These shall never be
redacted.



I think this can be more easily put as "intermediate certificates
restricted via EKU for use 

Re: Google Plan for Symantec posted

2017-05-22 Thread Kurt Roeckx via dev-security-policy
On Mon, May 22, 2017 at 05:33:26PM +0100, Gervase Markham via 
dev-security-policy wrote:
> Google are doing a phased distrust of old certs, but they have not set a
> date in their plan for total distrust of the old PKI. We should ask them
> what their plans are for that.

My understanding is that Google will rely on CT for it and
doesn't need to distrust anything. Either it's in CT and we
can check what they did, or it's not and it's not trusted.
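
A minimal sketch of that gate (hypothetical fingerprint set and helper;
not Chrome's actual implementation):

    # Hypothetical client-side gate: rather than distrusting the legacy
    # roots by date, require CT evidence for any chain ending in one.
    LEGACY_ROOTS = {"<sha256 of a legacy Symantec root>"}  # placeholder

    def accept_chain(root_fingerprint: str, has_valid_scts: bool) -> bool:
        if root_fingerprint in LEGACY_ROOTS:
            # In CT: auditable, so trusted. Not in CT: never trusted.
            return has_valid_scts
        return True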


Kurt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Gervase Markham via dev-security-policy
On 22/05/17 16:43, Matthew Hardeman wrote:
> Regarding specifically the risk of the holder of a technically
> constrained subCA issuing a certificate with an SHA-1 signature or
> other improper signature / algorithm, my belief at this time is that
> with respect to the web PKI, we should be able to rely upon the
> modern client software to exclude these certificates from
> functioning.  My understanding was that IE / Edge was the last
> holdout on that front but that it now distrusts SHA-1 signatures.

So your proposal is that technical requirements should be enforced
in-product rather than in-policy, and so effectively there's no need for
policy for the EE certs under a TCSC.

This is not an unreasonable position.

> How do the various validation routines in the field today validate a
> scenario in which a leaf certificate's validity period exceeds a
> validity period constraint upon the chosen trust path?  Is the
> certificate treated as trusted, but only to the extent that the
> present time is within the most restrictive view of the validity
> period in the chain, or is the certificate treated as invalid
> regardless for failure to fully conform to the technical policy
> constraints promulgated by the chain?

Good question. I think the former, but Ryan Sleevi might have more info,
because I seem to remember him discussing this scenario and its compat
constraints recently.

Either way, it's a bad idea, because the net effect is that your cert
suddenly stops working before the end date in it, and so you are likely
to be caught short.
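
To make the "caught short" failure mode concrete, a small sketch (plain
Python, hypothetical dates) of the former behaviour, where the effective
window is the intersection of every validity period in the chain:

    from datetime import datetime, timezone

    UTC = timezone.utc
    # (notBefore, notAfter) for each certificate in the chain.
    chain = [
        (datetime(2017, 6, 1, tzinfo=UTC), datetime(2020, 6, 1, tzinfo=UTC)),  # leaf
        (datetime(2016, 1, 1, tzinfo=UTC), datetime(2019, 1, 1, tzinfo=UTC)),  # TCSC
    ]

    # The chain only validates while *every* cert is in its window, so
    # the leaf effectively stops working at the TCSC's notAfter (2019),
    # well before the end date printed in the leaf itself.
    effective_start = max(nb for nb, _ in chain)
    effective_end = min(na for _, na in chain)
    print(effective_start, "->", effective_end)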

> I submit, then, that the real questions become further analysis and
> feedback of the risk(s) followed by specification and guidance on
> what specific constraints would form up the certificate profile which
> would have the reduced CP/CPS, audit, and disclosure burdens.  As a
> further exercise, it seems likely that to truly create a market in
> which an offering of this nature from CAs would grow in prevalence,
> someone would need to carry the torch to see such guidance (or at
> least the relevant portions) make way into the baseline requirements
> and other root programs.  Is that a reasonable assessment?

Well, it wouldn't necessarily need to make its way into other places.
Although that's always nice.

Gerv

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-22 Thread Gervase Markham via dev-security-policy
On 21/05/17 19:37, userwithuid wrote:
> With the new proposal, the "minimal disruption" solution for Firefox
> will require keeping the legacy stuff around for another 3.5-4 years
> and better solutions will now be a lot harder to sell without the
> leverage provided by Google.

Why so? In eight months' time, if Chrome is no longer trusting certs
issued before 2016-06-01, why would it be a problem for Firefox to stop
trusting them shortly thereafter?

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-22 Thread Gervase Markham via dev-security-policy
On 19/05/17 22:10, Jakob Bohm wrote:
>   Necessity: Whitelists in various forms based on such CT log entries,
> as well as the SCTs in OCSP responses can provide an alternative for
> relying parties checking current certificates even if the cleanup at
> Symantec reveals a catastrophic breach during the past 20+ years.

Do you know anyone who would consider shipping such a whitelist? I
suspect size considerations would rule it out, given that this was the
concern raised for much smaller lists of certs. And if we did want to
ship it, we would just ask Symantec for a list of certificates - no need
for all this.

>   Necessity: The Mozilla root program also cares about S/MIME
> certificates, so those should get the same measures as WebPKI
> certificates.

That seems a very weak justification for requiring something which would
be a ton of work and require us to invent a new CT redaction scheme for
S/MIME certs. None of the issues raised related to S/MIME.

>   Proportionality: This is a natural consequence of the overall plan,
> and simply formalizes what is otherwise implied, namely that Symantec
> doesn't issue new certs from the old infrastructure except as strictly
> necessary.

That is not an implied outcome. Symantec can issue as many certs as they
want from the old infrastructure; it's just that browsers will no longer
trust them. I'm totally certain Symantec's existing PKI will keep
running for many years to come to support non-publicly-trusted use cases.

> 7. All stated requirements except the premature expiry shall apply to
> time stamping signatures and certificates for timestamps certifying a
> time prior to the first deadline.

Mozilla does not care about such certificates.

> 9. Symantec shall be allowed and obliged to continue operation of the
> special "managed signing" services for which it has in the past been
> granted a technically enforced monopoly by various platform vendors,

Mozilla does not care about such certificates.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-22 Thread Gervase Markham via dev-security-policy
On 19/05/17 21:04, Kathleen Wilson wrote:
> - What validity periods should be allowed for SSL certs being issued
> in the old PKI (until the new PKI is ready)? 

Symantec is required only to be issuing in the new PKI by 2017-08-08 -
in around ten weeks' time. In the mean time, there is no restriction
beyond the normal one on the length they can issue. This makes sense,
because if certs issued yesterday will expire 39 months from yesterday,
then certs issued in 10 weeks will expire only 10 weeks after that - not
much difference.

> I prefer that this be on
> the order of 13 months, and not on the order of 3 years, so that we
> can hope to distrust the old PKI as soon as possible. I prefer to not
> have to wait 3 years to stop trusting the old PKI for SSL, because a
> bunch of 3-year SSL certs get issued this year.

If we want to distrust the old PKI as soon as possible, then instead of
trying to limit issuance period now, we should simply set a date after
which we are doing this, and require Symantec to have moved all of their
customers across to the new PKI by that time.

Google are doing a phased distrust of old certs, but they have not set a
date in their plan for total distrust of the old PKI. We should ask them
what their plans are for that.

> - I'm not sold on the idea of requiring Symantec to use third-party
> CAs to perform validation/issuance on Symantec's behalf. The most
> serious concerns that I have with Symantec's old PKI is with their
> third-party subCAs and third-party RAs. I don't have particular
> concern about Symantec doing the validation/issuance in-house. So, I
> think it would be better/safer for Symantec to staff up to do the
> validation/re-validation in-house rather than using third parties. If
> the concern is about regaining trust, then add auditing to this.

Of course, if we don't require something but Google do (or vice versa)
then Symantec will need to do it anyway. But I will investigate in
discussions whether some scheme like this might be acceptable to both
the other two sides and might lead to a quicker migration timetable to
the new PKI.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Matthew Hardeman via dev-security-policy
On Monday, May 22, 2017 at 3:41:50 AM UTC-5, Gervase Markham wrote:
> On 19/05/17 20:40, Matthew Hardeman wrote:
> > Not speaking as to the status quo, but rather in terms of
> > updates/changes which might be considered for incorporation into
> > policy would be to recognize the benefit of name constrained
> > intermediates and allow a reduction in burden to entities holding and
> > utilizing name constrained intermediates, both in SSL Server
> > Authentication, and Email Protection.  (Probably also allow that OCSP
> > signing, client authentication, certain encrypted storage extended
> > key usages, etc, be allowed).
> 
> This is certainly a question worth considering. I think a careful
> comparative risk analysis is in order, and so thank you for starting
> that process.

I'm a long-time lurker on the list and happy to contribute what thoughts I 
may.  I was uncertain whether the question(s) I raised were within the 
scope of this particular thread, but as you have engaged in dialogue I'll 
proceed under the assumption that they are unless otherwise directed.

> 
> The issue with excluding any certificate or group of certificates from
> the entire scope of the policy is that the issuer would then be free to
> issue SHA-1 certs, certs with bad or unpermitted algorithms, and so on.
> Are you suggesting that EE certs issued from such an intermediate be
> entirely unregulated, or that we should strip down the regulation to
> merely technical requirements, ignoring requirements on audit, CP/CPS,
> revocation etc.?
> 

I am suggesting that regulations pertaining to issuance from these technically 
constrained subCAs be stripped down to purely technically enforced requirements 
if and only if this can be done without disproportionate risk to the Web PKI.  
My belief is that compelling product offerings could arise in this space if 
there exist guidelines for the issuance of the technically constrained subCAs, 
and if the issuance and lifecycle management of these subCA certificates can be 
structured in a way that does not burden the root programs and their member CAs 
with significant manual audit duties while still preserving the integrity of 
the web PKI.

Regarding specifically the risk of the holder of a technically constrained 
subCA issuing a certificate with an SHA-1 signature or other improper signature 
/ algorithm, my belief at this time is that with respect to the web PKI, we 
should be able to rely upon the modern client software to exclude these 
certificates from functioning.  My understanding was that IE / Edge was the 
last holdout on that front but that it now distrusts SHA-1 signatures.
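
As a sketch of the client-side check being relied upon here (assuming
Python's "cryptography" library; the file name is hypothetical):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    # Client-side enforcement: reject a SHA-1-signed cert regardless of
    # what the issuing (constrained) subCA chose to do.
    with open("leaf.pem", "rb") as f:  # hypothetical file
        cert = x509.load_pem_x509_certificate(f.read())

    if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
        raise ValueError("SHA-1 signature: rejected by policy")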

> >> From a perspective of risk to the broader web PKI, it would appear
> >> that a properly name constrained intermediate with (for example)
> >> only the Server and Client TLS authentication ekus with name
> >> constraints limited to particular validated domains (via dnsName
> >> constraint along with excluding wildcard IP/netmask for IPv4 and
> >> IPv6)  is really no substantively more risky than a multi-SAN
> >> wildcard certificate with the same domains.
> 
> I currently agree this is broadly true, with the exception of the
> lifetime issue which you raise in a later message.
> 
> There would be little point in such a TCSC having a max lifetime equal
> to the max lifetime of an EE cert, because then after day 1, the EE
> certs it issues couldn't have the max lifetime (because the EE cert
> can't last longer than the intermediate!). So perhaps max lifetime of EE
> + 1 year, so the issuing TCSC needs to be replaced once a year in order
> for the organization to continue issuing max length certs?

While I certainly think it would be fine to extend the life of a technically 
constrained subCA significantly beyond that of an EE certificate, as long as 
the risks are balanced, I do have a question here:

How do the various validation routines in the field today validate a scenario 
in which a leaf certificate's validity period exceeds a validity period 
constraint upon the chosen trust path?  Is the certificate treated as trusted, 
but only to the extent that the present time is within the most restrictive 
view of the validity period in the chain, or is the certificate treated as 
invalid regardless for failure to fully conform to the technical policy 
constraints promulgated by the chain?

The reason that I raise this question pertains to the lifecycle of EE 
certificates issued subordinate to a technically constrained subCA such that 
the until date of the leaf certificate exceeds the until date on the parent 
subCA.  If we contemplate a subsequent renewal of the technically constrained 
subCA with the same SPKI, it occurs to me that the subCA can issue a certificate 
whose until date exceeds the subCA's until date, and then merely 
change what subCA certificate is distributed to build the trust path, thus 
allowing the certificate to remain valid as long as the subCA is renewed 

Re: Symantec: Update

2017-05-22 Thread Gervase Markham via dev-security-policy
On 20/05/17 15:26, Michael Casadevall wrote:
> However, for Mozilla's purposes, is there a case where having a SCT in
> certificate would either break something, or otherwise be undesirable?

I believe we turned the checking on and discovered performance issues,
so we turned it off. I'm not sure if those have since been solved. JC?

> Well, at least with the current state of webpki, mandating an embedded
> SCT is probably not practical for everyone. I actually forgot about the
> OCSP stapling mechanism for SCTs, though my concern here is not everyone
> turns on OCSP stapling. Since both OCSP CT stapling and embedded SCTs
require that the cert be submitted to a log at issuance, 

That's not so. OCSP CT stapling doesn't require the cert be submitted to
a log at issuance. You only need to do it at some point before you
start using it. The same is true of the SSL handshake method.
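
A sketch of that post-issuance submission, via the RFC 6962 add-chain
endpoint (assuming the Python "requests" library; the log URL is
hypothetical):

    import base64, requests

    LOG_URL = "https://ct.example.com"  # hypothetical CT log

    def submit_chain(der_certs):
        # RFC 6962 section 4.1: POST the DER chain to /ct/v1/add-chain at
        # any point after issuance; the returned SCT can then be delivered
        # via OCSP stapling or the TLS handshake -- no reissuance needed.
        body = {"chain": [base64.b64encode(c).decode() for c in der_certs]}
        r = requests.post(LOG_URL + "/ct/v1/add-chain", json=body, timeout=10)
        r.raise_for_status()
        return r.json()  # sct_version, id, timestamp, extensions, signature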

>  - By default, Symantec shall issue certificates with embedded SCTs
> (soft-fail for failure to validate SCT information)

Given that Chrome is requiring CT for all Symantec certificates, one
could argue there's minimal value in Mozilla coming up with its own
CT-related requirements, particularly as Mozilla has not (yet?) deployed
SCT checking in Firefox.

Gerv


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Gervase Markham via dev-security-policy
On 19/05/17 20:40, Matthew Hardeman wrote:
> Not speaking as to the status quo, but rather in terms of
> updates/changes which might be considered for incorporation into
> policy would be to recognize the benefit of name constrained
> intermediates and allow a reduction in burden to entities holding and
> utilizing name constrained intermediates, both in SSL Server
> Authentication, and Email Protection.  (Probably also allow that OCSP
> signing, client authentication, certain encrypted storage extended
> key usages, etc, be allowed).

This is certainly a question worth considering. I think a careful
comparative risk analysis is in order, and so thank you for starting
that process.

The issue with excluding any certificate or group of certificates from
the entire scope of the policy is that the issuer would then be free to
issue SHA-1 certs, certs with bad or unpermitted algorithms, and so on.
Are you suggesting that EE certs issued from such an intermediate be
entirely unregulated, or that we should strip down the regulation to
merely technical requirements, ignoring requirements on audit, CP/CPS,
revocation etc.?

>> From a perspective of risk to the broader web PKI, it would appear
>> that a properly name constrained intermediate with (for example)
>> only the Server and Client TLS authentication ekus with name
>> constraints limited to particular validated domains (via dnsName
>> constraint along with excluding wildcard IP/netmask for IPv4 and
>> IPv6)  is really no substantively more risky than a multi-SAN
>> wildcard certificate with the same domains.

I currently agree this is broadly true, with the exception of the
lifetime issue which you raise in a later message.

There would be little point in such a TCSC having a max lifetime equal
to the max lifetime of an EE cert, because then after day 1, the EE
certs it issues couldn't have the max lifetime (because the EE cert
can't last longer than the intermediate!). So perhaps max lifetime of EE
+ 1 year, so the issuing TCSC needs to be replaced once a year in order
for the organization to continue issuing max length certs?

The sub-subdomain issue is also a difference, but my current view is
that it doesn't have much of an effect on the risk profile in practice.
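
For concreteness, a sketch of the name-constrained profile quoted above
(assuming Python's "cryptography" library; example.com is a placeholder):

    import ipaddress
    from cryptography import x509

    # Permit only the validated domain tree; exclude the whole IPv4 and
    # IPv6 space so no iPAddress SANs can ever chain through the TCSC.
    name_constraints = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("example.com")],
        excluded_subtrees=[
            x509.IPAddress(ipaddress.IPv4Network("0.0.0.0/0")),
            x509.IPAddress(ipaddress.IPv6Network("::/0")),
        ],
    )
    # A CA would add this as a critical extension, alongside EKUs limited
    # to serverAuth / clientAuth, when building the TCSC:
    # builder.add_extension(name_constraints, critical=True)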

> As to disclosure of these name constrained intermediates, I should
> think that if they became popular, even among largish enterprises,
> there might arise quite a lot of such intermediates.  Perhaps rather
> than in CCADB, these name constrained intermediates should be
> required as a matter of policy to be submitted to CT logs (to an
> acceptable number of logs, with an acceptable number of those under
> separate administrative control).

If we exempt the certs they issue from CP/CPS and audit requirements,
the need for such TCSCs to be disclosed in CCADB is much reduced.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Clarify requirement for multi-factor auth

2017-05-22 Thread Gervase Markham via dev-security-policy
On 19/05/17 15:52, Carl Mehner wrote:
> Should we specify somewhere that multi-factor auth encompasses two 
> _different_ factors and not simply multiple authenticators?

I appreciate your desire to cover all the angles, but I think the
standard definition of the term encompasses this.

I think that if there was a problem, and a CA said "we have multi-factor
authentication - two passwords", the resulting hilarity and shame would
be... extensive. And recall, Mozilla has full discretion over who we
include, so rules-lawyering is ineffective and, in fact, counter-productive.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy