Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Tue, May 23, 2017 at 3:45 PM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tuesday, May 23, 2017 at 12:39:05 PM UTC-5, Ryan Sleevi wrote:
>
> > Setting aside even the 'damage' aspect, consider the ecosystem impact.
> > Assume a wild west - we would not have been able to effectively curtail
> and
> > sunset SHA-1. We would not have been able to deploy and require
> Certificate
> > Transparency. We would not have been able to raise the minimum RSA key
> > size. That's because all of these things, at the time the efforts began,
> > were occurring at rates high enough to cause breakage. Even with the
> Baseline
> > Requirements, even with ample communications and PR blitzes, these
> changes
> > still were razor thin in terms of the breakages vendors would be willing
> to
> > tolerate. Microsoft and Apple, for example, weren't able to tolerate the
> > initial SHA-1 pain, and relied on Chrome and Firefox to push the
> ecosystem
> > forward in that respect.
> >
>
> I don't disagree with the ecosystem impact concept to which you have
> referred.  Where I diverge is in my belief that we already do have a wild
> west situation.  There are LOTS of Root CA members and lots of actual roots
> and way way more unconstrained intermediates.  So many that SHA-1 was
> already a nightmare to deprecate and move forward on.
>
> As a brief aside, let's talk about SHA-1 migration and the lessons that
> should have been learned earlier and how they weren't and how I don't see
> anything to suggest that it will be better next time, regardless of whether
> my humble proposal even got consideration -- much less that someone should
> take up the torch and carry it to adoption.  History already provided a
> great example of urgent need for deprecation of a hash algorithm in the Web
> PKI.  The MD5 deprecation.  Not having been a participant other than as an
> end-enterprise in either of these slow-moving processes, I cannot say for
> certain...  but...  A few Google searches don't make me believe that the
> SHA-1 migration was any smoother or more efficient than the MD5 migration.
> As I read, it appears to be arguable that the SHA-1 migration to SHA-256
> was even slower and messier.


I don't think that is a reasonable conclusion. The MD5 transition took five
years from active exploit; SHA-1 was dead within the same week as the
shattered.io work. Far more middleboxes were prepared for the transition -
and browsers had much smoother transitions.

Was it ideal? No.
Was it significantly better? Yes. In part because of the BRs banning
issuance.

>
> The point I come around to is that in most ecosystems, there's a
> "criticality" of size at which everything gets harder to coordinate
> changes.  In many such ecosystems, once you cross that boundary, increased
> size of that ecosystem and number of unique participants has a diminishing
> effect on the overall difficulty of coordinating changes.
>
> What rational basis makes you believe that the next hash algorithm
> migration will be better than this most recent one?


See above. The CA/Browser Forum continues to discuss the lessons learned,
but it's certainly gotten better.

But more importantly - there are plenty of incremental changes - like CT -
that don't require wholesale replacements. For the next five years, I'm
particularly concerned with improving OCSP Stapling and CT support - and
those certainly don't suffer (from the CA side) from the limits you describe.

> The way I see it, absent some incredible new mitigating circumstances, the
> next time a rotation to a new hash algorithm is needed, the corpus of Root
> CA participants and Root CA Certificates / Issuance systems will be larger
> than it was this time.  It seems to get larger all the time, as a trend.


I disagree. I believe we're getting better, in time.

> At this point, I feel I should back away.  I feel I've made a fairly
> compelling case (at least, I shall say, the best case for it that I could
> make) for the limited impact that the specific changes as to Mozilla policy
> pertaining to audit & disclosure for TCSCs compliant to certain guidelines
> would have.  I also accept that this isn't really the place to lobby for
> baseline requirements changes.  A CA will have to carry that torch, if any
> are interested.


Oh, I would say this is absolutely the place (although perhaps in a forked
thread) for that discussion. The baselines are reflective of what browser
baselines are, and if you want to change browser baselines, there is no
greater place for that public discussion than Mozilla.

To be clear: I'm critical of the goal in large part because I used to argue
the same position you're now arguing, with many of the same arguments. The
experiences in enacting meaningful change, and the challenges therein, as
well as lots of time spent contemplating the economic incentives for the
various ecosystem actors to support change, have me far more concerned
about the potential harm :)
___

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Matthew Hardeman via dev-security-policy
On Tuesday, May 23, 2017 at 12:39:05 PM UTC-5, Ryan Sleevi wrote:

> Setting aside even the 'damage' aspect, consider the ecosystem impact.
> Assume a wild west - we would not have been able to effectively curtail and
> sunset SHA-1. We would not have been able to deploy and require Certificate
> Transparency. We would not have been able to raise the minimum RSA key
> size. That's because all of these things, at the time the efforts began,
> were occurring at rates high enough to cause breakage. Even with the Baseline
> Requirements, even with ample communications and PR blitzes, these changes
> still were razor thin in terms of the breakages vendors would be willing to
> tolerate. Microsoft and Apple, for example, weren't able to tolerate the
> initial SHA-1 pain, and relied on Chrome and Firefox to push the ecosystem
> forward in that respect.
> 

I don't disagree with the ecosystem impact concept to which you have referred.  
Where I diverge is in my belief that we already do have a wild west situation.  
There are LOTS of Root CA members and lots of actual roots and way way more 
unconstrained intermediates.  So many that SHA-1 was already a nightmare to 
deprecate and move forward on.

As a brief aside, let's talk about SHA-1 migration and the lessons that should 
have been learned earlier and how they weren't and how I don't see anything to 
suggest that it will be better next time, regardless of whether my humble 
proposal even got consideration -- much less that someone should take up the 
torch and carry it to adoption.  History already provided a great example of 
urgent need for deprecation of a hash algorithm in the Web PKI.  The MD5 
deprecation.  Not having been a participant other than as an end-enterprise in 
either of these slow-moving processes, I cannot say for certain...  but...  A 
few Google searches don't make me believe that the SHA-1 migration was any 
smoother or more efficient than the MD5 migration.  As I read, it appears to be 
arguable that the SHA-1 migration to SHA-256 was even slower and messier.

The point I come around to is that in most ecosystems, there's a "criticality" 
of size at which everything gets harder to coordinate changes.  In many such 
ecosystems, once you cross that boundary, increased size of that ecosystem and 
number of unique participants has a diminishing effect on the overall 
difficulty of coordinating changes.

What rational basis makes you believe that the next hash algorithm migration 
will be better than this most recent one?

The way I see it, absent some incredible new mitigating circumstances, the next 
time a rotation to a new hash algorithm is needed, the corpus of Root CA 
participants and Root CA Certificates / Issuance systems will be larger than it 
was this time.  It seems to get larger all the time, as a trend.

My argument is: as the probability of a smooth transition asymptotically 
approaches 0, taking actions which push that probability still closer to 0 
carries an increasingly low practical cost, as we can just admit it's not 
going to be a smooth transition.

At this point, I feel I should back away.  I feel I've made a fairly compelling 
case (at least, I shall say, the best case for it that I could make) for the 
limited impact that the specific changes as to Mozilla policy pertaining to 
audit & disclosure for TCSCs compliant to certain guidelines would have.  I 
also accept that this isn't really the place to lobby for baseline requirements 
changes.  A CA will have to carry that torch, if any are interested.

I have very much enjoyed this dialogue and hope that I've contributed some 
useful thoughts to the discussion.

Thanks,

Matt
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Plan for Symantec posted

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 12:33 PM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 19/05/17 21:04, Kathleen Wilson wrote:
> > - I'm not sold on the idea of requiring Symantec to use third-party
> > CAs to perform validation/issuance on Symantec's behalf. The most
> > serious concerns that I have with Symantec's old PKI is with their
> > third-party subCAs and third-party RAs. I don't have particular
> > concern about Symantec doing the validation/issuance in-house. So, I
> > think it would be better/safer for Symantec to staff up to do the
> > validation/re-validation in-house rather than using third parties. If
> > the concern is about regaining trust, then add auditing to this.
>
> Of course, if we don't require something but Google do (or vice versa)
> then Symantec will need to do it anyway. But I will investigate in
> discussions whether some scheme like this might be acceptable to both
> the other two sides and might lead to a quicker migration timetable to
> the new PKI.
>

(Wearing a Google Hat)

This requirement is born directly out of Issues C, D and N, and indirectly
out of Issues B, F, L, P, Q, T, V, W, Y.

The appropriateness of validation controls depends on the policies and
procedures that are established by Management, the day to day execution of
this by Validation Specialists, and the technical controls and designs to
detect or prevent any human error from being introduced.

Understandably and obviously, domain validation represents a critical
function, and the evidence and disclosures have made it clear that domain
validation was not consistently followed, either from the system design or
by validation specialists.

Similarly, the indirect issues highlight issues with overall process design
and documentation - an issue explicitly called out in the remediation of
Issue D and subsequently Issue W - that raise concerns about the validation
processes.

To allow validation to continue as a Delegated Third Party, which is what
would be necessary to permit what was described, is to bring in all of the
issues raised with aspects of both oversight (now with respect to the
Managed CA overseeing Symantec’s validation operations) and execution, both
of which would create opportunity for new issues and incompletely
resolve existing issues.

Given the nature of the integration here, we do not believe it would
reasonably speed up any migration to allow what is proposed. That is, the
initial efforts with respect to establishing the Managed CA infrastructure
are orthogonal to the question of validation, and reflect API integrations
and business contracting. This is why, as part of our proposal, issuance
can proceed without forcing an immediate transition to revalidation.

However, by requiring revalidation, phased in over time, there is an
objective and quantifiable level of security improvement, reflected through
the independence of the operation and the robust technical controls - that
provides a clear and objective manner of re-establishing trust. These are
incredibly important concerns for Google, as we seek to ensure that
solutions to restore trust in CAs are appropriate for the nature of the
concerns and reusable, in the event another CA should have issue. This
represents the only way we have identified to reliably provide assurance
that the validation issues have been concretely resolved, and that the
policies fully reflect the Baseline Requirements both now and going
forward, with robust controls to ensure that.

We are, of course, interested in whether there are technical means to achieve
this same result - that validations are sufficiently documented in
policies, consistently executed on both a technical and procedural level,
and appropriately overseen through both technical and procedural controls -
in a manner that is both objective and transparent, thus reusable, and
which suitably meets the needs of the broader ecosystem. We welcome any
ideas that can establish this without relying solely on audits, which are
demonstrably insufficient, as evidenced by the issues with respect to
Delegated Third Parties, their operation, and their overall supervision.
___


Re: Google Plan for Symantec posted

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Sat, May 20, 2017 at 11:12 AM, Michael Casadevall via
dev-security-policy  wrote:

> On 05/19/2017 05:43 PM, Kurt Roeckx wrote:
> > From the mail about Chrome's plan, I understand that Chrome's plan
> > is to only allow certificates from the old PKI if they qualify for
> > their CT requirements. They plan to only allow certificates issued
> > after 2016-06-01 because that's the date when they required CT
> > from Symantec. It seems that Symantec can still issue new certificates
> > using the old PKI up to 2017-08-08 that are still valid for 3
> > years.
> >
> > I'm a little concerned that Firefox and Chrome will have different
> > certificates they don't trust, and would hope that you can come to
> > some agreement about when which one would get distrusted.
> >
>
> This was likely unavoidable due to the simple fact that the
> Google-Symantec discussions happened behind closed doors. Unless we can
> influence Google's final policy, then this is likely going to be the
> case no matter what.
>

(Wearing a Google Hat)
As noted in the blink-dev posting,

“While the plan is not final, we believe it is converging on one that
strikes a good balance of addressing security risk and mitigating
interruption. We still welcome any feedback about it, as prior feedback has
been valuable in helping shape this plan.”

>> - I'm not sold on the idea of requiring Symantec to use third-party CAs
> to perform validation/issuance on Symantec's behalf. The most serious
> concerns that I have with Symantec's old PKI is with their third-party
> subCAs and third-party RAs. I don't have particular concern about Symantec
> doing the validation/issuance in-house. So, I think it would be
> better/safer for Symantec to staff up to do the validation/re-validation
> in-house rather than using third parties. If the concern is about regaining
> trust, then add auditing to this.
> >
>
> The current proposal is more complicated than that since it talks about
> reusing part of the original validations and OIDs to control the max
> length of the certificate. I rather dislike that since its both complex,
> and introduces the trust issues from the old hierarchy into the new one
> which moots the point of spinning up a new root in the first place.
>

The Chrome plan outlined attempts to minimize disruption to site operators,
as disruptions to sites are reflected as disproportionate disruptions to
users, by virtue of seeing security errors. Both in discussions with
Symantec and within the broadly understood operation of the Web PKI, many
sites - particularly those that are engaged in automated issuance through
the use of APIs - routinely replace certificates. Introducing a blocking
step - the reverification of information - into obtaining a certificate,
can end up creating situations where certificates are expired and not
revalidated in a timely fashion.



While the long-term solution for this is to require the use of standardized
issuance APIs - such as the work on ACME being developed within the IETF
 - and to reduce both the
lifetime of certificates and the reuse of validation responses - so that
the difficulty in revalidating is greatly reduced, by virtue of it becoming
routine and thus automated as well - these solutions are not yet widely
deployed by site operators, and thus not reliable for these immediate
purposes.



The solution outlined attempts to find a technical solution to allow a
variety of relying party applications to make trust decisions appropriate
for their community, while also providing sufficient technical guidance,
both as a matter of policy and expressed in the certificate, that can allow
more robust controls.



For example, relying party applications could choose to fully trust the
existing certificate set. They could distrust those prior to 2016-06-01,
and simply implicitly rely on herd immunity by virtue of Chrome’s support
for CT. They could fully implement CT, and have more robust protections,
such as the ability to reject redacted certificates or require the use of
trusted CT logs (and not merely the presence of an SCT extension).



Similarly, they can simply accept all certificates from the new hierarchy.
They could accept certificates only up to the timelines proposed. They
could implement different timelines entirely - although, I note, if
products feel that need, we, the Chrome team, would be interested in
understanding this as part of our overall effort to find an interoperable
solution, if possible. For that matter, clients could decide that the risk
from previous domain validations and previous organizational validations
may be so large that they only accept certificates that have been fully
revalidated - and the proposal provides a means and method for them to
determine such certificates, in a way compatible with RFC 5280.


> So they should just create new root CAs and ask them to be
> > included in the root store?
> >
>
> Honestly, we got into this mess in 

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Tue, May 23, 2017 at 12:33 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I just think there's no need to concern themselves if someone quite clever
> (whatever that means) decides to ASN.1 encode a Trollface GIF and roll that
> into an EE cert subordinate to their corporate TCSC.  No need to report
> that as a BR violation.  No need for the sponsoring public CA to be
> concerned if they discover that upon audit, because I think there's no need
> for said audit.  Because anything that audit could have found could have
> been discovered by browser validation code, with the judgement rendered
> instantly and with proportionate consequence: (i.e. this is garbage, not a
> certificate, I'm going with the untrusted interstitial error).


I think it may be that you're looking at this issue as a per-site matter,
rather than an ecosystem issue.

I agree that, in theory, the most 'damage' you could do is to a single site
(although there are TCSCs with dozens or hundreds of domains). But from an
ecosystem perspective, it's incredibly damaging - the ability to reject
trollface GIFs used to exploit users, for example, is now no longer a
matter of contacting CAs / updating the BRs, but a coordinated change
across the entire ecosystem, and where turning off support can easily break
sites (and thus cause users more pain).

Even if we start with a maximally strict model in clients (which, for what
it's worth, RFC 5280 specifically advises against - and thankfully so,
otherwise something like CT could never have been deployed), as we change
the ecosystem, we'll need to deprecate things.

Consider this: There is nothing stopping a CA from making a "TCSC in a
box". I am quite certain that, as proposed, it would be far more economical
for CAs to spin up a TCSC for every one of their customers, and then allow
complete and total issuance from it. This is already on the border of
possibility in today's world, due to a loophole in intermediate key generation
ceremony text. By posting it here, I'm sure some enterprising CA will
realize this new opportunity :)

The mitigation, however, has been that it's not "wild west" of PKI (the
very thing the BRs set out to stop), and instead a constrained profile.

Setting aside even the 'damage' aspect, consider the ecosystem impact.
Assume a wild west - we would not have been able to effectively curtail and
sunset SHA-1. We would not have been able to deploy and require Certificate
Transparency. We would not have been able to raise the minimum RSA key
size. That's because all of these things, at the time the efforts began,
were occurring at rates high enough to cause breakage. Even with the Baseline
Requirements, even with ample communications and PR blitzes, these changes
still were razor thin in terms of the breakages vendors would be willing to
tolerate. Microsoft and Apple, for example, weren't able to tolerate the
initial SHA-1 pain, and relied on Chrome and Firefox to push the ecosystem
forward in that respect.

It's in this holistic picture we should be mindful of the risk of these
changes - the ability to make meaningful change, in a timely fashion, while
minimizing breakage. And while it's easy to say that "Oh, the site's wrong,
interstitial" - that just acculturates users to errors, inducing warning
fatigue and undermining the value of having errors at all. It also
undermines the security assurances of HTTPS itself - because now it's
harder to ensure it meets whatever minimum bar deemed necessary to ensure
users confidentiality, privacy, and integrity.
___


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Jakob Bohm via dev-security-policy

On 23/05/2017 18:18, Ryan Sleevi wrote:

On Tue, May 23, 2017 at 11:52 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Note as this is about a proposed future policy, this is about validation
code updated if and when such a policy is enacted.  Current validation
code has no reason to check a non-existent policy.



Mozilla strives, to the best extent possible, to be interoperable with other
vendors, and not introduce security risks that would affect others, nor
unduly require things that would inhibit others.

In this aspect, the proposal of TCSCs - and the rest of the radical changes
you propose - are incompatible with many other libraries.



NOT my proposal, I was trying to help out with technical details of
Matthew's proposal, that's all.


While it's true that Mozilla could change their code at any point, much
of the Web Platform's evolution - and in particular, TLS - has been
achieved through multi-vendor collaboration.



Which I repeatedly referred to, in my latest e-mail I phrased it as "If
and when" such a policy would be enacted.


This is why it's important, when making proposals, to not simply work on a
blank canvas and attempt to sketch something, but to be aware of the lines
in the ecosystem that exist and the opportunities for collaboration - and
the times in which it's important to "go it alone".



I fully agree with that, and wrote so.


What part of "Has DNS/e-mail name constraints to at least second-level

domains or TLDs longer than 3 chars", "Has DN name constraints that
limit at least O and C", "Has EKU limitations that exclude AnyEKU and
anything else problematic", "Has lifetime and other general constraints
within the limits of EE certs" AND "Has a CTS" cannot be detected
programmatically?



These are not things that can be reliably implemented across the ecosystem,
nor would they be reasonable costs to bear for the proposed benefits, no.



You seem keen to reject things out of hand, with no explanation.
Good luck convincing Matthew or others that way.




Or could this be solved by requiring such "TCSC light" SubCA certs to
carry a specific CAB/F policy OID with CT-based community enforcement
that all SubCA certs with this policy OID comply with the more stringent
non-computable requirements likely to be in such a policy (if passed)?



No.



I am trying to limit the scope of this to the kind of TCSC (Technically
Constrained SubCA) that Matthew was advocating for.  Thus none of this
applies to long lived or public SubCAs.

If an organization wants ongoing TCSC availability, they may subscribe
to getting a fresh TCSC halfway through the lifetime of the previous
one, to provide a constantly overlapping chain of SubCAs.



Except this doesn't meaningfully address the "day+1" issuance problem that
was highlighted, unless you proposed that the non-nesting constraints that
I mentioned aren't relevant.


The idea would be: TCSC issued for BR maximum period (N years plus M
months), fresh TCSC issued every M months, customer can always issue up
to at least N years.

I do realize the M months in the BRs are for another business purpose
related to renewal payments, but because TCSCs issue to non-paying
internal users, they don't need those months for the payment use case.





It would be more like a disclaimer telling their customers that if they
issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
won't work in a lot of browsers, please for your own protection, issue
only SHA-256 or stronger certs.  So the incentive for the issuing CA is
to minimize tech support calls and angry customers.

If the CA fails to inform their customers, the customer will get angry,
but the WebPKI will be unaffected.



And I'm trying to tell you that your model of the incentives is wrong, and
it does not work like that, as can be shown by every other real world
deprecation.

If they made the disclaimer, and yet still 30% of sites had these, browsers
would not turn it off. As such, the disclaimer would be pointless - the
incentive structure is such that browsers aren't going to start throwing
users under the bus.

When the browser makes the change, the issuing CA does not get the calls.
The site does not get the calls. The browser gets the anger. This is
because "most recent to change is first to blame" - and it was the browser,
not the CA, that made the most recent change.

This is how it has worked out for every change in the past. And while I
appreciate your optimism that it would work with TCSCs, there's nothing in
this proposal that would change that incentive structure, such as to ensure
that you don't have 30% of the Internet doing "Whatever thing will be
deprecated", and as a consequence, _it will not be deprecated_.



OK, that is a sad state of affairs, that someone will have to solve for
this to fly.



One could also add a requirement that certain occasional messages,

prewritten by the CAB/F shall be forwarded verbatim to all TCSC holder

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Matthew Hardeman via dev-security-policy
On Tuesday, May 23, 2017 at 10:53:03 AM UTC-5, Jakob Bohm wrote:

> 
> Or could this be solved by requiring such "TCSC light" SubCA certs to
> carry a specific CAB/F policy OID with CT-based community enforcement
> that all SubCA certs with this policy OID comply with the more stringent
> non-computable requirements likely to be in such a policy (if passed)?
> 

I wish to clarify a couple points of what I proposed.

With respect to the topic of this thread -- the certificate policy & disclosure 
scope at Mozilla, I have proposed that particular categories of intermediate 
certificate (name constrained subCAs with particular features) might be 
reasonably subjected to a lower burden, requiring no formal disclosure to 
Mozilla beyond that their existence and issuance be CT logged.  Also, I 
proposed that further subCAs and EEs issued descending from those constrained 
subCAs be regarded as entirely beyond the scope of the Mozilla Policy and 
disclosure.

I maintain that I've not seen presented a compelling technical reason that 
would suggest that such change to Mozilla policy would reduce security in the 
Web PKI if adopted.  If this is the case, reducing requirement for disclosure 
to CCADB and attendant audit statements, etc, for these TCSCs would seem to 
reduce work burden on Mozilla as well as the public CAs.

Quite separately, I would personally like to see some BR changes similarly in 
line with the above, but I am not positioned to make such a request, as I am 
not a CA.  Further, I acknowledge that this thread is probably not the 
appropriate forum for that particular case to be pleaded.

Having said all of that, I wish to make clear that I have not proposed that the 
technological burdens of certificate issuance by an entity utilizing a 
technically constrained subCA should be lightened in actual issuance practice:

Specifically, I am a supporter of Certificate Transparency.  I see no reason, 
for example, why an EE certificate issued subordinate to a TCSC should be 
exempted from Chrome's CT Policy, etc.  An enterprise PKI utilizing a TCSC 
could certainly submit the certificates they issue to CT logging.  Those same 
certificates do, in fact, chain to trusted roots.  I can think of no reason 
that a CT log would reject those submissions.

I wish to clarify that my position, namely that EE certificates issued 
subordinate to a name-constrained CA need be of no concern to Mozilla and the 
other programs from a monitoring perspective, relies upon the quite limited 
scope of effect the EE certificate can have after accounting for the 
regulations in the TCSC.

In short, I believe that the need to enforce audits, etc, over what an 
enterprise that has been issued a proper TCSC actually does with that TCSC is 
unnecessary, because anything they could do would be limited in scope to their 
own operations.  This includes issuing certificates which don't comply with CT 
logging, etc.  I fully believe the same standards of technical constraint 
applied to certificates of a public CA would also apply to trust in 
certificates issued subordinate to a TCSC.

I just think there's no need to concern themselves if someone quite clever 
(whatever that means) decides to ASN.1 encode a Trollface GIF and roll that 
into an EE cert subordinate to their corporate TCSC.  No need to report that as 
a BR violation.  No need for the sponsoring public CA to be concerned if they 
discover that upon audit, because I think there's no need for said audit.  
Because anything that audit could have found could have been discovered by 
browser validation code, with the judgement rendered instantly and with 
proportionate consequence: (i.e. this is garbage, not a certificate, I'm going 
with the untrusted interstitial error).
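Matthew's claim that browser validation code can render this judgement is at least partly borne out by off-the-shelf verifiers: Go's stdlib `crypto/x509`, for example, rejects any chain whose leaf names fall outside the intermediate's name constraints. A self-contained sketch (all names and the constrained domain are invented for illustration):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newConstrainedCA builds a self-signed CA limited to example.com,
// standing in for a TCSC. All values are illustrative.
func newConstrainedCA() (*x509.Certificate, *ecdsa.PrivateKey) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	tmpl := &x509.Certificate{
		SerialNumber:                big.NewInt(1),
		Subject:                     pkix.Name{CommonName: "Example Corp TCSC"},
		NotBefore:                   now,
		NotAfter:                    now.AddDate(2, 0, 0),
		IsCA:                        true,
		BasicConstraintsValid:       true,
		KeyUsage:                    x509.KeyUsageCertSign,
		PermittedDNSDomainsCritical: true,
		PermittedDNSDomains:         []string{"example.com"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	ca, _ := x509.ParseCertificate(der)
	return ca, key
}

// verifyLeaf issues a leaf for dns from the constrained CA, then asks the
// verifier to build a chain, returning any verification error.
func verifyLeaf(ca *x509.Certificate, caKey *ecdsa.PrivateKey, dns string) error {
	leafKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: dns},
		NotBefore:    now,
		NotAfter:     now.AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{dns},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	leaf, _ := x509.ParseCertificate(der)
	roots := x509.NewCertPool()
	roots.AddCert(ca)
	_, err = leaf.Verify(x509.VerifyOptions{Roots: roots})
	return err
}

func main() {
	ca, key := newConstrainedCA()
	fmt.Println("in scope:", verifyLeaf(ca, key, "www.example.com"))       // nil error
	fmt.Println("out of scope:", verifyLeaf(ca, key, "evil.test") != nil) // true
}
```

Note this covers only what the verifier can see at validation time; it says nothing about auditable behaviors that never reach a relying party, which is where the disagreement in this thread lies.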

> >> * If TCSCs are limited, by requirements on BR-compliant unconstrained
> >>   SubCAs, to lifetimes that are the BR maximum of N years + a few months
> >>   (e.g. 2 years + a few months for the latest CAB/F requirements), then
> >>   any new CAB/F requirements on the algorithms etc. in SubCAs will be
> >>   phased in as quickly as for EE certs.
> >>
> > 
> > I'm not sure what you're trying to say here, but the limits of lifetime to
> > EE certs are different than that of unconstrained subCAs (substantially)
> 
> I am trying to limit the scope of this to the kind of TCSC (Technically
> Constrained SubCA) that Matthew was advocating for.  Thus none of this
> applies to long lived or public SubCAs.
> 
> If an organization wants ongoing TCSC availability, they may subscribe
> to getting a fresh TCSC halfway through the lifetime of the previous
> one, to provide a constantly overlapping chain of SubCAs.
> 
> > 
> > 
> >> * If TCSCs cannot be renewed with the same public key, then TCSC issued
> >>   EEs are also subject to the same phase in deadlines as regular EEs.
> >>
> > 
> > Renewing with the same public key is a problematic practice that should be
> > stopped.
> > 
> 
> Some other people seem to disagree, however

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Tue, May 23, 2017 at 11:52 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Note as this is about a proposed future policy, this is about validation
> code updated if and when such a policy is enacted.  Current validation
> code has no reason to check a non-existent policy.
>

Mozilla strives, to the greatest extent possible, to be interoperable with
other vendors, and not to introduce security risks that would affect others,
nor unduly require things that would inhibit others.

In this aspect, the proposal of TCSCs - and the rest of the radical changes
you propose - are incompatible with many other libraries.

While it's true that Mozilla could change its code at any point, much
of the Web Platform's evolution - and in particular, TLS - has been
achieved through multi-vendor collaboration.

This is why it's important, when making proposals, to not simply work on a
blank canvas and attempt to sketch something, but to be aware of the lines
in the ecosystem that exist and the opportunities for collaboration - and
the times in which it's important to "go it alone".

What part of "Has DNS/e-mail name constraints to at least second-level
> domains or TLDs longer than 3 chars", "Has DN name constraints that
> limit at least O and C", "Has EKU limitations that exclude AnyEKU and
> anything else problematic", "Has lifetime and other general constraints
> within the limits of EE certs" AND "Has a CTS" cannot be detected
> programmatically?
>

These are not things that can be reliably implemented across the ecosystem,
nor would they be reasonable costs to bear for the proposed benefits, no.


> Or could this be solved by requiring such "TCSC light" SubCA certs to
> carry a specific CAB/F policy OID with CT-based community enforcement
> that all SubCA certs with this policy OID comply with the more stringent
> non-computable requirements likely to be in such a policy (if passed)?
>

No.


> I am trying to limit the scope of this to the kind of TCSC (Technically
> Constrained SubCA) that Matthew was advocating for.  Thus none of this
> applies to long lived or public SubCAs.
>
> If an organization wants ongoing TCSC availability, they may subscribe
> to getting a fresh TCSC halfway through the lifetime of the previous
> one, to provide a constantly overlapping chain of SubCAs.
>

Except this doesn't meaningfully address the "day+1" issuance problem that
was highlighted, unless you're proposing that the non-nesting constraints
I mentioned aren't relevant.


> It would be more like a disclaimer telling their customers that if they
> issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
> won't work in a lot of browsers, please for your own protection, issue
> only SHA-256 or stronger certs.  So the incentive for the issuing CA is
> to minimize tech support calls and angry customers.
>
> If the CA fails to inform their customers, the customer will get angry,
> but the WebPKI will be unaffected.


And I'm trying to tell you that your model of the incentives is wrong, and
it does not work like that, as can be shown by every other real world
deprecation.

If they made the disclaimer, and yet still 30% of sites had these, browsers
would not turn it off. As such, the disclaimer would be pointless - the
incentive structure is such that browsers aren't going to start throwing
users under the bus.

When the browser makes the change, the issuing CA does not get the calls.
The site does not get the calls. The browser gets the anger. This is
because "most recent to change is first to blame" - and it was the browser,
not the CA, that made the most recent change.

This is how it has worked out for every change in the past. And while I
appreciate your optimism that it would work with TCSCs, there's nothing in
this proposal that would change that incentive structure, such as to ensure
that you don't have 30% of the Internet doing "Whatever thing will be
deprecated", and as a consequence, _it will not be deprecated_.


> One could also add a requirement that certain occasional messages,
> prewritten by the CAB/F shall be forwarded verbatim to all TCSC holders.
> For example a notice about the SHA-1 deprecation (historic example).
>

The CA/Browser Forum did not produce such documentation, but we also have
ample evidence that the notices that were sent were disregarded, not forwarded
to the right people, went to people whose mailboxes were turned off (since it
was 3 years since they last got a cert), etc.

Again, I appreciate your optimism that it would work, but I'm speaking from
experience and evidence to say it does not. That's the core of the problem
here - TCSCs being 'unrestricted' mean that the existing problems in making
evolutionary changes amplify, the number of parties to update grows, and
the ability to make change significantly slows.

It may be that unrestricted TCSCs are 'so amazing' that they justify this
cost to the ecosystem. If that's the case, it's a far more productive
avenue to discus

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Jakob Bohm via dev-security-policy

On 23/05/2017 16:22, Ryan Sleevi wrote:

On Tue, May 23, 2017 at 9:45 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


* TCSCs can, by their existing definition, be programmatically
  recognized by certificate validation code e.g. in browsers and other
  clients.



In theory, true.
In practice, not even close.




Note as this is about a proposed future policy, this is about validation
code updated if and when such a policy is enacted.  Current validation
code has no reason to check a non-existent policy.

What part of "Has DNS/e-mail name constraints to at least second-level
domains or TLDs longer than 3 chars", "Has DN name constraints that
limit at least O and C", "Has EKU limitations that exclude AnyEKU and
anything else problematic", "Has lifetime and other general constraints
within the limits of EE certs" AND "Has a CTS" cannot be detected
programmatically?
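Most of those checks are indeed mechanical. As a rough sketch of the kind of logic involved (the dict layout and field names here are illustrative stand-ins for a real X.509 parser's output, not any actual library's API):

```python
# Sketch: does an intermediate look like a "TCSC light" under the
# criteria above? `cert` is a stand-in dict for a parsed certificate.

ANY_EKU = "2.5.29.37.0"            # anyExtendedKeyUsage OID
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"  # id-kp-serverAuth OID

def looks_technically_constrained(cert: dict) -> bool:
    nc = cert.get("name_constraints")
    if not nc or not nc.get("permitted_dns"):
        return False
    for name in nc["permitted_dns"]:
        labels = name.lstrip(".").split(".")
        # Reject constraints broader than a second-level domain,
        # unless the TLD itself is longer than 3 characters.
        if len(labels) < 2 and len(labels[-1]) <= 3:
            return False
    eku = cert.get("eku")
    # EKU must be present and must not include anyExtendedKeyUsage.
    if not eku or ANY_EKU in eku:
        return False
    return True

# Constrained to example.com, serverAuth only: passes.
print(looks_technically_constrained(
    {"name_constraints": {"permitted_dns": [".example.com"]},
     "eku": [SERVER_AUTH]}))  # True
# No name constraints at all: fails.
print(looks_technically_constrained({"eku": [SERVER_AUTH]}))  # False
```

The lifetime, DN-constraint, and CT checks omitted here are similarly mechanical; the hard part (as the thread goes on to discuss) is deployment across validators, not the checks themselves.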

Or could this be solved by requiring such "TCSC light" SubCA certs to
carry a specific CAB/F policy OID with CT-based community enforcement
that all SubCA certs with this policy OID comply with the more stringent
non-computable requirements likely to be in such a policy (if passed)?


* If TCSCs are limited, by requirements on BR-compliant unconstrained
  SubCAs, to lifetimes that are the BR maximum of N years + a few months
  (e.g. 2 years + a few months for the latest CAB/F requirements), then
  any new CAB/F requirements on the algorithms etc. in SubCAs will be
  phased in as quickly as for EE certs.



I'm not sure what you're trying to say here, but the limits of lifetime to
EE certs are different than that of unconstrained subCAs (substantially)


I am trying to limit the scope of this to the kind of TCSC (Technically
Constrained SubCA) that Matthew was advocating for.  Thus none of this
applies to long lived or public SubCAs.

If an organization wants ongoing TCSC availability, they may subscribe
to getting a fresh TCSC halfway through the lifetime of the previous
one, to provide a constantly overlapping chain of SubCAs.





* If TCSCs cannot be renewed with the same public key, then TCSC issued
  EEs are also subject to the same phase in deadlines as regular EEs.



Renewing with the same public key is a problematic practice that should be
stopped.



Some other people seem to disagree; however, in this case I am
constraining the discussion to a specific case where this would be
forbidden (And enforced via CT logging of the TCSC certs).  Thus no
debate on that particular issue.
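Purely as an illustration of how that "no renewal with the same key" condition could be checked mechanically (e.g. by a monitor watching CT-logged TCSCs) — the byte strings below are stand-ins, not real SPKI DER:

```python
import hashlib

def same_key_renewal(old_spki_der: bytes, new_spki_der: bytes) -> bool:
    """True if two certs share a SubjectPublicKeyInfo, i.e. the newer
    TCSC is just a renewal of the older one with the same key."""
    return (hashlib.sha256(old_spki_der).digest()
            == hashlib.sha256(new_spki_der).digest())

# Stand-in byte strings; real code would extract the SPKI from each cert.
print(same_key_renewal(b"spki-2017", b"spki-2017"))  # True  (forbidden)
print(same_key_renewal(b"spki-2017", b"spki-2019"))  # False (fresh key)
```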




* When issuing new/replacement TCSCs, CA operators should (by policy) be
  required to inform the prospective TCSC holders which options in EE
  certs (such as key strengths) will not be accepted by relying parties
  after certain phase-out dates during the TCSC lifetime.  It would then
  be foolish (and of little consequence to the WebPKI as a whole) if any
  TCSC holders ignore those restrictions.



This seems to be operating on an ideal world theory, not a real world
incentives theory.

First, there's the obvious problem that "required to inform" is
fundamentally problematic, and has been pointed out to you by Gerv in the
past. CAs were required to inform for a variety of things - but that
doesn't change market incentives. For that matter, "required to inform" can
be met by white text on a white background, or a box that clicks through,
or a default-checked "opt-in to future communications" requirement. The
history of human-computer interaction (and the gamification of regulatory
action) shows this is a toothless and not meaningful action.

I understand your intent is for this to be like the "Surgeon General's
Warning" on cigarettes (in the US), or the more substantive warnings in other
countries, and while that is well-intentioned as a deterrent - and works for
some cases - it otherwise ignores the public health risk, or tries to sweep it
under the rug under the auspices of "doing something".



It would be more like a disclaimer telling their customers that if they
issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
won't work in a lot of browsers, please for your own protection, issue
only SHA-256 or stronger certs.  So the incentive for the issuing CA is
to minimize tech support calls and angry customers.

If the CA fails to inform their customers, the customer will get angry,
but the WebPKI will be unaffected.


Similarly, the market incentives are such that the warning will ultimately
be ineffective for some segment of the population. Chrome's own warnings
with SHA-1 - warnings that CAs felt were unquestionably 'too much' - still
showed how many sites were ill-prepared for the SHA-1 breakage (read: many).

Warnings feel good, but they don't do (enough) good. So the calculus comes
down to those making the decision - Gerv and Kathleen on behalf of Mozilla,
or folks like Andrew and I on behalf of Google - of whether or not to
'break' sites that worked yesterday, and which won't work tomorrow. When
that breakag

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Tue, May 23, 2017 at 9:45 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> * TCSCs can, by their existing definition, be programmatically
>  recognized by certificate validation code e.g. in browsers and other
>  clients.
>

In theory, true.
In practice, not even close.


> * If TCSCs are limited, by requirements on BR-compliant unconstrained
>  SubCAs, to lifetimes that are the BR maximum of N years + a few months
>  (e.g. 2 years + a few months for the latest CAB/F requirements), then
>  any new CAB/F requirements on the algorithms etc. in SubCAs will be
>  phased in as quickly as for EE certs.
>

I'm not sure what you're trying to say here, but the limits of lifetime to
EE certs are different than that of unconstrained subCAs (substantially)


> * If TCSCs cannot be renewed with the same public key, then TCSC issued
>  EEs are also subject to the same phase in deadlines as regular EEs.
>

Renewing with the same public key is a problematic practice that should be
stopped.


> * When issuing new/replacement TCSCs, CA operators should (by policy) be
>  required to inform the prospective TCSC holders which options in EE
>  certs (such as key strengths) will not be accepted by relying parties
>  after certain phase-out dates during the TCSC lifetime.  It would then
>  be foolish (and of little consequence to the WebPKI as a whole) if any
>  TCSC holders ignore those restrictions.
>

This seems to be operating on an ideal world theory, not a real world
incentives theory.

First, there's the obvious problem that "required to inform" is
fundamentally problematic, and has been pointed out to you by Gerv in the
past. CAs were required to inform for a variety of things - but that
doesn't change market incentives. For that matter, "required to inform" can
be met by white text on a white background, or a box that clicks through,
or a default-checked "opt-in to future communications" requirement. The
history of human-computer interaction (and the gamification of regulatory
action) shows this is a toothless and not meaningful action.

I understand your intent is for this to be like the "Surgeon General's
Warning" on cigarettes (in the US), or the more substantive warnings in other
countries, and while that is well-intentioned as a deterrent - and works for
some cases - it otherwise ignores the public health risk, or tries to sweep it
under the rug under the auspices of "doing something".

Similarly, the market incentives are such that the warning will ultimately
be ineffective for some segment of the population. Chrome's own warnings
with SHA-1 - warnings that CAs felt were unquestionably 'too much' - still
showed how many sites were ill-prepared for the SHA-1 breakage (read: many).

Warnings feel good, but they don't do (enough) good. So the calculus comes
down to those making the decision - Gerv and Kathleen on behalf of Mozilla,
or folks like Andrew and I on behalf of Google - of whether or not to
'break' sites that worked yesterday, and which won't work tomorrow. When
that breakage is low, it can fit within the acceptable tolerances -
https://www.chromium.org/blink/removing-features and
https://www.chromium.org/blink try to spell out how we do this in the Web
Platform - but too large, and it becomes a game of chicken.

So even though you say "it would be foolish," every bit of history suggests
it will be done. And since we know this, we also have to consider what the
impact will be afterwards. No browser manufacturer - or its employees, more
specifically - wakes up each morning saying "Gee, I wonder what I can break
today!", and so we shouldn't trivialize the significant risk that breaking a
ton of sites would impose.


> * With respect to initiatives such as CT-logging, properly written
>  certificate validation code should simply not impose this below TCSCs.
>

"properly written"? What makes it properly written? It just means what you
want as the new policy.


> With the above and similar measures (mostly) already in place, I see no
> good reason to subject TCSCs to any of the administrative burdens
> imposed on public SubCAs.


While I hope I've laid them out for you in a way that can convince you, I
also suspect that the substance will be disregarded because of the source.
That said, the risk of breaking something is not done lightly, and while
you may feel it's the site operator's fault - and perhaps even rightfully so
- the cost is not borne by the site operator (even when users can't get to
their site!) or the CA (who didn't warn "hard enough"), but by the user.
And systems that externalize cost onto the end user are not good systems.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Email sub-CAs

2017-05-23 Thread Gervase Markham via dev-security-policy
Hi Doug,

On 18/05/17 12:03, Doug Beattie wrote:
> I'm still looking for audit guidance on subordinate CAs that have EKU
> of Server auth and/or Secure Mail along with name constraints.  Do
> these need to be audited?
> 
> I'm looking at this:
> https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md
> 
> Section 1.1, item #2 implies yes, that these CAs are in scope of this
> policy and thus must be audited - correct me if I'm wrong if being in
> the policy means they need to be audited.

Being in scope of the policy means that you need to read the rest of the
policy as applicable. It doesn't necessarily mean they need to be
audited - whether they do or not depends on what the Audit section says
about what needs to be audited. If these certs weren't in the scope of
the policy, then whatever the Audit section said would be irrelevant.

> Section 5.3.1 and 5.3.2 imply no audit is needed

At the moment, if a server-auth intermediate is properly
name-constrained according to the BRs, it's a TCSC and does not require
an audit. As you know, there's a bug in the latest version of the policy
regarding email intermediates, but the intent is that if an email
intermediate is properly rfc822Name-constrained, with the constrained
domains validated as owned by your customer, it also doesn't require an
audit; otherwise it does.

> Prior versions of the policy (at least 1.3 and before), did not
> require audits for technically constrained CAs like the ones
> referenced above.  Further, it used to be OK if the "Name
> Constraints" applied for Secure Mail CAs was done via contractual
> methods, vs. in the CA certificate as a technical NC.  We have one
> remaining customer with a CA like this and we're not sure on how new
> policy requirements apply to this existing customer.  Your guidance
> is appreciated.

Contractual constraints are not considered sufficient under the current
version of the policy.

Gerv


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Jakob Bohm via dev-security-policy

On 22/05/2017 19:41, Matthew Hardeman wrote:

On Monday, May 22, 2017 at 11:50:59 AM UTC-5, Gervase Markham wrote:


So your proposal is that technical requirements should be enforced
in-product rather than in-policy, and so effectively there's no need for
policy for the EE certs under a TCSC.

This is not an unreasonable position.



That is a correct assessment of my position.  If we are able to unambiguously 
enforce a policy matter by technological means -- and most especially where 
such technological means already exist and are deployed -- then we should be 
able to rely upon those technological constraints to relieve the administrative 
burden of auditing and enforcing compliance through business process.


How do the various validation routines in the field today validate a
scenario in which a leaf certificate's validity period exceeds a
validity period constraint upon the chosen trust path?  Is the
certificate treated as trusted, but only to the extent that the
present time is within the most restrictive view of the validity
period in the chain, or is the certificate treated as invalid
regardless for failure to fully conform to the technical policy
constraints promulgated by the chain?


Good question. I think the former, but Ryan Sleevi might have more info,
because I seem to remember him discussing this scenario and its compat
constraints recently.

Either way, it's a bad idea, because the net effect is that your cert
suddenly stops working before the end date in it, and so you are likely
to be caught short.
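If validators do take the former approach, the effective window is simply the intersection of the validity periods along the chain. A minimal sketch (dates invented for illustration):

```python
from datetime import datetime

def effective_validity(chain):
    """Given (not_before, not_after) pairs for every cert in a chain,
    return the window in which the whole chain validates: the latest
    not_before and the earliest not_after."""
    return (max(nb for nb, _ in chain), min(na for _, na in chain))

chain = [
    (datetime(2017, 1, 1), datetime(2019, 1, 1)),  # TCSC
    (datetime(2018, 6, 1), datetime(2020, 6, 1)),  # EE outliving the TCSC
]
start, end = effective_validity(chain)
# The EE only works until the TCSC expires -- exactly the "caught
# short" failure mode described above.
print(start.date(), end.date())  # 2018-06-01 2019-01-01
```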


Here I would concur that it would be bad practice for precisely the reason you 
indicate.  I was mostly academically interested in the specifics of that topic. 
 I would agree that extending the certificate lifecycle to some period beyond 
the max EE validity period would alleviate the need.  Having said that, I can 
still envision workable scenarios and value cases for such technically 
constrained CA certificates even if it were deemed unacceptable to extend their 
validity period.




I submit, then, that the real questions become further analysis and
feedback of the risk(s) followed by specification and guidance on
what specific constraints would form up the certificate profile which
would have the reduced CP/CPS, audit, and disclosure burdens.  As a
further exercise, it seems likely that to truly create a market in
which an offering of this nature from CAs would grow in prevalence,
someone would need to carry the torch to see such guidance (or at
least the relevant portions) make way into the baseline requirements
and other root programs.  Is that a reasonable assessment?


Well, it wouldn't necessarily need to make its way into other places.
Although that's always nice.


Agreed.  I primarily mentioned the other rules, etc., because it occurs to me that 
the standards for what might qualify for preferential / different treatment 
of technically constrained subCAs with respect to disclosure might also neatly align with 
issuance policy, as in, for example, your separate thread titled 
"Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection"

The question of audit & disclosure requirements pertaining to technically constrained 
subCAs seems to be ripe for discussion.  I note that Doug Beattie sought 
clarification regarding this question on the matter of a name-constrained subCA with the 
emailProtection EKU only several days ago in the thread "Next CA Communication"



Maybe there is a simpler, less onerous way to sanely impose new CAB/F or 
other policy requirements on TCSC without having them operate as full 
fledged public CAs with related complexities.


How about this:

* TCSCs can, by their existing definition, be programmatically
 recognized by certificate validation code e.g. in browsers and other
 clients.

* If TCSCs are limited, by requirements on BR-compliant unconstrained
 SubCAs, to lifetimes that are the BR maximum of N years + a few months
 (e.g. 2 years + a few months for the latest CAB/F requirements), then
 any new CAB/F requirements on the algorithms etc. in SubCAs will be
 phased in as quickly as for EE certs.

* If TCSCs cannot be renewed with the same public key, then TCSC issued
 EEs are also subject to the same phase in deadlines as regular EEs.

* When issuing new/replacement TCSCs, CA operators should (by policy) be
 required to inform the prospective TCSC holders which options in EE
 certs (such as key strengths) will not be accepted by relying parties
 after certain phase-out dates during the TCSC lifetime.  It would then
 be foolish (and of little consequence to the WebPKI as a whole) if any
 TCSC holders ignore those restrictions.

* With respect to initiatives such as CT-logging, properly written
 certificate validation code should simply not impose this below TCSCs.

With the above and similar measures (mostly) already in place, I see no
good reason to subject TCSCs to any of the administrative 

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Ryan Sleevi via dev-security-policy
On Mon, May 22, 2017 at 9:34 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I even concede that that alone does create a potential for compatibility
> issues should a need arise to make a global web pki-wide change to
> certificate issuance (say, for example, sudden need to deprecate SHA-256
> signatures in favor of NGHA-1 [ostensibly the Next Great Hash Algorithm
> Instance #1]).  For mitigation of that matter, I firmly believe that any
> research and development in the area of improved techniques for demonizing
> server administrators would be most beneficial.
>

Note that global pki-wide changes are made regularly - as evidenced by the
CA/Browser Forum BRs :)


> If I understand correctly, then, the issue is that you wish to minimize
> the growth of distinct issuing systems wherever they may occur in the PKI
> hierarchy, not TCSCs in particular.
>

That's perhaps asserting an intent, which is more than I said. I simply
highlighted that security is significantly improved by limiting the
number of distinct issuing systems, and would be harmed by the introduction
of TCSCs that bring new and distinct issuing infrastructures.

I would not seek to limit - simply, to highlight where the existing
controls serve a significant and valuable security purpose, and reducing
those controls would undermine that security.


> > But I'm responding in the context of the desired goal, and not simply
> > today's reality, since it is the goal that is far more concerning.
>
> If I understand correctly, your position is that full disclosure and
> indexing of the TCSCs is to be desired principally because the extra effort
> of doing so may discourage their prevalence in deployment?


My position is that disclosure and indexing of TCSCs, and the further
requirements that they be operated in accordance with the BRs (and simply
audited by the Issuing CA), serves a valuable security function, for which
any situation that seeks to remove those requirements should strive to
provide an equivalent security function.

Note that I'm being very careful in what I'm saying, for obvious
non-technical reasons, and hopefully it's clear that the existing
requirements serve an objective and measurable security function, and are
not there to limit growth.


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-23 Thread Gervase Markham via dev-security-policy
On 19/05/17 14:12, Gervase Markham wrote:
> Updated language:
> 
> "If the certificate includes the id-kp-emailProtection extended key
> usage, it MUST include the Name Constraints X.509v3 extension with
> constraints on rfc822Name, with at least one name in permittedSubtrees,
> each such name having its ownership validated according to section
> 3.2.2.4 of the BRs."

I've adopted this version, to solve the clear and present problem. If
constraints are also required on dirName (above and beyond the general
requirement that information in a cert must be accurate) then we can
consider that separately.

Gerv


Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

2017-05-23 Thread Gervase Markham via dev-security-policy
On 23/05/17 04:18, Peter Kurrasch wrote:
> I think the term "industry best practices" is too nebulous. For
> example, if I patch some of my systems but not all of them I could
> still make a claim that I am following best practices even though my
> network has plenty of other holes in it.

I'm not sure that "patching half my systems" would be generally accepted
as "industry best practice". But regardless, unless we are planning to
write our own network security document, which we aren't, can you
suggest more robust wording?

> I assume the desire is to hold CA's to account for the security of
> their networks and systems, is that correct? If so, I think we should
> have something with more meat to it. If not, the proposal as written
> is probably just fine (although, do you mean the CABF's "Network
> Security Requirements" spec or is there another guidelines doc?).

Yes, that's the doc I mean (for all its flaws).

> For consideration: ‎Mozilla can--and perhaps should--require that all
> CA's adopt and document a cybersecurity risk management framework for
> their networks and systems (perhaps this is already mandated
> somewhere?). I would expect that the best run CA's will already have
> something like this in place (or something better) but other CA's
> might not. There are pros and cons to such frameworks but at a
> minimum it can demonstrate that a particular CA has at least
> considered the cybersecurity risks that are endemic to their
> business.

If we are playing "too nebulous", I would point out that to meet this
requirement, I could just write my own (very lax) cybersecurity risk
management framework and then adopt it.

Any requirement which is only a few sentences is always going to be
technically gameable. I just want to write something which is not easily
gameable without failing the "laugh test".

Gerv


Re: Google Plan for Symantec posted

2017-05-23 Thread userwithuid via dev-security-policy
On Monday, May 22, 2017 at 4:46:16 PM UTC, Gervase Markham wrote:
> On 21/05/17 19:37, userwithuid wrote:
> > With the new proposal, the "minimal disruption" solution for Firefox
> > will require keeping the legacy stuff around for another 3.5-4 years
> > and better solutions will now be a lot harder to sell without the
> > leverage provided by Google.
> 
> Why so? In eight months' time, if Chrome is no longer trusting certs
> issued before 2016-06-01, why would it be a problem for Firefox to stop
> trusting them shortly thereafter?

A)

It wouldn't. Specifically, for all certs under current Symantec roots, to sync 
with Google we can check:

1. If the chain contains a whitelisted intermediate (= "Managed CA"), don't 
impose further notBefore restrictions
2. For all other intermediates, only trust certs with notBefore between 
2016-06-01 and 2017-08-08

This is a whole lot better than no restrictions and should definitely be done. 
(Also, Jakob explained this above as well, I'm just repeating it).
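Those two rules reduce to a single predicate. A sketch of that decision (the whitelist contents and intermediate names here are stand-ins; the dates are the ones given above):

```python
from datetime import date

# Stand-in whitelist; the real one enumerates the "Managed CA" intermediates.
MANAGED_CA_WHITELIST = {"managed-ca-example"}
WINDOW_START, WINDOW_END = date(2016, 6, 1), date(2017, 8, 8)

def symantec_cert_acceptable(intermediate_id: str, not_before: date) -> bool:
    # Rule 1: a whitelisted ("Managed CA") intermediate exempts the cert
    # from any further notBefore restriction.
    if intermediate_id in MANAGED_CA_WHITELIST:
        return True
    # Rule 2: otherwise, only trust certs issued inside the window.
    return WINDOW_START <= not_before <= WINDOW_END

print(symantec_cert_acceptable("managed-ca-example", date(2015, 1, 1)))  # True
print(symantec_cert_acceptable("legacy-ca", date(2015, 1, 1)))           # False
print(symantec_cert_acceptable("legacy-ca", date(2016, 12, 1)))          # True
```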

My point was about the fringe parts of Symantec's current PKI. In 8 months, 
Chrome will _actually_ eliminate all of those using both CT-enforcement and 
notBefore => the "old" PKI is now WYSIWYG and quite fresh from their point of 
view. No more unknowns or waiting for Symantec to draw their map and discover 
SSP2 while forgetting SSP3. :-P 

For Firefox, this doesn't hold true. Not all certs under Symantec roots issued 
since 2016-06-01 are or need to be CT-enabled, so that date doesn't really 
align with better issuance practices for everything Mozilla actually trusts.

At the end of the day it's gonna be fine of course: Most of the legacy will 
also be disabled in practice with the notBefore restrictions in Firefox. And 
anyway: What good is a cert if it isn't trusted by half of all web users? We 
can ride that Chrome market share wave. Still, Google's solution w.r.t. the 
old PKI is just technically superior.



B)

Anyway, I don't want to derail the discussion and get back to what to do with 
the intermediate PKI ("Managed CA" under old roots) and new PKI (new roots, 
2018?), that is also very important.

One suggestion on that: Let's define the date on which we remove the old roots 
from NSS and disallow the use of any "Managed CA" via policy so that the 
intermediate PKI can't transform into a de-facto long-term PKI. This is meant to 
force Symantec to focus their efforts on their own new roots. I don't think 
they should have that intermediate PKI as a comfy fallback when things with the 
new roots don't turn out that well or get delayed (they seem to have a tendency 
for delaying things...).

One such date would be 2021-06-01, after the last 39-month cert issued by the 
Managed CA on 2018-02-28 expires.  That allows the Managed CA to operate 
unrestricted (only BR-compliant) until 2019-02-28 at least, then enter a sunset 
period with shorter lifetimes. Lots of time to get the new roots fully 
operational.
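That date arithmetic checks out; a quick stdlib sketch (month addition clamped to month end — a simplification, since real validity is computed from exact notBefore/notAfter timestamps):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add months to a date, clamping to the last day of the target month."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    day = min(d.day, calendar.monthrange(y, m + 1)[1])
    return date(y, m + 1, day)

# The last possible 39-month cert from the Managed CA:
print(add_months(date(2018, 2, 28), 39))  # 2021-05-28, before 2021-06-01
# With 27-month certs instead, everything clears roughly a year earlier:
print(add_months(date(2018, 2, 28), 27))  # 2020-05-28
```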

We could shave off up to 7 months on both dates (purge and sunset) if we don't 
allow the Managed CA to issue 39mo certs in the first place, only 27. Mozilla's 
first proposal was 13 months for new certs, so 27 - which will become mandatory 
soon anyway - sounds quite reasonable to me.