Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
>  wrote:
> >
> > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via 
> > dev-security-policy wrote:
> > > I was informed yesterday that I would have to replace just over 300
> > > certificates in 5 days because my CA is required by rules from the CA/B
> > > forum to revoke its subCA certificate.
> >
> > The possibility of such an occurrence should have been made clear in the
> > subscriber agreement with your CA.  If not, I encourage you to have a frank
> > discussion with your CA.
> >
> > > In the CIA triad Availability is as important as Confidentiality.  Has
> > > anyone done a threat model and a serious risk analysis to determine what a
> > > reasonable risk mitigation strategy is?
> >
> > Did you do a threat model and a serious risk analysis before you chose to
> > use the WebPKI in your application?
> 
> I think it is important to keep in mind that many of the CA
> certificates that were identified are constrained to not issue TLS
> certificates.  The certificates they issue are explicitly excluded
> from the Mozilla CA program requirements.

Yes, I'm aware of that.

> I don't think it is reasonable to assert that everyone impacted by
> this should have been aware of the possibility of revocation

At the limits, I agree with you.  However, to whatever degree that there is
complaining to be done, it should be directed at the CA(s) which sold a
product that, it is now clear, was not fit for whatever purpose it has been
put to, and not at Mozilla.

> it is completely permissible under all browser programs to issue
> end-entity certificates with infinite duration that guarantee that they
> will never be revoked, even in the case of full key compromise, as long as
> the certificate does not assert a key purpose in the EKU that is covered
> under the policy.  The odd thing in this case is that the subCA
> certificate itself is the certificate in question.

And a sufficiently[1] thorough threat modelling and risk analysis exercise
would have identified the hazard of a subCA certificate that needed to be
revoked, assessed the probability of that hazard occurring, and either
accepted the risk (and thus have no reasonable cause for complaint now), or
would have controlled the risk until it was acceptable.

That there are people cropping up now demanding that Mozilla do a risk
analysis for them indicates that they themselves didn't do the necessary
risk analysis beforehand, which pegs my irony meter.

I wonder how these Masters of Information Security have "threat modelled"
the possibility that their chosen CA might get unceremoniously removed from
trust stores.  Show us yer risk register!

- Matt

[1] one might also substitute "impossibly" for "sufficiently" here; I've
done enough "risk analysis" to know that trying to enumerate all possible
threats is an absurd notion.  The point I'm trying to get across is
that someone asking Mozilla to do what they can't is not the iron-clad,
be-all-and-end-all argument that some appear to believe it is.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 10:42 PM Peter Bowen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As several others have indicated, WebPKI today is effectively a subset
> of the more generic shared PKI. It is beyond time to fork the WebPKI
> from the general PKI and strongly consider making WebPKI-only CAs that
> are subordinate to the broader PKI; these WebPKI-only CAs can be
> carried by default in public web browsers and operating systems, while
> the broader general PKI roots can be added locally (using centrally
> managed policies or local configuration) by those users who want a
> superset of the WebPKI.
>

+1.  This is the only outcome that, long term, balances the tradeoffs
appropriately.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Bowen via dev-security-policy
On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
 wrote:
>
> On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy 
> wrote:
> > I was informed yesterday that I would have to replace just over 300
> > certificates in 5 days because my CA is required by rules from the CA/B
> > forum to revoke its subCA certificate.
>
> The possibility of such an occurrence should have been made clear in the
> subscriber agreement with your CA.  If not, I encourage you to have a frank
> discussion with your CA.
>
> > In the CIA triad Availability is as important as Confidentiality.  Has
> > anyone done a threat model and a serious risk analysis to determine what a
> > reasonable risk mitigation strategy is?
>
> Did you do a threat model and a serious risk analysis before you chose to
> use the WebPKI in your application?

I think it is important to keep in mind that many of the CA
certificates that were identified are constrained to not issue TLS
certificates.  The certificates they issue are explicitly excluded
from the Mozilla CA program requirements.

The issue at hand is caused by a lack of standardization of the
meaning of the Extended Key Usage certificate extension when included
in a CA-certificate.  This has resulted in some software developers
taking certain EKUs in CA-certificates to act as a constraint (similar
to Name Constraints), some to take it as the purpose for which the
public key may be used, and some to simultaneously take both
approaches - using the former for id-kp-serverAuth key purpose and the
latter for the id-kp-OCSPSigning key purpose.
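
A minimal sketch of how one might check a CA certificate for that combination
(assuming Python with the third-party cryptography package; the file name is a
placeholder, not a reference to any particular CA's certificate):

    # Sketch: does this CA certificate assert id-kp-OCSPSigning in its EKU?
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, ExtendedKeyUsageOID

    with open("subca.pem", "rb") as f:          # placeholder file name
        cert = x509.load_pem_x509_certificate(f.read())

    try:
        is_ca = cert.extensions.get_extension_for_oid(
            ExtensionOID.BASIC_CONSTRAINTS).value.ca
    except x509.ExtensionNotFound:
        is_ca = False

    try:
        eku = list(cert.extensions.get_extension_for_oid(
            ExtensionOID.EXTENDED_KEY_USAGE).value)
    except x509.ExtensionNotFound:
        eku = []

    if is_ca and ExtendedKeyUsageOID.OCSP_SIGNING in eku:
        print("CA certificate asserts id-kp-OCSPSigning (delegated responder profile)")
    else:
        print("No id-kp-OCSPSigning key purpose on this CA certificate")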

I don't think it is reasonable to assert that everyone impacted by
this should have been aware of the possibility of revocation - it is
completely permissible under all browser programs to issue end-entity
certificates with infinite duration that guarantee that they will
never be revoked, even in the case of full key compromise, as long as
the certificate does not assert a key purpose in the EKU that is
covered under the policy.  The odd thing in this case is that the
subCA certificate itself is the certificate in question.

As several others have indicated, WebPKI today is effectively a subset
of the more generic shared PKI. It is beyond time to fork the WebPKI
from the general PKI and strongly consider making WebPKI-only CAs that
are subordinate to the broader PKI; these WebPKI-only CAs can be
carried by default in public web browsers and operating systems, while
the broader general PKI roots can be added locally (using centrally
managed policies or local configuration) by those users who want a
superset of the WebPKI.

Thanks,
Peter


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy 
wrote:
> I was informed yesterday that I would have to replace just over 300
> certificates in 5 days because my CA is required by rules from the CA/B
> forum to revoke its subCA certificate.

The possibility of such an occurrence should have been made clear in the
subscriber agreement with your CA.  If not, I encourage you to have a frank
discussion with your CA.

> In the CIA triad Availability is as important as Confidentiality.  Has
> anyone done a threat model and a serious risk analysis to determine what a
> reasonable risk mitigation strategy is?

Did you do a threat model and a serious risk analysis before you chose to
use the WebPKI in your application?

- Matt



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 12:51:32PM -0700, Mark Arnott via dev-security-policy 
wrote:
> I think that the lack of fairness comes from the fact that the CA/B forum
> only represents the viewpoints of two interests - the CAs and the Browser
> vendors.  Who represents the interests of industries and end users? 
> Nobody.

CAs claim that they represent what I assume you mean by "industries" (that
is, the entities to which WebPKI certificates are issued).  If you're
unhappy with the way in which your interests are being represented by your CA,
I would encourage you to speak with them.  Alternatively, anyone can become an
"Interested Party" within the CA/B Forum, which a brief perusal of the CA/B
Forum website will make clear.

- Matt



Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:41 PM Peter Gutmann 
wrote:

> Ryan Sleevi  writes:
>
> >And they are accommodated - by using something other than the Web PKI.
>
> That's the HTTP/2 "let them eat cake" response again.  For all intents and
> purposes, PKI *is* the Web PKI.  If it wasn't, people wouldn't be worrying
> about having to reissue/replace certificates that will never be used in a
> web
> context because of some Web PKI requirement that doesn't apply to them.
>

Thanks Peter, but I fail to see how you're making your point.

The problem that "PKI *is* the Web PKI" is the problem to be solved. That's
not a desirable outcome, and exactly the kind of thing we'd expect to see
as part of a CA transition.

PKI is a technology, much like HTTP/2 is a protocol. Unlike your example
of HTTP/2 not being considerate of SCADA devices, PKI is an abstract
technology fully capable of addressing SCADA needs. The only
distinction is that, by design and rather intentionally, it doesn't mean
that the billions of devices out there, in their default configuration, can
or should expect to talk to SCADA servers. I would hope you recognize why
that's undesirable, just like it would be if your phone were to ship with a
SCADA client. At the end of the day, this is something that should require
a degree of intentionality. Whether it's HL7 or SCADA, these are limited
use cases that aren't part of a generic and interoperable Web experience,
and it's not at all unreasonable to think they may require additional,
explicit configuration to support.


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>And they are accommodated - by using something other than the Web PKI.

That's the HTTP/2 "let them eat cake" response again.  For all intents and
purposes, PKI *is* the Web PKI.  If it wasn't, people wouldn't be worrying
about having to reissue/replace certificates that will never be used in a web
context because of some Web PKI requirement that doesn't apply to them.

Peter.





 


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:21 PM Peter Gutmann 
wrote:

> So the problem isn't "everyone should do what the Web PKI wants, no matter
> how
> inappropriate it is in their environment", it's "CAs (and protocol
> designers)
> need to acknowledge that something other than the web exists and
> accommodate
> it".


And they are accommodated - by using something other than the Web PKI.

Your examples of SCADA are apt: there's absolutely no reason to assume a
default phone device, for example, should be able to manage a SCADA device.
Of course we'd laugh at that and say "Oh god, who would do something that
stupid?"

Yet that's what happens when one tries to make a one-size-fits-all PKI.

Of course the PKI technologies accommodate these scenarios: you use locally
trusted anchors, specific to your environment, and hope that the OS vendor
does not remove support for your use case in a subsequent update. Yet it
would be grossly negligent if we allowed SCADA, in your example, to hold
back the evolution of the Web. As you yourself note, it's something other
than the Web. And it can use its own PKI.
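
As an illustration of that separation, a minimal sketch (assuming Python's
standard ssl module; the host name and CA bundle path are placeholders) of a
client that trusts only a locally configured anchor rather than the default
root store:

    # Sketch: trust only a locally managed anchor for a private-PKI endpoint.
    import socket
    import ssl

    # Passing cafile means the platform's default (Web PKI) roots are not loaded;
    # only chains terminating in the local anchor(s) will verify.
    ctx = ssl.create_default_context(cafile="private-pki-roots.pem")

    host = "scada.internal.example"   # placeholder host
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("Negotiated", tls.version(), "with", tls.getpeercert()["subject"])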


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Gutmann via dev-security-policy
Eric Mill via dev-security-policy  
writes:

>This is a clear, straightforward statement of perhaps the single biggest core
>issue that limits the agility and security of the Web PKI

That's not the biggest issue by a long shot.  The biggest issue is that the
public PKI (meaning public/commercial CAs, not sure what the best collective
noun for that is) assumes that the only possible use for certificates is the
web.  For all intents and purposes, public PKI = Web PKI.  For example for
embedded systems, SCADA devices, anything on an RFC 1918 LAN, and much more,
the only appropriate expiry date for a certificate is never.  However, since
the Web PKI has decided that certificates should constantly expire because
$reasons, everything that isn't the web has to deal with this, or more usually
suffer under it.

The same goes for protocols like HTTP and TLS: the current versions (HTTP/2, HTTP/3,
and TLS 1.3) are designed for efficient content delivery by large web service
providers above everything else.  When some SCADA folks requested a few minor
changes to the SCADA-hostile HTTP/2 from the WG, not mandatory but just
negotiable options to make it more usable in a SCADA environment, the response
was "let them eat HTTP/1.1".  In other words they'd explicitly forked HTTP,
there was HTTP/2 for the web and HTTP/1.1 for the rest of them.

So the problem isn't "everyone should do what the Web PKI wants, no matter how
inappropriate it is in their environment", it's "CAs (and protocol designers)
need to acknowledge that something other than the web exists and accommodate
it".

Peter.


RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Buschart, Rufus via dev-security-policy

From: Eric Mill 
Sent: Sonntag, 5. Juli 2020 00:55
To: Buschart, Rufus (SOP IT IN COR) 
Cc: mozilla-dev-security-policy 
; r...@sleevi.com; 
mark.arno...@gmail.com
Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous 
Delegated Responder Cert


On Sat, Jul 4, 2020 at 3:15 PM Buschart, Rufus via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
...especially since many of those millions of certificates are not even TLS 
certificates and their consumers never expected the hard revocation deadlines 
of the BRGs to be of any relevance for them. And therefore they didn't design 
their infrastructure to be able to do an automated mass-certificate exchange.

This is a clear, straightforward statement of perhaps the single biggest core 
issue that limits the agility and security of the Web PKI: certificate 
customers (particularly large enterprises) don't seem to actually expect they 
may have to revoke many certificates on short notice, despite it being 
extremely realistic that they may need to do so. We're many years into the Web 
PKI now, and there have been multiple mass-revocation events along the way. At 
some point, these expectations have to change and result in redesigns that 
match them.

[>] Maybe I wasn’t able to bring over my message: those 700k certificates that 
are hurting us most have never been “WebPKI” certificates. They are from 
technically constrained issuing CAs that are limited to S/MIME and client 
authentication. We are just ‘collateral damage’ from a compliance point of view 
(though of course not from a security point of view). In the upcoming BRGs for 
S/MIME I hope that the potential technical differences between TLS certificates 
(nearly all stored as P12 files on on-line servers) and S/MIME certificates 
(many of them stored off-line on smart cards or other tokens) will also be 
reflected in the revocation requirements. For WebPKI (aka TLS) certificates, we 
are getting better based on the lessons learned from the last mass exchanges.

It's extremely convenient and cost-effective for organizations to rely on the 
WebPKI for non-public-web needs, and given that the WebPKI is still 
(relatively) more agile than a lot of private PKIs, there will likely continue 
to be security advantages for organizations that do so. But if the security and 
agility needs of the WebPKI don't align with an organization's needs, using an 
alternate PKI is a reasonable solution that reduces friction on both sides of 
the equation.

[>] But with S/MIME we are also talking about “public needs”: it’s about the 
interchange of signed and encrypted emails between different entities that 
don’t share a private PKI.

--
Eric Mill
617-314-0966 | konklone.com | @konklone



With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens
www.siemens.com/ingenuityforlife
Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matthew Hardeman via dev-security-policy
Just chiming in as another subscriber and relying party, with a view to
speaking to the other subscribers on this topic.

To the extent that your use case is not specifically the WebPKI as pertains
to modern browsers, it was clear to me quite several years ago and gets
clearer every day: the WebPKI is not for you, us, or anyone outside that
very particular scope.

Want to pin server cert public keys in an app?  Have a separate TLS
endpoint for that with an industry or org specific private PKI behind it.

Make website endpoints that need to face broad swathes of public users’ web
browsers participate in the WebPKI.  Get client certs and API endpoints out
of it.

That was the takeaway I had quite some years ago and I’ve been saved much
grief for having moved that way.
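
For concreteness, a minimal sketch of computing the kind of SPKI pin mentioned
above (assuming Python with the third-party cryptography package; the file name
is a placeholder) - the base64 SHA-256 of the SubjectPublicKeyInfo that an app
would hold, and that becomes a liability the moment the pinned endpoint lives in
the WebPKI and must be re-keyed on short notice:

    # Sketch: compute an HPKP-style pin over the SubjectPublicKeyInfo.
    import base64
    import hashlib

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    with open("server.pem", "rb") as f:          # placeholder file name
        cert = x509.load_pem_x509_certificate(f.read())

    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    print("pin-sha256:", base64.b64encode(hashlib.sha256(spki).digest()).decode())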

On Saturday, July 4, 2020, Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Sat, Jul 4, 2020 at 5:32 PM Mark Arnott via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Why aren't we hearing more from the 14 CAs that this affects?  Correct me
> > if I am wrong, but the CA/B Forum has something like 23 members??  An issue
> > that affects 14 CAs indicates a problem with the way the forum collaborates
> > (or should I say 'fails to work together').  Maybe this incident should have
> > followed a responsible disclosure process and not been fully disclosed
> > right before holidays in several nations.
>
>
> This was something disclosed 6 months ago and 6 years ago. This is not
> something “new”. The disclosure here is precisely because CAs failed, when
> engaged privately, to understand both the compliance failure and the
> security risk.
>
> Unfortunately, debates about “responsible” disclosure have existed for as
> long as computer security has been an area of focus, and itself was a term
> that was introduced as a way of having the language favor the vendor, not the
> reporter. We have a security risk introduced by a compliance failure, which
> has been publicly known for some time, and which some CAs have dismissed as
> not an issue. Transparency is an essential part of bringing attention and
> understanding. This is, in effect, a “20-year day”. It’s not some new
> surprise.
>
> Even if disclosed privately, the CAs would still be under the same 7 day
> timeline. The mere act of disclosure triggers this obligation, whether
> private or public. That’s what the BRs obligate CAs to do.
>
>
> > Thank you for explaining that.  We need to hear the official position
> from
> > Google.  Ryan Hurst are you out there?
>
>
> Just to be clear: Ryan Hurst does not represent Google/Chrome’s decisions
> on certificates. He represents the CA, which is accountable to
> Google/Chrome just as it is to Mozilla/Firefox or Apple/Safari.
>
> In the past, and when speaking on behalf of Google/Chrome, it’s been
> repeatedly emphasized: Google/Chrome does not grant exceptions to the
> Baseline Requirements. In no uncertain terms, Google/Chrome does not give
> CAs blank checks to ignore violations of the Baseline Requirements.
>
> Ben’s message, while seeming somewhat self-contradictory in messaging,
> similarly reflects Mozilla’s long-standing position that it does not grant
> exceptions to the BRs. They treat violations as incidents, as Ben’s message
> emphasized, including the failure to revoke, and as Peter highlighted, both
> Google and Mozilla work through a public post-mortem process that seeks to
> understand the facts and nature of the CA’s violations and how the
> underlying systemic issues are being addressed. If a CA demonstrates poor
> judgement in handling these incidents, they may be distrusted, as some have
> in the past. However, CAs that demonstrate good judgement and demonstrate
> clear plans for improvement are given the opportunity to do so.
> Unfortunately, because some CAs think that the exact same plan should work
> for everyone, it’s necessary to repeatedly make it clear that there are no
> exceptions, and that each situation is case-by-case.
>
> This is not a Google/Mozilla issue either: as Mozilla reminds CAs at
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , delayed
> revocation issues affect everyone, and CAs need to directly communicate
> with all of the root programs that they have made representations to.
> WISeKey knows how to do this, but they also know what the expectation and
> response will be, which is aligned with the above.
>
> Some CAs have had a string of failures, and around matters like this, and
> thus know that they’re at risk of being seen as CAs that don’t take
> security seriously, which may lead to distrust. Other CAs recognize that
> security, while painful, is also a competitive advantage, and so look to be
> leaders in an industry of followers and do the right thing, especially when
> this leadership can help ensure greater flexibility if/when they do have an
> incident. Other CAs may be in uniquely difficult positions where 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Eric Mill via dev-security-policy
On Sat, Jul 4, 2020 at 3:15 PM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> ...especially since many of those millions of certificates are not even
> TLS certificates and their consumers never expected the hard revocation
> deadlines of the BRGs to be of any relevance for them. And therefore they
> didn't design their infrastructure to be able to do an automated
> mass-certificate exchange.
>

This is a clear, straightforward statement of perhaps the single biggest
core issue that limits the agility and security of the Web PKI: certificate
customers (particularly large enterprises) don't seem to actually expect
they may have to revoke many certificates on short notice, despite it being
extremely realistic that they may need to do so. We're many years into the
Web PKI now, and there have been multiple mass-revocation events along the
way. At some point, these expectations have to change and result in
redesigns that match them.

As Ryan [Sleevi] said, neither Mozilla nor Google employ some binary
unthinking process where either all the certs are revoked or all the CAs
who don't do it are instantly cut loose. If a CA makes a decision to not
revoke, citing systemic barriers to meeting the security needs of the
WebPKI that end users rely on, their incident reports are expected to
describe how the CA will work towards systemic solutions to those barriers
- to project a persuasive vision of why these sorts of events will not
result in a painful crucible going forward.

It's extremely convenient and cost-effective for organizations to rely on
the WebPKI for non-public-web needs, and given that the WebPKI is still
(relatively) more agile than a lot of private PKIs, there will likely
continue to be security advantages for organizations that do so. But if the
security and agility needs of the WebPKI don't align with an organization's
needs, using an alternate PKI is a reasonable solution that reduces
friction on both sides of the equation.

-- 
Eric Mill
617-314-0966 | konklone.com | @konklone 


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 5:32 PM Mark Arnott via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Why aren't we hearing more from the 14 CAs that this affects?  Correct me
> if I am wrong, but the CA/B Forum has something like 23 members??  An issue
> that affects 14 CAs indicates a problem with the way the forum collaborates
> (or should I say 'fails to work together').  Maybe this incident should have
> followed a responsible disclosure process and not been fully disclosed
> right before holidays in several nations.


This was something disclosed 6 months ago and 6 years ago. This is not
something “new”. The disclosure here is precisely because CAs failed, when
engaged privately, to understand both the compliance failure and the
security risk.

Unfortunately, debates about “responsible” disclosure have existed for as
long as computer security has been an area of focus, and itself was a term
that was introduced as a way of having the language favor the vendor, not the
reporter. We have a security risk introduced by a compliance failure, which
has been publicly known for some time, and which some CAs have dismissed as
not an issue. Transparency is an essential part of bringing attention and
understanding. This is, in effect, a “20-year day”. It’s not some new
surprise.

Even if disclosed privately, the CAs would still be under the same 7 day
timeline. The mere act of disclosure triggers this obligation, whether
private or public. That’s what the BRs obligate CAs to do.


> Thank you for explaining that.  We need to hear the official position from
> Google.  Ryan Hurst are you out there?


Just to be clear: Ryan Hurst does not represent Google/Chrome’s decisions
on certificates. He represents the CA, which is accountable to
Google/Chrome just as it is to Mozilla/Firefox or Apple/Safari.

In the past, and when speaking on behalf of Google/Chrome, it’s been
repeatedly emphasized: Google/Chrome does not grant exceptions to the
Baseline Requirements. In no uncertain terms, Google/Chrome does not give
CAs blank checks to ignore violations of the Baseline Requirements.

Ben’s message, while seeming somewhat self-contradictory in messaging,
similarly reflects Mozilla’s long-standing position that it does not grant
exceptions to the BRs. They treat violations as incidents, as Ben’s message
emphasized, including the failure to revoke, and as Peter highlighted, both
Google and Mozilla work through a public post-mortem process that seeks to
understand the facts and nature of the CA’s violations and how the
underlying systemic issues are being addressed. If a CA demonstrates poor
judgement in handling these incidents, they may be distrusted, as some have
in the past. However, CAs that demonstrate good judgement and demonstrate
clear plans for improvement are given the opportunity to do so.
Unfortunately, because some CAs think that the exact same plan should work
for everyone, it’s necessary to repeatedly make it clear that there are no
exceptions, and that each situation is case-by-case.

This is not a Google/Mozilla issue either: as Mozilla reminds CAs at
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , delayed
revocation issues affect everyone, and CAs need to directly communicate
with all of the root programs that they have made representations to.
WISeKey knows how to do this, but they also know what the expectation and
response will be, which is aligned with the above.

Some CAs have had a string of failures, and around matters like this, and
thus know that they’re at risk of being seen as CAs that don’t take
security seriously, which may lead to distrust. Other CAs recognize that
security, while painful, is also a competitive advantage, and so look to be
leaders in an industry of followers and do the right thing, especially when
this leadership can help ensure greater flexibility if/when they do have an
incident. Other CAs may be in uniquely difficult positions where they see
greater harm resulting, due to specific decisions made in the past that
were not properly thought through: but the burden falls to them to
demonstrate that uniqueness, that burden, and both what steps the CA is
taking to mitigate that risk **and the cost to the ecosystem** and what
steps they’re taking to prevent that going forward. Each CA is different
here, which is why blanket statements aren't a one-size-fits-all solution.

I’m fully aware there are some CAs who are simply not prepared to rotate
intermediates within a week, despite them promising they were capable of
doing so. Those CAs need to have a plan to establish that capability, they
need to truly make sure this is exceptional and not just a continuance of a
pattern of problematic behavior, and they need to be transparent about all
of this. That’s consistent with all of my messages to date, and consistent
with Ben’s message regarding Mozilla’s expectations. They are different
ways of saying the same thing: you can’t sweep this under the rug, you 

RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Buschart, Rufus via dev-security-policy
Dear Mark!

> -Original Message-
> From: dev-security-policy  On 
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Samstag, 4. Juli 2020 20:06
> 
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > This is insane!
> > Those 300 certificates are used to secure healthcare information
> > systems at a time when the global healthcare system is strained by a
> > global pandemic. 

Thank you for bringing in your perspective as a certificate consumer. We at 
Siemens - as a certificate consumer ourselves - also have ~700k affected personal 
S/MIME certificates out in the field, all of them stored on smart cards (plus code 
signing and TLS certificates ...). You can imagine that rekeying them on short 
notice would be a total nightmare.

> To be clear; "the issue" we're talking about is only truly 'solved' by the 
> rotation and key destruction. Anything else, besides that, is just
> a risk calculation, and the CA is responsible for balancing that. Peter's 
> highlighting how the fix for the *compliance* issue doesn't fix
> the *security* issue, as other CAs, like DigiCert, have also noted.

Currently, I'm not convinced that the underlying security issue (whose 
implications I of course fully understand and don't want to downplay) can only 
be fixed by revoking the issuing CAs and destroying the old keys. Sadly, all 
the brilliant minds on this mailing list are discussing compliance issues and 
the interpretation of RFCs, BRGs and 15-year-old Microsoft announcements, but 
it seems nobody is trying to find (or at least publicly discuss) a solution 
that can solve the security issue, is BRG / RFC compliant and doesn't require 
the replacement of millions of certificates - especially since many of those 
millions of certificates are not even TLS certificates and their consumers 
never expected the hard revocation deadlines of the BRGs to be of any relevance 
for them. And therefore they didn't design their infrastructure to be able to 
do an automated mass-certificate exchange.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany 
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322


Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Ryan Sleevi via dev-security-policy
Indeed, you’re welcome to do so, but I also don’t think these are easily
adjusted for or corrected. ETSI ESI is trying to solve a different need and
use case, and its structure and design reflect that.

And that’s ok! There’s nothing inherently wrong with that. They are trying
to develop a set of standards suitable for their community of users, which
generally are government regulators. As browsers, we have different needs
and expectations, reflecting the different trust frameworks. This is why I
stand by my assertion that it’s almost certainly better to move off ETSI
ESI, having spent a number of years trying, and failing, to highlight the
areas of critical concern and importance.

If the ETSI ESI liaisons have not been communicating the risk, clearly
communicated for several years, that a failure to address these will
ultimately lead to market rejection of using such audits as the basis for
browsers’ trust frameworks, I can only say that highlights an ongoing
systemic failure for said liaisons to inform both communities about
developments. If the answer is “as you know, it takes time, we have many
members” (as the response to these concerns frequently is answered), well,
it’s taking too long and it’s time to move on.

Luckily, audits are something that, like many other compliance or
contracting schemes, don’t inherently conflict. An approach that has a CA
getting a WebTrust audit to satisfy browser needs and, if appropriate, an
ETSI ESI audit to satisfy others’ needs, doesn’t seem an unreasonable thing. I
can understand why it may not be desirable for a CA, but the goal is to
make sure browsers have the assurance they need.

On Sat, Jul 4, 2020 at 5:29 PM Buschart, Rufus 
wrote:

> Thank you Ryan for spending your 4th of July weekend answering my
> questions! From my purely technical understanding, without knowing too much
> about the history in the discussion between the ETSI community and you nor
> about the “Überbau” of the audit schemes, I would believe that most of the
> points you mentioned could be easily fixed, especially since they don’t
> seem to be unreasonable. Of course, I can’t speak for ETSI but since
> Siemens is a long-standing member of ETSI I’ll forward your email to the
> correct working group and try to make sure that you will receive a
> constructive answer.
>
>
>
> With best regards,
> Rufus Buschart
>
> Siemens AG
> Siemens Operations
> Information Technology
> Value Center Core Services
> SOP IT IN COR
> Freyeslebenstr. 1
> 
> 91058 Erlangen, Germany
> 
> Tel.: +49 1522 2894134
> mailto:rufus.busch...@siemens.com 
> www.twitter.com/siemens
> www.siemens.com/ingenuityforlife 
>
>
> Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim
> Hagemann Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief
> Executive Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P.
> Thomas; Registered offices: Berlin and Munich, Germany; Commercial
> registries: Berlin Charlottenburg, HRB 12300, Munich, HRB 6684;
> WEEE-Reg.-No. DE 23691322
>
> *From:* Ryan Sleevi 
> *Sent:* Samstag, 4. Juli 2020 16:37
> *To:* Buschart, Rufus (SOP IT IN COR) 
> *Cc:* Peter Bowen ;
> mozilla-dev-security-pol...@lists.mozilla.org; r...@sleevi.com
> *Subject:* Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY
> RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder
> Cert)
>
>
>
>
>
>
>
> On Sat, Jul 4, 2020 at 9:17 AM Buschart, Rufus 
> wrote:
>
> Dear Ryan!
>
> > From: dev-security-policy 
> On Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Freitag, 3. Juli 2020 23:30
> > To: Peter Bowen 
> > Cc: Ryan Sleevi ; Pedro Fuentes ;
> mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> Dangerous Delegated Responder Cert
> >
> > On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:
> > > I agree that we cannot make blanket statements that apply to all CAs,
> > > but these are some examples where it seems like there are alternatives
> > > to key destruction.
> > >
> >
> > Right, and I want to acknowledge, there are some potentially viable
> paths specific to WebTrust, for which I have no faith with respect
> > to ETSI precisely because of the nature and design of ETSI audits, that,
> in an ideal world, could provide the assurance desired.
>
> Could you elaborate a little bit further, why you don't have "faith in
> respect to ETSI"? I have to admit, I never totally understood your concerns
> with ETSI audits because a simple comparison between WebTrust test
> requirements and ETSI test requirements don't show a lot of differences. If
> requirements are missing, we should discuss them with ETSI representatives
> to have them included in one of the next updates.
>
>
>
> ETSI ESI 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Mark Arnott via dev-security-policy
On Saturday, July 4, 2020 at 3:01:34 PM UTC-4, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 11:06 AM Ryan Sleevi via dev-security-policy
>  wrote:

> One of the challenges is that not everyone in the WebPKI ecosystem has
> aligned around the same view of incidents as learning opportunities.
> This makes it very challenging for CAs to find a path that suits all
> participants and frequently results in hesitancy to use the blameless
> post-mortem version of incidents.
> 
Why aren't we hearing more from the 14 CAs that this affects?  Correct me if I 
am wrong, but the CA/B Forum has something like 23 members??  An issue that 
affects 14 CAs indicates a problem with the way the forum collaborates (or 
should I say 'fails to work together').  Maybe this incident should have 
followed a responsible disclosure process and not been fully disclosed right 
before holidays in several nations.

> To clarify what Ryan is saying here: he is pointing out that he is not
> representing the position of Google or Alphabet, rather he is stating
> he is acting as an independent party.

> As you can see from earlier messages, Mozilla has clearly stated that
> they are NOT requiring revocation in 7 days in this case, as they
> judge the risk from revocation greater than the risks from not
> revoking on that same timeframe. Ben Wilson, who does represent
> Mozilla, stated:

> If Google were to officially state something similar to Mozilla, then
> this thread would likely resolve itself quickly.  Yes, there are other
> trust stores to deal with, but they have historically not engaged in
> this Mozilla forum, so discussion here is not helpful for them.

Thank you for explaining that.  We need to hear the official position from 
Google.  Ryan Hurst are you out there?
 
> For the future, HL7 probably would be well served by working to create
> a separate PKI that meets their needs.  This would enable a different
> risk calculation to be used - one that is specific to the HL7 health
> data interoperability world.  I don't know if you or your organization
> has influence in HL7, but it is something worth pushing if you can.

This has been discussed in the past and abandoned, but this incident will 
probably restart that discussion.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Mark Arnott via dev-security-policy
On Saturday, July 4, 2020 at 2:06:53 PM UTC-4, Ryan Sleevi wrote:
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> 
> As part of this, you should re-evaluate certificate pinning. As one of the
> authors of that specification, and indeed, my co-authors on the
> specification agree, certificate pinning does more harm than good, for
> precisely this reason.
> 
I agree that certificate pinning is a bad practice, but it is not a decision 
that I made or that I can reverse quickly.  It will take time to convince 
several different actors that this needs to change.

> I realize you're new here, and so I encourage you to read
> https://wiki.mozilla.org/CA/Policy_Participants for context about the
> nature of participation.

Thank you for helping me understand who the participants in this discussion are 
and what roles they fill.

> I'm very familiar with the implications of applying these rules, both
> personally and professionally. This is why policies such as
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation exist.
> This is where such information is shared, gathered, and considered, as
> provided by the CA. It is up to the CA to demonstrate the balance of
> equities, but also to ensure that going forward, they actually adhere to
> the rules they agreed to as a condition of trust. Simply throwing out
> agreements and contractual obligations when it's inconvenient,
> *especially* when
> these were scenarios contemplated when they were written and CAs
> acknowledged they would take steps to ensure they're followed, isn't a
> fair, equitable, or secure system.

I think that the lack of fairness comes from the fact that the CA/B forum only 
represents the viewpoints of two interests - the CAs and the Browser vendors.  
Who represents the interests of industries and end users?  Nobody.




RE: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Buschart, Rufus via dev-security-policy
Thank you Ryan for spending your 4th of July weekend answering my questions! 
From my purely technical understanding, without knowing too much about the 
history in the discussion between the ETSI community and you nor about the 
“Überbau” (superstructure) of the audit schemes, I would believe that most of the points you 
mentioned could be easily fixed, especially since they don’t seem to be 
unreasonable. Of course, I can’t speak for ETSI but since Siemens is a 
long-standing member of ETSI I’ll forward your email to the correct working 
group and try to make sure that you will receive a constructive answer.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens
www.siemens.com/ingenuityforlife
Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322

From: Ryan Sleevi 
Sent: Samstag, 4. Juli 2020 16:37
To: Buschart, Rufus (SOP IT IN COR) 
Cc: Peter Bowen ; 
mozilla-dev-security-pol...@lists.mozilla.org; r...@sleevi.com
Subject: Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT 
FOR CAs: The curious case of the Dangerous Delegated Responder Cert)



On Sat, Jul 4, 2020 at 9:17 AM Buschart, Rufus <rufus.busch...@siemens.com> wrote:
Dear Ryan!

> From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org>
>  On Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Freitag, 3. Juli 2020 23:30
> To: Peter Bowen <pzbo...@gmail.com>
> Cc: Ryan Sleevi <r...@sleevi.com>; Pedro Fuentes <pfuente...@gmail.com>; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous 
> Delegated Responder Cert
>
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen <pzbo...@gmail.com> wrote:
> > I agree that we cannot make blanket statements that apply to all CAs,
> > but these are some examples where it seems like there are alternatives
> > to key destruction.
> >
>
> Right, and I want to acknowledge, there are some potentially viable paths 
> specific to WebTrust, for which I have no faith with respect
> to ETSI precisely because of the nature and design of ETSI audits, that, in 
> an ideal world, could provide the assurance desired.

Could you elaborate a little bit further, why you don't have "faith in respect 
to ETSI"? I have to admit, I never totally understood your concerns with ETSI 
audits because a simple comparison between WebTrust test requirements and ETSI 
test requirements don't show a lot of differences. If requirements are missing, 
we should discuss them with ETSI representatives to have them included in one 
of the next updates.

ETSI ESI members, especially the vice chairs, often like to make this claim of 
“simple comparison”, but that fails to take into account the holistic picture 
of how the audits are designed and operated, and the goals they are meant to achieve.

For example, you will find nothing with the level of detail of, say, the AICPA 
Professional Standards (AT-C) to provide insight into the obligations about how the audit is 
performed, methodological requirements such as sampling design, professional 
obligations regarding statements being made which can result in censure or loss 
of professional qualification. You have clear guidelines on reporting and 
expectations which can be directly mapped into the reports produced. You also 
have a clear recognition by WebTrust auditors about the importance of 
transparency. They are not a checklist of things to check, but an entire set of 
“assume the CA is not doing this” objectives. And even if all this fails, the 
WebTrust licensure and review process provides an incredibly valuable check on 
shoddy auditors, because it’s clear they harm the “WebTrust brand”.

ETSI ESI-based audits lack all of that. They are primarily targeted at a 
different entity - the Supervisory Body within a Member State - and ETSI 
auditors fail to recognize that browsers want, and expect, as much detail as 
provided to the SB and more. We see the auditors, and the TC, entirely 
dismissive to the set of concerns regarding the lack of consistency and 
transparency. There is similarly no equivalent set of professional standards 
here: this is nominally handled by the accreditation process for the CAB by the 
NAB, except that the generic nature upon which ETSI ESI audits are designed 
means there are few normative 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Bowen via dev-security-policy
On Sat, Jul 4, 2020 at 11:06 AM Ryan Sleevi via dev-security-policy
 wrote:
>
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > This is insane!
> > Those 300 certificates are used to secure healthcare information systems
> > at a time when the global healthcare system is strained by a global
> > pandemic.  I have to coordinate with more than 30 people to make this
> > happen.  This includes three subsidiaries and three contract partner
> > organizations as well as dozens of managers and systems engineers.  One of
> > my contract partners follows the guidance of an HL7 specification that
> > requires them to do certificate pinning.  When we replace these
> > certificates we must give them 30 days lead time to make the change.
> >
>
> As part of this, you should re-evaluate certificate pinning. As one of the
> authors of that specification, and indeed, my co-authors on the
> specification agree, certificate pinning does more harm than good, for
> precisely this reason.
>
> Ultimately, the CA is responsible for following the rules, as covered in
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation . If
> they're not going to revoke, such as for the situation you describe,
> they're required to treat this as an incident and establish a remediation
> plan to ensure it does not happen again. In this case, a remediation plan
> certainly involves no longer certificate pinning (it is not safe to do),
> and also involves implementing controls so that it doesn't require 30
> people, across three subsidiaries, to replace "only" 300 certificates. The
> Baseline Requirements require those certificates to be revoked in as short
> as 24 hours, and so you need to design your systems robustly to meet that.

One of the things that can be very non-obvious to many people is that
"incident" as Ryan describes it is not a binary thing.  When Ryan says
"treat this as an incident" it is not necessarily the same kind of
incident system where there is a goal to have zero incidents forever.
In some environments the culture is that any incident is a career
limiting event or has financial impacts - for example, a factory might
pay out bonuses to employees for every month in which zero incidents
are reported.  This does not align with what Ryan speaks about.
Instead, based on my experience working with Ryan, incidents are the
trigger for blameless postmortems which are used to teach.  Google
documented this in their SRE book
(https://landing.google.com/sre/sre-book/chapters/postmortem-culture/
) and AWS includes this as part of their well-architected framework
(https://wa.aws.amazon.com/wat.concept.coe.en.html ).

One of the challenges is that not everyone in the WebPKI ecosystem has
aligned around the same view of incidents as learning opportunities.
This makes it very challenging for CAs to find a path that suits all
participants and frequently results in hesitancy to use the blameless
post-mortem version of incidents.

> > After wading through this very long chain of messages I see little
> > discussion of the impact this will have on end users.  Ryan Sleevi, in the
> > name of Google, is purporting to speak for the end users, but it is obvious
> > that Ryan does not understand the implication of applying these rules.
> >
>
> I realize you're new here, and so I encourage you to read
> https://wiki.mozilla.org/CA/Policy_Participants for context about the
> nature of participation.

To clarify what Ryan is saying here: he is pointing out that he is not
representing the position of Google or Alphabet, rather he is stating
he is acting as an independent party.

As you can see from earlier messages, Mozilla has clearly stated that
they are NOT requiring revocation in 7 days in this case, as they
judge the risk from revocation greater than the risks from not
revoking on that same timeframe. Ben Wilson, who does represent
Mozilla, stated:

"Mozilla does not need the certificates that incorrectly have the
id-kp-OCSPSigning EKU to be revoked within the next 7 days, as per
section 4.9.1.2 of the BRs. We want to work with CAs to identify a
path forward, which includes determining a reasonable timeline and
approach to replacing the certificates that incorrectly have the
id-kp-OCSPSigning EKU (and performing key destruction for them)."

The reason this discussion is ongoing is that Ryan does work for
Google and it is widely understood that: 1) certificates that are not
trusted by the Google Chrome browser in its default configuration
(e.g. installed on a home version of Windows with no further
configuration) or not trusted on widely used Android devices by
default are not commercially viable as they do not meet the needs of
many organizations and individuals who request certificates and 2)
Ryan appears to be highly influential in Chrome and Android decision
making about what certificates to trust.

If Google were to officially state something similar to 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This is insane!
> Those 300 certificates are used to secure healthcare information systems
> at a time when the global healthcare system is strained by a global
> pandemic.  I have to coordinate with more than 30 people to make this
> happen.  This includes three subsidiaries and three contract partner
> organizations as well as dozens of managers and systems engineers.  One of
> my contract partners follows the guidance of an HL7 specification that
> requires them to do certificate pinning.  When we replace these
> certificates we must give them 30 days lead time to make the change.
>

As part of this, you should re-evaluate certificate pinning. As one of the
authors of that specification, and indeed, my co-authors on the
specification agree, certificate pinning does more harm than good, for
precisely this reason.

Ultimately, the CA is responsible for following the rules, as covered in
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation . If
they're not going to revoke, such as for the situation you describe,
they're required to treat this as an incident and establish a remediation
plan to ensure it does not happen again. In this case, a remediation plan
certainly involves no longer certificate pinning (it is not safe to do),
and also involves implementing controls so that it doesn't require 30
people, across three subsidiaries, to replace "only" 300 certificates. The
Baseline Requirements require those certificates to be revoked in as short
as 24 hours, and so you need to design your systems robustly to meet that.
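
As a rough illustration of that kind of control, a minimal sketch (assuming
Python's standard ssl module plus the third-party cryptography package; the host
list is a placeholder) of an inventory job that pulls the live certificate from
each endpoint and records the issuer and serial, so that a forced replacement
becomes a scripted exercise rather than a 30-person coordination effort:

    # Sketch: inventory live certificates so a mass replacement can be scripted.
    import ssl
    from cryptography import x509

    ENDPOINTS = ["app1.example.org", "app2.example.org"]   # placeholder inventory

    for host in ENDPOINTS:
        pem = ssl.get_server_certificate((host, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        # A real job would compare issuer/serial against the CA's list of
        # affected certificates and open a replacement ticket automatically.
        print(host,
              "issuer=" + cert.issuer.rfc4514_string(),
              "serial=" + format(cert.serial_number, "x"),
              "notAfter=" + cert.not_valid_after.isoformat())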

There are proposals to the Baseline Requirements which would ensure this is
part of the legal agreement you make with the CA, to make sure you
understand these risks and expectations. It's already implicitly part of
the agreement you made, and you're expected to understand the legal
agreements you enter into. It's unfortunate that this is the first time
you're hearing about them, because the CA is responsible for making sure
their Subscribers know about this.


> After wading through this very long chain of messages I see little
> discussion of the impact this will have on end users.  Ryan Sleevi, in the
> name of Google, is purporting to speak for the end users, but it is obvious
> that Ryan does not understand the implication of applying these rules.
>

I realize you're new here, and so I encourage you to read
https://wiki.mozilla.org/CA/Policy_Participants for context about the
nature of participation.

I'm very familiar with the implications of applying these rules, both
personally and professionally. This is why policies such as
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation exist.
This is where such information is shared, gathered, and considered, as
provided by the CA. It is up to the CA to demonstrate the balance of
equities, but also to ensure that going forward, they actually adhere to
the rules they agreed to as a condition of trust. Simply throwing out
agreements and contractual obligations when it's inconvenient,
*especially* when
these were scenarios contemplated when they were written and CAs
acknowledged they would take steps to ensure they're followed, isn't a
fair, equitable, or secure system.

This is the unfortunate nature of PKI: as a system, the cost of revocation
is often not properly accounted for when CAs or Subscribers are designing
their systems, and so folks engage in behaviours that increase risk, such
as lacking automation or certificate pinning. For lack of a better analogy,
it's like a contract that was agreed, a service rendered, and then refusing
to pay the invoice because it turns out, it's actually more than you can
pay. We wouldn't accept that within businesses, so why should we accept it
here? CAs warrant to the community that they understand the risks and have
designed their systems, as they feel appropriate, to account for that. That
some have failed to do so is unfortunate, but that highlights poor design by
the CA, not the one sending the metaphorical invoice for what was agreed to.

Just like with invoices that someone can't pay, sometimes it makes sense to
work on payment plans, collaboratively. But now that means the person who
was expecting the money similarly may be short, and that can quickly
cascade into deep instability, so has to be done with caution. That's what
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation is about

However, if someone is regularly negligent in paying their bills, and have
to continue to work on payment agreements, eventually, you'll stop doing
business with them, because you realize that they are a risk. That's the
same as when we talk about distrust.

Peter Bowen says
> > ... simply revoking doesn't solve the issue; arguably it makes it
> >  worse than doing nothing.
>
> You are absolutely right, Peter.  Doctors will not be able to communicate
> with each 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Mark Arnott via dev-security-policy
On Friday, July 3, 2020 at 5:30:47 PM UTC-4, Ryan Sleevi wrote:
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen wrote:
> 
I feel compelled to respond here for the first time even though I have never 
participated in CA/B forum proceeding and have never read through a single one 
of the 55 BRs that have been published over the last 8 years.
 
I was informed yesterday that I would have to replace just over 300 
certificates in 5 days because my CA is required by rules from the CA/B forum 
to revoke its subCA certificate.
 
This is insane!
Those 300 certificates are used to secure healthcare information systems at a 
time when the global healthcare system is strained by a global pandemic.  I 
have to coordinate with more than 30 people to make this happen.  This includes 
three subsidiaries and three contract partner organizations as well as dozens 
of managers and systems engineers.  One of my contract partners follows the 
guidance of an HL7 specification that requires them to do certificate pinning.  
When we replace these certificates we must give them 30 days lead time to make 
the change.
 
After wading through this very long chain of messages I see little discussion 
of the impact this will have on end users.  Ryan Sleevi, in the name of Google, 
is purporting to speak for the end users, but it is obvious that Ryan does not 
understand the implication of applying these rules.
 
Peter Bowen says
> ... simply revoking doesn't solve the issue; arguably it makes it
>  worse than doing nothing.
 
You are absolutely right, Peter.  Doctors will not be able to communicate with 
each other effectively and people could die if the CA/B forum continues to 
blindly follow its rules without consideration for the greater impact this will 
have on the security of the internet.
 
In the CIA triad Availability is as important as Confidentiality.  Has anyone 
done a threat model and a serious risk analysis to determine what a reasonable 
risk mitigation strategy is?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Question about the issuance of OCSP Responder Certificates by technically constrained CAs

2020-07-04 Thread Tofu Kobe via dev-security-policy

Dear Mr. Wilson,

Could you please share the risk assessment that you have received from 
Mr. Sleevi?
I believe it would be very useful for the CAs to understand the gravity 
of the issue.


Sincerely yours,
T.K. (No hat)

On 7/4/2020 12:23 PM, Ryan Sleevi via dev-security-policy wrote:

On Fri, Jul 3, 2020 at 10:49 PM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


I don’t understand why you’re making a distinction as to CA certificates,

which are irrelevant with respect to the Delegated Responder profile. That
is, you’re trying to find a way that it’s compliant, but this introduction
of the CA bit as somehow special doesn’t have any basis, as far as I can
tell.

The argument you're asserting is akin to someone saying that a CA
certificate with serverAuth EKU is mis-issued if the subject Common Name is
"Super Duper High Assurance Issuing CA", which is not a hostname in
preferred DNS syntax. After all, the EKU definition for serverAuth in RFC
5280 states that certificates expressing that EKU value are used for TLS
server authentication, and clearly that is a malformed hostname so the
certificate can't be used for its intended purpose. In essence, the
argument you're presenting applies the end-entity profile definition to the
CA certificate profile for that EKU, which doesn't make sense.


I appreciate the comparison, but we both know that it's a flawed one.
You're inventing a distinction - the CA bit - that doesn't exist, within
the BRs or the RFCs.

Nothing in the RFCs specifies the profile you describe for commonName.
That's something the BRs do, and they explicitly do it contingent upon the
CA bit (Subordinate CA vs Subscriber Cert)

You're inventing a fiction that, no doubt, reflected a belief of some CAs
when they did it. But no such distinction exists within the profiles, and
it was, as far as I can tell, only Mozilla, and only in one of the three
verifiers (i.e. not applying to Thunderbird/S/MIME) that took the step to
profile things further.

The distinction you make, about KU, falls apart when you realize the
distinction you're making here, about CAs, is also made up.



Where? It seems you’re reading this as inferred through omission of a

prohibition in Section 5.3, but you’re using it in the remainder of your
argument to argue why it’s proactively allowed.

Ben's email from last evening [1] clearly stated that Mozilla has allowed
ocspSigning alongside other EKUs and the concrete example given was a CA
certificate that expresses the serverAuth and ocspSigning EKUs [2].
Notably, this certificate also lacks the digitalSignature KU bit.


I think this is dangerously close to divination. Mozilla did not chase down
non-compliance. That didn't mean it wasn't non-compliant, just that Mozilla
didn't enforce it, just like Microsoft didn't enforce their policy. That
doesn't mean it was/is allowed by the BRs, and it doesn't mean that the
non-compliance doesn't pose serious security risk.



But I can’t get the view that says, even in 2020, that a Thing is Not a

Thing, which in this case is a Delegated Responder. Just like I can’t
understand folks who say a Sub-CA is not a Sub-CA if it’s a
Cross-Certificate.

I think there's a world of difference between these two cases. In the
former case, there are technical controls for certificates used in a RFC
5280 PKI such that a CA certificate that expresses the ocspSigning EKU but
no digitalSignature KU bit will not have any OCSP responses created by the
CA trusted by clients. In the latter case, I entirely agree with you: the
only assurance that a "Cross-Certificate" can't be used to issue end-entity
certificates is pinky swears and promises and therefore must be treated as
a Sub-CA, because it technically is a Sub-CA.


"will not have any OCSP responses created by the CA trusted by clients". I
can literally show you the clients that do trust this, including Major
Ones. Even Mozilla doesn't check the Key Usage -
https://dxr.mozilla.org/mozilla-central/source/security/nss/lib/mozpkix/lib/pkixocsp.cpp#127


We've already shown, however, that your argument about the "invalid" key
usage is still a Mozilla Program Violation, in that it violates what you're
declaring a defense mechanism (RFC 5280) by including an EKU not consistent
with the extensions. This isn't a question about the BRs allowing such
violations: they make no such statements.

However, I'm more concerned by your latest message, in which you dismiss
the very real security concern. That's something I think truly is dangerous
here, especially for the CAs following along, because you're simply not correct
to dismiss the concern. I also don't think it's correct to construct a
fiction in which something not written in any spec (that responder
certs are CA:FALSE) makes it OK to include the EKU to facilitate another
behaviour not written in any spec (EKU chaining), while ignoring
both the RFC's profile (requiring the KU) and the BRs' profile (requiring
nocheck). 
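
For CAs (or Subscribers) trying to assess their own corpus, the check at issue
is mechanical. A small sketch, assuming the Python "cryptography" package
(illustration only, not a complete BR lint), that flags a certificate
asserting id-kp-OCSPSigning without the digitalSignature key usage or the
id-pkix-ocsp-nocheck extension:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    def delegated_responder_findings(cert: x509.Certificate) -> list:
        findings = []
        try:
            eku = cert.extensions.get_extension_for_oid(
                ExtensionOID.EXTENDED_KEY_USAGE).value
        except x509.ExtensionNotFound:
            return findings  # no EKU extension; not the profile under discussion
        if ExtendedKeyUsageOID.OCSP_SIGNING not in eku:
            return findings
        findings.append("asserts id-kp-OCSPSigning")
        try:
            ku = cert.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE).value
            if not ku.digital_signature:
                findings.append("keyUsage present but digitalSignature not set")
        except x509.ExtensionNotFound:
            findings.append("no keyUsage extension")
        try:
            cert.extensions.get_extension_for_class(x509.OCSPNoCheck)
        except x509.ExtensionNotFound:
            findings.append("missing id-pkix-ocsp-nocheck")
        return findings

Anything this flags is, as far as many relying parties are concerned, a
certificate whose key can sign OCSP responses on behalf of its issuer, which
is exactly the risk under discussion.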

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Pedro Fuentes via dev-security-policy
Ryan,
I'm moving our particular discussions to Bugzilla.

I just want to clarify, again, that I'm not proposing to delay the revocation 
of the offending CA certificate; what I'm proposing is to give more time for 
the key destruction. Our position right now is that the certificate would be 
revoked in any case within the 7-day period.

Thanks,
Pedro

El sábado, 4 de julio de 2020, 17:10:51 (UTC+2), Ryan Sleevi  escribió:
> Pedro: I said I understood you, and I thought we were discussing in the
> abstract.
> 
> I encourage you to reread this thread to understand why such a response
> varies on a case by case basis. I can understand your *attempt* to balance
> things, but I don’t think it would be at all appropriate to treat your
> email as your incident response.
> 
> You still need to holistically address the concerns I raised. As I
> mentioned in the bug: either this is a safe space to discuss possible
> options, which will vary on a CA-by-CA basis based on a holistic set of
> mitigations, or this was having to repeatedly explain to a CA why they were
> failing to recognize a security issue.
> 
> I want to believe it’s the former, and I would encourage you, that before
> you decide to delay revocation, you think very carefully. Have you met the
> Mozilla policy obligations on a delay to revocation? Perhaps it’s worth
> re-reading those expectations, before you make a decision that will also
> fail to uphold community expectations.
> 
> 
> On Sat, Jul 4, 2020 at 10:22 AM Pedro Fuentes via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > Thanks, Ryan.
> > I’m happy we are now in agreement on this point.
> >
> > Then I’d change the plan that is already underway. We should have the new CAs
> > hopefully today. Then I would also reissue the bad ones, possibly today as
> > well, and I’ll revoke the offending certificates within the period.
> >
> > Best.
> > ___
> > dev-security-policy mailing list
> > dev-security-policy@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-security-policy
> >

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
Pedro: I said I understood you, and I thought we were discussing in the
abstract.

I encourage you to reread this thread to understand why such a response
varies on a case by case basis. I can understand your *attempt* to balance
things, but I don’t think it would be at all appropriate to treat your
email as your incident response.

You still need to holistically address the concerns I raised. As I
mentioned in the bug: either this is a safe space to discuss possible
options, which will vary on a CA-by-CA basis based on a holistic set of
mitigations, or this was having to repeatedly explain to a CA why they were
failing to recognize a security issue.

I want to believe it’s the former, and I would encourage you, that before
you decide to delay revocation, you think very carefully. Have you met the
Mozilla policy obligations on a delay to revocation? Perhaps it’s worth
re-reading those expectations, before you make a decision that will also
fail to uphold community expectations.


On Sat, Jul 4, 2020 at 10:22 AM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thanks, Ryan.
> I’m happy we are now in agreement on this point.
>
> Then I’d change the plan that is already underway. We should have the new CAs
> hopefully today. Then I would also reissue the bad ones, possibly today as
> well, and I’ll revoke the offending certificates within the period.
>
> Best.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:17 AM Buschart, Rufus  wrote:

> Dear Ryan!
>
> > From: dev-security-policy 
> On Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Freitag, 3. Juli 2020 23:30
> > To: Peter Bowen 
> > Cc: Ryan Sleevi ; Pedro Fuentes ;
> mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> Dangerous Delegated Responder Cert
> >
> > On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:
> > > I agree that we cannot make blanket statements that apply to all CAs,
> > > but these are some examples where it seems like there are alternatives
> > > to key destruction.
> > >
> >
> > Right, and I want to acknowledge, there are some potentially viable
> paths specific to WebTrust, for which I have no faith with respect
> > to ETSI precisely because of the nature and design of ETSI audits, that,
> in an ideal world, could provide the assurance desired.
>
> Could you elaborate a little bit further, why you don't have "faith in
> respect to ETSI"? I have to admit, I never totally understood your concerns
> with ETSI audits because a simple comparison between WebTrust test
> requirements and ETSI test requirements don't show a lot of differences. If
> requirements are missing, we should discuss them with ETSI representatives
> to have them included in one of the next updates.


ETSI ESI members, especially the vice chairs, often like to make this claim
of “simple comparison”, but that fails to take into account the holistic
picture of how the audits are designed and operated, and the goals they are
meant to achieve.

For example, you will find nothing in ETSI ESI with the detail of, say, the
AICPA Professional Standards (AT-C), which provide insight into how the
audit is to be performed: methodological requirements such as sampling
design, and professional obligations regarding the statements being made,
violations of which can result in censure or loss of professional
qualification. WebTrust has clear guidelines on reporting and expectations
which can be mapped directly onto the reports produced, and there is a
clear recognition by WebTrust auditors of the importance of transparency.
The criteria are not a checklist of things to check, but an entire set of
“assume the CA is not doing this” objectives. And even if all this fails,
the WebTrust licensure and review process provides an incredibly valuable
check on shoddy auditors, because it’s clear they harm the “WebTrust brand”.

ETSI ESI-based audits lack all of that. They are primarily targeted at a
different entity - the Supervisory Body within a Member State - and ETSI
auditors fail to recognize that browsers want, and expect, as much detail
as provided to the SB and more. We see the auditors, and the TC, entirely
dismissive of the set of concerns regarding the lack of consistency and
transparency. There is similarly no equivalent set of professional
standards here: this is nominally handled by the accreditation process for
the CAB by the NAB, except that the generic nature upon which ETSI ESI
audits are designed means there are few normative requirements on auditors,
such as sampling and reporting. Unlike WebTrust, where the report has
professional obligations on the auditor, this simply doesn’t exist with
ETSI: if it isn’t a checklist item on 319 403, then the auditor can say
whatever they want and have zero professional obligations or consequences.
At the end of the day, an ETSI audit, objectively, is just a checklist
review: 403 provides too little assurance as to anything else, and lacks
the substance that holistically makes a WebTrust audit.

It is a comparison of “paint by numbers” to an actual creative work of art,
and saying “I don’t understand, they both use paint and both are of a
house”. And while it’s true both involve some degree of creative judgement,
and it’s up to
https://mobile.twitter.com/artdecider to sort that out, one of those
paintings is more suited to the fridge than the mantelpiece.

The inclusion of the ETSI criteria, back in v1.0 of the Mozilla Root Store
Policy in 2005, wasn’t based on a deeply methodical examination of the
whole process. It was “Microsoft uses it, so they probably found it
acceptable”. And its continuance wasn’t based on it meeting needs, so much
as “it’d be nice to have an alternative to WebTrust for folks to use”. But
both of those statements misunderstood the value ETSI ESI audits provide
and the systemic issues, even if they were well-intentioned. The
contemporary discussions, at that time, both of Scott Perry’s review (and
acceptance) as an auditor independent of the WebTrust/ETSI duo and of the
CACert audit, provide ample insight into the expectations and needs.

I don’t dismiss ETSI ESI for what it is trying to do: serve a legal
framework set of objectives (eIDAS, which is itself neutral with respect to
ETSI ESI audit schemes, as we see from the currently notified schemes). But
that’s not what we’re trying to do, not what we need, and certainly lacks
the body of supporting documents that 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Pedro Fuentes via dev-security-policy
Thanks, Ryan. 
I’m happy we are now in agreement on this point.

Then I’d change the plan that is already underway. We should have the new CAs
hopefully today. Then I would also reissue the bad ones, possibly today as
well, and I’ll revoke the offending certificates within the period.

Best.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 6:22 AM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> El viernes, 3 de julio de 2020, 18:18:49 (UTC+2), Ryan Sleevi  escribió:
> > Pedro's option is to reissue a certificate for that key, which as you
> point
> > out, keeps the continuity of CA controls associated with that key within
> > the scope of the audit. I believe this is the heart of Pedro's risk
> > analysis justification.
>
> I didn't want to participate here for now and just learn from others'
> opinions, but as my name has been invoked, I'd like to make a clarification.
>
> My proposal was not JUST to reissue the certificate with the same key. My
> proposal was to reissue the certificate with the same key AND a short
> lifetime (3 months) AND do a proper key destruction after that period.
>
> As I said, this:
> - Removes the offending EKU
> - Makes the certificate short-lived, for its consideration as delegated
> responder
> - Ensures that the keys are destroyed for peace of mind of the community
>
> And all that was, of course, pondering the security risk based on the fact
> that the operator of the key is also operating the keys of the Root and is
> also rightfully operating the OCSP services for the Root.
>
> I don't want to start another discussion, but I just feel it necessary to make
> this clarification, in case my previous message was unclear.


Thanks! I really appreciate you clarifying, as I had actually missed that
you proposed key destruction at the end of this. I agree, this is a
meaningfully different proposal that tries to balance the risks of
compliance while committing to a clear transition date.

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Buschart, Rufus via dev-security-policy
Dear Ryan!

> From: dev-security-policy  On 
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Freitag, 3. Juli 2020 23:30
> To: Peter Bowen 
> Cc: Ryan Sleevi ; Pedro Fuentes ; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous 
> Delegated Responder Cert
> 
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:
> > I agree that we cannot make blanket statements that apply to all CAs,
> > but these are some examples where it seems like there are alternatives
> > to key destruction.
> >
> 
> Right, and I want to acknowledge, there are some potentially viable paths 
> specific to WebTrust, for which I have no faith with respect
> to ETSI precisely because of the nature and design of ETSI audits, that, in 
> an ideal world, could provide the assurance desired.

Could you elaborate a little bit further, why you don't have "faith in respect 
to ETSI"? I have to admit, I never totally understood your concerns with ETSI 
audits because a simple comparison between WebTrust test requirements and ETSI 
test requirements don't show a lot of differences. If requirements are missing, 
we should discuss them with ETSI representatives to have them included in one 
of the next updates.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany 
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Pedro Fuentes via dev-security-policy
El viernes, 3 de julio de 2020, 18:18:49 (UTC+2), Ryan Sleevi  escribió:
> Pedro's option is to reissue a certificate for that key, which as you point
> out, keeps the continuity of CA controls associated with that key within
> the scope of the audit. I believe this is the heart of Pedro's risk
> analysis justification.

I didn't want to participate here for now and just learn from others' opinions, 
but as my name has been invoked, I'd like to make a clarification.

My proposal was not JUST to reissue the certificate with the same key. My 
proposal was to reissue the certificate with the same key AND a short lifetime 
(3 months) AND do a proper key destruction after that period.

As I said, this:
- Removes the offending EKU
- Makes the certificate short-lived, for its consideration as delegated 
responder
- Ensures that the keys are destroyed for peace of mind of the community

And all that was, of course, pondering the security risk based on the fact that 
the operator of the key is also operating the keys of the Root and is also 
rightfully operating the OCSP services for the Root.
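
For what it's worth, whether the reissued certificate really carries the same
key is something anyone can verify independently by comparing the
SubjectPublicKeyInfo of the old and new certificates. A minimal sketch,
assuming the Python "cryptography" package (file names are placeholders):

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def spki_der(path: str) -> bytes:
        # DER encoding of the certificate's SubjectPublicKeyInfo.
        with open(path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        return cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

    # Same SPKI means the replacement certifies the same key as the revoked one.
    print(spki_der("revoked-subca.pem") == spki_der("reissued-subca.pem"))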

I don't want to start another discussion, but I just feel it necessary to make this 
clarification, in case my previous message was unclear.

Best.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy