Re: OCSP responder support for SHA256 issuer identifier info

2019-10-08 Thread Tomas Gustavsson via dev-security-policy


This prompted me to dig up more information on this old issue. Here is the 
issue in our tracker:
https://jira.primekey.se/browse/ECA-3149

Looking back in my records, it was not only a local jurisdiction auditor that 
enforced SHA-256. We also received several requests from Web PKI CAs to 
implement SHA-256 support in CertID, some sparked by the Mozilla issue to accept 
SHA-256-based CertIDs in responses:
https://bugzilla.mozilla.org/show_bug.cgi?id=663315

There was a discussion on RFC compliance back then, noting the differences 
between RFC 6960 and RFC 5019. RFC 5019 is from 2007, though, and we are no 
strangers here to extending old RFCs. I think it is a valid, deliberate 
violation of RFC 5019 to support new algorithms.

It is a fact that when supplying to many different jurisdictions, private PKIs 
as well as the Web PKI, SHA-256 must be supported.
The current approach has worked flawlessly since 2014, and I think it's a 
reasonable approach. I agree with Curt that #1 is correct from a generic 
perspective, but #4 is also valid and compliant with RFC 5019 2.2.3, and even 
#5 as well.

Standardizing clients on SHA-1 is fine, as RFC5019 says. Servers should be 
allowed to support SHA-256 (or other algorithms) to not complicate 
configuration and infrastructure even further.

On Friday, October 4, 2019 at 8:38:41 PM UTC+2, Jeremy Rowley wrote:
> (And, for the record, none of the legacy infrastructure that Ryan 
> mentions taking years to update exists anymore. Yay for shutting down legacy 
> systems!)
> 
> -Original Message-
> From: dev-security-policy  On 
> Behalf Of Jeremy Rowley via dev-security-policy
> Sent: Friday, October 4, 2019 12:35 PM
> To: Tomas Gustavsson ; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: RE: OCSP responder support for SHA256 issuer identifier info
> 
> The CAB Forum specifies that OCSP responses MUST conform to RFC 5019 or RFC 
> 6960.  The requirements do not specify which RFC to follow when processing 
> requests, but I think you can imply that either one is required, right?  
> 
> Section 2.1.1 of RFC 5019 specifies that clients MUST use SHA1 as the hashing 
> algorithm for the CertID.issuerNameHash and the CertID.issuerKeyHash values. 
> Anyone implementing the BRs would expect SHA1 for both fields. Where does the 
> requirement to support SHA256 come in? 
> As Ryan mentioned, there was some discussion, but it seems like there was 
> nothing settled. I'd support a ballot clarifying the profile, but I don't 
> understand the value of requiring both SHA1 and SHA2 signatures for OCSP. 
> Doesn't it just make OCSP more cumbersome? 
> 
> -Original Message-
> From: dev-security-policy  On 
> Behalf Of Tomas Gustavsson via dev-security-policy
> Sent: Friday, October 4, 2019 1:45 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: OCSP responder support for SHA256 issuer identifier info
> 
> I was pointed to this interesting discussion. We were forced to support 
> requests with SHA-256 in CertID back in 2014. Not for any relevant security 
> reason, just because some stubborn auditors saw a red flag at the mere mention 
> of SHA-1.
> 
> We've implemented it by having both hashes in the lookup table where we check 
> for the issuer when a request comes in. 
> 
> What to have in the response was an interesting topic.
> 
> In the response we use the same CertID that the client sent. I would expect 
> that any client checking the CertID in the response would expect it to match 
> what they sent. 
> 
> I'm suspicious of adding two SingleResponses in the response, one for each 
> CertID: 
> - Clients are used to one response; they may fail verification if the first 
> one doesn't have the same CertID.
> - When auditors start requiring SHA-3, shall we add three? That approach does 
> not seem agile.
> - It increases the size of responses; we've been told before about the desire 
> to keep responses as small as possible (typically to fit in a single Ethernet 
> frame).
> 
> Regards,
> Tomas
> 
> On Thursday, September 19, 2019 at 7:45:10 PM UTC+2, Ryan Sleevi wrote:
> > Thanks for raising this!
> > 
> > There was some slight past discussion in the CA/B Forum on this - 
> > https://cabforum.org/pipermail/public/2013-November/002440.html - as 
> > well as a little during the SHA-1 deprecation discussions ( 
> > https://cabforum.org/pipermail/public/2016-November/008979.html ) and 
> > crypto agility discussions ( 
> > https://cabforum.org/pipermail/public/2014-September/003921.html ), 
> > but none really nailed it down to the level you have.
> > 
> > Broadly, it suggests the need for a much tighter profile of OCSP, 
> > either within policies or the BRs. Two years ago, I started work on 
> > such a thing -
> > https://github.com/sleevi/cabforum-docs/pull/2 - but a certain large 
> > CA suggested it would take them years to even implement that, and it 
> > wouldn't have covered this!
> > 
> > I can't see #3 being valid, but I can see and understand good 
> > arguments 

Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-08 Thread Corey Bonnell via dev-security-policy
On Monday, October 7, 2019 at 10:52:36 AM UTC-4, Ryan Sleevi wrote:
> I'm curious how folks feel about the following practice:
> 
> Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
> create this Root Certificate after the effective date of the Baseline
> Requirements, but prior to Root Programs consistently requiring compliance
> with the Baseline Requirements (i.e. between 2012 and 2014). This Root
> Certificate does not comply with the BRs' rules on Subject: namely, it
> omits the Country field.
> 
> Later, in 2019, Foo takes their existing Root Certificate ("Root 2"),
> included within Mozilla products, and uses it to cross-sign Root 1. This now
> creates a cross-signed certificate, "Root 1 signed-by Root 2", which has a
> Subject field that does not comport with the Baseline Requirements.
> 
> To me, this seems like a clear-cut violation of the Baseline Requirements,
> and "Foo" could have pursued an alternative hierarchy to avoid needing to
> cross-sign. However, I thought it interesting to solicit others' feedback
> on this situation, before opening the CA incident for Foo.

It appears there was a window of a few months between versions 1.0 and 1.1 of 
the BRs that apparently allowed omitting the C RDN even if the O was included 
in the Subject. Having spent some time in Censys.io, it appears that the root 
in question was not issued during this period, so the root certificate in 
question was mis-issued. However, I think there's an additional issue worth 
discussing along with the current topic: how to treat cross-signs for roots 
that, when originally issued, were compliant with the BRs and Mozilla Policy 
but now can no longer have their subjectDN embedded in cross-signs due to 
changes in policy.

Given that there is discussion about mandating the use of ISO 3166 or other 
databases for location information, the profile of the subjectDN may change 
such that future cross-signs cannot be done without running afoul of policy.
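Any grandfathering rule would presumably hinge on exactly this kind of subject profile check. As a toy illustration of the BR rule under discussion (countryName required when organizationName is present), assuming subjects are represented as simple (attribute, value) pairs rather than real DER:

```python
def check_subject_country(rdns):
    """Simplified BR-style subject check (illustrative only): if the subject
    contains an organizationName (O), it must also contain a countryName (C).

    rdns is a list of (attribute, value) pairs, e.g. [("CN", "Foo Root 1")].
    Returns a list of problem descriptions (empty if the subject passes).
    """
    attrs = {attr for attr, _ in rdns}
    problems = []
    if "O" in attrs and "C" not in attrs:
        problems.append("subject has organizationName but omits countryName")
    return problems
```

A cross-sign of the hypothetical "Root 1" would trip this check today, which is precisely why a carve-out (or a re-issued subject) would be needed.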

With this issue and Ryan’s scenario in mind, I think there may need to be some 
sort of grandfathering allowed for roots so that cross-signs can be issued 
without running afoul of policy. What I'm less certain about is to what extent 
this grandfathering clause would allow for non-compliance with the current 
policies, as that is a very slippery slope and hinders progress in creating a 
saner Web PKI certificate profile. For the CA that Ryan brings up, I'm less 
inclined to allow for a “grandfathering” as the root certificate in question 
was originally mis-issued. But for a root certificate that was issued in 
compliance with the policy at the time but now no longer has a compliant 
subjectDN, perhaps a carve-out in Mozilla Policy to allow for a cross-sign 
(using the now non-compliant subjectDN) is warranted.

Thanks,
Corey
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-08 Thread carsten.mueller.gl--- via dev-security-policy
> But the target audience for phishing are uninformed people. People which have 
> no idea what a EV cert is. People who don't even blink if the English on the 
> phishing page is worse than a 5-year old could produce.
> 
> You cannot base the decision if a EV indication in the browser is useful on 
> those people.
> 
The observation that many users don't even recognize the difference between 
EV/OV/DV certificates is unfortunately true, BUT this was forced by the browsers:

When EV certificates were introduced, each browser displayed a green address 
bar including the company name and the country abbreviation of the certificate 
applicant.
Gradually, the green colouring of the address bar was removed and only the 
company name and country abbreviation were displayed in green.
To top it all off, the lock symbol for ALL certificates was then displayed in 
green, completing the users' confusion.
Google Chrome also removed the green colour of the company name.

Each browser then changed its display of the various certificate types at 
short intervals, and each displayed them differently.


In the early days of EV certificates, it was easy for me to tell my mother and 
"uninformed" friends that they should pay attention to the green address bar 
and the company name displayed there, and if possible not make any purchases or 
enter any data on other sites.

It was so simple: green address bar + some intelligence > 99% security

Today: 
- no normal user can display the contents of certificates
- no normal user can recognize which certificate types are actually involved


Of course, you can never be 100% sure, when visiting a website with an EV 
certificate, that:
- no one has stolen the certificate
- no other company with a similar name operates a phishing site
However, the effort required for this is so much higher that it is hardly 
worth it, see below.


It is also pointed out here again and again that EV certificates are insecure 
because, for example, a certificate for https://stripe.ian.sh was issued to a 
"Stripe, Inc" located in Kentucky and was displayed by browsers exactly like 
the EV certificate of the real Stripe, Inc.
This is not a reason to abolish EV certificates, but rather a reason to talk 
about the UI of the known browsers.
Each EV certificate lists both the location of the company and the registry. 
Therefore, browsers could also display "Company/State/Country" in the address 
bar.
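To illustrate: the fields needed for such a "Company/State/Country" indicator are all present in the EV certificate's subject. A minimal sketch, assuming the subject has already been parsed into (attribute, value) pairs; this is illustrative UI logic, not any browser's actual code:

```python
def ev_indicator(subject_pairs):
    """Format a hypothetical EV address-bar label such as
    'Stripe, Inc (Kentucky, US)', so two same-named companies in
    different jurisdictions are visually distinguishable."""
    attrs = dict(subject_pairs)  # assumes one value per attribute type
    company = attrs.get("O", "")
    location = ", ".join(v for v in (attrs.get("ST"), attrs.get("C")) if v)
    return f"{company} ({location})" if company and location else company
```

With such a label, the Kentucky "Stripe, Inc" and the real Stripe, Inc would no longer render identically in the UI.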

In addition, it is still much more complicated to operate a fake website with 
an EV certificate (I come from Germany, therefore the following relates to 
Germany):
- Founding a corporation (GmbH):
o minimum capital of EUR 15,000
o appearance of at least one person before a notary and verification of all data
o verification of all data by the commercial register
- Application for the EV certificate

I would like to link to a study on the use of EV certificates for phishing:
https://sectigo.com/uploads/resources/Understanding-the-Role-of-Extended-Validation-Certificates-in-Internet-Abuse.pdf

If the formation of a corporation in other countries is faster/simpler/cheaper, 
it still does not contribute to abuse.


My opinion:
EV certificates are not 100% secure, BUT they increase security enormously.


Why do browsers want to make the Internet less secure? Instead of abolishing 
the EV indicators, they should rather be fully activated again, including the 
green address bar.

Carsten


Translated with www.DeepL.com/Translator


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-08 Thread Jakob Bohm via dev-security-policy

On 08/10/2019 13:41, Corey Bonnell wrote:

> On Monday, October 7, 2019 at 10:52:36 AM UTC-4, Ryan Sleevi wrote:
> 
> > I'm curious how folks feel about the following practice:
> > 
> > Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
> > create this Root Certificate after the effective date of the Baseline
> > Requirements, but prior to Root Programs consistently requiring compliance
> > with the Baseline Requirements (i.e. between 2012 and 2014). This Root
> > Certificate does not comply with the BRs' rules on Subject: namely, it
> > omits the Country field.
> > 
> > ...
> 
> ...


> Given that there is discussion about mandating the use of ISO 3166 or other 
> databases for location information, the profile of the subjectDN may change 
> such that future cross-signs cannot be done without running afoul of policy.
> 
> With this issue and Ryan’s scenario in mind, I think there may need to be some 
> sort of grandfathering allowed for roots so that cross-signs can be issued 
> without running afoul of policy. What I’m less certain about is to what extent 
> this grandfathering clause would allow for non-compliance with the current 
> policies, as that is a very slippery slope and hinders progress in creating a 
> saner Web PKI certificate profile. For the CA that Ryan brings up, I’m less 
> inclined to allow for a “grandfathering” as the root certificate in question 
> was originally mis-issued. But for a root certificate that was issued in 
> compliance with the policy at the time but now no longer has a compliant 
> subjectDN, perhaps a carve-out in Mozilla Policy to allow for a cross-sign 
> (using the now non-compliant subjectDN) is warranted.



Please note the situation explained in the first paragraph of Ryan's
scenario: The (hypothetical) Root 1 without a C element may have been
issued before Browser Policy made BR compliance mandatory.  In other
words, BR non-compliance may not have been actual non-compliance at
that time.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-08 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 8, 2019 at 10:04 AM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Unless I found a root that Ryan isn’t referring to, Mozilla Policy 2.1 (
> https://wiki.mozilla.org/CA:CertificatePolicyV2.1) would have been in
> force when the root was first issued, so BR compliance would be mandatory
> from a Mozilla policy standpoint.


Correct. It sounds like you've identified the same (recently added) root,
which was issued during Policy 2.1. That is, the BR-violating self-signed
version was created 2014-12, added to Mozilla in 2018-10, and the
BR-violating cross-signs created 2019-02 and 2019-06.

As it sounds like there's at least a consistent view that this is BR
violating, I left a comment on the Inclusion Bug,
https://bugzilla.mozilla.org/show_bug.cgi?id=1390803#c27 , to ask Wayne and
Kathleen how they'd like to proceed.


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  wrote:

> Dear Ryan,
>
> It would help a great deal, if you tone down your constant insults towards
> the entire CA world. Questioning whether you should trust any CA is a
> bridge too far.


> Instead, why don’t you try to focus on specific issues with specific CAs,
> or specific issues with most CAs. I don’t think you have a specific issue
> with every CA in the world.


> If specific CAs fail to do what you think is appropriate for browser
> vendors, perhaps you need to implement new, or improve existing audits?
> Propose solutions, implement checks and execute better reviews. Then
> iterate until everyone gets it right.
>

Paul,

I appreciate your response, even if I believe it's largely off-topic,
deeply confused, and personally insulting.

This thread is acknowledging there are systemic issues, that it's not with
specific CAs, and that the solutions being put forward aren't working, and
so we need better solutions. It's also being willing to acknowledge that if
we can't find systemic fixes, it may be that we have a broken system, and
we should not be afraid of looking to improve or replace the system.

Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when
it's just a plurality of "more than one CA". That's a bias on the reader's
part, and suggesting that every plurality be accompanied by a qualifier
("Some", "most") is just tone policing rather than engaging on substance.

That said, it's entirely inappropriate to chastise me for highlighting
issues of non-compliance, and attempt to identify the systemic issue
underneath it. It's also entirely inappropriate to insist that I personally
solve the issue, especially when significant effort has been expended to
address these issues so far, efforts which continue to fail without much
explanation as to why they're failing. Suggesting that we should accept
regular failures and just deal with it, unfortunately, has no place in
reasonable or rational conversation about how to improve things. That's
because such a position is not interested in finding solutions, or
improving, but in accepting the status quo.

If you have suggestions on why these systemic issues are still happening,
despite years of effort to improve them, I welcome them. However, there's
no place for reasonable discussion if you don't believe we should have open
and frank conversations about issues, about the misaligned incentives, or
about how existing efforts to prevent these incidents by Browsers are
falling flat.


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
Paul,

If you'd like to continue this conversation, might I respectfully ask you
take it elsewhere from this thread? It does not seem you're interested in
finding solutions for the issues, and you've continued to shift your
message, so perhaps it might be better to continue that discussion
elsewhere?

Thanks.

On Tue, Oct 8, 2019 at 3:21 PM Paul Walsh  wrote:

> Ryan,
>
> You just proved me right by saying I’m confused because I hold an opinion
> about how you conduct yourself when collaborating with industry
> stakeholders. My observations are the same across the board. I don’t think
> I’m confused. But you’re welcome to disagree with me. And, it’s not
> off-topic. We should be respectful when communicating in forums like this.
> I think your communication is sometimes disrespectful.
>
> You also tell people they are confused about bylaws and other documents
> when they’re in disagreement with you. It’s possible for someone to fully
> understand and appreciate specific guidelines and disagree with you at the
> same time.
>
> I’ve contributed to many W3C specifications over the years - I co-founded
> two, including the Mobile Web Initiative. I was also Chair of BIMA.co.uk
> for three years. My point is this, when contributing to industry
> initiatives, I learned that there will always be instances where
> individuals need to be reminded to show respect to others when
> communicating differences of opinion - especially when there is a strong
> chance of culture differences. I don’t mind being reminded from time to
> time. Nobody is perfect.
>
> You can take this feedback, or leave it. Your call.
>
> - Paul
>
>
>
>
> On Oct 8, 2019, at 12:09 PM, Ryan Sleevi  wrote:
>
>
>
> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  wrote:
>
>> Dear Ryan,
>>
>> It would help a great deal, if you tone down your constant insults
>> towards the entire CA world. Questioning whether you should trust any CA is
>> a bridge too far.
>
>
>> Instead, why don’t you try to focus on specific issues with specific CAs,
>> or specific issues with most CAs. I don’t think you have a specific issue
>> with every CA in the world.
>
>
>> If specific CAs fail to do what you think is appropriate for browser
>> vendors, perhaps you need to implement new, or improve existing audits?
>> Propose solutions, implement checks and execute better reviews. Then
>> iterate until everyone gets it right.
>>
>
> Paul,
>
> I appreciate your response, even if I believe it's largely off-topic,
> deeply confused, and personally insulting.
>
> This thread is acknowledging there are systemic issues, that it's not with
> specific CAs, and that the solutions being put forward aren't working, and
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
>
> Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when
> it's just a plurality of "more than one CA". That's a bias on the reader's
> part, and suggesting that every plurality be accompanied by a qualifier
> ("Some", "most") is just tone policing rather than engaging on substance.
>
> That said, it's entirely inappropriate to chastise me for highlighting
> issues of non-compliance, and attempt to identify the systemic issue
> underneath it. It's also entirely inappropriate to insist that I personally
> solve the issue, especially when significant effort has been expended to
> address these issues so far, efforts which continue to fail without much
> explanation as to why they're failing. Suggesting that we should accept
> regular failures and just deal with it, unfortunately, has no place in
> reasonable or rational conversation about how to improve things. That's
> because such a position is not interested in finding solutions, or
> improving, but in accepting the status quo.
>
> If you have suggestions on why these systemic issues are still happening,
> despite years of effort to improve them, I welcome them. However, there's
> no place for reasonable discussion if you don't believe we should have open
> and frank conversations about issues, about the misaligned incentives, or
> about how existing efforts to prevent these incidents by Browsers are
> falling flat.
>
>
>


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Matthew Hardeman via dev-security-policy
My apologies.  I messed up when trimming that down.  I was quoting Ryan
Sleevi there.

On Tue, Oct 8, 2019 at 2:55 PM Paul Walsh  wrote:

>
> On Oct 8, 2019, at 12:51 PM, Matthew Hardeman  wrote:
>
>
> On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  wrote:
>>
>> so we need better solutions. It's also being willing to acknowledge that
>> if
>> we can't find systemic fixes, it may be that we have a broken system, and
>> we should not be afraid of looking to improve or replace the system.
>>
>
> Communication styles aside, I believe there's merit to far more serious
> community consideration of the notion that either the system overall or the
> standard for expectations of the system's performance are literally
> broken.  There's probably a better forum for that discussion than this
> thread, but I echo that I believe the notion has serious merit.
>
>
> [PW] It looks like I said those words above, but I didn’t :)
>


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
On the topic of root causes, there's also
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3425554 that was
recently published. I'm not sure if that was peer reviewed, but it does
provide an analysis of m.d.s.p and Bugzilla. I have some concerns about the
study methodology (for example, when incident reports became normalized is
relevant, as well as incident reporting where security researchers first
went to the CA), but I think it looks at root causes a bit holistically.

I recently shared on the CA/B Forum's mailing list another example of
"routine" violation:
https://cabforum.org/pipermail/servercert-wg/2019-October/001154.html

My concern is that, 7 years later, while I think that compliance has
marginally improved (largely due to efforts from outside the CA ecosystem,
like CT and ZLint/Certlint), I think the answers/responses/explanations we
get are still falling into the same predictable buckets, and that concerns
me, because it's neither sustainable nor healthy for the ecosystem.


   - We misinterpreted the requirements. It said X, but we thought it meant
   Y (Often: even though there's nothing in the text to support Y, that's just
   how we used to do business, and we're CAs so we know more than browsers
   about what browsers expect from us)
   - We weren't paying attention to the updates. We've now assigned people
   to follow updates.
   - We do X by saying our staff should do X. In this case, they forgot.
   We've retrained our staff / replaced our staff / added more staff to
   correct this.
   - We had a bug. We did not detect the bug because we did not have tests
   for this. We've added tests.
   - We weren't sure if X was wrong, but since no one complained, we
   assumed it was OK.
   - Our auditor said it was OK
   - Our vendor said it was OK

and so forth.

And then, in the responses, we generally see:

   - These certificates are used in Very Important Systems, so even though
   we said we'd comply, we cannot comply.
   - We don't think X is actually bad. We think X should be OK, and it
   should be Browsers that reject X if they don't like X (implicit: But they
   should still trust our CA, even though we aren't doing what they want)
   - Our vendor is not able to develop a fix in time, so we need more time.
   - We agree that X is bad, and has always been prohibited, but we need
   more time to actually implement a fix (because we did not plan/budget/staff
   to actually handle issues of non-compliance)

and so forth.

It's tiring and exhausting because we're hearing the same stuff. The same
patterns that CAs were using when they'd issue MITM certs to companies:
"Oh, wait, you meant DON'T issue MITM certs? We didn't realize THAT'S what
you meant" (recall, this was at least one CA's response when caught issuing
MITM certs).

I'm exasperated because we're seeing CAs do things like not audit sub-CAs,
but leaving all the risk to be accepted by browsers, because it's too
hard/complex to migrate. We're seeing things like CAs not following policy
requirements, but then correcting those issues is risky because now they've
issued a bunch of certs and it's painful to have to replace them all.

If we go back to that classic Dan Geer talk,
https://cseweb.ucsd.edu/~goguen/courses/275f00/geer.html , every time a CA
issues a certificate, they've now externalized the risk onto browsers/root
stores for that certificate lifetime. It's left to the ecosystem to detect
and clean up the mess, while the CA/subscriber gets the full benefits of
the issuance. It's a system of incentives that is completely misaligned,
and we've seen it now for the past decade: The CA benefits from the
(mis)issuance, and extracts value until it's detected, and then the cost of
cleanup is placed on the browser/Root Program that expects CAs to actually
conform. If the Browser doesn't enforce, or consistently enforce, then we
get back to the "Race to the bottom" that plagued the CA industry, as
"Requirements" become "Suggestions" or "Nice ideas". Yet if the Browser
does enforce, they suffer the blame from the Subscriber, who is unhappy
that the thing they bought no longer works.

In all of this time, it doesn't seem like we're making much progress on
systemic understanding and prevention. If that's an unfair statement, then
it means that some CAs are progressing, and some aren't, so how do we help
the ones that aren't? At what point do we go from education to removal of
trust? Where is the line when the same set of responses have been used so
much that it's no longer reasonable? When this ecosystem moves at a snail's
pace, due to CAs' challenges in updating systems and the long lifetime of
certificates, the feedback loop is large, and CAs can exploit that
asymmetry until they're detected. That may sound like I'm ascribing
intentional malice, when I'm mainly just talking about the perverse
incentives here that are hindering meaningful improvement.

While I appreciate your suggestion of more transparency, and I'm notably
all 

Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy
Ryan,

You just proved me right by saying I’m confused because I hold an opinion about 
how you conduct yourself when collaborating with industry stakeholders. My 
observations are the same across the board. I don’t think I’m confused. But 
you’re welcome to disagree with me. And, it’s not off-topic. We should be 
respectful when communicating in forums like this. I think your communication 
is sometimes disrespectful. 

You also tell people they are confused about bylaws and other documents when 
they’re in disagreement with you. It’s possible for someone to fully understand 
and appreciate specific guidelines and disagree with you at the same time.

I’ve contributed to many W3C specifications over the years - I co-founded two, 
including the Mobile Web Initiative. I was also Chair of BIMA.co.uk for three 
years. My point is this, when contributing to industry initiatives, I learned 
that there will always be instances where individuals need to be reminded to 
show respect to others when communicating differences of opinion - especially 
when there is a strong chance of culture differences. I don’t mind being 
reminded from time to time. Nobody is perfect.

You can take this feedback, or leave it. Your call. 

- Paul




> On Oct 8, 2019, at 12:09 PM, Ryan Sleevi  wrote:
> 
> 
> 
> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh wrote:
> Dear Ryan,
> 
> It would help a great deal, if you tone down your constant insults towards 
> the entire CA world. Questioning whether you should trust any CA is a bridge 
> too far. 
> 
> Instead, why don’t you try to focus on specific issues with specific CAs, or 
> specific issues with most CAs. I don’t think you have a specific issue with 
> every CA in the world. 
> 
> If specific CAs fail to do what you think is appropriate for browser vendors, 
> perhaps you need to implement new, or improve existing audits? Propose 
> solutions, implement checks and execute better reviews. Then iterate until 
> everyone gets it right. 
> 
> Paul,
> 
> I appreciate your response, even if I believe it's largely off-topic, deeply 
> confused, and personally insulting.
> 
> This thread is acknowledging there are systemic issues, that it's not with 
> specific CAs, and that the solutions being put forward aren't working, and so 
> we need better solutions. It's also being willing to acknowledge that if we 
> can't find systemic fixes, it may be that we have a broken system, and we 
> should not be afraid of looking to improve or replace the system.
> 
> Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when 
> it's just a plurality of "more than one CA". That's a bias on the reader's 
> part, and suggesting that every plurality be accompanied by a qualifier
> ("Some", "most") is just tone policing rather than engaging on substance.
> 
> That said, it's entirely inappropriate to chastise me for highlighting issues 
> of non-compliance, and attempt to identify the systemic issue underneath it. 
> It's also entirely inappropriate to insist that I personally solve the issue, 
> especially when significant effort has been expended to address these 
> issues so far, which continue to fail without much explanation as to why 
> they're failing. Suggesting that we should accept regular failures and just 
> deal with it, unfortunately, has no place in reasonable or rational 
> conversation about how to improve things. That's because such a position is 
> not interested in finding solutions, or improving, but in accepting the 
> status quo.
> 
> If you have suggestions on why these systemic issues are still happening, 
> despite years of effort to improve them, I welcome them. However, there's no 
> place for reasonable discussion if you don't believe we should have open and 
> frank conversations about issues, about the misaligned incentives, or about 
> how existing efforts to prevent these incidents by Browsers are falling flat.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 8, 2019, at 12:51 PM, Matthew Hardeman  wrote:
> 
> 
> On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy wrote:
> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh wrote:
> 
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
> 
> Communication styles aside, I believe there's merit to far more serious 
> community consideration of the notion that either the system overall or the 
> standard for expectations of the system's performance are literally broken.  
> There's probably a better forum for that discussion than this thread, but I 
> echo that I believe the notion has serious merit.

[PW] It looks like I said those words above, but I didn’t :)


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
To try and minimize some of the tone-policing ad hominem, arguments from
authority, and thread-jacking, especially on-list, let's circle back to the
subject of this thread, and hopefully you can offer constructive solutions
there.

Is my understanding correct that your concern is you don't believe it's
appropriate to discuss concerns about systemic patterns of misissuance, to
highlight specific CAs that have demonstrated misissuance despite every
reasonable effort to prevent it, and to suggest that it's reasonable to
consider solutions such as either distrusting CAs (If this is simply "a few
bad apples") or systemic changes (if this is "all CAs")? Before you veered
well off-topic into tone policing, it did seem that the gist of your
argument was that you don't think it's reasonable or appropriate to suggest
that removing trust in CAs might be an appropriate remediation to sustained
patterns of failure?

In the spirit of finding productive solutions, rather than hijacking
threads, perhaps you could offer suggestions on what you believe could or
should have been done to prevent the issues like we saw. As noted in the
original message, Mozilla sent a CA communication reminding CAs of the
upcoming change, and requiring they positively confirm that they would
abide by it. However, that still failed. This was not a new requirement
Mozilla was introducing, but one introduced by Microsoft some time ago.
Every one of the CAs responded that they understood the requirement and
would abide by it.

What, in your opinion, could or should have been done to prevent this?

If your view is that nothing can prevent it, then yes, we'll disagree, and
a position of accepting those flaws without attempting to prevent them is
likely to find no purchase here.
If your view is that something could have been done, but wasn't, then it'd
be useful to understand what was missing.

It's unclear if you had thoughts to share on the topic, but if you'd like
to suggest it's inappropriate to distrust CAs, or to question whether there
are systemic flaws in the CA ecosystem if such events are functionally
inevitable, then my hope is you'd have solutions you can offer, and ideas
that have not yet been considered. Those would be examples of productive
contributions.


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy
I read Jeremy’s last response before posting my comment. 

Dear Ryan,

It would help a great deal, if you tone down your constant insults towards the 
entire CA world. Questioning whether you should trust any CA is a bridge too 
far.

Instead, why don’t you try to focus on specific issues with specific CAs, or 
specific issues with most CAs. I don’t think you have a specific issue with 
every CA in the world.

If specific CAs fail to do what you think is appropriate for browser vendors, 
perhaps you need to implement new, or improve existing audits? Propose 
solutions, implement checks and execute better reviews. Then iterate until 
everyone gets it right. 

I could write a book on how Google is the least “trustworthy” browser vendor on 
the planet. I could write another book about how Google is constantly 
contradicting its own advice and best practices. One example is where Google 
tells us to focus on the part of the URL that matters most - the domain name. 
But over here we have AMP, where URLs go to die a slow painful death within 
Google’s closed system, adding no value to the world outside of advertising. 
The list is endless when it comes to the lack of respect for people’s privacy 
from *some* browser vendors. Not all browsers are evil. Not all CAs are evil.

So, please can you get off your high horse and stick to facts and propose 
solutions instead of constantly making personal insults and bringing up 
problems without implementing new processes to address same. 

Can we just keep in mind that we’re all trying to do our job. No company is 
perfect. No process is perfect. No technology solution is perfect. 

Peace!

- Paul

p.s. I don’t work for a CA and never have. And I believe there are many 
weaknesses that could and should be better addressed.



> On Oct 7, 2019, at 5:45 PM, Ryan Sleevi via dev-security-policy 
>  wrote:
> 
> On Mon, Oct 7, 2019 at 7:06 PM Jeremy Rowley 
> wrote:
> 
>> Interesting. I can't tell with the Netlock certificate, but the other
>> three non-EKU intermediates look like replacements for intermediates that
>> were issued before the policy date and then reissued after the compliance
>> date.  The industry has established that renewal and new issuance are
>> identical (source?), but we know some CAs treat these as different
>> instances.
> 
> 
> Source: Literally every time a CA tries to use it as an excuse? :)
> 
> My question is how we move past “CAs provide excuses”, and at what point
> the same excuses fall flat?
> 
> While that's not an excuse, I can see why a CA could have issues with a
>> renewal compared to new issuance as changing the profile may break the
>> underlying CA.
> 
> 
> That was Quovadis’s explanation, although with no detail to support that it
> would break something, simply that they don’t review the things they sign.
> Yes, I’m frustrated that CAs continue to struggle with anything that is not
> entirely supervised. What’s the point of trusting a CA then?
> 
> However, there's probably something better than "trust" vs. "distrust" or
>> "revoke" v "non-revoke", especially when it comes to an intermediate.  I
>> guess the question is what is the primary goal for Mozilla? Protect users?
>> Enforce compliance?  They are not mutually exclusive objectives of course,
>> but the primary drive may influence how to treat issuing CA non-compliance
>> vs. end-entity compliance.
> 
> 
> I think a minimum goal is to ensure the CAs they trust are competent and
> take their job seriously, fully aware of the risk they pose. I am more
> concerned about issues like this which CAs like QuoVadis acknowledges they
> would not cause.
> 
> The suggestion of a spectrum of responses fundamentally suggests root
> stores should eat the risk caused by CAs flagrant violations. I want to
> understand why browsers should continue to be left holding the bag, and why
> every effort at compliance seems to fall on how much the browsers push.
> 
> Of the four, only Quovadis has responded to the incident with real
>> information, and none of them have filed the required format or given
>> sufficient information. Is it too early to say what happens before there is
>> more information about what went wrong? Key ceremonies are, unfortunately,
>> very manual beasts. You can automate a lot of it with scripting tools, but
>> the process of taking a key out, performing a ceremony, and putting things
>> away is not automated due to the off-line root and FIPS 140-3
>> requirements.
> 
> 
> Yes, I think it’s appropriate to defer discussing what should happen to
> these specific CAs. However, I don’t think it’s too early to begin to try
> and understand why it continues to be so easy to find massive amounts of
> misissuance, and why policies that are clearly communicated and require
> affirmative consent is something CAs are still messing up. It suggests
> trying to improve things by strengthening requirements isn’t helping as
> much as needed, and perhaps more consistent distrusting is a 

Audit Letter Validation (ALV) on intermediate certs in CCADB

2019-10-08 Thread Kathleen Wilson via dev-security-policy

CAs,

There is now an "Audit Letter Validation (ALV)" button on intermediate 
certificate records in the CCADB. There is also a new task list item on 
your home page. In the summary section you will see a line item like the 
following.

"Intermediate Certs with Failed ALV Results: 8"
When that is non-zero, you will see a section that can be opened called
"Check failed Audit Letter Validation (ALV) results"

Instructions for the new task list item:
The intermediate certificates listed below have a failed Audit Letter 
Validation (ALV) result. Please check the intermediate certificate to 
make sure its SHA-256 Fingerprint is correctly listed in the 
corresponding audit statements. If you do not agree with the ALV 
results, add comments to the ‘Standard Audit ALV Comments’ or ‘BR Audit 
ALV Comments’ fields in the intermediate certificate record.


If you find that the SHA-256 fingerprint of an intermediate certificate 
is indeed missing from the applicable audit statement(s) and the 
certificate chains to a root that is trusted by Mozilla, then create a 
Bugzilla bug to provide an incident report along with your plan and 
timing to resolve the problem, as described here:

https://wiki.mozilla.org/CA/Responding_To_An_Incident#Incident_Report
Then add the link to the Bugzilla Bug to the ‘Standard Audit ALV 
Comments’ or ‘BR Audit ALV Comments’ field for that certificate.


ALV will sometimes report that it was unable to find the SHA-256 
fingerprint of the certificate even though it is in the audit statement. 
When you find this to be the situation, please add a comment to the 
‘Standard Audit ALV Comments’ or ‘BR Audit ALV Comments’ field in the 
record to state that the SHA-256 fingerprint for that cert is in the 
audit statement.


To help improve the accuracy of ALV in finding SHA-256 fingerprints, have 
your auditors follow these guidelines for the SHA-256 fingerprints that are 
listed in the audit statements as being in scope of the audits:

- MUST: No colons, no spaces, and no linefeeds
- MUST: Uppercase letters
- SHOULD: be encoded in the document (PDF) as “selectable” text, not an 
image
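As a rough illustration, the three MUST rules above collapse into a single 
pattern check. This is only a sketch of the stated rules, not CCADB code:

```python
import re

# A SHA-256 fingerprint as it should appear in an audit statement:
# exactly 64 hex characters, uppercase, with no colons, spaces, or linefeeds.
FINGERPRINT_RE = re.compile(r"^[0-9A-F]{64}$")

def is_valid_audit_fingerprint(text: str) -> bool:
    """Return True if `text` satisfies the MUST rules above."""
    return FINGERPRINT_RE.fullmatch(text) is not None

# Forms that the guidelines reject:
colon_separated = ":".join(["AB"] * 32)   # colons between byte pairs
lowercase_form = "ab" * 32                # lowercase hex letters
```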


Note, the new task list item focuses on the SHA-256 Fingerprints that 
are not found in the audit statements, but there are also many failures 
regarding dates that we would like resolved in future audit statements. 
So please have your auditors use the following date format guidelines in 
all future audit statements.

  Accepted date formats (month names in English):
  - Month DD, YYYY example: May 7, 2016
  - DD Month YYYY example: 7 May 2016
  - YYYY-MM-DD example: 2016-05-07
  - No extra text within the date, such as “7th” or “the”
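A sketch of an accepted-format check, mapping the three formats above to 
`strptime` patterns. The mapping is mine, not CCADB's implementation; note 
that extra text such as "7th" fails all three patterns, matching the last rule:

```python
from datetime import datetime

# Accepted audit-statement date formats (month names in English)
ACCEPTED_FORMATS = [
    "%B %d, %Y",   # Month DD, YYYY  e.g. "May 7, 2016"
    "%d %B %Y",    # DD Month YYYY   e.g. "7 May 2016"
    "%Y-%m-%d",    # YYYY-MM-DD      e.g. "2016-05-07"
]

def parse_audit_date(text: str):
    """Return a date if `text` matches one accepted format, else None."""
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    return None
```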

-- Implementation Details Below --

The new task list item is filtered as follows.
CA Owner/Certificate Record Type equals Intermediate Certificate
AND Technically Constrained equals False
AND Revocation Status equals Not Revoked
AND OneCRL Status not equal to Added to OneCRL
AND Valid To (GMT) greater than TODAY
AND ((Mozilla Root Status equals Included or Change Requested)
OR (Microsoft Root Status equals Included or Change Requested))
AND ((Standard Audit ALV Found Cert equals FAIL)
OR (BR Audit ALV Found Cert equals FAIL))
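The filter reads as a single boolean predicate. The sketch below uses 
illustrative field names on a plain dict; the actual CCADB schema and field 
labels may differ:

```python
from datetime import date

INCLUDED = {"Included", "Change Requested"}

def needs_alv_attention(cert: dict, today: date) -> bool:
    """Mirror of the task-list filter described above (field names are
    illustrative, not the real CCADB schema)."""
    return (
        cert["record_type"] == "Intermediate Certificate"
        and not cert["technically_constrained"]
        and cert["revocation_status"] == "Not Revoked"
        and cert["onecrl_status"] != "Added to OneCRL"
        and cert["valid_to"] > today
        and (cert["mozilla_root_status"] in INCLUDED
             or cert["microsoft_root_status"] in INCLUDED)
        and (cert["standard_audit_alv_found_cert"] == "FAIL"
             or cert["br_audit_alv_found_cert"] == "FAIL")
    )

# An intermediate that would appear in the task list:
example = {
    "record_type": "Intermediate Certificate",
    "technically_constrained": False,
    "revocation_status": "Not Revoked",
    "onecrl_status": "Not Added",
    "valid_to": date(2030, 1, 1),
    "mozilla_root_status": "Included",
    "microsoft_root_status": "Not Included",
    "standard_audit_alv_found_cert": "FAIL",
    "br_audit_alv_found_cert": "PASS",
}
```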

The "Standard Audit ALV Found Cert" and "BR Audit ALV Found Cert" are 
set according to the ALV result AllThumbprintsListed, which looks for 
the cert's SHA-256 fingerprint in the corresponding audit statement.


There is a "Derived Trust Bits" field in the "Certificate Data [Fields 
NOT editable; extracted from PEM]" section. Very high level logic: If 
the cert has EKU in it, then that will be used. Otherwise see which root 
store its parent root cert is in. If in both Mozilla and Microsoft then 
create union of the trust bits that the parent/root cert is trusted for.
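That high-level logic can be sketched as a small function (names and 
parameters are mine for illustration, not CCADB's):

```python
def derived_trust_bits(cert_eku, mozilla_bits, microsoft_bits):
    """If the intermediate carries an EKU, derive the trust bits from it;
    otherwise fall back to the union of the parent root's trust bits
    across whichever stores include it."""
    if cert_eku:
        return set(cert_eku)
    return set(mozilla_bits or ()) | set(microsoft_bits or ())
```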


When "Derived Trust Bits" contains 'Server Authentication', then ALV is 
run on the BR audit. Currently, for intermediate certs, we are only 
processing the standard and BR audit statements.


When "Audits Same as Parent" is checked, CCADB will look up the parent
chain until audit statements are found, and run ALV using those audit 
statements. When "Audits Same as Parent" is not checked, then CCADB will 
just pass the audit statements in the intermediate cert record into ALV.


As always, I will appreciate thoughtful and constructive feedback on 
this, especially as you try out the new functionality.


Thanks,
Kathleen


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Matthew Hardeman via dev-security-policy
On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  wrote:
>
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
>

Communication styles aside, I believe there's merit to far more serious
community consideration of the notion that either the system overall or the
standard for expectations of the system's performance are literally
broken.  There's probably a better forum for that discussion than this
thread, but I echo that I believe the notion has serious merit.


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 8, 2019, at 12:44 PM, Ryan Sleevi  wrote:
> 
> Paul,

[snip]

> It does not seem you're interested in finding solutions for the issues,

[PW] You are mixing things up, Ryan. I am interested in finding solutions to 
issues. I specifically kept my message on point, which was your tone and 
approach to communication - this is equally important to the content you put 
forward. My point was made and you obviously didn’t receive it well - I’m ok 
with that. Most people don’t respond well to criticism. 

I will only contribute proposed solutions for issues where I possess deep domain 
expertise - moderating and chairing standards and best practices is one area, 
hence my contribution.

> and you've continued to shift your message, so perhaps it might be better to 
> continue that discussion elsewhere?

[PW] In my opinion, this is the right place. You don’t get to dictate where and 
when. The alternative would be to walk into a broom cupboard and scream at the 
wall. 

I won’t comment on this matter any further as I think we’ve labored the subject 
and I don’t want to take up people’s time any further. 

- Paul


> 
> Thanks.
> 
> On Tue, Oct 8, 2019 at 3:21 PM Paul Walsh wrote:
> Ryan,
> 
> You just proved me right by saying I’m confused because I hold an opinion 
> about how you conduct yourself when collaborating with industry stakeholders. 
> My observations are the same across the board. I don’t think I’m confused. 
> But you’re welcome to disagree with me. And, it’s not off-topic. We should be 
> respectful when communicating in forums like this. I think your communication 
> is sometimes disrespectful. 
> 
> You also tell people they are confused about bylaws and other documents when 
> they’re in disagreement with you. It’s possible for someone to fully 
> understand and appreciate specific guidelines and disagree with you at the 
> same time.
> 
> I’ve contributed to many W3C specifications over the years - I co-founded 
> two, including the Mobile Web Initiative. I was also Chair of BIMA.co.uk 
>  for three years. My point is this, when contributing to 
> industry initiatives, I learned that there will always be instances where 
> individuals need to be reminded to show respect to others when communicating 
> differences of opinion - especially when there is a strong chance of culture 
> differences. I don’t mind being reminded from time to time. Nobody is perfect.
> 
> You can take this feedback, or leave it. Your call. 
> 
> - Paul
> 
> 
> 
> 
>> On Oct 8, 2019, at 12:09 PM, Ryan Sleevi wrote:
>> 
>> 
>> 
>> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh wrote:
>> Dear Ryan,
>> 
>> It would help a great deal, if you tone down your constant insults towards 
>> the entire CA world. Questioning whether you should trust any CA is a bridge 
>> too far. 
>> 
>> Instead, why don’t you try to focus on specific issues with specific CAs, or 
>> specific issues with most CAs. I don’t think you have a specific issue with 
>> every CA in the world. 
>> 
>> If specific CAs fail to do what you think is appropriate for browser 
>> vendors, perhaps you need to implement new, or improve existing audits? 
>> Propose solutions, implement checks and execute better reviews. Then iterate 
>> until everyone gets it right. 
>> 
>> Paul,
>> 
>> I appreciate your response, even if I believe it's largely off-topic, deeply 
>> confused, and personally insulting.
>> 
>> This thread is acknowledging there are systemic issues, that it's not with 
>> specific CAs, and that the solutions being put forward aren't working, and 
>> so we need better solutions. It's also being willing to acknowledge that if 
>> we can't find systemic fixes, it may be that we have a broken system, and we 
>> should not be afraid of looking to improve or replace the system.
>> 
>> Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when 
>> it's just a plurality of "more than one CA". That's a bias on the reader's 
>> part, and suggesting that every plurality be accompanied by a qualifier 
>> ("Some", "most") is just tone policing rather than engaging on substance.
>> 
>> That said, it's entirely inappropriate to chastise me for highlighting 
>> issues of non-compliance, and attempt to identify the systemic issue 
>> underneath it. It's also entirely inappropriate to insist that I personally 
>> solve the issue, especially when significant effort has been expended to 
>> address these issues so far, which continue to fail without much explanation 
>> as to why they're failing. Suggesting that we should accept regular failures 
>> and just deal with it, unfortunately, has no place in reasonable or rational 
>> conversation about how to improve things. That's because such a position is 
>> not interested in finding solutions, or improving, but in accepting the 
>> status quo.
>> 
>> If you have suggestions on why these systemic 

Re: Entrust Root Certification Authority - G4 Inclusion Request

2019-10-08 Thread Wayne Thayer via dev-security-policy
On Mon, Oct 7, 2019 at 9:09 AM Bruce via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Monday, July 29, 2019 at 5:22:19 PM UTC-4, Bruce wrote:
>
> > We will update section 4.2 and 9.12.3 in the next release of the CPS.
>
> The CPS Has been updated to address the above issues, see
> https://www.entrustdatacard.com/-/media/documentation/licensingandagreements/ssl-cps-english-20190930-version-36.pdf
> .
>
I've verified these updates.

This request has been in discussion for quite a while now. Please post any
further comments by next Tuesday 15-October, and I will plan to end the
discussion period at that time.

- Wayne


RE: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Jeremy Rowley via dev-security-policy
I think requiring publication of profiles for certs is a good idea. It’s part 
of what I’ve wanted to publish as part of our CPS. You can see most of our 
profiles here: 
https://content.digicert.com/wp-content/uploads/2019/07/Digicert-Certificate-Profiles.pdf,
 but it doesn’t include ICAs right now. That was an oversight that we should 
fix. Publication of profiles probably won’t prevent issues related to 
engineering snafus or more manual procedures. However, publication may 
eliminate a lot of the disagreement on BR/Mozilla policy wording. That’s a lot 
more work though for the policy owners so the community would probably need to 
be more actively involved in reviewing profiles. Requiring publication at least 
gives the public a chance to review the information, which may not exist today.

The manual component definitely introduces a lot of risk in sub CA creation, 
and the explanation I gave is broader than renewals. It’s more about the risks 
currently associated with Sub CAs. The difference between renewal and new 
issuance doesn’t exist at DigiCert – we got caught on that issue a long time 
ago.


From: Ryan Sleevi 
Sent: Tuesday, October 8, 2019 5:49 PM
To: Jeremy Rowley 
Cc: Wayne Thayer ; Ryan Sleevi ; 
mozilla-dev-security-policy 
Subject: Re: Mozilla Policy Requirements CA Incidents



On Tue, Oct 8, 2019 at 6:42 PM Jeremy Rowley wrote:
Tackling Sub CA renewals/issuance from a compliance perspective is difficult 
because of the number of manual components involved. You have the key ceremony, 
the scripting, and all of the formal process involved. Because the root is 
stored in an offline state and only brought out for a very intensive procedure, 
there is lots that can go wrong  compared to end-entity certs, including bad 
profiles and bad coding. These events are also things that happen rarely enough 
that many CAs might not have well defined processes around. A couple things 
we’ve done to eliminate issues include:


  1.  2 person review over the profile + a formal sign-off from the policy 
authority
  2.  A standard scripting tool for generating the profile to ensure only the 
subject info in the cert changes.  This has some basic linting.
  3.  We issue a demo cert. This cert is exactly the same as the cert we want 
to issue but it’s not publicly trusted and includes a different serial. We then 
review the demo cert to ensure profile accuracy. We should run this cert 
through a linter (added to my to-do list).

We used to treat renewals separate from new issuance. I think there’s still a 
sense that they “are” different, but that’s been changing. I’m definitely 
looking forward to hearing what other CAs do.

It's not clear: Are you suggesting that the configuration of sub-CA profiles is 
more, less, or as risky as that for end-entity certificates? It would seem that, 
regardless, the need for review and oversight is the same, so I'm not sure that 
#1 or #2 would be meaningfully different between the two types of certificates?

That said, of the incidents, only two of those were potentially related to the 
issuance of new versions of the intermediates (Actalis and QuoVadis). The other 
two were new issuance.

So I don't think we can explain it as entirely around renewals. I definitely 
appreciate the implicit point you're making: which is every manual action of a 
CA, or more generally, every action that requires a human be involved, is an 
opportunity for failure. It seems that we should replace all the humans, then, 
to mitigate the failure? ;)

To go back to your transparency suggestion, would we have been better if:
1) CAs were required to strictly disclose every single certificate profile for 
everything "they sign"
2) Demonstrate compliance by updating their CP/CPS to the new profile, by the 
deadline required. That is, requiring all CAs update their CP/CPS prior to 
2019-01-01.

Would this prevent issues? Maybe - only to extent CAs view their CP/CPS as 
authoritative, and strictly review what's on them. I worry that such a solution 
would lead to the "We published it, you didn't tell us it was bad" sort of 
situation (as we've seen with audit reports), which then further goes down a 
rabbit-hole of requiring CP/CPS be machine readable, and then tools to lint 
CP/CPS, etc. By the time we've added all of this complexity, I think it's 
reasonable to ask if the problem is not the humans in the loop, but the wrong 
humans (i.e. going back to distrusting the CA). I know that's jumping to 
conclusions, but it's part of what taking an earnest look at these issues are: 
how do we improve things, what are the costs, are there cheaper solutions that 
provide the same assurances?


Re: Updated website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy
I finally got around to digesting the email below. Summary/Reminder: CA related 
data on website identity from the perspective of website owners. 

As Homer Simpson said, "70% of all reports are made up”. So, everything put 
forward by me in previous messages, or anyone else, must be taken with a pinch 
of salt. That said, data does give meaning to personal opinions. Without data, 
we’re left with just opinions.

If we set the data aside for a second, we all know (fingers crossed) that 
opening the wrong link and signing into the wrong website, is something that 
people either worry about, or should be worried about. 

I pitched a company last week. The Director of Threat Intelligence for a 
multi-billion dollar security company in Silicon Valley thought he’d prove that 
he couldn't be caught out. I wasn’t testing the room, but he jumped in and said 
"#10 is the real domain". He was wrong (unfortunately because I felt bad) - it 
was a fake. I had to explain how it wasn’t a reflection on his expertise but 
rather, an emotional state of mind at a given point in time under specific 
circumstances. What the eyes can’t see, the brain fills in [1].

This subject is so important I would love Mozilla to consider implementing a 
beta program. I’d proudly contribute. 

Here’s something we did at MetaCert, that Mozilla could do - auto classify 
regulated TLDs and gTLDs. For example, you could light up the visual indicator 
for URLs on .GOV domains - without any need for third-party interaction. This 
would make it virtually impossible for anyone to fall for a phishing scam when 
filing taxes - for example. Perhaps it would encourage the DNC (and GOP) to 
only use .GOV domains and avoid being hacked by Russians in the future. These 
are just a few use cases where there’s a potential for massive real world 
benefit.

Rather than remove website identity based on the response to poor design 
implementation, we should consider making it better. I believe website owners 
would be more likely to seek verification if they can really protect their 
brand online. And consumers would proactively look for it. 

Website identity won’t ever be perfect, but with new technologies and 
methodologies that have come out in the past 18 months, so much more can also 
be achieved by CAs and other providers, to tighten up the verification process, 
while making it faster and lower cost for customers.

[1] https://www.gla.ac.uk/news/archiveofnews/2011/april/headline_194655_en.html 


- Paul




> On Oct 2, 2019, at 5:12 PM, Kirk Hall via dev-security-policy 
>  wrote:
> 
> On September 21, I sent a message to the Mozilla community with the results 
> of a survey of all of Entrust Datacard’s customers (both those who use EV 
> certificates, and those who don’t) concerning what they think about website 
> identity in browsers, browser UIs in general, and EV browser UIs in 
> particular. [1]  The data we published was based on 504 results collected 
> over two days (a pretty good response).
> 
> The survey was distributed in a way that each customer could only respond 
> once.  We left the survey open, and can now publish updated results from a 
> combined total of 804 separate certificate customers (300 more than last 
> time).  The results mirror the results we first reported two weeks ago – and 
> based on Paul Walsh’s data on when survey results should be considered 
> statistically significant [2], this means that the updated survey results are 
> very solid.
> 
> Here is a summary of the updated respondent results for the six questions 
> listed below.
> 
> (1) 97% of respondents agreed or strongly agreed with the statement: 
> "Customers / users have the right to know which organization is running a 
> website if the website asks the user to provide sensitive data."  (This is 
> the same result as for the prior sample.)
> 
> (2) 94% of respondents agreed or strongly agreed with the statement “Identity 
> on the Internet is becoming increasingly important over time.”  (This is 1% 
> higher than in the prior sample.)
> 
> (3) When respondents were asked “How important is it that your website has an 
> SSL certificate that tells customers they are at your company's official 
> website via a unique and consistent UI in the URL bar?” 76% said it was 
> either extremely important or very important to them. Another 13% said it was 
> somewhat important (total: 89%).  (This is 2% higher than in the prior 
> sample.)
> 
> (4) When respondents were asked “Do you believe that positive visual signals 
> in the browser UI (such as the EV UI for EV sites) are important to encourage 
> website owners to choose EV certificates and undergo the EV validation 
> process for their organization?” 72% said it was either extremely important 
> or very important to them.  (This is down 1% from the prior sample.) Another 
> 18% said it was somewhat important.  (This is up 1% from the 

RE: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Jeremy Rowley via dev-security-policy
Tackling Sub CA renewals/issuance from a compliance perspective is difficult 
because of the number of manual components involved. You have the key ceremony, 
the scripting, and all of the formal process involved. Because the root is 
stored in an offline state and only brought out for a very intensive procedure, 
there is a lot that can go wrong compared to end-entity certs, including bad 
profiles and bad coding. These events are also things that happen rarely enough 
that many CAs might not have well defined processes around. A couple things 
we’ve done to eliminate issues include:


  1.  Two-person review of the profile, plus a formal sign-off from the policy 
authority
  2.  A standard scripting tool for generating the profile, to ensure only the 
subject info in the cert changes. This has some basic linting.
  3.  We issue a demo cert. This cert is exactly the same as the cert we want 
to issue, but it’s not publicly trusted and includes a different serial. We then 
review the demo cert to ensure profile accuracy. We should run this cert 
through a linter (added to my to-do list).
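The demo-cert review in step 3 lends itself to automation. As a sketch (not 
DigiCert's actual tooling; all names here are illustrative), a pre-issuance 
check for the Mozilla Policy 2.6.1 EKU-on-intermediates requirement could look 
like this, using the pyca/cryptography library:

```python
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID


def has_eku(cert: x509.Certificate) -> bool:
    """Mozilla Policy 2.6.1 check: intermediates must carry an EKU extension."""
    try:
        cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
        return True
    except x509.ExtensionNotFound:
        return False


# Build a throwaway "demo" intermediate, as in step 3, self-signed by a demo key.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo Issuing CA")])
builder = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=30))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
)

no_eku = builder.sign(key, hashes.SHA256())      # profile missing the EKU
with_eku = builder.add_extension(                # profile with a TLS-only EKU
    x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False
).sign(key, hashes.SHA256())

print(has_eku(no_eku), has_eku(with_eku))
```

In practice one would run the demo cert through a full linter such as zlint 
rather than a single hand-written check; the point is only that the manual 
review step can be backed by code.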

We used to treat renewals separate from new issuance. I think there’s still a 
sense that they “are” different, but that’s been changing. I’m definitely 
looking forward to hearing what other CAs do.

Jeremy


From: Wayne Thayer 
Sent: Tuesday, October 8, 2019 3:20 PM
To: Ryan Sleevi 
Cc: Jeremy Rowley ; mozilla-dev-security-policy 

Subject: Re: Mozilla Policy Requirements CA Incidents

Ryan,

Thank you for pointing out these incidents, and for raising the meta-issue of 
policy compliance. We saw similar issues with CP/CPS compliance with changes in 
the 2.5 and 2.6 versions of the policy, with little explanation beyond "it's 
hard to update our CPS" and "oops". Historically, our approach has been to 
strive to communicate policy updates to CAs with the assumption that they will 
happily comply with all of the requirements they are aware of. I don't think 
that's a bad thing to continue, but I agree it is not working.

Having said that, I do recognize that translating "Intermediates must contain 
EKUs" into "don't renew this particular certificate" across an organization 
isn't as easy as it sounds. I'd be really interested in hearing how CAs are 
successfully managing the task of adapting to new requirements, and whether 
there is something we can do to encourage all CAs to adopt best practices in 
this regard. Our reactive options short of outright distrust are limited, so I 
think it would be worthwhile to focus on new preventive measures.

Thanks,

Wayne

On Tue, Oct 8, 2019 at 11:02 AM Ryan Sleevi via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
On the topic of root causes, there's also
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3425554 that was
recently published. I'm not sure if that was peer reviewed, but it does
provide an analysis of m.d.s.p and Bugzilla. I have some concerns about the
study methodology (for example, when incident reports became normalized is
relevant, as well as incident reporting where security researchers first
went to the CA), but I think it looks at root causes a bit holistically.

I recently shared on the CA/B Forum's mailing list another example of
"routine" violation:
https://cabforum.org/pipermail/servercert-wg/2019-October/001154.html

My concern is that, 7 years later, while I think that compliance has
marginally improved (largely due to things led by outside the CA ecosystem,
like CT and ZLint/Certlint), I think the answers/responses/explanations we
get are still falling into the same predictable buckets, and that concerns
me, because it's neither sustainable nor healthy for the ecosystem.


   - We misinterpreted the requirements. It said X, but we thought it meant
   Y (Often: even though there's nothing in the text to support Y, that's just
   how we used to do business, and we're CAs so we know more than browsers
   about what browsers expect from us)
   - We weren't paying attention to the updates. We've now assigned people
   to follow updates.
   - We do X by saying our staff should do X. In this case, they forgot.
   We've retrained our staff / replaced our staff / added more staff to
   correct this.
   - We had a bug. We did not detect the bug because we did not have tests
   for this. We've added tests.
   - We weren't sure if X was wrong, but since no one complained, we
   assumed it was OK.
   - Our auditor said it was OK
   - Our vendor said it was OK

and so forth.

And then, in the responses, we generally see:

   - These certificates are used in Very Important Systems, so even though
   we said we'd comply, we cannot comply.
   - We don't think X is actually bad. We think X should be OK, and it
   should be Browsers that reject X if they don't like X (implicit: But they
   should still trust our CA, even though we aren't doing what they want)
   - Our vendor is not able to develop a fix in time, so we need more time.
   - We agree that X 

Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 8, 2019 at 8:16 PM Jeremy Rowley 
wrote:

> I think requiring publication of profiles for certs is a good idea. It’s
> part of what I’ve wanted to publish as part of our CPS. You can see most of
> our profiles here:
> https://content.digicert.com/wp-content/uploads/2019/07/Digicert-Certificate-Profiles.pdf,
> but it doesn’t include ICAs right now. That was an oversight that we should
> fix.
>

FWIW, if you want inspiration for your updates, I'm super enamored with the
following CP/CPSes and their approach to disclosure:
- Izenpe:
http://www.izenpe.eus/contenidos/informacion/doc_especifica/en_def/adjuntos/Certificates_Profile.pdf
- SwissSign: http://repository.swisssign.com/SwissSign-Gold-CP-CPS.pdf (See
7.1)
- Sectigo: https://sectigo.com/uploads/files/Sectigo-CPS-v5.1.5.pdf (see
Appendix C)


> Publication of profiles probably won’t prevent issues related to
> engineering snafus or more manual procedures. However, publication may
> eliminate a lot of the disagreement on BR/Mozilla policy wording. That’s a
> lot more work though for the policy owners so the community would probably
> need to be more actively involved in reviewing profiles. Requiring
> publication at least gives the public a chance to review the information,
> which may not exist today.
>
>
>
> The manual component definitely introduces a lot of risk in sub CA
> creation, and the explanation I gave is broader than renewals. It’s more
> about the risks currently associated with Sub CAs. The difference between
> renewal and new issuance doesn’t exist at DigiCert – we got caught on that
> issue a long time ago.
>

Right, I don't discount that manual issuance is hard. For example, 100% of
Amazon Trust Service's incidents have been related to manual issuance, and
not necessarily sub-CAs (
https://bugzilla.mozilla.org/show_bug.cgi?id=1569266 ,
https://bugzilla.mozilla.org/show_bug.cgi?id=1574594 ,
https://bugzilla.mozilla.org/show_bug.cgi?id=1525710 ). I highlight this,
because Amazon has generally been extremely on-the-ball in tooling and
infrastructure to detect issues (e.g. certlint), and yet was still bitten 
when it came to manual issuance.

Yet, going back to the original problem: do we believe that the CA
communications are sufficient to raise awareness such that when a CA is
implementing a manual review process, they'll implement it correctly? If we
don't, then what can we do to improve? If we do, then what should we do
when CAs drop the ball?

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 3:52 PM, Peter Gutmann  wrote:
> 
> Paul Walsh ​ writes:
> 
>> I would like to see one research paper published by one browser vendor to
>> show that website identity visual indicators can not work.
> 
> Uhhh... are you serious with that request?  You're asking for a study from a
> browser vendor, a group who in any case don't publish research papers but
> write browsers, indicating that their own UI doesn't work?

[PW] I see where you are coming from Peter. I wouldn’t expect any browser 
vendor to provide studies or evidence to explain why they’re implementing 
features. And separately, I wouldn’t expect Google to provide anything to 
anyone for any reason, because they pretty much do what they do for profit. 
Chrome dev is directed by advertising dollars, not by privacy or user safety. 

However, I'd love to think that the Mozilla team still care about the developer 
community and end users more than they care about profit [1] or following other 
browser vendors. Firefox isn’t the “leader” it was, but I still love the brand 
and cause.  

I’m sure you don’t need to be reminded that Mozilla is a foundation, but I 
personally wanted to remind myself of their core values. So with this in mind, 
I’d like to think that the team would stop and rethink decisions that have a 
massive impact on stakeholders and end-users. And when asked for some 
supporting evidence, they wouldn’t fall silent but engage in a meaningful 
debate.  

It has been a long time since my team or I were involved in any way, so this 
might have changed. 

[1] https://www.mozilla.org/en-US/about/ 
> 
>> I’d love you to show me the type of research I’ve asked for. I’m open to
>> learning more. I’m not new to this game. I worked on integrated browsers and
>> search engines in the 90’s at AOL.
> 
> If it's OK to cite peer-reviewed papers from universities published at
> conferences and in journals, I can dig up a few of those.

[PW] If you ever do find the time to dig them out, please do. No pressure.

- Paul

> 
> Peter.
> 
> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 1:16 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/1/2019 6:56 PM, Paul Walsh via dev-security-policy wrote:
>> New tools such as Modlishka now automate phishing attacks, making them 
>> virtually impossible for any browser or security solution to detect, and 
>> bypassing 2FA. Google has admitted that it’s unable to detect these phishing 
>> scams: they use a phishing domain, but instead of a fake website they proxy 
>> the legitimate website to steal credentials, including 2FA codes. This is why 
>> Google banned its users from signing into its own websites via mobile apps 
>> with a WebView. If Google can’t prevent these attacks, Mozilla can’t either.
> 
> I understand that Modlishka emplaces the phishing site as a MITM. This is yet 
> another reason for browser publishers to help train their users to use only 
> authentic domain names, and also to up their game on detecting and banning 
> phishing domains. I don't think it says much about the value, or lack 
> thereof, of EV certs. As has been cited repeatedly in this thread, most 
> phishing sites don't even bother to use SSL, indicating that most users who 
> can be phished aren't verifying the correct domain.

[PW] Ronald, I don’t believe better detection and prevention is the answer to 
anti-phishing - but not trying isn’t an option, obviously. With billions of 
dollars being invested in this area, and with hundreds of millions changing 
hands every year, the problem is getting worse. Every week we read about yet 
another security company with anti-phishing [insert fancy words here]. It 
ain’t working.

I believe I demonstrated in a previous message, with data and techniques, why 
it’s impossible for any company to detect every phishing URL or website. 

And I’m afraid you’re incorrect about SSL certs. According to Webroot, over 93% 
of all new phishing sites use an SSL certificate. And according to MetaCert 
it’s over 95%.

And of those with a DV cert, over 95% come from Let’s Encrypt - because the 
certs are automatically issued for free, and Let’s Encrypt does near-zero 
detection, prevention or cert revocation. This is why over 14,000 SSL certs 
were issued by Let’s Encrypt for domains with “PayPal” in them - so if you 
believe in better detection and prevention, why don’t you/we request this of 
Let’s Encrypt?

Why isn’t anyone’s head blowing up over the Let’s Encrypt stats? If people 
think “EV is broken” they must think DV is stuck in hell with broken legs.

It’s impossible to properly verify the domain by looking at it - you need to 
carry out other checks. It’s simply not solving the problem. 

I provided data and insight to how website identity UI can work - I’d really 
love to hear counterarguments around that, or agreement that it’s useful. 

- Paul

> 
> -R
> 
> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Matt Palmer via dev-security-policy
On Tue, Oct 08, 2019 at 07:16:59PM -0700, Paul Walsh via dev-security-policy 
wrote:
> Why isn’t anyone’s head blowing up over the Let’s Encrypt stats?

Because those stats don't show anything worth blowing up one's head over.  I
don't see anything in them that indicates that those 14,000 certificates --
or even one certificate, for that matter -- was issued without validating
control over the domain name(s) indicated in the certificates.

EV and DV serve different purposes, and while DV is more-or-less solving the
problem it sets out to solve, the credible evidence presented shows that EV
does not solve any problem that browsers are interested in.

> If people think “EV is broken” they must think DV is stuck in hell with
> broken legs.

Alternately, people realise that EV and DV serve different purposes through
different methods, and thus cannot be compared in the trivial and flippant
way you suggest.

- Matt



Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Erwann Abalea via dev-security-policy
Bonsoir,

Le lundi 7 octobre 2019 20:53:11 UTC+2, Ryan Sleevi a écrit :
[...]
> # Intermediates that do not comply with the EKU requirements
> 
> In September 2018 [1], Mozilla sent a CA Communications reminding CAs about
> the changes in Policy 2.6.1. One specific change, called to attention in
> ACTION 3, required the presence of EKUs for intermediates, and the
> separation of e-mail and SSL/TLS from the intermediates. This requirement,
> while new to Mozilla Policy, was not new to publicly trusted CAs, as it
> matched an existing requirement from Microsoft's Root Program [2]. This
> requirement was first introduced by Microsoft in July 2015, with their
> Version 2.0 of their own policy.
> 
> It's a reasonable expectation to expect that all CAs in both Microsoft and
> Mozilla's program would have been conforming to the stricter requirement of
> Microsoft, which goes above-and-beyond the Baseline Requirements. However,
> Mozilla still allowed a grandfathering in of existing intermediates,
> setting the new requirement for their policy at 2019-01-01. Mozilla also
> set forth certain exclusions to account for cross-signing.
> 
> Despite that, four CAs have violated this requirement in 2019:
> * Microsoft: https://bugzilla.mozilla.org/show_bug.cgi?id=1586847
> * Actalis: https://bugzilla.mozilla.org/show_bug.cgi?id=1586787
> * QuoVadis: https://bugzilla.mozilla.org/show_bug.cgi?id=1586792
> * NetLock: https://bugzilla.mozilla.org/show_bug.cgi?id=1586795
> 
> # Authority Key Identifier issues
> 
> RFC 5280, Section 4.2.1.1 [3], defines the Authority Key Identifier
> extension. Within RFC 5280, it states that (emphasis added)
> 
>The identification MAY be based on ***either*** the
>key identifier (the subject key identifier in the issuer's
>certificate) ***or*** the issuer name and serial number.
> 
> That is, it provides an either/or requirement for this field.

If this is to be read as an exclusive choice, then how do you interpret the 
third paragraph of clause 4.2:

   Conforming CAs MUST support key identifiers (Sections 4.2.1.1 and
   4.2.1.2), basic constraints (Section 4.2.1.9), key usage (Section
   4.2.1.3), and certificate policies (Section 4.2.1.4) extensions.

Does that mean that CAs MUST exclusively choose between keyIdentifier or 
issuerName+serialNumber, while at the same time use keyIdentifier? Just get rid 
of the issuerName+serialNumber, then.

Now go down to Appendix A.2 containing the ASN.1 module, you'll find some 
comments in the definition (that's the way lazy ASN.1 writers try to express 
constraints):

AuthorityKeyIdentifier ::= SEQUENCE {
    keyIdentifier             [0] KeyIdentifier            OPTIONAL,
    authorityCertIssuer       [1] GeneralNames             OPTIONAL,
    authorityCertSerialNumber [2] CertificateSerialNumber  OPTIONAL }
    -- authorityCertIssuer and authorityCertSerialNumber MUST both
    -- be present or both be absent

Here, again, the constraint is on presence or absence of both issuer and 
serial, nothing on presence of both keyIdentifier and the (issuer,serial) tuple.

> Despite this
> not being captured in the updated ASN.1 module defined in RFC 5912 [4],
> Mozilla Root Store Policy has, since Version 1.0 [5], included a
> requirement that CAs MUST NOT issue certificates that have (emphasis added)
> "incorrect extensions (e.g., SSL certificates that exclude SSL usage,
> or ***authority
> key IDs that include both the key ID and the issuer's issuer name and
> serial number)***;"

Isn't it strange that while RFC5912 modified the ExtendedKeyIdentifier 
definition to add ASN.1 constraints on presence or absence of both 
authorityCertIssuer/authorityCertSerialNumber elements, nothing has been added 
to extend the same constraint forbidding presence of keyIdentifier and 
issuer+serial? It would have been really easy if it was intended that way.

I'll let participants read X.509 clause 8.2.2.1/9.2.2.1/12.2.2.1 (depending on 
the edition you're reading) to discover that the ASN.1 definition is equal to 
RFC5912's one since 1997 (first edition of X.509v3), and find that both 
keyIdentifier and issuer+serial are explicitly permitted (given that all is 
consistent). That's 6 successive revisions since, and it hasn't changed.
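Whichever reading of RFC 5280 prevails, the combination Mozilla's policy 
objects to - an AKI carrying both the key ID and the issuer+serial tuple - is 
mechanically checkable. A sketch with the pyca/cryptography library 
(illustrative only, not a normative lint):

```python
from cryptography import x509
from cryptography.x509.oid import NameOID


def aki_has_both_forms(aki: x509.AuthorityKeyIdentifier) -> bool:
    """True when the AKI carries both the keyIdentifier and the
    issuer+serial tuple, the form Mozilla Root Store Policy calls incorrect."""
    return (
        aki.key_identifier is not None
        and aki.authority_cert_issuer is not None
        and aki.authority_cert_serial_number is not None
    )


issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo Root CA")])

# Both identification forms present at once (the combination at issue).
both = x509.AuthorityKeyIdentifier(
    key_identifier=b"\x01" * 20,
    authority_cert_issuer=[x509.DirectoryName(issuer)],
    authority_cert_serial_number=1,
)
# The common key-ID-only form.
key_only = x509.AuthorityKeyIdentifier(
    key_identifier=b"\x01" * 20,
    authority_cert_issuer=None,
    authority_cert_serial_number=None,
)

print(aki_has_both_forms(both), aki_has_both_forms(key_only))
```

Note that the library itself enforces only the ASN.1 comment quoted above: 
authorityCertIssuer and authorityCertSerialNumber must be present or absent 
together.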


Now, if strict compliance with RFC5280 is required, I'd like to understand how 
Mozilla NSS can be compliant with the following paragraph, taken from RFC5280 
clause 4.2:

   At a minimum, applications conforming to this profile MUST recognize
   the following extensions: key usage (Section 4.2.1.3), certificate
   policies (Section 4.2.1.4), subject alternative name (Section
   4.2.1.6), basic constraints (Section 4.2.1.9), name constraints
   (Section 4.2.1.10), policy constraints (Section 4.2.1.11), extended
   key usage (Section 4.2.1.12), and inhibit anyPolicy (Section
   4.2.1.14).

To my knowledge, unless this has changed in the past months, NSS doesn't 
properly handle CertificatePolicies, PolicyConstraints, and InhibitAnyPolicy.

Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 8, 2019 at 6:42 PM Jeremy Rowley 
wrote:

> Tackling Sub CA renewals/issuance from a compliance perspective is
> difficult because of the number of manual components involved. You have the
> key ceremony, the scripting, and all of the formal process involved.
> Because the root is stored in an offline state and only brought out for a
> very intensive procedure, there is a lot that can go wrong compared to
> end-entity certs, including bad profiles and bad coding. These events are
> also things that happen rarely enough that many CAs might not have well
> defined processes around. A couple things we’ve done to eliminate issues
> include:
>
>
>
>1. 2 person review over the profile + a formal sign-off from the
>policy authority
>    2. A standard scripting tool for generating the profile to ensure only
>    the subject info in the cert changes.  This has some basic linting.
>3. We issue a demo cert. This cert is exactly the same as the cert we
>want to issue but it’s not publicly trusted and includes a different
>serial. We then review the demo cert to ensure profile accuracy. We should
>run this cert through a linter (added to my to-do list).
>
>
>
> We used to treat renewals separate from new issuance. I think there’s
> still a sense that they “are” different, but that’s been changing. I’m
> definitely looking forward to hearing what other CAs do.
>

It's not clear: are you suggesting that the configuration of sub-CA profiles
is more, less, or as risky as that of end-entity certificates? It would
seem that, regardless, the need for review and oversight is the same, so
I'm not sure that #1 or #2 would be meaningfully different between the two
types of certificates?

That said, of the incidents, only two of those were potentially related to
the issuance of new versions of the intermediates (Actalis and QuoVadis).
The other two were new issuance.

So I don't think we can explain it as entirely around renewals. I
definitely appreciate the implicit point you're making: that every
manual action of a CA, or more generally every action that requires a
human to be involved, is an opportunity for failure. It seems that we should
replace all the humans, then, to mitigate the failure? ;)

To go back to your transparency suggestion, would we be better off if:
1) CAs were required to strictly disclose every single certificate profile
for everything "they sign"
2) Demonstrate compliance by updating their CP/CPS to the new profile, by
the deadline required. That is, requiring all CAs update their CP/CPS prior
to 2019-01-01.

Would this prevent issues? Maybe - but only to the extent that CAs view their
CP/CPS as authoritative and strictly review what's in them. I worry that such a
solution would lead to the "We published it, you didn't tell us it was bad"
sort of situation (as we've seen with audit reports), which then goes further
down a rabbit-hole of requiring CP/CPSes to be machine readable, and then
tools to lint CP/CPSes, etc. By the time we've added all of this complexity,
I think it's reasonable to ask if the problem is not the humans in the
loop, but the wrong humans (i.e. going back to distrusting the CA). I know
that's jumping to conclusions, but it's part of what taking an earnest look
at these issues are: how do we improve things, what are the costs, are
there cheaper solutions that provide the same assurances?


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Ryan Sleevi via dev-security-policy
(Sorry for the second e-mail, Erwann still having some Groups issues - this
will be the one that shows up on the list)

On Tue, Oct 8, 2019 at 6:43 PM Erwann Abalea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> If this is to be read as an exclusive choice, then how do you interpret
> third paragraph of clause 4.2:
>
>Conforming CAs MUST support key identifiers (Sections 4.2.1.1 and
>4.2.1.2), basic constraints (Section 4.2.1.9), key usage (Section
>4.2.1.3), and certificate policies (Section 4.2.1.4) extensions.
>
> Does that mean that CAs MUST exclusively choose between keyIdentifier or
> issuerName+serialNumber, while at the same time use keyIdentifier? Just get
> rid of the issuerName+serialNumber, then.
>

English language plurality? That is talking about "subject key identifiers
and authority key identifiers" - not about keyIdentifiers. From that same
section you're quoting (4.2), the phrase is repeated for applications, but
with a slight twist:

   In addition, applications conforming to this profile SHOULD recognize
   the authority and subject key identifier (Sections 4.2.1.1 and
   4.2.1.2) and policy mappings (Section 4.2.1.5) extensions.

It was just an editorial quirk by the RFC editor relating to there being
"two" items on the latter list, but "four" items on the former.


> > Despite this
> > not being captured in the updated ASN.1 module defined in RFC 5912 [4],
> > Mozilla Root Store Policy has, since Version 1.0 [5], included a
> > requirement that CAs MUST NOT issue certificates that have (emphasis
> added)
> > "incorrect extensions (e.g., SSL certificates that exclude SSL usage,
> > or ***authority
> > key IDs that include both the key ID and the issuer's issuer name and
> > serial number)***;"
>
> Isn't it strange that while RFC5912 modified the AuthorityKeyIdentifier
> definition to add ASN.1 constraints on the presence or absence of both
> authorityCertIssuer/authorityCertSerialNumber elements, nothing has been
> added to extend the same constraint to forbid the presence of both
> keyIdentifier and issuer+serial? It would have been really easy if it was
> intended that way.
>

I don't think that "strange" is relevant here, particularly as it relates
to Mozilla policy? I was trying to head off that argument, but you jumped
full into it with the reference to A.2. That is, even if you want to
suggest that A.2. of 5280 permits it, or that 5912 permits it, or that
X.509 (which no browser really pays attention to, ITU-T being what it is),
that argument would be a moot argument in the presence of the Policy 1.0.


> Now, if a strict compliancy to RFC5280 is required, I'd like to understand
> how Mozilla NSS can be compliant with the following paragraph, taken from
> RFC5280 clause 4.2:
>
>At a minimum, applications conforming to this profile MUST recognize
>the following extensions: key usage (Section 4.2.1.3), certificate
>policies (Section 4.2.1.4), subject alternative name (Section
>4.2.1.6), basic constraints (Section 4.2.1.9), name constraints
>(Section 4.2.1.10), policy constraints (Section 4.2.1.11), extended
>key usage (Section 4.2.1.12), and inhibit anyPolicy (Section
>4.2.1.14).
>
> To my knowledge, unless this has changed in the past months, NSS doesn't
> properly handle CertificatePolicies, PolicyConstraints, and
> InhibitAnyPolicy.
>

It's entirely consistent to require CAs to conform to the RFC 5280 profile,
without requiring applications like NSS conform to the 5280 profile. It's
not even a double-standard: it's two entirely separable pieces. So it's not
worth responding to.


Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy
> On Oct 2, 2019, at 3:41 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/2/2019 3:00 PM, Paul Walsh via dev-security-policy wrote:
>> On Oct 2, 2019, at 2:52 PM, Ronald Crane via dev-security-policy 
>>  wrote:
> [snip]
>>> Some other changes that might help reduce phishing are:
>>> 1. Site owners should avoid using multiple domains, because using them 
>>> habituates users to the idea that there are several valid domains for a 
>>> given entity. Once users have that idea, phishers are most of the way to 
>>> success. Some of the biggest names in, e.g., brokerage services are 
>>> offenders on this front.
>> [PW] Companies like Google own so many domains and sub-domains that it’s 
>> difficult to stay ahead of them. I think this is an unrealistic expectation. 
>> So if other browser vendors have the same opinion, they should look inward.
> It is not unrealistic to expect, e.g., Blahblah Investments, SIPC, to use 
> only "www.blahblahinvestments.com" for everything related to its retail 
> investment services. It *is* unreasonable to habituate users to bad practices.

[PW] I hear you Ronald. And I agree. My point was that it’s unrealistic for us 
to expect this pattern of domain use to change. I can’t see how any stakeholder 
can force or encourage organizations to use a single domain name or even a 
small number of them for a given purpose. So there’s little point in directing 
energy to something we can’t change.


>>> 2. Site owners should not use URL-shortening services, for the same reason 
>>> as (1).
>> Site owners using shortened URLs isn’t the problem in my opinion. Even if 
>> shortened URLs went away, phishing wouldn’t stop. Unless you have research 
>> that provides more insight?
> Where did I say that phishing would "stop" if URL shortening services 
> disappeared? I said avoiding them would be helpful, since it would reinforce 
> the idea that there is one correct domain per entity, or at least per entity 
> service. Probably all the entity services should be subdomains of the one 
> correct domain, but alas it will take a sustained security campaign and a 
> decade to make a dent in that problem.

[PW] I apologize if I gave the impression that you were saying something that 
you were not. That wasn’t my intention. We can try to encourage companies to 
stop using shortening services, but we’re not likely to have much of an impact. 
People who don’t belong to a brand or organization will continue to use 
shortening services too. 

I have some ideas for shortening services. They can implement better trust. 
Example: a URL that belongs to a site with website identity verified, could 
look like https://verified.tinyurl.com/345kss or they could direct to a TinyURL 
webpage where it informs the user of the verified destination.


>>> 3. Site owners should not use QR codes, since fake ones are perfect for 
>>> phishing.
>> Same as above. You don’t need to mask URLs to have a successful phishing 
>> campaign.
> No, you don't "need" to do it. It is, however, a very useful weapon in 
> phishers' quivers.

[PW] I totally agree. I’d like to add, of the hundred million apps with a 
WebView, many don’t display the URL at all. We also have Google’s AMP project 
which does little to help. And then we also have social media cards and 
previews where it’s possible to trick the system by displaying the og metadata 
from the real website while linking to the malicious destination. Rabbit hole…

>> sɑlesforce[.com] is available for purchase right now.
> 
> I was going to suggest banning non-Latin-glyph domains, since they are yet 
> another useful phishing weapon. FF converts all such domains into Punycode 
> when typed or pasted into the address bar, though the conversion is displayed 
> below the address bar, not in it. So your example becomes 
> "http://xn--slesforce-51d.com/".
> 
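The conversion Ronald describes is easy to reproduce with Python's built-in 
idna codec (a sketch; the stdlib codec implements the older IDNA 2003 
algorithm, which can differ from modern browsers for some code points):

```python
# U+0251 (LATIN SMALL LETTER ALPHA) renders like "a" but is a distinct code point.
spoof = "s\u0251lesforce.com"

print(spoof.encode("idna"))             # ToASCII / Punycode form of the look-alike
print("salesforce.com".encode("idna"))  # the genuine ASCII domain passes through
```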
>> 
>>> 4. Browser publishers should petition ICANN to revoke most of the gTLDs it 
>>> has approved, since they provide fertile ground for phishing.
>> Petitioning them won’t work. gTLDs are here to stay, even if we dislike 
>> them. Also, most phishing sites use .com and other well known TLDs. I’m not 
>> saying gTLDs aren’t used, they are. But they’re not needed.
> Of course they're not "needed" for phishing. They are, however, useful for 
> phishing.
>> So, bringing it back to Mozilla. I’d still love to see recent research/data 
>> to back up Mozilla’s decision to remove identity UI in Firefox. By promoting 
>> the padlock without education about phishing, browser vendors are actually 
>> making the web more dangerous.
> 
> I also would like to see more research.

- Paul

> 
> -R
> 
> 


Re: [FORGED] Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 4:05 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/2/2019 3:27 PM, Peter Gutmann wrote:
>> Ronald Crane via dev-security-policy  
>> writes:
>> 
>>> "Virtually impossible"? "Anyone"? Really? Those are big claims that need 
>>> real
>>> data.
>> How many references to research papers would you like?  Would a dozen do, or
>> do you want two dozen?
> One well-done paper would do.
>> I'm pretty sure I haven't been phished yet.
>> How would you know?
> 
> Since most phishing appears to be financial, I would expect unauthorized 
> withdrawals from financial accounts, unauthorized credit card charges, 
> unordered packages showing up, dunning notices from the IRS because I filed 
> my tax returns with a phisher, etc. I haven't observed these indicia of 
> getting phished.

[PW] I agree that financial gain is a strong incentive. But it’s by no means 
the only one. 

According to Verizon, 93% of data breaches start with phishing, typically to 
steal credentials. 

Here’s what happens:

Marriott Starwood Hotels, Aadhaar, Exactis, MyFitnessPal and Quora were 
breached last year.
Over 2 billion records were compromised.

Most people changed their password on the site that was compromised.
Most people use the same password for many services.
Most people didn’t change their credentials on sites that weren’t compromised.
A threat actor searches one or more breach databases for a company or person 
and buys their credentials, or simply buys them in bulk.
The threat actor tries the person’s credentials on internal systems or 
services holding sensitive information.
Another company is compromised.
Loop.
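The reuse loop described above can be sketched as a toy simulation. All account names, sites, and passwords below are hypothetical; this only illustrates why password reuse lets one breach cascade into others:

```python
# Toy simulation of the credential-reuse loop: credentials leaked from one
# breached site are replayed against other services. Purely illustrative.

# Credentials leaked in a breach of "siteA" (hypothetical dump).
breach_dump = {"alice@example.com": "hunter2", "bob@example.com": "letmein"}

# Other services where the same users have accounts (hypothetical).
other_services = {
    "siteB": {"alice@example.com": "hunter2",   # reused -> vulnerable
              "bob@example.com": "unique-pw"},  # unique -> safe
    "siteC": {"alice@example.com": "hunter2"},  # reused -> vulnerable
}

def stuffing_hits(dump, services):
    """Return (service, user) pairs where leaked credentials still work."""
    hits = []
    for service, accounts in services.items():
        for user, leaked_pw in dump.items():
            if accounts.get(user) == leaked_pw:
                hits.append((service, user))
    return hits

print(stuffing_hits(breach_dump, other_services))
# -> [('siteB', 'alice@example.com'), ('siteC', 'alice@example.com')]
```

Bob’s unique password stops the cascade at siteB; Alice’s reused password compromises her accounts everywhere, which is the "loop" in the list above.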

While the media talks about hacking, breaches and other cool “cyber” terms, 
what they’re not saying is that social engineering is at the core of many of 
these attacks. Social engineering is cheaper, quicker and easier than trying to 
find computer- or network-based vulnerabilities. 

The latter does happen, and there are many amazing security professionals 
building systems to detect and prevent those types of attacks. I’m not one of 
them because I’m not smart enough to address those weaknesses. 

> 
>> And how does this help the other 7.53 billion people who
>> will be targets for phishers?
> Alas it doesn't. We do need better phishing prevention. Do you have a 
> suggestion?

[PW] While phishing detection and prevention are improving all the time, they 
will never be good enough. It’s much easier for a user to know that PayPal.com 
is who they think it is based on a visual indicator than it is to detect the 
14,000 PayPal phishing sites served with a Let’s Encrypt DV certificate. 

Yes, I just went there :)

- Paul


>>> In any case, have we ever really tried to teach users to use the correct
>>> domain?
>> Yes, we've tried that.  And that.  And that too.  And the other thing.  Yes,
>> that too.
>> 
>> None of them work.
> 
> Please cite the best study you know about on this topic (BTW, I am *not* 
> snidely implying that there isn't one).
> 
> -R
> 
