When should honest subscribers expect sudden (24 hours / 120 hours) revocations?

2018-12-27 Thread Jakob Bohm via dev-security-policy
Looking at the BRs, specifically BR 4.9.1, the reasons that can lead 
to fast revocation fall into a few categories / groups:

(I will reference the numbered items with 24 hour limit as A#, the numbered 
items with 120 hour limit as B# and the numbered items in 4.9.1.2 as C#).

(Some of the numbered items A1 to C9 fall under different categories 
depending on concrete circumstances).

G1. Explicit actions by the subscriber themselves (A1, A2, B4, B6):
   These are triggered and timed by their own actions and are not really 
  unexpected or sudden.

G2. Dishonest actions by the subscriber themselves (A2, B2, B3, B4, 
   B5, B8):
   The subscriber brought this upon themselves, no (or little) mercy.

G3. An underlying security failure in the subscriber's 
  systems/organization (A3, B5, B11):
   These are easily explained by the actual security incident, and any 
  haste and recrimination goes to that incident, the certificate 
  revocation is a necessary action to protect the subscriber from the 
  effects of the security failure.

G4. Massive failure at the CA (B9, all of C):
   This is typically a major situation affecting large parts of the 
  Internet (unless it was an unusually small CA).  Global disaster
  mitigation is generally initiated in such rare situations.

G5. The CAB/F changes the minimum key strength requirements in BR 6.1.5 or
  6.1.6 without a transition period (B1, C3): This would typically be voted 
  down by a majority of CAs.

G6. The CA has second thoughts / doubts about how they validated the 
  information in the certificate even though that information is actually 
  correct (A4, B7):
   This is an operational failure at the CA, and rightly justifies blame 
  against the CA (however it would be nice if those two BR points allowed 
  retroactive correction similar to A2 and C2).

G7. The CA made a technical error in the certificate (B7, C5):
   Again an operational error that justifies blaming the CA.

G8. CA specific rules not required by the BRs (B10, C9):
   Clearly blameable on the CA, and possibly a reason to not choose that 
  CA in the first place.

So absent a bad CA, I wonder whether there is a rule that subscribers 
should be ready to quickly replace certificates due to actions far 
outside their own control.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Underscore characters

2018-12-27 Thread Matt Palmer via dev-security-policy
On Fri, Dec 28, 2018 at 03:19:19AM +, Jeremy Rowley via dev-security-policy 
wrote:
> > I'm not sure I'd call it "leniency", but I think you're definitely asking 
> > for "special treatment" -- pre-judgment on a potential incident so you can 
> > decide whether or not it's worth it (to DigiCert) to deliberately break the 
> > rules.
>
> I'm not sure there's a policy against asking for special treatment or 
> pre-judgment. Like I said, I feel like this is a weird area where I'm not 
> 100% 
> sure how to proceed.
There's certainly a fuzzy area in the middle between "here is a problem,
what should we do?" and the other extreme of "please let me know in advance
if we'll be OK with doing this bad thing, because I'd like to decide whether
it's worth breaking the rules".  I have to say that several of your messages
have read far more towards the latter than the former.

Of course, the ability to distinguish is muddied by the need for you to
provide specific data about the scope of the problem, which focuses things
on just DigiCert, when there is the distinct possibility that other CAs are
sitting quietly in the wings, having all the data but not wanting to step
into the ring, as it were.

> Like how do you raise it when you think obedience to rules 
> is riskier than breaking them? Breaking them then explaining why seems like a 
> really bad idea. The best I could come up with is ask what to do and see if 
> the browsers agree. Acknowledged that this would be very bad in most cases, 
> but I'm not sure where you decide?

I think you've followed the best course open to you.  Talking about issues
is pretty much guaranteed to be better than keeping quiet and hoping for the
best (thanks, CT!).

Certainly, knowingly breaking the rules and then having it turn up later is
terrible -- as Ryan said, that's a quick way to get yourself distrusted.  I
certainly think that if any other CA comes out with an incident report
post-Jan-15 dealing with unrevoked underscore-bearing certificates, the
general reaction is going to be along the lines of, "are you 
*kidding* me?!?".

> > What were the criteria by which DigiCert decided which customers to grant 
> > exceptions to?

[snip]

> Honestly, it came down to which ones were the most mad at me for telling
> them I am going to revoke their certs.

I can imagine...

> > First off, your customers.  There is a certain amount of exposition in the 
> > pharmacy company bug, however I can't say that what's there so far fills me 
> > with a sense of contentment.  You said in your most recent post, "Security 
> > vulnerabilities are patched based on their rating", and that lacking a CVSS 
> > it is difficult to get recognition of a problem.  Would it be fair to say 
> > that this narrow approach to security is shared by all/most/some/none of 
> > the 
> > other similarly situated customers?
>
> No, but it's generally how people can get exceptions to the blackout period. 
> More the norm is around how these certs are rolled out. They fall under three 
> camps: a) a third party offering the main company's service that requires a 
> bunch of testing and permissions (probably contractual), b) complicated 
> policies about changes during/around blackout periods and c) certs actually 
> used in software that require code changes and deployment to update.

Those are useful categories to have, thanks.  It's especially handy for CAs
to bear in mind when they're communicating with their customers about the
risks of deeply embedding data which may need to change at short notice.

> > Focusing on the "what about next time?" aspect, which I believe is the most 
> > important, I'd be interested to know what your customers are planning on 
> > changing about their systems and processes, such that if a similar event 
> > happens in the future, the outcome won't be the same.
>
> After this, I'd like to talk about removing some of the Symantec roots from 
> Mozilla. A lot of these don't need trust in Mozilla and Chrome. The mix is in 
> the OS vs. Web ecosystem. They need trust in OS platforms, but Web is more 
> optional for a lot of the certs.  If we have roots that are only trusted in 
> the two OS platforms (MS and Apple), the risk changes for the web community.

I wonder how well that'll work out, given the dominant server platform
(Linux, in its many and varied incarnations) generally sources its trust
store from Mozilla (for better or worse).  Given the highly variable
timeline that distros have for updating their trust stores, you might be
dealing with the fallout from that one for a *long* time to come.

> > Hence, what is it that DigiCert plans to change, such that an equivalent 
> > result cannot happen in the future, given a similar event?  There was one 
> > rather draconian possibility suggested up-thread, of DigiCert limiting 
> > itself to 100 days validity, and revoking a number of randomly-chosen 
> > certificates periodically.  That would certainly remove any practical 
> > possibility 

RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
>> I think Matt provided a pretty clear moral hazard here - of customers 
>> suggesting their CAs didn't do enough (e.g. should have tried harder, 
>> should have intentionally violated by not revoking). One significant way 
>> of mitigating that risk is to take meaningful steps to ensure that "We 
>> couldn't revoke" is not really a viable or defensible option.

 

Oh – thanks. I missed that. A lack of knowledge is already not a defensible 
position. Revocation requirements and an agreement to revoke within 24 hours 
are in all of our existing DigiCert contracts. The same language is going into 
all Symantec customer contracts now as customers transition to DigiCert 
systems. All of our documentation, including the CPS, says we can revoke with 
less than 1 day's notice. 

 

From section 4.9.1 of our CPS:

 

DigiCert will revoke a Certificate within 24 hours if one or more of the 
following occurs: 

1. The Subscriber requests in writing that DigiCert revoke the Certificate; 

2. The Subscriber notifies DigiCert that the original Certificate request 
was not authorized and does not retroactively grant authorization; 

3. DigiCert obtains evidence that the Subscriber’s Private Key corresponding 
to the Public Key in the Certificate suffered a Key Compromise; or 

4. DigiCert obtains evidence that the validation of domain authorization or 
control for any FQDN or IP address in the Certificate should not be relied 
upon. 

 

DigiCert may revoke a certificate within 24 hours and will revoke a Certificate 
within 5 days if one or more of  the following occurs: 

1. The Certificate no longer complies with the requirements of Sections 
6.1.5 and 6.1.6 of the CA/B Forum baseline requirements; 

2. DigiCert obtains evidence that the Certificate was misused; 

3. The Subscriber or the cross‐certified CA breached a material obligation 
under the CP, this CPS, or the relevant agreement; 

4. DigiCert confirms any circumstance indicating that use of an FQDN or IP 
address in the Certificate is no longer legally permitted (e.g. a court or 
arbitrator has revoked a Domain Name registrant’s right to use the Domain Name, 
a relevant licensing or services agreement between the Domain Name registrant 
and the Applicant has terminated, or the Domain Name registrant has failed to 
renew the Domain Name); 

5. DigiCert confirms that a Wildcard Certificate has been used to authenticate 
a fraudulently misleading subordinate FQDN; 

6. DigiCert confirms a material change in the information contained in the 
Certificate; 

7. DigiCert confirms that the Certificate was not issued in accordance with the 
CA/B forum requirements or the DigiCert CP or this CPS; 

8. DigiCert determines or confirms that any of the information appearing in the 
Certificate is inaccurate; 

9. DigiCert’s right to issue Certificates under the CA/B forum requirements 
expires or is revoked or terminated, unless DigiCert has made arrangements to 
continue maintaining the CRL/OCSP Repository; 

….

 

This is why I couch it as we can revoke technically and legally, but I don’t 
think we should. 

 

>> This doesn't really inspire confidence. If the answer for how to deal with 
>> this is block efforts to remediate issues, then it runs all the risk that 
>> Matt was speaking to. "We knew people couldn't replace in January" is a 
>> problem, for sure, but because fundamentally the risk is always there that 
>> someone would need to revoke in January - or December, or November, or 
>> whenever the sensitive holiday freeze or critical sales or lunar alignment 
>> or personal vacation is - it's not really a mitigation at all for the issue.

 

>> I tried to give suggestions earlier for meaningful steps - such as making 
>> sure all customers know that certificates may need to be revoked as soon as 
>> 24 hours. This has been a pattern of challenge in the past for DigiCert if I 
>> recall correctly - I believe both Blizzard and GitHub had issues where the 
>> keys were compromised, but these organizations didn't want to revoke the 
>> certs until they could ship new private keys in their software (... ignoring 
>> all the issues in that one). I know you've said you've got the contracts in 
>> place to defensibly revoke these, but how are you helping your users 
>> understand these risks? Do you have documentation on this? Do you recommend 
>> users use automation? I know some of this speaks to business practice, but I 
>> think that's somewhat core to the issue - since revocation may be required, 
>> how is the CA, the party best placed to communicate to the customer, 
>> communicating that necessity?

 

Sorry – I thought you meant in addition to those things. All customers know we 
can revoke within 24 hours. Note that in the GitHub case we did revoke the 
cert within 24 hours of notification. We have documentation on revocation 
(e.g. https://www.digicert.com/certificate-revocation.htm) and talk about it a 
lot. We also recommend 

Re: Underscore characters

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 10:00 PM Jeremy Rowley 
wrote:

> The risk Matt identified is too nebulous of an issue to address, tbh. How
> do you address a moral issue?  The only way I can think of to address the
> moral issue is to say “we promise to be good”. But the weight that carries
> depends on how much you trust the actor. If you trust the actor, then the
> moral issue is addressed. If you don’t trust the actor, moral issue is not
> addressed. If you or Matt can identify a specific threat you’d like me to
> address about the moral issue, I’ll do my best to respond.
>

I think Matt provided a pretty clear moral hazard here - of customers
suggesting their CAs didn't do enough (e.g. should have tried harder,
should have intentionally violated by not revoking). One significant way
of mitigating that risk is to take meaningful steps to ensure that "We
couldn't revoke" is not really a viable or defensible option.


>
>- What happens is that you ask why there is risk of outage to begin
>with and what can be done to improve going forward? Let’s assume you do
>revoke, and it causes an outage - is DigiCert taking steps to ensure no
>customer of theirs is ever faced with that risk? If so, what are those
>steps?
>
>
>
> Yeah – there are several things we can do to improve going forward:
>
>1. Communicate better with the customers. The first mistake was
>waiting until we had good data to communicate with the customers. This
>delayed notification. This was unknown to me at the time, or we would have
>sent out communication prior to the ballot passing. That instruction has
>been passed along (no waiting on these critical issues) plus training.
>2. No more skipping CAB Forum meetings for me. This was easily a
>foreseeable issue because we knew people couldn’t replace in January. I
>think it’s been brought up a half dozen times in the forum at least. I’m
>not sure why we didn’t communicate this in Shanghai. But, the real problem
>is I didn’t have direct knowledge of what was going on. I probably need to
>be there in person each time so we can align the company correctly with
>what is going on.
>
That... doesn't really inspire confidence. If the answer for how to deal
with this is block efforts to remediate issues, then it runs all the risk
that Matt was speaking to. "We knew people couldn't replace in January" is
a problem, for sure, but because fundamentally the risk is always there
that someone would need to revoke in January - or December, or November, or
whenever the sensitive holiday freeze or critical sales or lunar alignment
or personal vacation is - it's not really a mitigation at all for the issue.

I tried to give suggestions earlier for meaningful steps - such as making
sure all customers know that certificates may need to be revoked as soon as
24 hours. This has been a pattern of challenge in the past for DigiCert if
I recall correctly - I believe both Blizzard and GitHub had issues where
the keys were compromised, but these organizations didn't want to revoke
the certs until they could ship new private keys in their software (...
ignoring all the issues in that one). I know you've said you've got the
contracts in place to defensibly revoke these, but how are you helping your
users understand these risks? Do you have documentation on this? Do you
recommend users use automation? I know some of this speaks to business
practice, but I think that's somewhat core to the issue - since revocation
may be required, how is the CA, the party best placed to communicate to the
customer, communicating that necessity?

As Matt spoke to it somewhat, there's understandably competitive advantage
to being the CA that will try their hardest not to revoke. And while I
don't think this has risen to that level based on the information provided
so far, understanding how that perception is being mitigated is key. There
are other solutions, to be sure. Helping users move from publicly trusted
CAs to managed CAs, for example, can still meet the business needs of these
users w/o the attendant revocation risk.

Things like Heartbleed have shown that rapid revocation can be necessary.
Misissuance or misvalidation by the CA that results in revocation surely
can as well. Understandably, an answer of "Don't ever misissue" is great,
but it's really pinning all the hopes on one thing. Other CAs have taken
steps like ensuring automation and short-lived certs as a way of ensuring
that the upper-bound of any issue is limited (for example, to 90 days, or
six months), and that automation is the default way of getting certs.


>
>- And this is the framing that I think is incredibly helpful.
>Understanding why customers can’t change, and what steps are being done to
>ensure they can, is hugely useful. Wayne’s questions were to this point - as
>were mine towards understanding the problem from the other side, which are
>steps the CA is taking. As I've repeatedly 

RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
> I don't think there's *any* result from all this that everyone would 
> consider desirable -- otherwise we wouldn't need to have this conversation.
+ 1 to that.

> I'm not sure I'd call it "leniency", but I think you're definitely asking 
> for "special treatment" -- pre-judgment on a potential incident so you can 
> decide whether or not it's worth it (to DigiCert) to deliberately break the 
> rules.
I'm not sure there's a policy against asking for special treatment or 
pre-judgment. Like I said, I feel like this is a weird area where I'm not 100% 
sure how to proceed. Like how do you raise it when you think obedience to rules 
is riskier than breaking them? Breaking them then explaining why seems like a 
really bad idea. The best I could come up with is ask what to do and see if 
the browsers agree. Acknowledged that this would be very bad in most cases, 
but I'm not sure where you decide?

> What were the criteria by which DigiCert decided which customers to grant 
> exceptions to?  My default assumption is "whichever ones will cost us the 
> most money, on a risk-of-departure-weighted basis, if we revoke their 
> misissued certs", so if DigiCert's criteria was different, I'd be keen to 
> have my assumption changed.
Based on the number of certificates, the reasons the customer identified they 
couldn't make change, and whether revocation would take down a critical site. 
It actually isn't tied to $ at all. The largest issuer of certificates isn't 
on the exception list. Honestly, it came down to which ones were the most mad 
at me for telling them I am going to revoke their certs. I also filed the 
incident reports in that order.

> First off, your customers.  There is a certain amount of exposition in the 
> pharmacy company bug, however I can't say that what's there so far fills me 
> with a sense of contentment.  You said in your most recent post, "Security 
> vulnerabilities are patched based on their rating", and that lacking a CVSS 
> it is difficult to get recognition of a problem.  Would it be fair to say 
> that this narrow approach to security is shared by all/most/some/none of the 
> other similarly situated customers?
No, but it's generally how people can get exceptions to the blackout period. 
More the norm is around how these certs are rolled out. They fall under three 
camps: a) a third party offering the main company's service that requires a 
bunch of testing and permissions (probably contractual), b) complicated 
policies about changes during/around blackout periods and c) certs actually 
used in software that require code changes and deployment to update.

> As an aside, on the subject of "there's no CVSS score for this", let me fix 
> that up, with the official WombleSecure(TM)(R)(Patent Pending) CVSS for 
> "your certs are getting revoked":
https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:H/RL:O/RC:C/AR:H/MAV:N/MAC:L/MPR:N/MUI:N/MS:U/MC:N/MI:N/MA:H
7.5 base, 7.2 temporal, and 8.9 environmental.  All those scores are in the 
"high" band.  "Availability" *is* one of the sides of the security triangle, 
after all.
Lol - thanks. I'll be sure to share this with them.

> Focusing on the "what about next time?" aspect, which I believe is the most 
> important, I'd be interested to know what your customers are planning on 
> changing about their systems and processes, such that if a similar event 
> happens in the future, the outcome won't be the same.
After this, I'd like to talk about removing some of the Symantec roots from 
Mozilla. A lot of these don't need trust in Mozilla and Chrome. The mix is in 
the OS vs. Web ecosystem. They need trust in OS platforms, but Web is more 
optional for a lot of the certs.  If we have roots that are only trusted in 
the two OS platforms (MS and Apple), the risk changes for the web community.

> A similar question applies, even more forcefully, to DigiCert itself. 
> Clearly, whatever you've done so far didn't work, because these customers of 
> yours didn't heed whatever warnings and caveats you provided, and built 
> themselves systems and processes that are unable to comply with their 
> agreements to DigiCert (and, by extension, relying parties).
See above. Also see my response to Ryan on the migration from legacy Symantec 
systems.

> Hence, what is it that DigiCert plans to change, such that an equivalent 
> result cannot happen in the 

RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
The risk Matt identified is too nebulous of an issue to address, tbh. How do 
you address a moral issue?  The only way I can think of to address the moral 
issue is to say “we promise to be good”. But the weight that carries depends on 
how much you trust the actor. If you trust the actor, then the moral issue is 
addressed. If you don’t trust the actor, moral issue is not addressed. If you 
or Matt can identify a specific threat you’d like me to address about the moral 
issue, I’ll do my best to respond. 

 

*   What happens is that you ask why there is risk of outage to begin with 
and what can be done to improve going forward? Let’s assume you do revoke, and 
it causes an outage - is DigiCert taking steps to ensure no customer of theirs 
is ever faced with that risk? If so, what are those steps?

 

Yeah – there are several things we can do to improve going forward:

1.  Communicate better with the customers. The first mistake was waiting 
until we had good data to communicate with the customers. This delayed 
notification. This was unknown to me at the time, or we would have sent out 
communication prior to the ballot passing. That instruction has been passed 
along (no waiting on these critical issues) plus training.
2.  No more skipping CAB Forum meetings for me. This was easily a 
foreseeable issue because we knew people couldn’t replace in January. I think 
it’s been brought up a half dozen times in the forum at least. I’m not sure why 
we didn’t communicate this in Shanghai. But, the real problem is I didn’t have 
direct knowledge of what was going on. I probably need to be there in person 
each time so we can align the company correctly with what is going on.

 

I don’t think we can ever take steps to ensure that no customer is ever faced 
with the risk of revoked certs. I’m sure there will be other items that are 
adopted we don’t foresee.  That said, we do promote automation, short-lived 
certs (you can get anything from about 8 hours up through our system), and CT 
logging. I think the biggest surprise on this one was it applied to certs that 
are no longer trusted by Mozilla or Google. 

 

> This seems to suggest that perhaps other CAs have prepared their customers 
> for revocation. How does this surprise - that no other CA faces this - lead 
> to tangible changes in the business processes? How would this change, if 
> another CA did have the same issue? Surely you can see there are real and 
> fundamental issues that you’re uniquely qualified to help your customers 
> address in ways that we cannot. 

 

I suppose they did prepare better. Maybe other CAs are just smarter than me? I 
won’t leave that off the table.  I agree that we are uniquely positioned to 
help our customers remediate. Definitely anxious to do that (and are doing so). 

 

*   Have you analyzed CT, for example, to see why DigiCert is unique? 
Certainly, by sheer volume, it's heavily tilted towards the old Symantec 
infrastructure - and the customers that came over to DigiCert. With those sorts 
of details, how does this change how things were done, or how they will be done?

 

We do know most of the customers were legacy Symantec, but there are definitely 
some DigiCert customers in there. I think we still continue the same course. 
It’s only been a year from the transition, and we’ve migrated nearly everyone 
off the Symantec infrastructure. Next comes shutting down all the legacy 
Symantec systems. 

 

*   I’m not trying to pick on y’all - I think it is legitimately good that 
you provided concrete data. Even if you do revoke on Jan 15, this is still 
useful to understand the challenges, but only if this leads to meaningful 
changes. What might those look like?

I appreciate that. I think these are all fair questions, and I’m trying my best 
to answer them. I especially don’t feel picked on since we’re requesting the 
information/decision on what to do.

 

I don’t know how to answer the question of what changes to make because I was a 
bit blindsided by the decision to revoke the certs. Probably shouldn’t have 
been, considering the conversation at the CAB Forum.  My number one priority 
right now is to shut down all of the legacy Symantec systems. Last year was 
mostly migration of issuance and trying to get the systems up to an expected 
caliber of performance. At the same time we’re introducing industry-standard 
(and above) automation of issuance and deployment systems that we hope will 
help people replace certificates faster. 

 

*   And this is the framing that I think is incredibly helpful. 
Understanding why customers can’t change, and what steps are being done to 
ensure they can, is hugely useful. Wayne’s questions were to this point - as 
were mine towards understanding the problem from the other side, which are 
steps the CA is taking. As I've repeatedly highlighted from 
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , the goal is 
not punishment - but 

Re: Underscore characters

2018-12-27 Thread Matt Palmer via dev-security-policy
On Thu, Dec 27, 2018 at 11:56:41PM +, Jeremy Rowley via dev-security-policy 
wrote:
> The risk is primarily outages of major sites across the web, including
> certs used in Google wallet.  We’re thinking that is a less than desirable
> result, but we weren’t sure how the Mozilla community would feel/react. 

I don't think there's *any* result from all this that everyone would consider
desirable -- otherwise we wouldn't need to have this conversation.

> We’re still considering revoking all of the certs on Jan 15th based on
> these discussions.  I don’t think we’re asking for leniency (maybe we are
> if that’s a factor?)

I'm not sure I'd call it "leniency", but I think you're definitely asking
for "special treatment" -- pre-judgment on a potential incident so you can
decide whether or not it's worth it (to DigiCert) to deliberately break the
rules.

> Normally, we would just revoke the certs, but there are a significant
> number of certs in the Alexa top 100.  We’ve told most customers, “No
> exception”.

What were the criteria by which DigiCert decided which customers to grant
exceptions to?  My default assumption is "whichever ones will cost us the
most money, on a risk-of-departure-weighted basis, if we revoke their
misissued certs", so if DigiCert's criteria was different, I'd be keen to
have my assumption changed.

> I also thought it’s better to get the information out there so we can all
> make rational decisions (DigiCert included) if as many facts are known as
> possible.

There are a number of areas that I think could stand to have some more facts
added.

First off, your customers.  There is a certain amount of exposition in the
pharmacy company bug, however I can't say that what's there so far fills me
with a sense of contentment.  You said in your most recent post, "Security
vulnerabilities are patched based on their rating", and that lacking a CVSS
it is difficult to get recognition of a problem.  Would it be fair to say
that this narrow approach to security is shared by all/most/some/none of the
other similarly situated customers?

As an aside, on the subject of "there's no CVSS score for this", let me fix
that up, with the official WombleSecure(TM)(R)(Patent Pending) CVSS for
"your certs are getting revoked":

https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:H/RL:O/RC:C/AR:H/MAV:N/MAC:L/MPR:N/MUI:N/MS:U/MC:N/MI:N/MA:H

7.5 base, 7.2 temporal, and 8.9 environmental.  All those scores are in the
"high" band.  "Availability" *is* one of the sides of the security triangle,
after all.

Focusing on the "what about next time?" aspect, which I believe is the most
important, I'd be interested to know what your customers are planning on
changing about their systems and processes, such that if a similar event
happens in the future, the outcome won't be the same.

A similar question applies, even more forcefully, to DigiCert itself. 
Clearly, whatever you've done so far didn't work, because these customers of
yours didn't heed whatever warnings and caveats you provided, and built
themselves systems and processes that are unable to comply with their
agreements to DigiCert (and, by extension, relying parties).

Hence, what is it that DigiCert plans to change, such that an equivalent
result cannot happen in the future, given a similar event?  There was one
rather draconian possibility suggested up-thread, of DigiCert limiting
itself to 100 days validity, and revoking a number of randomly-chosen
certificates periodically.  That would certainly remove any practical
possibility of customers not being able to refresh their certificates
if-and-when, however I can imagine it might be a bit of a shock to the
system for many of them.

Hence, I'd be interested in hearing what DigiCert's actual plans are,
because if it were my call, *that* would be the single biggest factor in
determining the disposition of an event like this.  That errors occur is
regrettable, but it's when they happen repeatedly that it becomes
indefensible.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Underscore characters

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 6:56 PM Jeremy Rowley wrote:

> The risk is primarily outages of major sites across the web, including
> certs used in Google wallet. We’re thinking that is a less than desirable
> result, but we weren’t sure how the Mozilla community would feel/react.
>

I don’t think that is a particularly helpful framing, to be honest. The
risk these organizations face here is self-inflicted; regardless of anyone's
feelings about underscores, there is unquestionably an issue for organizations
that cannot respond in the BR timeframes, let alone in extended ones stretching
over months. That's a real ecosystem issue, and regardless of the CA
these customers partner with, an issue that needs both better understanding
and, to be honest, better prevention.

Matt has spoken at length to the risk to the community, which doesn’t
really seem like it’s been acknowledged, let alone proposed as to how it
will be mitigated. I have to ask again - what steps is DigiCert taking to
avoid these issues going forward?

> We’re still considering revoking all of the certs on Jan 15th based on
> these discussions.  I don’t think we’re asking for leniency (maybe we are
> if that’s a factor?), but I don’t know what happens if you’re faced with
> causing outages vs. compliance.
>

What happens is that you ask why there is risk of outage to begin with and
what can be done to improve going forward? Let’s assume you do revoke, and
it causes an outage - is DigiCert taking steps to ensure no customer of
theirs is ever faced with that risk? If so, what are those steps?

> I started the conversation because I feel like we should be good netizens
> and make people aware of what’s going on instead of just following policy.
> I’m actually surprised at least one other CA that has issued a large number
> of underscore character certs hasn’t run into the same timing issues.
>

This seems to suggest that perhaps other CAs have prepared their customers
for revocation. How does this surprise - that no other CA faces this - lead
to tangible changes in the business processes? How would this change, if
another CA did have the same issue? Surely you can see there are real and
fundamental issues that you’re uniquely qualified to help your customers
address in ways that we cannot.

Have you analyzed CT, for example, to see why DigiCert is unique?
Certainly, by sheer volume, it's heavily tilted towards the old Symantec
infrastructure - and the customers that came over to DigiCert. With those
sorts of details, how does this change how things were done, or how they
will be done?

I’m not trying to pick on y’all - I think it is legitimately good that you
provided concrete data. Even if you do revoke on Jan 15, this is still
useful to understand the challenges, but only if this leads to meaningful
changes. What might those look like?

> Normally, we would just revoke the certs, but there are a significant
> number of certs in the Alexa top 100. We’ve told most customers, “No
> exception”. I also thought it’s better to get the information out there so
> we can all make rational decisions (DigiCert included) if as many facts are
> known as possible.
>

And this is the framing that I think is incredibly helpful. Understanding
why customers can’t change, and what steps are being done to ensure they
can, is hugely useful. Wayne’s questions were to this point - as were mine
towards understanding the problem from the other side, which are steps the
CA is taking. As I've repeatedly highlighted from
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , the goal
is not punishment - but understanding how these issues are being addressed.

>
> We are working with the partners to get the certs revoked before the
> deadline. Most will.
>

This seems like a significant improvement from “100% of customers can’t”.

> By January 15th, I hope there won’t be too many certs left. Unfortunately,
> by then it’s also too late to discuss what happens if the cert is not
> revoked. Ie – what are the benefits of revoking (strict compliance) vs
> revoking the larger impact certs as they are migrated (incident report).
> Unfortunately part 2, there’s no guidance on whether an incident report
> means total distrust v. something on your audit and a stern lecture.
>

I mean, it’s two-fold, right? Any incident can lead to total distrust, but
it’s also unlikely that a single incident leads to total distrust. The way
to balance those competing statements is to do what you’re doing - and to
be transparent. As Matt has highlighted, there’s a huge risk here that this
leads to a moral hazard - and the best way to mitigate that is to discuss
steps being taken to reduce that risk going forward, particularly about
what a core part of the problem statement is - difficulty in revocation.

> I’d happily suffer a lecture rather than take down a top site. Not so willing to
> gamble the whole company. This is why we wanted to have the discussion now,
> despite no violation so far. The response from the browsers 

RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
Treading carefully… 

 

Mozilla is the only browser related to the discussion. Probably sufficient to 
say that the revocation/no-revoke decision is entirely dependent on the results 
of this thread. 

 

From: James Burton  
Sent: Thursday, December 27, 2018 6:07 PM
To: Jeremy Rowley 
Cc: Matt Palmer ; mozilla-dev-security-policy 

Subject: Re: Underscore characters

 

I'm not sure if you're allowed to state this publicly. Has Microsoft given you 
the go-ahead?

 

On Fri, Dec 28, 2018 at 1:05 AM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

I disagree that we won't get that. I think we could see a "it's okay to wait
until April 30 for large pharmacy" or "Waiting until April 30 is too long
but March 1 is okay". I don't think Mozilla wants outages either. But... if
Mozilla did say that we should revoke now, that would be great as well. I'd
have a firm answer I can go back with. No risk, but no exception. 

Well except moral risk of course  

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On
Behalf Of Matt Palmer via dev-security-policy
Sent: Thursday, December 27, 2018 5:55 PM
To: dev-security-policy@lists.mozilla.org
Subject: Re: Underscore characters

On Fri, Dec 28, 2018 at 12:12:03AM +, Jeremy Rowley via
dev-security-policy wrote:
> This is very helpful. If I had those two options, we'd just revoke all 
> the certs, screw outages. Unfortunately, the options are much broader than
that.
> If I could know what the risk v. benefit is, then you can make a 
> better decision? DigiCert distrusted - all revoked. DigiCert gets some 
> mar on its audit - outages seem worse. Make sense?

Given that Mozilla wants CAs to abide by its policies, which include
adherence to the BRs, and you appear to be saying that you'll adhere to the
BRs if you're threatened with distrust... I'd say the logical response from
Mozilla would be to threaten distrust.  I doubt, especially now, that you'll
get a categorical advance "it's OK to not revoke" from Mozilla.

- Matt






RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
I disagree that we won't get that. I think we could see a "it's okay to wait
until April 30 for large pharmacy" or "Waiting until April 30 is too long
but March 1 is okay". I don't think Mozilla wants outages either. But... if
Mozilla did say that we should revoke now, that would be great as well. I'd
have a firm answer I can go back with. No risk, but no exception. 

Well except moral risk of course  

-Original Message-
From: dev-security-policy  On
Behalf Of Matt Palmer via dev-security-policy
Sent: Thursday, December 27, 2018 5:55 PM
To: dev-security-policy@lists.mozilla.org
Subject: Re: Underscore characters

On Fri, Dec 28, 2018 at 12:12:03AM +, Jeremy Rowley via
dev-security-policy wrote:
> This is very helpful. If I had those two options, we'd just revoke all 
> the certs, screw outages. Unfortunately, the options are much broader than
that.
> If I could know what the risk v. benefit is, then you can make a 
> better decision? DigiCert distrusted - all revoked. DigiCert gets some 
> mar on its audit - outages seem worse. Make sense?

Given that Mozilla wants CAs to abide by its policies, which include
adherence to the BRs, and you appear to be saying that you'll adhere to the
BRs if you're threatened with distrust... I'd say the logical response from
Mozilla would be to threaten distrust.  I doubt, especially now, that you'll
get a categorical advance "it's OK to not revoke" from Mozilla.

- Matt





RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
The 7 required items under the Mozilla template are:

1.  Timeline of events
2.  Timeline of actions taken
3.  Whether the CA has stopped issuing
4.  Summary of problematic certs
5.  Cert data
6.  How mistakes were made
7.  Remediation plan

 

The info we’re working on getting a complete list of:

1.  Blackout periods
2.  Where each cert is used in the infrastructure
3.  Why 30 day certs won’t work (on a per cert basis)
4.  Reason the certs are publicly trusted
5.  What risks are associated with the replacement
6.  The date each cert can be revoked

 

Mostly we’re hearing back general answers. They’re almost all the same answer, 
but I’m really trying to get the level of detail requested.

 

I see how you could interpret the question that way. I see it more as the CAB 
forum got the date wrong. Could Mozilla please extend this after weighing the 
risks of revoking vs. non-revoking? Maybe two sides of the same question.

 

The second deadline is coming from the impacted parties. That’s the request 
from them so I’m relaying it on. Everyone is willing to move, just a matter of 
timing. If there’s a better balance of risk vs. risk, then we’d be happy to 
hear that.

 

>> So the assumption here is that, in all of this discussion, DigiCert's done 
>> everything it can to understand the issue, the timelines, remediation, etc, 
>> and has plans to address both each and every customer and the systemic 
>> issues that have emerged. If that's not the case, then how are we not in 
>> one of those two scenarios above? And if it is the case, isn't that 
>> information readily available by now?

 

The information is readily available for the companies I posted in incident 
reports, particularly the first one. I think we’ve done everything reasonable 
to understand the issue. I haven’t, for example, chartered a flight to sit in 
their data center and examine their infrastructure. We do have daily calls with 
most of them on the issue.  Maybe the amount of information the company has 
provided should be the guiding light? 

 

 

From: Ryan Sleevi  
Sent: Thursday, December 27, 2018 1:16 PM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
Subject: Re: Underscore characters

 

I'm not trying to throw you under the bus here, but I think it's helpful if you 
could highlight what new information you see being required, versus that which 
is already required.

 

I think, yes, you're right that it's not well received if you go violate the 
BRs and then, after the fact, say "Hey, yeah, we violated, but here's why", and 
finding out that the reasons are met with a lot of skepticism and the math 
being shaky, and you can see that from past incident reports it doesn't go over 
well.

 

But it's also not well received if it's before, and the statement is "Our 
customer thinks we should violate the BRs. What would happen if we did, and 
what information do you need from us?". That gets into the moral hazard that 
Matt spoke to, and is a huge burden on the community where the expectation is 
that the CA says "Sorry, we can't do that".

 

So the assumption here is that, in all of this discussion, DigiCert's done 
everything it can to understand the issue, the timelines, remediation, etc, and 
has plans to address both each and every customer and the systemic issues that 
have emerged. If that's not the case, then how are we not in one of those two 
scenarios above? And if it is the case, isn't that information readily 
available by now?

 

From the discussions on the incident reports, I feel like that's been the 
heart of the questions; which is trying to understand what the root cause is 
and what the remediation plan is. The statement "We'll miss the first 
deadline, but we'll hit the second", without any details about how or why, 
or the steps being taken to ensure no deadlines are missed in the future, 
doesn't really inspire confidence, and is exactly the same kind of feedback 
that would be given post-incident.

 

On Thu, Dec 27, 2018 at 1:50 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

There's a little bit of a "damned if you do, damned if you don't" problem here. 
Wait until you have all the information? That's a paddlin'.  File before you 
have enough information? That's a paddlin'. I'd appreciate better guidance on 
what Mozilla expects from these incident reports timing-wise. 

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On Behalf Of Jeremy 
Rowley via dev-security-policy
Sent: Thursday, December 27, 2018 11:47 AM
To: r...@sleevi.com
Cc: dev-security-policy@lists.mozilla.org
Subject: RE: Underscore characters

The original incident report contained all of the details of the initial 
filing.  The additional, separated reports are trickling in as I get 

Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 9:04 AM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, 27 Dec 2018 15:30:01 +0100
> Jakob Bohm via dev-security-policy
>  wrote:
>
> > The problem here is that the prohibition lies in a complex legal
> > reading of multiple documents, similar to a situation where a court
> > rules that a set of laws has an (unexpected to many) legal
> > consequence.
>
> I completely disagree. This prohibition was an obvious fact, well known
> to (I had assumed prior to this present fever) everyone who cared about
> the Internet's underlying infrastructure.
>
> The only species of technical people I ever ran into previously who
> professed "ignorance" of the rule were the sort who see documents like
> RFCs as descriptive rather than prescriptive and so their position
> would be (as it seems yours is) "Whatever I can do is allowed". Hardly
> a useful rule for the Web PKI.
>

As I wrote in the thread on underscores, I am one of the people who
believed it was not clear if underscores were allowed or not.  This was
reflected in the earliest versions of certlint/cablint.

If you think it should have been clear, consider the following examples
from the real world:
- The character Asterisk (U+002A, '*') is not allowed in dNSName SANs per
the same rule forbidding Low Line (U+005F, '_').   RFC 5280 does say:
"Finally, the semantics of subject alternative names that include wildcard
characters (e.g., as a placeholder for a set of names) are not addressed by
this specification.  Applications with specific requirements MAY use such
names, but they must define the semantics."  However it never defines what
"wildcard characters" are acceptable.  As Wikipedia helpfully documents,
there are many different characters that can be wildcards:
https://en.wikipedia.org/wiki/Wildcard_character.  The very same ballot
that attempted to clarify the status of the Low Line character tried to
clarify wildcards, but it failed.  The current BRs state "Wildcard FQDNs
are permitted." in the section about subjectAltName, but the term "Wildcard
FQDN" is never defined.  Given the poor drafting, I might be able to argue
that Low Line should be considered a wildcard character that is designed to
match a single character, similar to Full Stop (U+002E, '.') in regular
expressions.

- The meaning of the extendedKeyUsage extension in a CA certificate is
unclear.  There are at least two views: 1) It constrains the use of the
public key in the certificate and 2) It constrains the use of end-entity
public keys certified by the CA named in the CA certificate.  This has been
discussed multiple times on the IETF PKIX mailing list and no consensus has
been reached.  Similarly, the X.509 standard does not clarify.  Mozilla
takes the second option, but it is entirely possible that a clarification
could show up in a future RFC or X.500-series doc that goes with the first
option.

These are just two cases where the widely deployed and widely accepted
status does not match the RFC.


> > It would benefit the honesty of this discussion if the side that won
> > in the CAB/F stops pretending that everybody else "should have known"
> > that their victory was the only legally possible outcome and should
> > never have acted otherwise.
>
> I would suggest it would more benefit the honesty of the discussion if
> those who somehow convinced themselves of falsehood would accept this
> was a serious flaw and resolve to do better in future, rather than
> suppose that it was unavoidable and so we have to expect they'll keep
> doing it.
>

Of course people are going to try to do better, but part of that is
understanding that people are not perfect and that even automation can
break. I wrote certlint/cablint with hundreds of tests and continue to get
reports of gaps in the tests.  Yes, things will get better, but we need to
get them there in an orderly way.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Underscore characters

2018-12-27 Thread Matt Palmer via dev-security-policy
On Fri, Dec 28, 2018 at 12:12:03AM +, Jeremy Rowley via dev-security-policy 
wrote:
> This is very helpful. If I had those two options, we'd just revoke all the
> certs, screw outages. Unfortunately, the options are much broader than that.
> If I could know what the risk v. benefit is, then you can make a better
> decision? DigiCert distrusted - all revoked. DigiCert gets some mar on its
> audit - outages seem worse. Make sense? 

Given that Mozilla wants CAs to abide by its policies, which include
adherence to the BRs, and you appear to be saying that you'll adhere to the
BRs if you're threatened with distrust... I'd say the logical response from
Mozilla would be to threaten distrust.  I doubt, especially now, that you'll
get a categorical advance "it's OK to not revoke" from Mozilla.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Use cases of publicly-trusted certificates

2018-12-27 Thread Jeremy Rowley via dev-security-policy
It clearly wasn't understood by everyone. That's why we had two ballots on it, 
one of them failing to address the issue. You can just look through the long 
discussions on the topic to see people didn't agree. 

-Original Message-
From: dev-security-policy  On 
Behalf Of Jakob Bohm via dev-security-policy
Sent: Thursday, December 27, 2018 2:43 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Use cases of publicly-trusted certificates

On 27/12/2018 18:03, Nick Lamb wrote:
> On Thu, 27 Dec 2018 15:30:01 +0100
> Jakob Bohm via dev-security-policy
>  wrote:
> 
>> The problem here is that the prohibition lies in a complex legal 
>> reading of multiple documents, similar to a situation where a court 
>> rules that a set of laws has an (unexpected to many) legal 
>> consequence.
> 
> I completely disagree. This prohibition was an obvious fact, well 
> known to (I had assumed prior to this present fever) everyone who 
> cared about the Internet's underlying infrastructure.
> 

The group that was most definitely unaware of the very specific reading of 
RFC5280 is the subscribers using such host names in ways that passed all other 
requirements (including domain name validation).
Not the people seeking to allow these names via ballot 202, similar to what was 
done for other RFC5280 deviations in ballots 75, 88 and 144.

> The only species of technical people I ever ran into previously who 
> professed "ignorance" of the rule were the sort who see documents like 
> RFCs as descriptive rather than prescriptive and so their position 
> would be (as it seems yours is) "Whatever I can do is allowed". Hardly 
> a useful rule for the Web PKI.
> 

You must be traveling in a rather limited bubble of PKIX experts, all of whom 
live and breathe the reading of RFC5280.  Technical people outside that bubble 
may have easily misread the relevant paragraph in RFC5280 in various ways.

Possible ways to overlook the ban on underscores:

1. Not chasing down the RFC1034/RFC1123 references but relying on
  previously learned rules for what can be in a DNS name.

2. Interpreting the wording in RFC5280 section 4.2.1.6 as simply requiring
  a canonical encoding of DNS names, thus not allowing e.g. the UTF-8
  equivalent of an IDN or duplicate periods, then deferring that encoding
  job to a 3rd party PKI library.

3. Relying on practice established in certificates without the SAN extension,
  (thus not subject to section 4.2.1.6 rules) and then continuing without
  detailed review after it became mandatory to always include the SAN
  extension for end entities.

4. Trusting the word of others on how to interpret the rules, those others
  being the ones misreading the standards.
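For concreteness, the rule being overlooked is mechanical: RFC 5280 section 4.2.1.6 requires dNSName SANs to follow the "preferred name syntax" of RFC 1034 section 3.5 (as modified by RFC 1123). A rough sketch of that check, simplified and ignoring wildcard labels (hypothetical helper, not validation-library code):

```python
import re

# One LDH label: letters, digits, hyphens; must start and end with a
# letter or digit (RFC 1034 preferred name syntax, relaxed by RFC 1123
# to allow leading digits). Underscore is simply not in the alphabet.
LDH_LABEL = re.compile(r"[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?")

def is_preferred_name(name):
    labels = name.split(".")
    return all(LDH_LABEL.fullmatch(l) and len(l) <= 63 for l in labels)

is_preferred_name("example.com")          # permitted
is_preferred_name("example.com_cvs.com")  # rejected: '_' is not LDH
```

The point of the sketch is that the ban on '_' falls out of the label alphabet itself; none of the four readings above survives contact with this regular expression, which is roughly what the later certlint/cablint checks enforce.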

> Descriptive documents certainly have their place - I greatly admire 
> Geoff Pullum's Cambridge Grammar of the English Language, and I do own 
> the more compact "Student's Introduction" book, both of which are 
> descriptive since of course a natural language is not defined by such 
> documents and can only be described by them (and imperfectly, exactly 
> what's going on in English remains an active area of research).
> But that place is not here, the exact workings of DNS are prescribed, 
> in documents you've called a "complex legal reading of multiple documents"
> but more familiarly as "a bunch of pretty readable RFCs on exactly 
> this topic".
> 

The documents that prescribe the exact workings of DNS do not prohibit (only 
discourage) DNS names containing underscores.  Web browser interfaces for URL 
parsing may not allow them, which would be a technical benefit for at least one 
usage of such certificates reported in the recent discussion.

The complex reading comes in the understanding that a single sentence in
RFC5280 elevates the recommendation in the DNS standards to an absolute 
requirement in PKIX, combined with the opinion that this particular effect of 
the wording is not another errata that should be corrected in any later update 
of the PKIX standard, and/or overridden by a BR clause.

>> It would benefit the honesty of this discussion if the side that won 
>> in the CAB/F stops pretending that everybody else "should have known"
>> that their victory was the only legally possible outcome and should 
>> never have acted otherwise.
> 
> I would suggest it would more benefit the honesty of the discussion if 
> those who somehow convinced themselves of falsehood would accept this 
> was a serious flaw and resolve to do better in future, rather than 
> suppose that it was unavoidable and so we have to expect they'll keep 
> doing it.
> 
> Consider it from my position. In one case I know Jakob made an error 
> but has learned a valuable lesson from it and won't be caught the same 
> way twice. In the other case Jakob is unreliable on simple matters of 
> fact and I shouldn't believe anything further he says.
> 

That I disagree with you on certain questions of fact doesn't mean I'm 
unreliable, merely that you have not 

RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
This is very helpful. If I had those two options, we'd just revoke all the
certs, screw outages. Unfortunately, the options are much broader than that.
If I could know what the risk v. benefit is, then you can make a better
decision? DigiCert distrusted - all revoked. DigiCert gets some mar on its
audit - outages seem worse. Make sense? 

-Original Message-
From: dev-security-policy  On
Behalf Of thomas.gh.horn--- via dev-security-policy
Sent: Thursday, December 27, 2018 1:50 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Underscore characters


As to why these certificates have to be revoked, you should see this the
other way round: as a very generous service of the community to you and your
customers!

Certificates with (pseudo-)hostnames in them are clearly invalid, so a
conforming implementation should not accept them for anything and they
should not pose any security risk. Based on this assessment (no revocation
if no security risk), a CA could very well issue a certificate including any
of the (pseudo-)hostnames "example.com_cvs.com", "example.com/cvs.com",
"cvs.com/example.com", "https://example.com/cvs.com",
"example@cvs.com" to the owner of example.com (who, arguably, has the
exact same right to them as the owner of cvs.com has) and refuse to revoke
them.

As to the consequences (in case this really becomes an incident
report/incident reports): this shows a SEVERE lack of ability to revoke
certificates on DigiCert's side, which must have been known AND ACCEPTED for
a long time (this cannot be the first "blackout period" of (in the best
case) 3.5 months). Thus, it seems to be a good idea to:

1. Henceforth, make NSS only accept certificates by DigiCert with a maximum
validity of 100 days. Let's Encrypt has shown that this is clearly feasible.

or

2. Henceforth, require DigiCert to revoke a small, randomly (e.g., using RFC
3797) selected subset of their certificates every day (within 7 days). If
this, e.g., for the same reasons as outlined in these incident reports, is
not possible, it will trigger (a incrementally decreasing number of) more
incident reports.

Both proposals would lead to more automation and a better understanding of
the requirement of timely revocation, while pushing the ecosystem in the
right direction. For its easiness, the first proposal would be my favorite
but I would be very interested in hearing other people's thoughts about
these proposals.
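For what the second proposal might look like in practice, here is a rough sketch of verifiable selection in the spirit of RFC 3797 -- this is not the exact RFC 3797 algorithm (which derives its permutation from MD5 of a public seed plus an index), just an illustration that anyone holding the same seed and certificate list can reproduce the chosen subset:

```python
import hashlib

def verifiable_selection(serials, seed, k):
    """Pick k items deterministically from a public seed.

    Sketch only: rank each serial by the SHA-256 of seed+serial and
    take the first k. Reproducible by anyone with the same inputs.
    """
    ranked = sorted(
        serials,
        key=lambda s: hashlib.sha256((seed + s).encode()).hexdigest(),
    )
    return ranked[:k]

# A relying party can re-run the selection and confirm the CA did not
# hand-pick which certificates to revoke:
picked = verifiable_selection(["cert-a", "cert-b", "cert-c", "cert-d"],
                              seed="2018-12-27-lottery", k=2)
```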





RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
This is accurate. We have the technical capability and policy ability to
revoke the certificates. What we were hoping was a discussion based on
impact of the revocation so we could hear what we should do. Blind obedience
isn't my favorite answer, but it's an option. The guidance so far is file an
incident report now so we can discuss the potential impact. I've filed for
two companies, crossed a couple more off the list, and am still working with
the remainder to get things resolved. Although some have escalated over my
head, I think most are eager to hear what the community has to say. I also
think this is an interesting question for Mozilla's policy -  not sure we've
ever addressed a potential non-compliance like this.  

-Original Message-
From: dev-security-policy  On
Behalf Of Peter Bowen via dev-security-policy
Sent: Thursday, December 27, 2018 2:19 PM
To: thomas.gh.h...@gmail.com
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Underscore characters

On Thu, Dec 27, 2018 at 12:53 PM thomas.gh.horn--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> As to why these certificates have to be revoked, you should see this 
> the other way round: as a very generous service of the community to 
> you and your customers!
>
> Certificates with (pseudo-)hostnames in them are clearly invalid, so a 
> conforming implementation should not accept them for anything and they 
> should not pose any security risk. Based on this assessment (no 
> revokation if no security risk), a CA could very well issue a 
> certificate including any of the (pseudo-)hostnames 
> "example.com_cvs.com", "example.com/cvs.com", "cvs.com/example.com",
> "https://example.com/cvs.com", "example@cvs.com"
> to the owner of example.com (who, arguably, has the exact same right 
> to them as the owner of cvs.com has) and refuse to revoke them.
>

I'm not clear how you get that the owner of example.com is covered anywhere
here.  Parsed into labels, these all have com as the label closest to the
root and then have 'com_cvs', 'com/cvs', 'com/example', 'com/cvs', and
'com@cvs' as the next label respectively.  None have 'example' as the next
label.
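The label parsing described above can be sketched in a few lines of Python (an illustrative aside; the names are exactly the examples from the quoted mail):

```python
# Illustrative sketch: split each disputed name into DNS labels and show
# the two labels closest to the root.
names = [
    "example.com_cvs.com",
    "example.com/cvs.com",
    "cvs.com/example.com",
    "https://example.com/cvs.com",
    "example@cvs.com",
]

for name in names:
    labels = name.split(".")   # leftmost label first
    rightmost = labels[::-1]   # label closest to the root first
    print(rightmost[0], "<-", rightmost[1])
```

In every case the label next to 'com' contains the stuck-together fragment ('com_cvs', 'com/cvs', 'com/example', or 'example@cvs'); none is plain 'example'.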


> As to the consequences (in case this really becomes an incident 
> report/incident reports): this shows a SEVERE lack of ability to 
> revoke certificates on DigiCert's side, which must have been known AND 
> ACCEPTED for a long time (this cannot be the first "blackout period" 
> of (in the best
> case) 3.5 months).


I don't see how this follows.  DigiCert has made it clear they are able to
technically revoke these certificates and presumably are contractually able
to revoke them as well.  What is being said is that their customers are
asking them to delay revoking them because the _customers_ have blackout
periods where the customers do not want to make changes to their systems.
DigiCert's customers are saying that they are judging the risk from
revocation is greater than the risk from leaving them unrevoked and asking
DigiCert to not revoke. DigiCert is then passing this request along to
Mozilla to get feedback from Mozilla.


> Thus, it seems to be a good idea to:
>
> 1. Henceforth, make NSS only accept certificates by DigiCert with a 
> maximum validity of 100 days. Let's Encrypt has shown that this is 
> clearly feasible.
>
> or
>
> 2. Henceforth, require DigiCert to revoke a small, randomly (e.g., 
> using RFC 3797) selected subset of their certificates every day (within 7
days).
> If this, e.g., for the same reasons as outlined in these incident 
> reports, is not possible, it will trigger (an incrementally decreasing 
> number of) more incident reports.
>
> Both proposals would lead to more automation and a better 
> understanding of the requirement of timely revocation, while pushing 
> the ecosystem in the right direction. For its easiness, the first 
> proposal would be my favorite but I would be very interested in 
> hearing other people's thoughts about these proposals.
>

I don't agree that demanding all certificate customers have "more
automation" is desirable.  I am very familiar with the Chaos Monkey approach
Netflix has implemented and companies like Gremlin that offer similar
"Failure as a Service" products, but forcing this on customers seems like a
poor idea.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy




RE: Underscore characters

2018-12-27 Thread Jeremy Rowley via dev-security-policy
The risk is primarily outages of major sites across the web, including certs 
used in Google Wallet. We’re thinking that is a less than desirable result, but 
we weren’t sure how the Mozilla community would feel/react.  We’re still 
considering revoking all of the certs on Jan 15th based on these discussions.  
I don’t think we’re asking for leniency (maybe we are if that’s a factor?), but 
I don’t know what happens if you’re faced with causing outages vs. compliance. 
I started the conversation because I feel like we should be good netizens and 
make people aware of what’s going on instead of just following policy.  I’m 
actually surprised at least one other CA that has issued a large number of 
underscore character certs hasn’t run into the same timing issues. 

 

Normally, we would just revoke the certs, but there are a significant number of 
certs in the Alexa top 100. We’ve told most customers, “No exception”. I also 
thought it’s better to get the information out there so we can all make 
rational decisions (DigiCert included) with as many facts known as possible.  

 

We are working with the partners to get the certs revoked before the deadline. 
Most will. By January 15th, I hope there won’t be too many certs left. 
Unfortunately, by then it’s also too late to discuss what happens if the cert 
is not revoked. I.e., what are the benefits of revoking (strict compliance) vs 
revoking the larger impact certs as they are migrated (incident report).  
Unfortunately part 2, there’s no guidance on whether an incident report means 
total distrust v. something on your audit and a stern lecture. I’d rather 
suffer a lecture than take down a top site. Not so willing to gamble the whole 
company. This is why we wanted to have the discussion now, despite no violation 
so far. The response from the browsers is public  - that they cannot make that 
determination. Does that mean we have our answer? Revoke is the only acceptable 
response?   

 

From: James Burton  
Sent: Thursday, December 27, 2018 2:24 PM
To: Ryan Sleevi 
Cc: Jeremy Rowley ; mozilla-dev-security-policy 

Subject: Re: Underscore characters

 

 

 

On Thu, Dec 27, 2018 at 9:00 PM Ryan Sleevi <r...@sleevi.com> wrote:

I'm not really sure I understand this response at all. I'm hoping you can 
clarify.

 

On Thu, Dec 27, 2018 at 3:45 PM James Burton <j...@0.me.uk> wrote:

For a CA to intentionally state that they are going to violate the BR 
requirements means that that CA is under immense pressure to comply with 
demands or face retribution. 

 

I'm not sure I understand how this flows. Comply with whose demands? Face 
retribution from who, and why?

 

The CA must be under immense pressure to comply with demands from certain 
customers if it determines that it doesn't have much of a choice but to 
intentionally violate the BR requirements, and by telling the community and 
root stores early it is hoping for leniency. The retribution by those 
customers could be legal, which is outside of this forum, but it's still 
relevant to them if that is the case. 

 

 

The sanctions inflicted on a CA for intentionally violating the BR requirements 
can be severe. Rolling the dice of chance. Why take the risk?

 

I'm not sure I understand the question at the end, and suspect there's a point 
to the question I'm missing.

 

The CA is rolling the dice of chance: it is intentionally risking everything 
by violating the BR requirements, knowing that such action can face sanctions 
or distrust in the worst case. The question I asked is why they are taking 
that risk, which follows from the first statement.  

 

 

Presumably, a CA stating they're going to violate the BR requirements, knowing 
the risk to trust that it may pose, would have done everything possible to 
gather every piece of information so that they could assess the risk of 
violation is outweighed by whatever other risks (in this case, revocation). If 
that's the case, is it unreasonable to ask how the CA determined that - which 
is the root cause analysis question? And how to mitigate whatever other risk 
(in this case, revocation) poses going forward, so that violating the BRs isn't 
consistently seen as the "best" option? 





Re: Underscore characters

2018-12-27 Thread Matt Palmer via dev-security-policy
On Thu, Dec 27, 2018 at 01:19:26PM -0800, Peter Bowen via dev-security-policy 
wrote:
> I don't see how this follows.  DigiCert has made it clear they are able to
> technically revoke these certificates and presumably are contractually able
> to revoke them as well.  What is being said is that their customers are
> asking them to delay revoking them because the _customers_ have blackout
> periods where the customers do not want to make changes to their systems.
> DigiCert's customers are saying that they are judging the risk from
> revocation is greater than the risk from leaving them unrevoked and asking
> DigiCert to not revoke. DigiCert is then passing this request along to
> Mozilla to get feedback from Mozilla.

It's worth clarifying that "risk" is not a property of the universe, like
magnetic flux density, but rather is assessed relative to specific entities. 
Thus, when talking about risk, it's worth clearly identifying to whom a risk
is associated, as in this variant of part of the above paragraph:

> DigiCert's customers are saying that they are judging the risk *to them*
> from revocation is greater than the risk *to them* from leaving them
> unrevoked

I'm sure you're familiar with all this, Peter.  I just thought it was worth
highlighting for a wider audience, that one entity's assessment of risk to
them doesn't make it a physical constant that applies equally to everyone. 
I find it very helpful when assessing such things to attach explicit
markers, somewhat like ensuring I specify both magnitude *and* direction on
my vectors.

- Matt



Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy
On 27/12/2018 18:03, Nick Lamb wrote:
> On Thu, 27 Dec 2018 15:30:01 +0100
> Jakob Bohm via dev-security-policy
>  wrote:
> 
>> The problem here is that the prohibition lies in a complex legal
>> reading of multiple documents, similar to a situation where a court
>> rules that a set of laws has an (unexpected to many) legal
>> consequence.
> 
> I completely disagree. This prohibition was an obvious fact, well known
> to (I had assumed prior to this present fever) everyone who cared about
> the Internet's underlying infrastructure.
> 

The group that was most definitely unaware of the very specific reading 
of RFC5280 is the subscribers using such host names in ways that passed 
all other requirements (including domain name validation) - not the 
people seeking to allow these names via ballot 202, similar to what was 
done for other RFC5280 deviations in ballots 75, 88 and 144.

> The only species of technical people I ever ran into previously who
> professed "ignorance" of the rule were the sort who see documents like
> RFCs as descriptive rather than prescriptive and so their position
> would be (as it seems yours is) "Whatever I can do is allowed". Hardly
> a useful rule for the Web PKI.
> 

You must be traveling in a rather limited bubble of PKIX experts, all of 
whom live and breathe the reading of RFC5280.  Technical people outside 
that bubble may have easily misread the relevant paragraph in RFC5280 in 
various ways.

Possible ways to overlook the ban on underscores:

1. Not chasing down the RFC1034/RFC1123 references but relying on 
  previously learned rules for what can be in a DNS name.

2. Interpreting the wording in RFC5280 section 4.2.1.6 as simply requiring 
  a canonical encoding of DNS names, thus not allowing e.g. the UTF-8 
  equivalent of an IDN or duplicate periods, then deferring that encoding 
  job to a 3rd party PKI library.

3. Relying on practice established in certificates without the SAN extension, 
  (thus not subject to section 4.2.1.6 rules) and then continuing without 
  detailed review after it became mandatory to always include the SAN 
  extension for end entities.

4. Trusting the word of others on how to interpret the rules, those others 
  being the ones misreading the standards.
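For readers outside that bubble, the rule at issue can be made concrete: RFC 5280 section 4.2.1.6 requires dNSName SAN entries to follow the RFC 1034 "preferred name syntax" as modified by RFC 1123, i.e. letter-digit-hyphen labels only. A simplified sketch of that check (an illustration, not a full PKIX validator):

```python
import re

# Simplified LDH ("letter-digit-hyphen") label rule from RFC 1034
# section 3.5, as relaxed by RFC 1123 to allow a leading digit.
# A label starts and ends with a letter or digit; interior characters
# may also be hyphens.  Underscores never match, which is the ban
# under discussion.
LDH_LABEL = re.compile(r"[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?")

def is_preferred_name_syntax(hostname: str) -> bool:
    labels = hostname.split(".")
    return all(LDH_LABEL.fullmatch(label) for label in labels)

print(is_preferred_name_syntax("api.example.com"))      # True
print(is_preferred_name_syntax("foo_bar.example.com"))  # False: underscore
```

Nothing in the check itself is subtle; the subtlety Jakob describes is that RFC 5280 imports this rule by reference rather than stating it.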

> Descriptive documents certainly have their place - I greatly admire
> Geoff Pullum's Cambridge Grammar of the English Language, and I
> do own the more compact "Student's Introduction" book, both of which
> are descriptive since of course a natural language is not defined by
> such documents and can only be described by them (and imperfectly,
> exactly what's going on in English remains an active area of research).
> But that place is not here, the exact workings of DNS are prescribed, in
> documents you've called a "complex legal reading of multiple documents"
> but more familiarly as "a bunch of pretty readable RFCs on exactly this
> topic".
> 

The documents that prescribe the exact workings of DNS do not prohibit 
(only discourage) DNS names containing underscores.  Web browser 
interfaces for URL parsing may not allow them, which would be a technical 
benefit for at least one usage of such certificates reported in the recent 
discussion.

The complex reading lies in understanding that a single sentence in 
RFC5280 elevates the recommendation in the DNS standards to an absolute 
requirement in PKIX, combined with the opinion that this particular 
effect of the wording is not another erratum that should be corrected in 
any later update of the PKIX standard, and/or overridden by a BR clause.

>> It would benefit the honesty of this discussion if the side that won
>> in the CAB/F stops pretending that everybody else "should have known"
>> that their victory was the only legally possible outcome and should
>> never have acted otherwise.
> 
> I would suggest it would more benefit the honesty of the discussion if
> those who somehow convinced themselves of falsehood would accept this
> was a serious flaw and resolve to do better in future, rather than
> suppose that it was unavoidable and so we have to expect they'll keep
> doing it.
> 
> Consider it from my position. In one case I know Jakob made an error
> but has learned a valuable lesson from it and won't be caught the same
> way twice. In the other case Jakob is unreliable on simple matters of
> fact and I shouldn't believe anything further he says.
> 

That I disagree with you on certain questions of fact doesn't mean I'm 
unreliable, merely that you have not presented any persuasive arguments 
that you are not the one who is wrong.

I have accepted that a strict, legalistic reading of RFC5280 leads to 
the ban on underscores in subject alternative names.  I merely dispute 
that this was obvious to every reader of those documents, even if they 
understood them to be binding technical standards.  Hence I consider 
the passing of ballot SC12 similar to a supreme court ruling that a 
combination of laws constitutes an actual prohibition.

Re: Underscore characters

2018-12-27 Thread James Burton via dev-security-policy
On Thu, Dec 27, 2018 at 9:00 PM Ryan Sleevi  wrote:

> I'm not really sure I understand this response at all. I'm hoping you can
> clarify.
>
> On Thu, Dec 27, 2018 at 3:45 PM James Burton  wrote:
>
>> For a CA to intentionally state that they are going to violate the BR
>> requirements means that that CA is under immense pressure to comply with
>> demands or face retribution.
>>
>
> I'm not sure I understand how this flows. Comply with whose demands? Face
> retribution from who, and why?
>

The CA must be under immense pressure to comply with demands from certain
customers if it determines that it doesn't have much of a choice but to
intentionally violate the BR requirements, and by telling the community and
root stores early it is hoping for leniency. The retribution by those
customers could be legal, which is outside of this forum, but it's still
relevant to them if that is the case.


>
>> The sanctions inflicted on a CA for intentionally violating the BR
>> requirements can be severe. Rolling the dice of chance. Why take the risk?
>>
>
> I'm not sure I understand the question at the end, and suspect there's a
> point to the question I'm missing.
>

The CA is rolling the dice of chance: it is intentionally risking
everything by violating the BR requirements, knowing that such action
can face sanctions or distrust in the worst case. The question I asked is
why they are taking that risk, which follows from the first statement.


> Presumably, a CA stating they're going to violate the BR requirements,
> knowing the risk to trust that it may pose, would have done everything
> possible to gather every piece of information so that they could assess the
> risk of violation is outweighed by whatever other risks (in this case,
> revocation). If that's the case, is it unreasonable to ask how the CA
> determined that - which is the root cause analysis question? And how to
> mitigate whatever other risk (in this case, revocation) poses going
> forward, so that violating the BRs isn't consistently seen as the "best"
> option?
>


Re: Underscore characters

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 12:53 PM thomas.gh.horn--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> As to why these certificates have to be revoked, you should see this the
> other way round: as a very generous service of the community to you and
> your customers!
>
> Certificates with (pseudo-)hostnames in them are clearly invalid, so a
> conforming implementation should not accept them for anything and they
> should not pose any security risk. Based on this assessment (no revocation
> if no security risk), a CA could very well issue a certificate including
> any of the (pseudo-)hostnames "example.com_cvs.com", "example.com/cvs.com",
> "cvs.com/example.com", "https://example.com/cvs.com", "example@cvs.com"
> to the owner of example.com (who, arguably, has the exact same right to
> them as the owner of cvs.com has) and refuse to revoke them.
>

I'm not clear how you get that the owner of example.com is covered anywhere
here.  Parsed into labels, these all have com as the label closest to the
root and then have 'com_cvs', 'com/cvs', 'com/example', 'com/cvs', and
'example@cvs' as the next label respectively.  None have 'example' as the next
label.


> As to the consequences (in case this really becomes an incident
> report/incident reports): this shows a SEVERE lack of ability to revoke
> certificates on DigiCert's side, which must have been known AND ACCEPTED
> for a long time (this cannot be the first "blackout period" of (in the best
> case) 3.5 months).


I don't see how this follows.  DigiCert has made it clear they are able to
technically revoke these certificates and presumably are contractually able
to revoke them as well.  What is being said is that their customers are
asking them to delay revoking them because the _customers_ have blackout
periods where the customers do not want to make changes to their systems.
DigiCert's customers are saying that they are judging the risk from
revocation is greater than the risk from leaving them unrevoked and asking
DigiCert to not revoke. DigiCert is then presenting this request along to
Mozilla to get feedback from Mozilla.


> Thus, it seems to be a good idea to:
>
> 1. Henceforth, make NSS only accept certificates by DigiCert with a
> maximum validity of 100 days. Let's Encrypt has shown that this is clearly
> feasible.
>
> or
>
> 2. Henceforth, require DigiCert to revoke a small, randomly (e.g., using
> RFC 3797) selected subset of their certificates every day (within 7 days).
> If this, e.g., for the same reasons as outlined in these incident reports,
> is not possible, it will trigger (an incrementally decreasing number of)
> more incident reports.
>
> Both proposals would lead to more automation and a better understanding of
> the requirement of timely revocation, while pushing the ecosystem in the
> right direction. For its easiness, the first proposal would be my favorite
> but I would be very interested in hearing other people's thoughts about
> these proposals.
>

I don't agree that demanding all certificate customers have "more
automation" is desirable.  I am very familiar with the Chaos Monkey
approach Netflix has implemented and companies like Gremlin that offer
similar "Failure as a Service" products, but forcing this on customers
seems like a poor idea.

Thanks,
Peter


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 12:12 PM Wayne Thayer  wrote:

> On Wed, Dec 26, 2018 at 2:42 PM Peter Bowen via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> In the discussion of how to handle certain certificates that no longer
>> meet
>> CA/Browser Forum baseline requirements, Wayne asked for the "Reason that
>> publicly-trusted certificates are in use" by the customers.  This seems to
>> imply that Mozilla has an opinion that the default should not be to use
>> "publicly-trusted certificates".  I've not seen this previously raised, so
>> I want to better understand the expectations here and what customers
>> should
>> consider for their future plans.
>>
>
> The context for the question is that at least one of the organizations
> having difficulty with the underscore sunset stated that they couldn't just
> replace the certificates - they need to ship updates to the client. If you
> are hard-coding certificate information into client software, it's fair to
> ask why you're using publicly-trusted certificates (PTCs).
>

I was not aware of this being an issue in this case.  Thanks for this
explanation.

> I believe a similar concern was discussed at length during the SHA-1 sunset
> in relation to payment terminals. As has been suggested, maybe it's simply
> a matter of cost. I suspect, however, that it is more about a lack of
> recognition of the responsibilities that come along with using PTCs. In the
> spirit of incident reporting, I think it would help to have a better
> understanding of the decisions that are driving the use of PTCs in these
> use cases
>

I agree that many people developing products do not understand the full
scope of the responsibilities that come with using Mozilla PTCs.  From what
I've personally observed, the requirements are frequently: "I want to have
a third party manage the CA at no cost to me", "I want that third party to
make it relatively easy and fairly inexpensive for arbitrary people and
organizations to get certificates that are signed by/chain to the CA", "I
want some level of assurance that the third party is doing the right things
without having to figure out what the right things are", and (usually only
realized much later) "I want to be able to make a decision on whether the
risk of not revoking a given certificate outweighs the benefit of leaving
it unrevoked and have the third party not suffer any negative consequences
from my decision".

I have seen these requirements from organizations large and small.  They
are not usually written out in these terms, rather there are other
requirements that boil down to these.


> Is the expectation that "publicly trusted certificates" should only be used
>> by customers for servers that are:
>> - meant to be accessed with a Mozilla web browser, and
>>
>
> No.
>
> - publicly accessible on the Internet (meaning the DNS name is publicly
>> resolvable to a public IP), and
>>
>
> No.
>
>> - committed to complying with a 24-hour (wall time) response time
>> certificate replacement upon demand by Mozilla?
>
> Committed to comply with section 4.9.1.1 (Reasons for Revoking a
> Subscriber Certificate) of the BRs - yes.
>

In recent revisions to the BRs, it seems that this is extended to 5 days
for many cases, including this underscore case.  However I think that many
customers ("subscribers" in BR terminology) would be very surprised at this
requirement, even though it is long standing.


> Is the recommendation from Mozilla that customers who want to allow Mozilla
>> browsers to access sites but do not want to meet one or both of the other
>> two use the Firefox policies for Certificates (
>>
>> https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
>> ) to add a new CA to the browser?
>>
> No, that was not my intent. Rather, I am hoping for a better recognition
> of the commitments (per the Subscriber Agreement and CPS) and risks
> involved when an organization chooses to use PTCs, especially for
> non-browser use cases.
>

I think this is a good callout.  Mozilla PTCs are a fairly unique situation
because there is very little ability to negotiate terms. Most large
organizations are accustomed to having a set of requirements as a starting
point but working person to person (or organization to organization) to
modify the terms to meet their needs.  It is clear that this is not an
option for Mozilla PTCs and this lack of option is very surprising to the
organizations.  I'm not sure what can be done about existing deployments of
roots in places other than Mozilla software, but it is clear that CAs
should be working on options for future non-Mozilla software cases if their
customers need more policy flexibility and do not need compatibility with
Mozilla software.

Thanks,
Peter


Re: Underscore characters

2018-12-27 Thread Ryan Sleevi via dev-security-policy
I'm not really sure I understand this response at all. I'm hoping you can
clarify.

On Thu, Dec 27, 2018 at 3:45 PM James Burton  wrote:

> For a CA to intentionally state that they are going to violate the BR
> requirements means that that CA is under immense pressure to comply with
> demands or face retribution.
>

I'm not sure I understand how this flows. Comply with whose demands? Face
retribution from who, and why?


> The sanctions inflicted on a CA for intentionally violating the BR
> requirements can be severe. Rolling the dice of chance. Why take the risk?
>

I'm not sure I understand the question at the end, and suspect there's a
point to the question I'm missing.

Presumably, a CA stating they're going to violate the BR requirements,
knowing the risk to trust that it may pose, would have done everything
possible to gather every piece of information so that they could assess the
risk of violation is outweighed by whatever other risks (in this case,
revocation). If that's the case, is it unreasonable to ask how the CA
determined that - which is the root cause analysis question? And how to
mitigate whatever other risk (in this case, revocation) poses going
forward, so that violating the BRs isn't consistently seen as the "best"
option?


Re: Underscore characters

2018-12-27 Thread thomas.gh.horn--- via dev-security-policy


As to why these certificates have to be revoked, you should see this the other 
way round: as a very generous service of the community to you and your 
customers!

Certificates with (pseudo-)hostnames in them are clearly invalid, so a 
conforming implementation should not accept them for anything and they should 
not pose any security risk. Based on this assessment (no revocation if no 
security risk), a CA could very well issue a certificate including any of the 
(pseudo-)hostnames "example.com_cvs.com", "example.com/cvs.com", 
"cvs.com/example.com", "https://example.com/cvs.com", "example@cvs.com" to 
the owner of example.com (who, arguably, has the exact same right to them as 
the owner of cvs.com has) and refuse to revoke them.

As to the consequences (in case this really becomes an incident report/incident 
reports): this shows a SEVERE lack of ability to revoke certificates on 
DigiCert's side, which must have been known AND ACCEPTED for a long time (this 
cannot be the first "blackout period" of (in the best case) 3.5 months). Thus, 
it seems to be a good idea to:

1. Henceforth, make NSS only accept certificates by DigiCert with a maximum 
validity of 100 days. Let's Encrypt has shown that this is clearly feasible.

or

2. Henceforth, require DigiCert to revoke a small, randomly (e.g., using RFC 
3797) selected subset of their certificates every day (within 7 days). If this, 
e.g., for the same reasons as outlined in these incident reports, is not 
possible, it will trigger (an incrementally decreasing number of) more incident 
reports.

Both proposals would lead to more automation and a better understanding of the 
requirement of timely revocation, while pushing the ecosystem in the right 
direction. For its easiness, the first proposal would be my favorite but I 
would be very interested in hearing other people's thoughts about these 
proposals.
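For context on proposal 2: RFC 3797 describes publicly verifiable selection driven by random sources announced in advance. A loose sketch of the idea (not the RFC's exact algorithm; the seed string and serial numbers are made up for illustration):

```python
import hashlib

# Loose sketch of verifiable random selection in the spirit of RFC 3797
# (NOT its exact algorithm): once the public random seed is published,
# anyone can recompute exactly which serials were selected.
def select(serials, public_seed, count):
    def score(serial):
        # Deterministic per-serial score derived from the public seed.
        data = f"{public_seed}.{serial}".encode()
        return hashlib.sha256(data).hexdigest()
    return sorted(serials, key=score)[:count]

serials = ["01A3", "0F42", "77B1", "8C2D", "90EE"]  # hypothetical serials
print(select(serials, "seed-2018-12-27", 2))
```

Because the score depends only on public inputs, neither the CA nor its customers can bias which certificates come up for revocation.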



Re: Underscore characters

2018-12-27 Thread James Burton via dev-security-policy
For a CA to intentionally state that they are going to violate the BR
requirements means that that CA is under immense pressure to comply with
demands or face retribution. The sanctions inflicted on a CA for
intentionally violating the BR requirements can be severe. Rolling the dice
of chance. Why take the risk?



On Thu, Dec 27, 2018 at 8:21 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'm not trying to throw you under the bus here, but I think it's helpful if
> you could highlight what new information you see being required, versus
> that which is already required.
>
> I think, yes, you're right that it's not well received if you go violate
> the BRs and then, after the fact, say "Hey, yeah, we violated, but here's
> why", and finding out that the reasons are met with a lot of skepticism and
> the math being shaky, and you can see that from past incident reports it
> doesn't go over well.
>
> But it's also not well received if it's before, and the statement is "Our
> customer thinks we should violate the BRs. What would happen if we did, and
> what information do you need from us?". That gets into the moral hazard
> that Matt spoke to, and is a huge burden on the community where the
> expectation is that the CA says "Sorry, we can't do that".
>
> So the assumption here is that, in all of this discussion, DigiCert's done
> everything it can to understand the issue, the timelines, remediation, etc,
> and has plans to address both each and every customer and the systemic
> issues that have emerged. If that's not the case, then how are we not in
> one of those two scenarios above? And if it is the case, isn't that
> information readily available by now?
>
> From the discussions on the incident reports, I feel like that's been the
> heart of the questions; which is trying to understand what the root cause
> is and what the remediation plan is. The statement "We'll miss the first
> deadline, but we'll hit the second", but without any details about how or
> why, or the steps being taken to ensure no deadlines are missed in the
> future, doesn't really inspire confidence, and is exactly the same kind of
> feedback that would be given post-incident.
>
> On Thu, Dec 27, 2018 at 1:50 PM Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > There's a little bit of a "damned if you do, damned if you don't" problem
> > here. Wait until you have all the information? That's a paddlin'.  File
> > before you have enough information? That's a paddlin'. I'd appreciate
> > better guidance on what Mozilla expects from these incident reports
> > timing-wise.
> >
> > -Original Message-
> > From: dev-security-policy  >
> > On Behalf Of Jeremy Rowley via dev-security-policy
> > Sent: Thursday, December 27, 2018 11:47 AM
> > To: r...@sleevi.com
> > Cc: dev-security-policy@lists.mozilla.org
> > Subject: RE: Underscore characters
> >
> > The original incident report contained all of the details of the initial
> > filing.  The additional, separated reports are trickling in as I get
> enough
> > info to post something in reply to the updated questions. As the
> questions
> > asked have changed from the original 7 in the Mozilla incident report,
> > getting the info back takes time. Especially during the holiday season.
> > We’re also working to close out as many without an exception as possible.
> > Note that the deadline has not passed yet so all of these incident
> reports
> > are theoretical (and not actually incidents) until Jan 15th. I gave the
> > community the total potential number of certificates impacted and the
> total
> > number of customers so we can have a community discussion on the overall
> > risk and get public comments into the process before the deadline passes.
> > I’m unaware of any policy at Mozilla or Google that provides guidance on
> > how to file expected issues before they happen. If there is, I’d gladly
> > follow that.
> >
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Underscore characters

2018-12-27 Thread Ryan Sleevi via dev-security-policy
I'm not trying to throw you under the bus here, but I think it's helpful if
you could highlight what new information you see being required, versus
that which is already required.

I think, yes, you're right that it's not well received if you violate the
BRs and then, after the fact, say "Hey, yeah, we violated, but here's
why", only to find the reasons met with a lot of skepticism and the math
judged shaky; you can see from past incident reports that it doesn't go
over well.

But it's also not well received if it's before, and the statement is "Our
customer thinks we should violate the BRs. What would happen if we did, and
what information do you need from us?". That gets into the moral hazard
that Matt spoke to, and is a huge burden on the community where the
expectation is that the CA says "Sorry, we can't do that".

So the assumption here is that, in all of this discussion, DigiCert's done
everything it can to understand the issue, the timelines, remediation, etc,
and has plans to address both each and every customer and the systemic
issues that have emerged. If that's not the case, then how are we not in
one of those two scenarios above? And if it is the case, isn't that
information readily available by now?

From the discussions on the incident reports, I feel like that's been the
heart of the questions; which is trying to understand what the root cause
is and what the remediation plan is. The statement "We'll miss the first
deadline, but we'll hit the second", but without any details about how or
why, or the steps being taken to ensure no deadlines are missed in the
future, doesn't really inspire confidence, and is exactly the same kind of
feedback that would be given post-incident.

On Thu, Dec 27, 2018 at 1:50 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> There's a little bit of a "damned if you do, damned if you don't" problem
> here. Wait until you have all the information? That's a paddlin'. File
> before you have enough information? That's a paddlin'. I'd appreciate
> better guidance on what Mozilla expects from these incident reports
> timing-wise.
>
> -----Original Message-----
> From: dev-security-policy 
> On Behalf Of Jeremy Rowley via dev-security-policy
> Sent: Thursday, December 27, 2018 11:47 AM
> To: r...@sleevi.com
> Cc: dev-security-policy@lists.mozilla.org
> Subject: RE: Underscore characters
>
> The original incident report contained all of the details of the initial
> filing.  The additional, separated reports are trickling in as I get enough
> info to post something in reply to the updated questions. As the questions
> asked have changed from the original 7 in the Mozilla incident report,
> getting the info back takes time. Especially during the holiday season.
> We’re also working to close out as many without an exception as possible.
> Note that the deadline has not passed yet so all of these incident reports
> are theoretical (and not actually incidents) until Jan 15th. I gave the
> community the total potential number of certificates impacted and the total
> number of customers so we can have a community discussion on the overall
> risk and get public comments into the process before the deadline passes.
> I’m unaware of any policy at Mozilla or Google that provides guidance on
> how to file expected issues before they happen. If there is, I’d gladly
> follow that.
>


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 8:34 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Dec 27, 2018 at 11:12 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Yes, you are consistently mischaracterizing everything I post.
> >
> > My question was a refinement of the original question to the one case
> > where the alternative in the original question (configuring the browser
> > to trust a non-default PKI) would not be meaningful.
> >
>
> I hope you can understand my confusion, as again, you've provided a
> statement, but not an actual question.
>
> Peter provided two, fairly simple to understand, very direct questions:
>

From earlier messages, I realized that the answer to my initial question is
obviously "no", because there is at least one more supported Mozilla
product that uses the same trust store: Thunderbird.  The second part is
also faulty, because it doesn't account for certificates for public IP
addresses.  Fixing this makes the question more complex:

Is it the expectation of Mozilla that "publicly trusted certificates" for
server authentication should only be used by customers for servers that are:
a) meant to be accessed by Mozilla Firefox and/or Mozilla Thunderbird
  - This effectively means the server is serving at least one of HTTP, FTP,
WS (WebSocket), NNTP, IMAP, POP3, SMTP, IRC, or XMPP over TLS (including
iCalendar, CalDAV, WCAP, RSS, and the Twitter API over one of the supported
protocols)
b) publicly accessible on the Internet
  - This means the server is reached either via an IP address that is a
public IP, or via a hostname that publicly resolves to a public IP
  - Thunderbird does do SRV record lookups, but SRV records are just
pointers to a hostname, so this does not change the above
c) committed to complying with a 24-hour (wall time) response time for
certificate replacement upon demand by Mozilla?

This is a longer question, but more accurately reflects how Mozilla uses
publicly trusted certificates.
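Criterion (b), reachability via a public IP, is mechanical enough to sketch
in a few lines of Python. This is purely an illustration of the distinction
being drawn, not anything Mozilla policy prescribes; the helper names
(`is_public_ip`, `resolves_to_public_ip`) are my own:

```python
import ipaddress
import socket

def is_public_ip(ip_str: str) -> bool:
    """True if the address is globally routable (not RFC 1918, loopback,
    link-local, etc.), per the IANA special-purpose registries."""
    return ipaddress.ip_address(ip_str).is_global

def resolves_to_public_ip(hostname: str) -> bool:
    """True if every address the name resolves to is a public IP.
    Requires working DNS; raises socket.gaierror for unresolvable names."""
    infos = socket.getaddrinfo(hostname, None)
    return all(is_public_ip(info[4][0]) for info in infos)

# Private and loopback ranges fail the check; public addresses pass.
print(is_public_ip("10.0.0.1"))   # False (RFC 1918 private)
print(is_public_ip("8.8.8.8"))    # True (public)
```

Note that per the SRV point above, a check like this would still bottom out
at A/AAAA records for some hostname, so the SRV indirection doesn't change
the classification.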

Is the expectation that "publicly trusted certificates" should only be used
> > by customers for servers that are:
> > - meant to be accessed with a Mozilla web browser, and
> > - publicly accessible on the Internet (meaning the DNS name is publicly
> > resolvable to a public IP), and
> > - committed to complying with a 24-hour (wall time) response time
> > certificate replacement upon demand by Mozilla?
>

Thanks,
Peter


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy

On 27/12/2018 17:28, Ryan Sleevi wrote:

On Thu, Dec 27, 2018 at 11:12 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Yes, you are consistently mischaracterizing everything I post.

My question was a refinement of the original question to the one case
where the alternative in the original question (configuring the browser
to trust a non-default PKI) would not be meaningful.



I hope you can understand my confusion, as again, you've provided a
statement, but not an actual question.

Peter provided two, fairly simple to understand, very direct questions:

Is the expectation that "publicly trusted certificates" should only be used

by customers for servers that are:
- meant to be accessed with a Mozilla web browser, and
- publicly accessible on the Internet (meaning the DNS name is publicly
resolvable to a public IP), and
- committed to complying with a 24-hour (wall time) response time
certificate replacement upon demand by Mozilla?




Is the recommendation from Mozilla that customers who want to allow Mozilla

browsers to access sites but do not want to meet one or both of the other
two use the Firefox policies for Certificates (

https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
) to add a new CA to the browser?



You presented a question as:

Is the recommendation that customers should not use publicly

trusted certificates for servers that are meant to be accessed by the
general public using a Mozilla web browser unless they are committed
to complying with a 24-hour (wall time) response time certificate
replacement upon demand by Mozilla?



It would appear that it is merely a rephrasing of that first question, but
as a negative question ("should not") rather than Peter's original positive
question ("should only").

Could you help me understand what's different about Peter's first question
and your question? It's very clear you have opinions as to the second
question, but it still seems as if you're merely asking the first question,
but in a way that provides less information. If there's something new or
unique to the question, rephrasing your question may make it clearer. Doing
so without expressing a particular opinion on what the answer should be
seems like an even more positive step forward.



Once again, the question was about the special case of the combination
of Peter's two closely related questions for the case where the option
suggested in the second question (using Firefox policies for
Certificates) makes no sense, as the "customer" does not control the
browser.

But you seem insistent on mischaracterizing an unpleasant question in 
every way possible.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Underscore characters

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 26, 2018 at 1:03 PM Jeremy Rowley 
wrote:

> Much better to treat this question as “We know X is going to happen.
> What’s the best way to mitigate the concerns of the community?”  Exception
> was the wrong word in my original post. I should have used “What would you
> like us to do to mitigate when we miss the Jan 15th deadline?" instead.
> Apologies for the confusion there.
>

As I tried to highlight several times during early discussions, it's not
really ideal to have each of these trickle in over time.

DigiCert has apparently decided that for 14-15 customers it has sufficient
information to know that X is going to happen, based on their risk
analysis. Why are we seeing bugs trickle in, such as
https://bugzilla.mozilla.org/show_bug.cgi?id=1516545 ?

It would seem uncontroversial to suggest that, as part of the risk analysis
that DigiCert is claiming has already been done, that it has all the
information for an incident report for all of the customers it expects to
not revoke certificates for. If it doesn't, then it suggests that the risk
analysis is not being done responsibly, and being outsourced to the
community to perform.

Should we expect another 12 bugs to be filed? If so, when? If not, why?

As mentioned, if treating this as part of a "Responding to underscores"
incident, then this has the effect of being a slow trickle of an incomplete
incident report overall, and incomplete remediation plan, and those tend
not to bode well. I don't think it'd really be engaging with mitigation to,
say, file a bug on Jan 14th - so how do we move the discussion forward and
make sure the facts are available?


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Nick Lamb via dev-security-policy
On Thu, 27 Dec 2018 15:30:01 +0100
Jakob Bohm via dev-security-policy
 wrote:

> The problem here is that the prohibition lies in a complex legal
> reading of multiple documents, similar to a situation where a court
> rules that a set of laws has an (unexpected to many) legal
> consequence.

I completely disagree. This prohibition was an obvious fact, well known
to (I had assumed prior to this present fever) everyone who cared about
the Internet's underlying infrastructure.

The only species of technical people I ever ran into previously who
professed "ignorance" of the rule were the sort who see documents like
RFCs as descriptive rather than prescriptive and so their position
would be (as it seems yours is) "Whatever I can do is allowed". Hardly
a useful rule for the Web PKI.

Descriptive documents certainly have their place - I greatly admire
Geoff Pullum's Cambridge Grammar of the English Language, and I
do own the more compact "Student's Introduction" book, both of which
are descriptive since of course a natural language is not defined by
such documents and can only be described by them (and imperfectly,
exactly what's going on in English remains an active area of research).
But that place is not here: the exact workings of DNS are prescribed, in
documents you've called a "complex legal reading of multiple documents"
but which are more familiarly known as "a bunch of pretty readable RFCs on
exactly this topic".

> It would benefit the honesty of this discussion if the side that won
> in the CAB/F stops pretending that everybody else "should have known"
> that their victory was the only legally possible outcome and should
> never have acted otherwise.

I would suggest it would more benefit the honesty of the discussion if
those who somehow convinced themselves of falsehood would accept this
was a serious flaw and resolve to do better in future, rather than
suppose that it was unavoidable and so we have to expect they'll keep
doing it.

Consider it from my position. In one case I know Jakob made an error
but has learned a valuable lesson from it and won't be caught the same
way twice. In the other case Jakob is unreliable on simple matters of
fact and I shouldn't believe anything further he says.


> Maybe because it is not publicly prohibited in general (the DNS
> standard only recommends against it, and other public standards
> require some such names for uses such as publishing certain public
> keys).  The prohibition exists only in the certificate standard
> (PKIX) and maybe in the registration policies of TLDs (for TLD+1
> names only).

Nope. You are, as it seems others in your position have done before,
confusing restrictions on all names in DNS with restrictions on names
for _hosts_ in DNS. Lots of things can have underscores in their names,
and will continue to have underscores in their names, but hosts cannot.
Web PKI certs are issued for host names (and IP addresses, and as a
special case, TOR hidden services).
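The host-name rule Nick is pointing at is the RFC 952 / RFC 1123 "LDH"
(letters, digits, hyphen) restriction on host labels, which is narrower
than what DNS owner names allow. A rough sketch of the distinction, under
the assumption that we only check the LDH rule (the regex and helper are my
own illustration, ignoring IDNA and the 253-octet total-length limit):

```python
import re

# RFC 952 / RFC 1123 "LDH" rule: each host label is letters, digits, and
# hyphens, must not begin or end with a hyphen, and is at most 63 octets.
LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$", re.IGNORECASE)

def is_valid_hostname(name: str) -> bool:
    """True if every label satisfies the LDH host-name rule."""
    labels = name.rstrip(".").split(".")
    return all(LABEL.match(label) for label in labels)

print(is_valid_hostname("www.example.com"))     # True: a valid host name
print(is_valid_hostname("_dmarc.example.com"))  # False: legal DNS owner
                                                # name, but not a host name
```

So `_dmarc.example.com` is a perfectly good place to hang a TXT record,
while never being a name a host can bear, which is exactly the distinction
at issue for certificates.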

Imagine if, on the same basis, a CA were to insist that they'd
understood Texas to be a US state, and so they'd written C=TX on the
rationale that a "state" is essentially the same kind of thing as a
"country".

I do not doubt they could find a few (mostly Texan) people to defend
this view, but it's obviously wrong, and when the City of Austin
Independent League of Skateboarders protests that they need to keep
getting certificates with C=TX for compatibility reasons we'd have a
good laugh and tell the CA to stop being so stupid, revoke these certs
and move on.

> Also it isn't the "Web PKI".  It is the "Public TLS PKI", which is
> not confined to Web Browsers surfing online shops and social
> networks, and hasn't been since at least the day TLS was made an IETF
> standard.

It is _named_ the Web PKI. As you point out, it is lots of things, and
so "Web PKI" is not a good description but its name remains the Web
PKI anyway.

The name for people from my country is "Britons". Again it's not a good
description, since some of them aren't from the island of Great Britain
as the country extends to adjacent islands too. Nevertheless the name is
"Britons".

Nick.


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 11:12 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Yes, you are consistently mischaracterizing everything I post.
>
> My question was a refinement of the original question to the one case
> where the alternative in the original question (configuring the browser
> to trust a non-default PKI) would not be meaningful.
>

I hope you can understand my confusion, as again, you've provided a
statement, but not an actual question.

Peter provided two, fairly simple to understand, very direct questions:

Is the expectation that "publicly trusted certificates" should only be used
> by customers for servers that are:
> - meant to be accessed with a Mozilla web browser, and
> - publicly accessible on the Internet (meaning the DNS name is publicly
> resolvable to a public IP), and
> - committed to complying with a 24-hour (wall time) response time
> certificate replacement upon demand by Mozilla?



Is the recommendation from Mozilla that customers who want to allow Mozilla
> browsers to access sites but do not want to meet one or both of the other
> two use the Firefox policies for Certificates (
>
> https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
> ) to add a new CA to the browser?


You presented a question as:

Is the recommendation that customers should not use publicly
> trusted certificates for servers that are meant to be accessed by the
> general public using a Mozilla web browser unless they are committed
> to complying with a 24-hour (wall time) response time certificate
> replacement upon demand by Mozilla?


It would appear that it is merely a rephrasing of that first question, but
as a negative question ("should not") rather than Peter's original positive
question ("should only").

Could you help me understand what's different about Peter's first question
and your question? It's very clear you have opinions as to the second
question, but it still seems as if you're merely asking the first question,
but in a way that provides less information. If there's something new or
unique to the question, rephrasing your question may make it clearer. Doing
so without expressing a particular opinion on what the answer should be
seems like an even more positive step forward.


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread James Burton via dev-security-policy
The main reason that publicly trusted certificates are used by
organizations for all infrastructure (internal and external) is that it's
far cheaper than building and maintaining an internal PKI.

On Thu, Dec 27, 2018 at 4:14 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 27/12/2018 17:02, Rob Stradling wrote:
> > On 27/12/2018 15:38, Jakob Bohm via dev-security-policy wrote:
> > 
> >> For example, the relevant EKU is named "id-kp-serverAuth" not "id-kp-
> >> browserWwwServerAuth" .  WWW is mentioned only in a comment under the
> >> OID definition.
> >
> > Hi Jakob.
> >
> > Are you suggesting that comments in ASN.1 specifications are meaningless
> > or that they do not convey intent?
> >
> > Also, are you suggesting that a canonical OID name must clearly convey
> > the full and precise intent of the purpose(s) for which the OID should
> > be used?
> >
>
> In general no.  However in this special case, the comment is
> inconsistent with everything else.
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy

On 27/12/2018 17:13, Jakob Bohm wrote:

On 27/12/2018 17:02, Rob Stradling wrote:

On 27/12/2018 15:38, Jakob Bohm via dev-security-policy wrote:


For example, the relevant EKU is named "id-kp-serverAuth" not "id-kp-
browserWwwServerAuth" .  WWW is mentioned only in a comment under the
OID definition.


Hi Jakob.

Are you suggesting that comments in ASN.1 specifications are meaningless
or that they do not convey intent?

Also, are you suggesting that a canonical OID name must clearly convey
the full and precise intent of the purpose(s) for which the OID should
be used?



In general no.  However in this special case, the comment is
inconsistent with everything else.



Furthermore, this particular comment is absent in the actual ASN.1
module at the end of RFC5280, making it clear that it isn't a semantic
comment.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy

On 27/12/2018 17:02, Rob Stradling wrote:

On 27/12/2018 15:38, Jakob Bohm via dev-security-policy wrote:


For example, the relevant EKU is named "id-kp-serverAuth" not "id-kp-
browserWwwServerAuth" .  WWW is mentioned only in a comment under the
OID definition.


Hi Jakob.

Are you suggesting that comments in ASN.1 specifications are meaningless
or that they do not convey intent?

Also, are you suggesting that a canonical OID name must clearly convey
the full and precise intent of the purpose(s) for which the OID should
be used?



In general no.  However in this special case, the comment is
inconsistent with everything else.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy

On 27/12/2018 16:55, Ryan Sleevi wrote:

On Thu, Dec 27, 2018 at 10:41 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


He described three combined conditions to be met. You've described a
situation "What if you meet two, but not three". I believe that was
originally captured in his question, so what new information is being

asked

about here?



Using Firefox policies to reconfigure the browser is not a relevant
alternative for genuinely public web servers in the age of HTTPS-
everywhere.  That's the difference from the other combinations.



I'm sorry, but I still fail to see the question there. That seems to be a
statement of opinion.

Do you believe it's mischaracterizing your question as effectively
restating "What happens if you meet two, but not all three?" If you do,
perhaps you could help clarify what your original question is, without any
statement about what you believe or presume the answer to be. That seems
the best way to get the information you feel is lacking.



Yes, you are consistently mischaracterizing everything I post.

My question was a refinement of the original question to the one case
where the alternative in the original question (configuring the browser
to trust a non-default PKI) would not be meaningful.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Rob Stradling via dev-security-policy
On 27/12/2018 15:38, Jakob Bohm via dev-security-policy wrote:

> For example, the relevant EKU is named "id-kp-serverAuth" not "id-kp-
> browserWwwServerAuth" .  WWW is mentioned only in a comment under the
> OID definition.

Hi Jakob.

Are you suggesting that comments in ASN.1 specifications are meaningless 
or that they do not convey intent?

Also, are you suggesting that a canonical OID name must clearly convey 
the full and precise intent of the purpose(s) for which the OID should 
be used?
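For readers tracing the OID argument: the identifier and the comment both
come from RFC 5280's id-kp arc. A small lookup table (purely illustrative;
the comment strings reproduce the RFC's ASN.1 comments) shows where "WWW"
actually appears:

```python
# Extended key purpose OIDs under id-kp (1.3.6.1.5.5.7.3), per RFC 5280.
# Note that "WWW" appears only in the RFC's comment text, never in the
# OID name itself.
ID_KP = "1.3.6.1.5.5.7.3"
EKU = {
    f"{ID_KP}.1": ("id-kp-serverAuth", "TLS WWW server authentication"),
    f"{ID_KP}.2": ("id-kp-clientAuth", "TLS WWW client authentication"),
    f"{ID_KP}.3": ("id-kp-codeSigning", "Signing of downloadable executable code"),
    f"{ID_KP}.4": ("id-kp-emailProtection", "Email protection"),
}

name, comment = EKU["1.3.6.1.5.5.7.3.1"]
print(name)                             # id-kp-serverAuth
print("WWW" in name, "WWW" in comment)  # False True
```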

-- 
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited



Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 10:41 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > He described three combined conditions to be met. You've described a
> > situation "What if you meet two, but not three". I believe that was
> > originally captured in his question, so what new information is being
> asked
> > about here?
> >
>
> Using Firefox policies to reconfigure the browser is not a relevant
> alternative for genuinely public web servers in the age of HTTPS-
> everywhere.  That's the difference from the other combinations.
>

I'm sorry, but I still fail to see the question there. That seems to be a
statement of opinion.

Do you believe it's mischaracterizing your question as effectively
restating "What happens if you meet two, but not all three?" If you do,
perhaps you could help clarify what your original question is, without any
statement about what you believe or presume the answer to be. That seems
the best way to get the information you feel is lacking.


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 10:38 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> PKIX clearly uses definitions that make it clear that the same PKI
> should be used for most/all TLS implementations for the public Internet,
> and this is indeed the common practice on any OS that installs a root
> store in a shared location rather than inside browser source code.
>

This is also not correct, and I'm afraid may be a result of a confusion
between certificate profile and what a PKI is. While I would be more than
happy to help identify your confusion and resolve it, I don't think this
would be the best thread. Unfortunately, both your statement of intent and
history are, thankfully, false.


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy
On 27/12/2018 16:24, Ryan Sleevi wrote:
> On Thu, Dec 27, 2018 at 9:34 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On 26/12/2018 22:42, Peter Bowen wrote:
>>> In the discussion of how to handle certain certificates that no longer
>> meet
>>> CA/Browser Forum baseline requirements, Wayne asked for the "Reason that
>>> publicly-trusted certificates are in use" by the customers.  This seems
>> to
>>> imply that Mozilla has an opinion that the default should not be to use
>>> "publicly-trusted certificates".  I've not seen this previously raised,
>> so
>>> I want to better understand the expectations here and what customers
>> should
>>> consider for their future plans.
>>>
>>> Is the expectation that "publicly trusted certificates" should only be
>> used
>>> by customers for servers that are:
>>> - meant to be accessed with a Mozilla web browser, and
>>> - publicly accessible on the Internet (meaning the DNS name is publicly
>>> resolvable to a public IP), and
>>> - committed to complying with a 24-hour (wall time) response time
>>> certificate replacement upon demand by Mozilla?
>>>
>>> Is the recommendation from Mozilla that customers who want to allow
>> Mozilla
>>> browsers to access sites but do not want to meet one or both of the other
>>> two use the Firefox policies for Certificates (
>>>
>> https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
>>> ) to add a new CA to the browser?
>>>
>>
>> Also, is the recommendation that customers should not use publicly
>> trusted certificates for servers that are meant to be accessed by the
>> general public using a Mozilla web browser unless they are
>>
>>> - committed to complying with a 24-hour (wall time) response time
>>> certificate replacement upon demand by Mozilla?
>>
> 
> Could you help me understand how that question is meaningfully different
> than what Peter originally asked?
> 
> He described three combined conditions to be met. You've described a
> situation "What if you meet two, but not three". I believe that was
> originally captured in his question, so what new information is being asked
> about here?
> 

Using Firefox policies to reconfigure the browser is not a relevant 
alternative for genuinely public web servers in the age of HTTPS-
everywhere.  That's the difference from the other combinations.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy

On 27/12/2018 16:16, Ryan Sleevi wrote:

On Thu, Dec 27, 2018 at 9:30 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Also it isn't the "Web PKI".  It is the "Public TLS PKI", which is not
confined to Web Browsers surfing online shops and social networks, and
hasn't
been since at least the day TLS was made an IETF standard.



This reply is filled with a number of unrelated and unproductive
non sequiturs, but this one is particularly worth calling out as wrong -
historically and factually.



And I would say that this more accurately describes your reply.


TLS has, as with the specifications that preceded it (SSL, PCT), treated
PKI as an opaque black box for which inputs go in, and a yes/no come out.
TLS has entirely, and intentionally, left unspecified what goes on within
that box. The existence of TLS, much like the existence of S/MIME, no more
defines the PKI than it defines the color of the sky or what time to set
your alarm for the morning.

The concept of the PKI has, even in traces of the X.500 DIT, considered
itself a loose amalgamation of various different PKIs, interoperating where
they are compatible (technology, policies, implementation), but otherwise
managed distinctly. This can be seen from the first discussions of audits,
which were concerned about assessing the interoperability of these distinct
PKIs, to the development and foundation of the PKIX WG, which produced a
number of documents to smooth the technological differences (e.g. RFC
5280-and-predecessors) and policy differences (3647 and predecessors).


The PKIX definitions make clear that the same PKI
should be used for most/all TLS implementations for the public Internet,
and this is indeed the common practice on any OS that installs a root
store in a shared location rather than inside browser source code.

For example, the relevant EKU is named "id-kp-serverAuth" not "id-kp-
browserWwwServerAuth" .  WWW is mentioned only in a comment under the
OID definition.



Yes, it very much is the "Web PKI", and has been for some time. Considering
those sets of CAs bundled in the context for SSL 2.0 and Netscape Navigator
were very much intended for the Web, it would be demonstrable ignorance to
argue otherwise.



Netscape Navigator/Communicator used the same PKI for mail and news
servers, Mozilla Thunderbird still does, and both Google and Mozilla use
such certificates for their public mail and NNTP servers.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 9:34 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 26/12/2018 22:42, Peter Bowen wrote:
> > In the discussion of how to handle certain certificates that no longer
> meet
> > CA/Browser Forum baseline requirements, Wayne asked for the "Reason that
> > publicly-trusted certificates are in use" by the customers.  This seems
> to
> > imply that Mozilla has an opinion that the default should not be to use
> > "publicly-trusted certificates".  I've not seen this previously raised,
> so
> > I want to better understand the expectations here and what customers
> should
> > consider for their future plans.
> >
> > Is the expectation that "publicly trusted certificates" should only be
> used
> > by customers for servers that are:
> > - meant to be accessed with a Mozilla web browser, and
> > - publicly accessible on the Internet (meaning the DNS name is publicly
> > resolvable to a public IP), and
> > - committed to complying with a 24-hour (wall time) response time
> > certificate replacement upon demand by Mozilla?
> >
> > Is the recommendation from Mozilla that customers who want to allow
> Mozilla
> > browsers to access sites but do not want to meet one or both of the other
> > two use the Firefox policies for Certificates (
> >
> https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
> > ) to add a new CA to the browser?
> >
>
> Also, is the recommendation that customers should not use publicly
> trusted certificates for servers that are meant to be accessed by the
> general public using a Mozilla web browser unless they are
>
> > - committed to complying with a 24-hour (wall time) response time
> > certificate replacement upon demand by Mozilla?
>

Could you help me understand how that question is meaningfully different
than what Peter originally asked?

He described three combined conditions to be met. You've described a
situation "What if you meet two, but not three". I believe that was
originally captured in his question, so what new information is being asked
about here?


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Dec 27, 2018 at 9:30 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Also it isn't the "Web PKI".  It is the "Public TLS PKI", which is not
> confined to Web Browsers surfing online shops and social networks, and
> hasn't
> been since at least the day TLS was made an IETF standard.
>

This reply is filled with a number of unrelated and unproductive
non sequiturs, but this one is particularly worth calling out as wrong -
historically and factually.

TLS has, as with the specifications that preceded it (SSL, PCT), treated
PKI as an opaque black box: inputs go in, and a yes/no comes out.
TLS has entirely, and intentionally, left unspecified what goes on within
that box. The existence of TLS, much like the existence of S/MIME, no more
defines the PKI than it defines the color of the sky or what time to set
your alarm for the morning.

The concept of the PKI has, even in traces of the X.500 DIT, considered
itself a loose amalgamation of various different PKIs, interoperating where
they are compatible (technology, policies, implementation), but otherwise
managed distinctly. This can be seen from the first discussions of audits,
which were concerned about assessing the interoperability of these distinct
PKIs, to the development and foundation of the PKIX WG, which produced a
number of documents to smooth the technological differences (e.g. RFC
5280-and-predecessors) and policy differences (3647 and predecessors).

Yes, it very much is the "Web PKI", and has been for some time. Considering
those sets of CAs bundled in the context for SSL 2.0 and Netscape Navigator
were very much intended for the Web, it would be demonstrable ignorance to
argue otherwise.


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy
On 26/12/2018 22:42, Peter Bowen wrote:
> In the discussion of how to handle certain certificates that no longer meet
> CA/Browser Forum baseline requirements, Wayne asked for the "Reason that
> publicly-trusted certificates are in use" by the customers.  This seems to
> imply that Mozilla has an opinion that the default should not be to use
> "publicly-trusted certificates".  I've not seen this previously raised, so
> I want to better understand the expectations here and what customers should
> consider for their future plans.
> 
> Is the expectation that "publicly trusted certificates" should only be used
> by customers for servers that are:
> - meant to be accessed with a Mozilla web browser, and
> - publicly accessible on the Internet (meaning the DNS name is publicly
> resolvable to a public IP), and
> - committed to complying with a 24-hour (wall time) response time
> certificate replacement upon demand by Mozilla?
> 
> Is the recommendation from Mozilla that customers who want to allow Mozilla
> browsers to access sites but do not want to meet one or both of the other
> two use the Firefox policies for Certificates (
> https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
> ) to add a new CA to the browser?
> 

Also, is the recommendation that customers should not use publicly 
trusted certificates for servers that are meant to be accessed by the 
general public using a Mozilla web browser unless they are

> - committed to complying with a 24-hour (wall time) response time
> certificate replacement upon demand by Mozilla?

Which I have repeatedly argued is extremely onerous on a huge subset of 
all server operators.
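
For readers who have not seen it, the policy mechanism Peter links to is driven by a `policies.json` file (or the equivalent Windows GPO / macOS plist). Going by the README linked above, a minimal sketch that trusts an extra CA looks roughly like this (the file name is illustrative):

```json
{
  "policies": {
    "Certificates": {
      "ImportEnterpriseRoots": true,
      "Install": ["my-internal-ca.pem"]
    }
  }
}
```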

Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Jakob Bohm via dev-security-policy
On 27/12/2018 13:39, Nick Lamb wrote:
> As a relying party I read this in the context of the fact that we're 
> talking about names that are anyway prohibited.
> 

The problem here is that the prohibition lies in a complex legal reading 
of multiple documents, similar to a situation where a court rules that a 
set of laws has an (unexpected to many) legal consequence.

Such rulings frequently come out from the highest federal courts of the 
US and the EU, and this is generally referred to as those courts 
effectively creating new legislation.

It would benefit the honesty of this discussion if the side that won in 
the CAB/F stopped pretending that everybody else "should have known" that 
their victory was the only legally possible outcome and should never 
have acted otherwise.

> Why would you need a publicly trusted certificate that specifies a name 
> that is publicly prohibited?
> 

Maybe because it is not publicly prohibited in general (the DNS standard 
only recommends against it, and other public standards require some such 
names for uses such as publishing certain public keys).  The prohibition 
exists only in the certificate standard (PKIX) and maybe in the registration 
policies of TLDs (for TLD+1 names only).

> I guess the answer is "But it works on Windows". And Windows is welcome 
> to implement a parallel "Windows PKI" which can have its own rules about 
> naming and whatever else and so the certificates could be issued in that 
> PKI but not in the Web PKI.

Actually, my only current uses of such names (none with certificates anyway) 
are all done using a non-Windows OS, and the names seem to work with every 
DNS library and tool tried.

Also it isn't the "Web PKI".  It is the "Public TLS PKI", which is not 
confined to Web Browsers surfing online shops and social networks, and hasn't 
been since at least the day TLS was made an IETF standard.



Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Nick Lamb via dev-security-policy
As a relying party I read this in the context of the fact that we're 
talking about names that are anyway prohibited.

Why would you need a publicly trusted certificate that specifies a name 
that is publicly prohibited?

I guess the answer is "But it works on Windows". And Windows is welcome 
to implement a parallel "Windows PKI" which can have its own rules about 
naming and whatever else and so the certificates could be issued in that 
PKI but not in the Web PKI.


Re: Online exposed keys database

2018-12-27 Thread Rob Stradling via dev-security-policy
On 27/12/2018 10:35, Matt Palmer via dev-security-policy wrote:
> Hmm, Rob's reply never made it to my inbox.  I'll reply to that separately
> now I know it's a thing.

Hi Matt.  I'm consistently receiving "Undelivered Mail Returned to 
Sender" messages from your mailserver, which is presumably why you 
didn't see my reply.  The body of each message is as follows:


This is the mail system at host mail.hezmatt.org.

I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

The mail system

: Command time limit exceeded: "exec /usr/bin/procmail -t -a "${EXTENSION}""


> On Thu, Dec 27, 2018 at 05:56:08PM +0900, Hector Martin 'marcan' via 
> dev-security-policy wrote:
>> On 19/12/2018 20:09, Rob Stradling via dev-security-policy wrote:
>>> I'm wondering how I might add a pwnedkeys check to crt.sh.  I think I'd
>>> prefer to have a table of SHA-256(SPKI) stored locally on the crt.sh DB.
>>
>> Yes, I think the right approach for an upstream source is to provide a big
>> list of hashes. People can then postprocess that into whatever database or
>> filter format they want.
> 
> The reason I haven't provided that (yet) is because, unlike pwnedpasswords,
> the set of pwned keys increases in real-time, as my scrapers go out into the
> world and find more keys.  Thus, a once-off dump of what's in the database
> today isn't going to be very useful tomorrow, or next week, or next month.
> 
> I don't want to put up a dump of everything until I've got a solid mechanism
> for people to retrieve and load updates of the dataset.  The last thing I
> want to do is give people any encouragement to use a stale data set.
> 
> Implementation of an auto-update mechanism is on the todo list, but it's
> quite a bit lower down the priority list than other things, like improving
> key scraping, and implementing a bloom filter of keys, which I feel is more
> useful, because you've got to hit the API anyway to get the attestation of
> compromise, so something with a bit of a false-positive rate isn't a big
> deal.
> 
> - Matt

-- 
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited



AW: CA Communication: Underscores in dNSNames

2018-12-27 Thread Buschart, Rufus via dev-security-policy
> On Tue, Dec 18, 2018 at 8:19 AM Jakob Bohm via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
> > On 10/12/2018 18:09, Ryan Sleevi wrote:
> > > On Mon, Dec 10, 2018 at 6:16 AM Buschart, Rufus via
> > > dev-security-policy < dev-security-policy@lists.mozilla.org> wrote:
> > >
> > >> Hello!
> > >>
> > >> It would be helpful, if the CA/B or Mozilla could publish a
> > >> document on its web pages to which we can redirect our customers,
> > >> if they have technical questions about this underscore issue. Right
> > >> now, I can only
> > tell
> > >> them, that they are forbidden because the ballot to explicitly
> > >> allow
> > them
> > >> failed, but not really why. Especially since the first result in
> > >> Google
> > for
> > >> "underscore domain name" is a StackOverflow article (
> > >> https://stackoverflow.com/a/2183140/1426535) stating that it is
> > >> technically perfectly okay and also RFC 5280 says "These characters
> > >> [underscore and at-sign] often appear in Internet addresses.  Such
> > >> addresses  MUST be encoded using an ASN.1 type that supports them."
> > >>
> > >
> > > There's definitely been a lot of back and forth on this topic. It's
> > unclear
> > > if you're looking for a clearer statement about why they're
> > > forbidden or where they're forbidden.
> > >
> >
> > It is clear that Rufus is looking for a link to the deprecation
> > ballot, rather than the old (failed) non-deprecation ballot.
> >
> 
> Thanks for sharing your interpretation. I don't think that is an accurate 
> summary, but it's useful to understand your perspective and
> how you interpret things.
> 

Thank you very much for the good and constructive discussion that followed my 
question. The main problem I had with this underscore issue is that the first 
hit at Google links to a SO article 
(https://stackoverflow.com/questions/2180465/can-domain-name-subdomains-have-an-underscore-in-it/2183140#2183140)
which is a little bit misleading if not read with a lot of care. But 
based on this discussion and the statement of Wayne I think everything is clear 
now. Thank you once again!

/Rufus


Re: Online exposed keys database

2018-12-27 Thread Matt Palmer via dev-security-policy
On Wed, 19 Dec 2018 05:09:11 -0600, Rob Stradling wrote:
> How do you handle malformed SPKIs?  (e.g., the algorithm parameters 
> field for an RSA public key is missing, whereas it should be present and 
> should contain an ASN.1 NULL). 
>
> Presumably your server/database only deals with correctly encoded SPKIs, 
> and it's up to the caller to ensure that they repair an incorrectly 
> encoded SPKI before they call your API? 

Yes, that's right.  I'm not doing exhaustive checks of SPKIs at present, but
the intention is to provide hashes for the canonically correct form(s) of
keys.  NIST EC keys annoyingly have *two* canonical forms (compressed and
uncompressed point), and I've taken the approach of listing the key under
both hashes, for completeness.  However, where there's only *supposed* to be
one form of key, that's all I'm providing a hash for in the API.  Frontends
can do the heavy lifting of detecting incorrect key encodings, and either
refuse to proceed, or try and do a patch job to get a key to query.
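
The dual-encoding point can be sketched as follows. The coordinates below are made-up illustration values, not a real key, and (as I understand it) the real pwnedkeys fingerprints are computed over the full DER-encoded SubjectPublicKeyInfo rather than the bare point:

```python
import hashlib

# Hypothetical uncompressed P-256 point: 0x04 || X || Y (65 bytes).
# X and Y here are illustrative placeholders, not a real public key.
x = (17).to_bytes(32, "big")
y = (8).to_bytes(32, "big")
uncompressed = b"\x04" + x + y

# Compressed form: 0x02 (even Y) or 0x03 (odd Y) prefix, then X (33 bytes).
prefix = b"\x02" if y[-1] % 2 == 0 else b"\x03"
compressed = prefix + x

# One key, two canonical encodings -> list it under both fingerprints.
fingerprints = {hashlib.sha256(p).hexdigest() for p in (uncompressed, compressed)}
print(len(fingerprints))  # 2
```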

> I'm wondering how I might add a pwnedkeys check to crt.sh.  I think I'd 
> prefer to have a table of SHA-256(SPKI) stored locally on the crt.sh DB.

As I mentioned in my reply to Hector, a raw dump of the existing keys (or,
rather, fingerprints thereof) is a low(ish) priority.  There's lots of more
important things to be working on at the moment, and this is only a
spare-time project.  Scrapers first!

> This was the gist of the idea: 
> https://www.mail-archive.com/trans@ietf.org/msg02705.html

Unfortunately that doesn't really give much in the "why" and "how"
dimensions, either.  I'd love it if you could expand on what benefits a
verifiable data structure provides, in your opinion, over and above a
straightforward database.

- Matt



Re: Online exposed keys database

2018-12-27 Thread Matt Palmer via dev-security-policy
Hmm, Rob's reply never made it to my inbox.  I'll reply to that separately
now I know it's a thing.

On Thu, Dec 27, 2018 at 05:56:08PM +0900, Hector Martin 'marcan' via 
dev-security-policy wrote:
> On 19/12/2018 20:09, Rob Stradling via dev-security-policy wrote:
> > I'm wondering how I might add a pwnedkeys check to crt.sh.  I think I'd
> > prefer to have a table of SHA-256(SPKI) stored locally on the crt.sh DB.
> 
> Yes, I think the right approach for an upstream source is to provide a big
> list of hashes. People can then postprocess that into whatever database or
> filter format they want.

The reason I haven't provided that (yet) is because, unlike pwnedpasswords,
the set of pwned keys increases in real-time, as my scrapers go out into the
world and find more keys.  Thus, a once-off dump of what's in the database
today isn't going to be very useful tomorrow, or next week, or next month.

I don't want to put up a dump of everything until I've got a solid mechanism
for people to retrieve and load updates of the dataset.  The last thing I
want to do is give people any encouragement to use a stale data set.

Implementation of an auto-update mechanism is on the todo list, but it's
quite a bit lower down the priority list than other things, like improving
key scraping, and implementing a bloom filter of keys, which I feel is more
useful, because you've got to hit the API anyway to get the attestation of
compromise, so something with a bit of a false-positive rate isn't a big
deal.
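
The bloom-filter idea above can be sketched in a few lines. The parameters and item encoding here are illustrative choices, not what pwnedkeys will actually ship:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k bit positions per item, derived from SHA-256."""

    def __init__(self, m_bits=1 << 20, k=7):
        self.m, self.k = m_bits, k
        self.bits = 0  # a big int doubles as the bit array for brevity

    def _positions(self, item: bytes):
        # Derive k independent positions by prefixing a counter to the item.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: bytes):
        # All k bits set -> "probably present"; any bit clear -> definitely absent.
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"sha256-fingerprint-of-a-compromised-key")
print(b"sha256-fingerprint-of-a-compromised-key" in bf)  # True
print(b"some-other-key" in bf)  # almost certainly False (false positives possible)
```

A "maybe" answer from the filter then triggers the authoritative API query, which is needed anyway to fetch the attestation of compromise, so the false-positive rate only costs a few spurious lookups.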

- Matt



Re: Online exposed keys database

2018-12-27 Thread Hector Martin 'marcan' via dev-security-policy

On 19/12/2018 20:09, Rob Stradling via dev-security-policy wrote:

I'm wondering how I might add a pwnedkeys check to crt.sh.  I think I'd
prefer to have a table of SHA-256(SPKI) stored locally on the crt.sh DB.


Yes, I think the right approach for an upstream source is to provide a 
big list of hashes. People can then postprocess that into whatever 
database or filter format they want. For example, this is how Pwned 
Passwords does things, and I wrote a bloom filter implementation to 
import that for production usage (with parameters tuned for my personal 
taste of false positive rate, etc).


--
Hector Martin "marcan"
Public key: https://mrcn.st/pub