Re: Organization info in certs not being properly recognized by Firefox
It's debatable whether those are actually facts, but perhaps some perspective will help the conversation. I'll use this one as a launching point:

* Users are, quite reasonably, focused on the viewport. After all, that's where the content is and where the task is. Many people simply never see the Location Bar or its security indicators.

That is certainly a true statement, and I would take it a step further: the user should not be expected to check the address bar under normal circumstances. Or, to put it differently, any security feature which requires the user to pay close attention to the address bar should be considered ineffectual.

That said, there is a use case worth considering here: when the page being viewed doesn't look right. Examples include when I expected site A but end up at B, or when I go to log in at C but instead see what looks like a phishing page. In such cases, seeing the organization info and so forth can be useful.

Even if the best the browser can do is say "this site is owned by Google, although I really can't confirm it," there is utility in that. It just might give me a fighting chance at being secure--which is not to say the alternative is no chance, but that my ability to make secure decisions is diminished without it.

Original Message
From: Chris Palmer
Sent: Monday, October 27, 2014 1:48 PM

On Mon, Oct 27, 2014 at 10:58 AM, John Nagle na...@sitetruth.com wrote:

It's appropriate for browsers to show that new information with users. In the browser, there are two issues: 1) detecting OV certs, which requires a list of per-CA OIDs, and 2) displaying something in the GUI.

If users perceive the new information — and that's a big if — what do you expect that they will do with it?

While formulating your response, please keep these facts in mind:

* Users understand their task well enough to complete it, but are also distracted (including by security indicators and their numerous false positives), and busy.

* Zero people in the world understand 100% of the ins and outs of X.509 and TLS; normal people have no chance and should not have to. X.509-style PKI is an engineering failure in part because of its absurd complexity.

* Users are, quite reasonably, focused on the viewport. After all, that's where the content is and where the task is. Many people simply never see the Location Bar or its security indicators.

* The only security boundary on the web is the origin (the tuple scheme, host, port).

* URLs are incredibly hard to parse, both for engineers (search the web for hundreds of attempts to parse URLs with regular expressions!) and for normal people.

* The only part of the origin that users understand is the hostname; it's better if the hostname is just effective TLD + 1 label below (e.g. example + co.sg, or example + com). Long hostnames look phishy.

* Users who look away from the viewport have a chance to understand 1 bit of security status information: "Secure" or "Not secure". Currently, the guaranteed-not-safe schemes like http, ws, and ftp are the only ones guaranteed to never incur any warning or bad indicator, leading people to reasonably conclude that they are safe. Fixing that is the/a #1 priority for me; it ranks far higher than ever-more-fine-grained noise about organization names, hostnames, OV/DV/EV, and so on.

* You can try to build a square by cramming a bunch of different Zooko's Triangles together, but it's probably going to be a major bummer. After all, that's the status quo; why would more triangles help?

* We have to design products that work for most people in the world most of the time, and which are not egregiously unsafe or egregiously hard to understand. It's good to satisfy small populations of power users if we can, but not at the expense of normal everyday use.

* There are some threat models for which no defense can be computed. For example, attempts to get to the "true" business entity, and to ensure that they are not a proxy for some service behind, start to look a lot like remote attestation. RA is not really possible even on closed networks; on the internet it's really not happening.

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
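Palmer's point that the origin tuple is the web's only security boundary, and that URLs defeat naive parsing, can be made concrete with a short sketch. This is illustrative only (the `origin` helper is mine, not a browser's actual implementation); it uses Python's stdlib parser to recover (scheme, host, port) from URLs that fool simple string matching.

```python
from urllib.parse import urlsplit

# Default ports per scheme, used when the URL doesn't name one explicitly.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) tuple that is the web's security boundary."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

# Two URLs that look alike to a casual reader but are different origins:
assert origin("https://example.com/login") == ("https", "example.com", 443)
assert origin("https://example.com.evil.net/login") == ("https", "example.com.evil.net", 443)
# A deceptive userinfo section ("example.com@") fools eyeballing, not urlsplit:
assert origin("https://example.com@evil.net/") == ("https", "evil.net", 443)
```

The last case is exactly the "URLs are incredibly hard to parse" problem: the hostname a human sees first is not the host the browser connects to.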
Re: 738 sites need their certs revoked
Hi Richard,

I was hoping to do some community organizing in here before going to the CAs individually. Some thoughts:

Going to a CA and demanding (?) that they revoke certs is not a normal situation, and I'm expecting some will push back. It would be advantageous to have a more unified voice to articulate what the security community expects.

For the CA-related folks who participate in this forum, I was hoping to hear whether any of them anticipate problems or concerns. In particular I'm thinking of the pay-to-revoke policies.

When you say configure the browser, I assume you mean adding the 738 certs to the trust store with all the bits disabled? This would be the option of last resort in my mind. My expectation is that CAs will comply, because it is in their best interest to keep poorly maintained sites from being associated with their brand.

Maybe this isn't a controversial issue. I hope not. This situation provides us with a good opportunity to give greater definition to an abstract concept like "a secure internet". This is in the interest of site admins, CAs, and the general public alike.

So once we get a list of the sites, does anyone have thoughts on the best way to disseminate the information and go about forcing action?

From: Richard Barnes
Sent: Thursday, October 2, 2014 1:32 PM

Hi there,

To be clear: Mozilla does not revoke certificates. In some specific cases (e.g., private browsing), Firefox will refuse to connect to a web site, but this is not the same as the certificate being revoked.

It's possible that we could configure the browser to not connect to known vulnerable sites. However, there's a fair degree of administrative overhead involved in maintaining that list, so I would be inclined to pursue this option only if a large number of sites were affected, or if some especially important sites were affected. Unfortunately, in the context of the web, 738 sites is not that many.

I would encourage you to make your case to the CAs that have issued certificates to the vulnerable sites.

--Richard

On Sep 30, 2014, at 11:15 PM, fhw...@gmail.com wrote:

According to SSL Pulse there are 738 sites that are vulnerable to Heartbleed: https://www.trustworthyinternet.org/ssl-pulse/

I just don't see how that can be tolerated. I'm assuming this data means we have sites that are presenting valid certs even though their private keys can be (and may have already been) compromised. That's not acceptable.

To get the discussion going, I think one way to move forward is for the issuing agencies to notify the offending sites that their certificates will be revoked in, say, 21 days. If a site takes no action then secure connections will fail after that period, which may or may not be a problem for those sites.

If a site wishes to avoid that disruption then the following needs to happen: apply the relevant patches to the vulnerable system; generate a new public/private key pair; get a new certificate issued; and finally install the new cert before the end of the 21-day window.

I imagine that could be controversial but we have to start somewhere. So speak up!
Re: Client certs
FIDO has its shortcomings, too, and its users can be victims of phishing just as much as anyone else. All you need is the right inducement. For example...

Passwords: Enter your password now or your account will be frozen.
Tokens: Enter the token code now or your account will be frozen.
FIDO: Swipe your finger on your FIDO device now or your account will be frozen.

But back to your original query, Gerv, I would just add the following to all the other good points people have made:

From the perspective of HSBC, devices get infected with malware all the time, and sometimes people will want to use that device for their banking. This means that anything associated with a compromised device--passwords, certificates, and even USB security devices--has the potential to be compromised, which ultimately can lead to fraud. The way to mitigate some of that risk, then, is to have a completely separate device like Secure Key that you don't plug in to anything.

My guess is that that's where they are coming from--the effectiveness of reducing risk weighed against the cost of bank fraud. Relying on client certs wouldn't sufficiently reduce that risk.

Still, it's possible that certs could be a better way to go in a different context, and some interesting cases have come up here. It's just a question of picking the right tool for the job.

Good discussion!

From: Ryan Sleevi
Sent: Friday, September 26, 2014 4:57 AM

...However, what I'm surprised to see no one having pointed out is that all of these 2FA systems - including the one you mentioned - are phishable. This is where 2FA systems like FIDO come into play, because the cryptographic assertion is bound to the channel (like TLS client certificates), and thus cannot be phished (as the assertion is no longer a bearer token, as it is with those PIN systems).
You can see more at https://fidoalliance.org/
738 sites need their certs revoked
According to SSL Pulse there are 738 sites that are vulnerable to Heartbleed: https://www.trustworthyinternet.org/ssl-pulse/

I just don't see how that can be tolerated. I'm assuming this data means we have sites that are presenting valid certs even though their private keys can be (and may have already been) compromised. That's not acceptable.

To get the discussion going, I think one way to move forward is for the issuing agencies to notify the offending sites that their certificates will be revoked in, say, 21 days. If a site takes no action then secure connections will fail after that period, which may or may not be a problem for those sites.

If a site wishes to avoid that disruption then the following needs to happen: apply the relevant patches to the vulnerable system; generate a new public/private key pair; get a new certificate issued; and finally install the new cert before the end of the 21-day window.

I imagine that could be controversial but we have to start somewhere. So speak up!
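The remediation flow above reduces to a simple triage rule. The sketch below is purely illustrative (the function name and the three-way classification are mine, not from the post); it assumes that any key which existed on a server before that server was patched may have been exposed while the bug was live.

```python
from datetime import date

def cert_status(not_before, patched_on):
    """Triage a cert on a site that ran Heartbleed-vulnerable OpenSSL.
    Any key that lived in a vulnerable server's memory may have leaked,
    so only a cert issued after the patch date can be trusted."""
    if patched_on is None:
        return "revoke"        # still unpatched: revoke outright
    if not_before < patched_on:
        return "reissue"       # key predates the fix: new key pair + new cert
    return "ok"

assert cert_status(date(2014, 1, 1), None) == "revoke"                    # the 738 sites
assert cert_status(date(2014, 1, 1), date(2014, 4, 8)) == "reissue"       # patched, old key
assert cert_status(date(2014, 5, 1), date(2014, 4, 8)) == "ok"            # new key after patch
```

The "reissue" branch is the patch / new keypair / new cert / install sequence described in the post.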
Re: HSTS
Thanks Hubert. I was guessing about 1,000 sites, so seeing 3,000 is better but still small. What I didn't expect is that fewer than 50,000 sites present themselves as being secure in the first place. That's smaller than it ought to be.

The real shocker, however, is how many sites exhibit known vulnerabilities. The Heartbleed stat especially stands out. I suppose those sites are given an F rating, but really the certs need to be revoked in all 738 cases. Any way the CAs can help us confirm that any site which is vulnerable to Heartbleed has had its cert revoked?

Original Message
From: Hubert Kario
Sent: Friday, September 26, 2014 6:07 AM
To: fhw...@gmail.com
Cc: dev-security-policy@lists.mozilla.org
Subject: Re: HSTS

From: fhw...@gmail.com, Thursday, 25 September, 2014 7:39:33 PM: I'll address the DoS thing momentarily but first I'm curious if there's any data out there on how widely deployed HSTS currently is

About 2% of sites advertise HSTS, see https://www.trustworthyinternet.org/ssl-pulse/

--
Regards,
Hubert Kario
Re: HSTS
I'll address the DoS thing momentarily, but first I'm curious whether there's any data out there on how widely deployed HSTS currently is and/or to what extent site/domain owners are committing to support it going forward.

Also, are the cases where self-DoS might occur well known? The cases I can think of generally fall into 3 different categories, but since the actual ways in which you might shoot yourself in the foot are numerous (and subtle), I'd argue that choosing to implement HSTS is a much larger commitment than HTTPS alone. For one thing, you need knowledge of your whole domain and the content being delivered (and how it's being deployed), or you run the risk of screwing something up.

You hit upon one such case below, where a subdomain that doesn't have SSL becomes inaccessible due to the "includeSubdomains" flag. Actually the other case is a problem too, but for illustration purposes I'll talk about the former.

So, consider a brand like Nike, which has a large internet presence and a lot of products serving different people and markets. I don't personally know anything about them or how they get their internet needs met, but let's just assume for this discussion that they have a bunch of different teams and outsourcing agreements that try to make it all work (something I think could be said of all major corporations).

Next, let's suppose they want to run a marketing campaign during a major sports event and give away free shoes to the first 500 people who sign up at a new micro site set up just for this campaign. The browser requests go something like this (substituting - for .):

1. Go to the landing page at freeshoes-nike-com using http.
2. Grab some logo graphics from nike-com using https; HSTS is enabled with includeSubdomains.
3. Grab a js file at freeshoes-nike-com that will collect people's information using http, which is rewritten to be https, but a cert was never installed for "freeshoes".

Clearly you are screwed; the page will not display correctly. And if you try to go back to the landing page (with just http), you're even worse off, because then nothing shows up, only the error screen. People will be very upset, especially the marketing team, who can do nothing but watch their campaign blow up before their very eyes.

Put simply, a debacle such as this would be a very big deal, and no matter how much people might like the idea of security, there is not a person out there who wants to risk losing their job just to be more secure.

So that's why I have a hard time seeing HSTS becoming widely adopted. Maybe it will make my site more secure, but if it's going to screw everything up, I'm not interested. Bait-and-switch.

Some of the other DoS cases might be even more problematic, but I don't know if anyone wants to get into them here.

Thanks.

From: David Keeler
Sent: Wednesday, September 24, 2014 12:32 PM

On 09/23/2014 10:03 PM, fhw...@gmail.com wrote:

...snip... The shortcoming of HSTS is on the deployment side, where on the one hand it purports to help web app developers and deployment teams who falter at security and on the other hand gives those same people all-new ways to falter at security. It's your classic bait-and-switch except this time your site could become unusable ...snip...

A site can only DoS itself if it sets a long-lived header and then stops supporting https (or if it sets includeSubdomains and a subdomain doesn't support https). The easy answer is: if your site is committed to always supporting https, then HSTS is appropriate. If not, then it isn't appropriate.

The most ambitious of web sites and services will be up for the challenge of doing a proper HSTS implementation but the rest...I don't know. Any thoughts on how widely this will be adopted?

Again, using HSTS is essentially as difficult as using https properly. If that's doable (and it's definitely a whole lot easier than it used to be), then setting an HSTS header is a small incremental step that does increase a site's security.
HSTS (was: Indicators for high-security features)
So I read through RFC 6797 and see that some of my concerns are addressed there. Still, I would like to have a better understanding of Mozilla's implementation, since there is user-agent flexibility that's open to interpretation. One other thing that isn't clear to me is how complete the Mozilla implementation is. Is there more work to do, or is it all in there and now we're just waiting for websites to deploy it?

The shortcoming of HSTS is on the deployment side, where on the one hand it purports to help web app developers and deployment teams who falter at security, and on the other hand gives those same people all-new ways to falter at security. It's your classic bait-and-switch, except this time your site could become unusable.

For example, how do I pick a suitable max-age? Suppose I mistake the units for days instead of seconds (seriously? seconds?!?) and set the value to 10. What are the practical effects of that? What happens when I use a value of 0x? If my settings mean I've DoS-ed myself, what can I do to the browser to restore service?

The most ambitious of web sites and services will be up for the challenge of doing a proper HSTS implementation, but the rest...I don't know. Any thoughts on how widely this will be adopted?

From: fhw...@gmail.com
Sent: Tuesday, September 23, 2014 8:10 PM

OK, thanks Matt. So the security improvement is because it's a server config plus persistent memory on the client side.

What is the thinking in Firefox (assume Thunderbird will be similar?) for handling of all the different cases that arise with it? I'm thinking of how persistent the HSTS knowledge is, whether it can be cleared, what errors/warnings might appear, whether users will be allowed to bypass them, and so forth.

Original Message
From: Matt Palmer
Sent: Tuesday, September 23, 2014 5:01 PM

On Tue, Sep 23, 2014 at 01:08:13PM -0500, fhw...@gmail.com wrote:

So what is the reason to use HSTS over a server-initiated redirect? Seems to me the latter would provide greater security, whereas the former is easy to bypass.

On the contrary, HSTS is much harder to bypass, because the browser remembers the HSTS setting for an extended period of time. While first use is still vulnerable to a downgrade attack under HSTS, it's only *one* use, whereas the browser is vulnerable to redirect filtering on *every* use. If an attacker has enough access to the network to be able to strip the HSTS header, they also have enough access to be able to block the server-initiated redirect to HTTPS.

- Matt
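The max-age pitfall raised earlier in this thread is easy to see by parsing the header the way RFC 6797 defines it. A rough sketch (the parser is hypothetical, not Firefox's actual implementation):

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value into
    (max_age_seconds, include_subdomains). Per RFC 6797, max-age
    is in SECONDS -- mistaking it for days is the self-DoS trap."""
    max_age, include_sub = None, False
    for directive in header.split(";"):
        name, _, value = directive.strip().partition("=")
        name = name.lower()
        if name == "max-age":
            max_age = int(value.strip('"'))
        elif name == "includesubdomains":
            include_sub = True
    return max_age, include_sub

# Intended: pin for one year. A typo of "10" would expire after ten *seconds*.
assert parse_hsts("max-age=31536000; includeSubDomains") == (31536000, True)
assert parse_hsts("max-age=10") == (10, False)
```

Note the asymmetry: a too-small max-age merely weakens the protection, while a too-large one combined with a broken https deployment locks users out until the entry expires or is cleared.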
Re: Indicators for high-security features
Hi Anne,

Just to clarify, are you saying that effective in FF release ?? a document obtained via https will allow only https for all subsequent retrievals, images and js, etc. alike?

To the larger discussion, I have 2 questions: 1) What is the specific message you'd like to convey to the user beyond what the simple lock icon provides? 2) What action do you intend the user to take based on seeing the new indicator?

Thanks.

Original Message
From: Anne van Kesteren
Sent: Saturday, September 20, 2014 2:43 AM
To: Hubert Kario
Cc: Patrick McManus; mozilla-dev-security-pol...@lists.mozilla.org; Chris Palmer; Richard Barnes
Subject: Re: Indicators for high-security features

On Fri, Sep 19, 2014 at 2:04 PM, Hubert Kario hka...@redhat.com wrote:

AFAIK, images do not trigger mixed content

In Firefox Nightly they do at least.
Re: Short-lived certs
Hi Jeremy,

Could you (or anyone) elaborate a bit on the use cases where short-lived certs are desirable? Are there really cases where the extra 50 bytes (or whatever) for the revocation info is too great a burden? Or is the desire really to short-circuit the revocation checks? Or...?

I'm also wondering what the plan is for handling an expired short-term cert. Will the user be given a choice of allowing an exception, or does it get special handling?

Original Message
From: Jeremy Rowley
Sent: Thursday, September 4, 2014 12:46 PM
To: 'David E. Ross'; mozilla-dev-security-pol...@lists.mozilla.org
Subject: RE: Short-lived certs

They aren't subject to less stringent security in issuing the certificate. The benefit is that the certificate doesn't include revocation information (smaller size) and doesn't need to check revocation status (faster handshake). The issuance of the certificate still must meet all of the Mozilla root store requirements.

Jeremy

-----Original Message-----
From: dev-security-policy [mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org] On Behalf Of David E. Ross
Sent: Thursday, September 4, 2014 11:36 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Short-lived certs

On 9/4/2014 3:21 AM, Gervase Markham wrote [in part]:

How should we approach the issue of short-lived certs?

Spammers change their E-mail addresses quite frequently, using the same address for only a day or two. Hackers also frequently change their residence so as to prevent tracing them. The same is true of distributors of malware. If short-lived certificates are subjected to less stringent security by client applications, I would fear that they would become hacker and malware tools.

--
David E. Ross
The Crimea is Putin's Sudetenland. The Ukraine will be Putin's Czechoslovakia. See http://www.rossde.com/editorials/edtl_PutinUkraine.html.
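One practical wrinkle in this discussion: since "short-lived" has no fixed definition here, any client or audit tooling has to pick a threshold itself. A minimal sketch, assuming a 10-day cutoff chosen purely for illustration:

```python
from datetime import date

def is_short_lived(not_before, not_after, threshold_days=10):
    """Illustrative check: treat a cert as 'short-lived' when its whole
    validity window is at most threshold_days. The 10-day default is an
    assumption for this sketch, not a number from the thread."""
    return (not_after - not_before).days <= threshold_days

assert is_short_lived(date(2014, 9, 1), date(2014, 9, 4))        # 3-day cert
assert not is_short_lived(date(2014, 9, 1), date(2015, 9, 1))    # 1-year cert
```

Whatever the cutoff, an expired short-lived cert is still just an expired cert; the open question above is whether clients should treat that expiry differently from a normal one.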
Re: Code Signing Draft
Thanks for sharing this, Jeremy. I'm still reading through it myself, but one thing that jumps out at me is the implicit(?) allowing of the same key to be used for SSL and code signing. From a security standpoint that's a horrible idea. I'll elaborate if desired, but I first wanted to find out what the current thinking is among CABF participants regarding this practice. Has there been any discussion? I don't know that Mozilla has an opinion on it?

Thanks.

Original Message
From: Jeremy Rowley
Sent: Monday, August 25, 2014 5:46 PM

The CAB Forum released proposed new baseline requirements for code signing today that might be of interest to participants here. You can see the document here: https://cabforum.org/2014/08/25/cabrowser-forum-releases-code-signing-baseline-requirements-public-comment-draft/

Public comments and feedback are welcome.

Jeremy
Re: Code Signing Draft
Agreed. Enforcing a rule like this would be limited, so here's what I'm hoping to see:

1) Strong, clear, unambiguous wording in the specs so that we can take away the "I didn't know" argument. Nobody should ever think it's okay to use the same key in multiple ways.

2) Policies and checks put in place within each CA so that, at least within their own purview, the same key is only ever used once. This would take away the "well, they let me do it" argument. Asking CAs to coordinate with other CAs is probably not a good idea.

3) Auditing procedures that specifically look for these policies and checks. Let's at least get a CA's public attestation that multiple-use keys are not allowed.

Maybe some of that is already in place. Do CAs already have checks in place to make sure that a single, unique key is only ever used by one end entity, for those certs that that CA has issued?

The idea is to try to give people a clue of how to navigate the code signing minefield even if we can't rigidly enforce certain aspects.

Thanks.

Original Message
From: Ryan Sleevi
Sent: Friday, August 29, 2014 1:44 PM

On Fri, August 29, 2014 8:04 am, Jeremy Rowley wrote:

Good point. I don't think we spell it out, but I don't think anyone wants people using the same keys for both SSL and code signing. CAs are prohibited from using the same intermediate for both SSL and code signing, but we should also add something that requires the Subscriber to use separate sets of keys.

Except it's a requirement without teeth. That is, a subscriber could go to CA A with Key 1 and get an SSL certificate, and CA B with Key 1 and get a code-signing certificate. Is CA B expected to monitor public Certificate Transparency logs looking for examples where the private key has been used in an SSL certificate?

I agree that the spec doesn't say anything in the current draft, because key hygiene is one of those things you can't enforce (although the CS draft does attempt to do so, at Microsoft's behest, in requiring an HSM - which opens itself up to a whole new licensing minefield of trying to determine precisely whether it's in an HSM, short of mailing the customer an HSM).

When I read "requires", I think MUST, and if you're suggesting that it's not the case, but merely that the draft gently recommends key hygiene much like CAs gently recommend revocation checking, sure. I just think you'll have a hard time embodying that at MUST level.
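Within a single CA's purview, point 2 amounts to comparing public-key fingerprints across the certs it has issued: two certs with the same SubjectPublicKeyInfo share a key pair. A rough sketch of that check (the function names and the toy byte strings standing in for real DER-encoded keys are hypothetical):

```python
import hashlib

def spki_fingerprint(spki_der):
    """SHA-256 over a DER-encoded SubjectPublicKeyInfo. Certs with the
    same fingerprint share the same public (and hence private) key."""
    return hashlib.sha256(spki_der).hexdigest()

def find_key_reuse(certs):
    """certs: iterable of (label, spki_der) pairs. Returns fingerprints
    that appear on more than one cert -- e.g. one SSL cert and one
    code-signing cert sharing a key."""
    seen = {}
    for label, spki in certs:
        seen.setdefault(spki_fingerprint(spki), []).append(label)
    return {fp: labels for fp, labels in seen.items() if len(labels) > 1}

# Toy DER blobs; in practice these come out of the CA's issuance database.
reused = find_key_reuse([("ssl-cert", b"\x30\x82KEY-A"),
                         ("codesign-cert", b"\x30\x82KEY-A"),
                         ("other-cert", b"\x30\x82KEY-B")])
assert list(reused.values()) == [["ssl-cert", "codesign-cert"]]
```

As Ryan notes, this only works within one CA's own records; cross-CA reuse (Key 1 at CA A and CA B) needs something like Certificate Transparency to detect.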
Re: Wildcard cert, no intermediate
In your rush to judgment you arrived at the wrong conclusions, Ryan. No problem, though, as I'll recap my points in a bit. But first:

The cert in question has as its root the UTN-USERFirst-Hardware certificate. That appears to be a 2048-bit cert. If the wildcard cert should not have been issued directly under the 2048-bit root, should we ask the folks at UTN (Comodo?) to explain what happened here? Are there any controls which are missing? Just curious how other people feel about this.

The broader purpose behind my previous email was to raise awareness within the forum of how certain risks and vulnerabilities get combined to attack the Internet populace. I don't think it hurts to share different perspectives.

The salient points I hoped to get across:

* The inability to revoke endpoint certs is a major hole in Internet security. In the case of wildcard certs the hole is that much larger, because of the damage that can be done when they get compromised. Also, having the same cert installed on multiple servers increases the risk.

* When an admin account is compromised, a lot of things can go south, especially if the account has access to DNS, server configs, or private keys. If the account credentials can be used in some way to issue new certs, that is a concern.

* The scenario described is functionally similar to the NSA program QUANTUMINSERT. If you don't have the funding or equipment of a nation state to back you up, you might try this.

For those who are interested, I would encourage you to read up on network injection, as this style of attack goes well beyond simple MITM. For starters, here's a good article, though Wired and The Intercept (and others) have good stuff too: https://citizenlab.org/2014/08/cat-video-and-the-death-of-clear-text/

I hope this perspective is helpful to people. I would like to know how anyone feels about the cert issue, too.

From: Ryan Sleevi
Sent: Wednesday, August 20, 2014 5:43 PM

This doesn't add any useful data to the debate, nor is it accurate.

Your original complaint is about a certificate with no intermediate. This is permitted (pre-BR), and not (post-BR).

Your examples of "doom" that would be caused by this cert apply to all wildcard certs. If you wish to complain about wildcard certs, you're certainly entitled to, but it's entirely orthogonal.
Re: Wildcard cert, no intermediate
I should have included the dates. Validity period is November 2010 to 2015. Anyone at Comodo care to comment?

From: Jeremy Rowley
Sent: Tuesday, August 26, 2014 10:26 AM

If the cert was issued directly from a 2048-bit root and the issuance date is after the effective date of the BRs, then the cert violates the BRs and there should be an explanation. Although notBefore != issuance date, if the notBefore date is earlier than Feb 2013 and the root is 2048-bit, then there is a strong indication that the cert was improperly issued. You could also reach out to Comodo directly to alert them to the possibly mis-issued certificate.

Jeremy

From: dev-security-policy [mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org] On Behalf Of fhw...@gmail.com
Sent: Tuesday, August 26, 2014 9:10 AM
To: ryan-mozdevsecpol...@sleevi.com
Cc: mozilla-dev-security-pol...@lists.mozilla.org; Peter Bowen
Subject: Re: Wildcard cert, no intermediate

In your rush to judgment you arrived at the wrong conclusions, Ryan. No problem, though, as I'll recap my points in a bit. But first:

The cert in question has as its root the UTN-USERFirst-Hardware certificate. That appears to be a 2048-bit cert. If the wildcard cert should not have been issued directly under the 2048-bit root, should we ask the folks at UTN (Comodo?) to explain what happened here? Are there any controls which are missing? Just curious how other people feel about this.

The broader purpose behind my previous email was to raise awareness within the forum of how certain risks and vulnerabilities get combined to attack the Internet populace. I don't think it hurts to share different perspectives.

The salient points I hoped to get across:

* The inability to revoke endpoint certs is a major hole in Internet security. In the case of wildcard certs the hole is that much larger, because of the damage that can be done when they get compromised. Also, having the same cert installed on multiple servers increases the risk.

* When an admin account is compromised, a lot of things can go south, especially if the account has access to DNS, server configs, or private keys. If the account credentials can be used in some way to issue new certs, that is a concern.

* The scenario described is functionally similar to the NSA program QUANTUMINSERT. If you don't have the funding or equipment of a nation state to back you up, you might try this.

For those who are interested, I would encourage you to read up on network injection, as this style of attack goes well beyond simple MITM. For starters, here's a good article, though Wired and The Intercept (and others) have good stuff too: https://citizenlab.org/2014/08/cat-video-and-the-death-of-clear-text/

I hope this perspective is helpful to people. I would like to know how anyone feels about the cert issue, too.

From: Ryan Sleevi
Sent: Wednesday, August 20, 2014 5:43 PM

This doesn't add any useful data to the debate, nor is it accurate. Your original complaint is about a certificate with no intermediate. This is permitted (pre-BR), and not (post-BR). Your examples of "doom" that would be caused by this cert apply to all wildcard certs. If you wish to complain about wildcard certs, you're certainly entitled to, but it's entirely orthogonal.
Wildcard cert, no intermediate
I've encountered a wildcard end-entity certificate on a live server that chains directly to the root cert. There is no intermediate certificate, and the root is in the Mozilla trust store.

I assume this is a frowned-upon practice that will be stopped as the BRs are adopted and such certs expire naturally. There is no reason for such certs to be reissued indefinitely, is there?

Beyond this one case, I'm wondering if there are any survey data or anecdotes about how common a practice this is (was?).

Thanks.
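Detecting the practice described above is mechanical once you have the presented chain: if the leaf's issuer is itself a trust-store root, there is no intermediate. A minimal sketch (the data model of (subject, issuer) name pairs is simplified; real code would compare DER-encoded names and verify signatures):

```python
def issued_directly_from_root(chain, trusted_roots):
    """chain: (subject, issuer) name pairs, end-entity first, as presented
    by the server. True when the leaf's issuer is a trust-store root,
    meaning the end-entity cert was signed directly by the root's key."""
    _, leaf_issuer = chain[0]
    return leaf_issuer in trusted_roots

roots = {"UTN-USERFirst-Hardware"}
# A wildcard leaf signed directly by the root (the case reported above):
assert issued_directly_from_root([("*.example.com", "UTN-USERFirst-Hardware")], roots)
# The normal case: leaf -> intermediate -> root.
assert not issued_directly_from_root(
    [("*.example.com", "Example Intermediate CA"),
     ("Example Intermediate CA", "UTN-USERFirst-Hardware")], roots)
```

The reason this matters operationally: signing end-entity certs directly means the root's private key is used online for routine issuance, and a root cannot be revoked without removing it from the trust store.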
Q: mixed http/https content
What are the current rules or algorithms in place when dealing with some mixture of http and https content in Firefox? A case I'm thinking about is a drive-by download situation. If the main page is loaded by https but there are subsequent requests for files (images, js, css, fonts, iframes, etc.) or Ajax calls to be made that are only http, will Firefox allow them? Note that I don't care about the form cases where I load the form html using https but submit the form data via http. I care about just the files and content. Thanks in advance.
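As a rough answer to the question above, the general policy (in the spirit of the W3C Mixed Content rules, which Firefox 23+ follows) splits subresources into "active" and "passive" classes. The exact type lists and defaults below are simplified assumptions for illustration, not Firefox's actual implementation:

```python
# Toy mixed-content classifier. "Active" content (can run code or
# rewrite the page) is blocked; "passive" content (display-only) is
# historically allowed with a degraded security indicator. The category
# membership here is an illustrative approximation.
ACTIVE = {"script", "stylesheet", "iframe", "xhr", "font"}
PASSIVE = {"image", "audio", "video"}

def mixed_content_decision(page_scheme, request_scheme, resource_type):
    if page_scheme != "https" or request_scheme != "http":
        return "allow"      # not mixed content at all
    if resource_type in ACTIVE:
        return "block"      # active mixed content: hard block
    if resource_type in PASSIVE:
        return "warn"       # passive: loaded, lock icon degraded
    return "block"          # unknown types treated conservatively

print(mixed_content_decision("https", "http", "script"))  # block
print(mixed_content_decision("https", "http", "image"))   # warn
```

So in the drive-by scenario the interesting cases are the http script/iframe/XHR loads, which a blocking policy stops outright, while http images have traditionally only triggered a warning.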
Re: Chromium, EV, and CT
Does Mozilla have a stated plan to include CT in its products? The issues Ben lists sound like reasonable concerns but it seems this is putting the cart before the horse. The linchpin of CT is being able to turn on hard-fail when the SCT is missing or doesn't agree with the logs--or whatever the case may be. I promise you that CT hard-fail will never happen because it requires CAs to be competent (some of whom genuinely are) or end-entity cert holders to be interested (some of whom genuinely are) or both. When you have a massive and complicated website deployment, it's just not realistic that people can or will be interested. There are too many moving pieces as it is. Should Chrome activate hard-fail you will start to hear people say, "that site doesn't work on Chrome for some reason, just use Firefox or Safari or IE." Original Message From: Ryan Sleevi Sent: Tuesday, August 12, 2014 7:05 PM To: dev-security-policy@lists.mozilla.org Reply To: ryan-mozdevsecpol...@sleevi.com Subject: Chromium, EV, and CT I just wanted to alert members of this list of a discussion that has been started on Chromium's ct-policy@ mailing list regarding Chromium's policies for requiring EV certificates to be logged in Certificate Transparency logs. Ben Laurie has started a discussion at https://groups.google.com/a/chromium.org/d/msg/ct-policy/_p8zRz5Em3s/2_0r4YjRQ8sJ about whether or not CAs should be permitted to redact domain names when logging certificates. As you can see from Ben's analysis of the Baseline Requirements and EV Guidelines, this may affect the ability of the public to ensure that CAs are conforming to the EV Guidelines, forcing reliance on audits instead. We welcome feedback from all parties, and are particularly interested to hear from those who would like to use the CT logs to better ensure compliance with Mozilla's policies and the competency of auditors, two very relevant discussions happening here.
As it presently stands, Chromium's policy prevents such redactions. To help ensure everybody can participate, please avoid cross-posting, and instead comment on the original. Cheers!
Re: Chromium, EV, and CT
It is a separate discussion. I wanted only some sort of statement from Mozilla about time frames and anticipated functionality, if there is any. If the scope of CT is being narrowed to focus only on the use of log files as an auditing and compliance facility, that is something even I might agree with. As scoped out in RFC 6962, however, I would say the benefit of having CT in the browser is not even close to being an obvious win, because the real world is not even close to the perfect world. There are just too many gaps. But, as you point out, no one at Google is interested in stopping just because I see its impact as falling short of the dream. I accept that. Original Message From: Ryan Sleevi Sent: Tuesday, August 12, 2014 9:06 PM To: fhw...@gmail.com Reply To: ryan-mozdevsecpol...@sleevi.com Cc: dev-security-policy@lists.mozilla.org Subject: Re: Chromium, EV, and CT On Tue, August 12, 2014 6:49 pm, fhw...@gmail.com wrote: Does Mozilla have a stated plan to include CT in its products? This is a separate discussion, and doesn't affect Mozilla's ability to use CT logs to detect violations of Mozilla's inclusion policy. Obviously, CT in the client would be a win, but I think that even without such a plan in place, the CT logs provide a valuable tool in ensuring compliance, something that's unfortunately been lacking. The issues Ben lists sound like reasonable concerns but it seems this is putting the cart before the horse. The linchpin of CT is being able to turn on hard-fail when the SCT is missing or doesn't agree with the logs--or whatever the case may be. I promise you that CT hard-fail will never happen because it requires CAs to be competent (some of whom genuinely are) or end-entity cert holders to be interested (some of whom genuinely are) or both. It's just not realistic, when you have a massive and complicated website deployment, that people can or will be interested. There are too many moving pieces as it is.
Should Chrome activate hard-fail you will start to hear people say, "that site doesn't work on Chrome for some reason, just use Firefox or Safari or IE." As always, we welcome your feedback. However, this doesn't seem to be relevant to the question/discussion at hand, nor does your potential future meaningfully affect the factors weighed in implementing CT. As it stands, both Mozilla Firefox and Google Chrome have shown that it is possible to improve the CA ecosystem over time, and with appropriate signals. Similarly, other efforts, such as http://googlewebmastercentral.blogspot.com/2014/08/https-as-ranking-signal.html , new features such as ServiceWorker http://jakearchibald.com/2014/service-worker-first-draft/ , and normative requirements such as http://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-9.2 , show that there are still opportunities to help encourage sites to adopt stronger security practices. However, that's all neither here nor there. This isn't and wasn't a post about hard-fail CT, but about how CT can help Mozilla better regulate its policies, and the interest in the community in being able to freely and transparently audit CAs for such conformance. Thus assume, if you will, a perfect world where CT was required and embraced by CAs. Would we want these features? Whether yea or nay, best to answer on ct-policy@.
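The hard-fail vs soft-fail dispute above can be reduced to a tiny decision function. This is a sketch of the policy space being argued about, not Chromium's or Firefox's actual SCT code; the names and the one-SCT threshold are assumptions (real policies scale the required SCT count with certificate lifetime):

```python
def ct_verdict(scts_valid, policy):
    """scts_valid: number of SCTs that verified against known logs.
    policy: 'hard-fail' or 'soft-fail'. Illustrative only."""
    required = 1  # assumed minimum for this sketch
    if scts_valid >= required:
        return "connect"
    if policy == "hard-fail":
        return "abort"         # the outcome the poster predicts users will blame
    return "connect-but-flag"  # soft-fail: proceed, e.g. drop EV treatment

print(ct_verdict(0, "hard-fail"))  # abort
print(ct_verdict(0, "soft-fail"))  # connect-but-flag
```

The whole argument is over that one branch: hard-fail makes a missing or disagreeing SCT a connection failure the user attributes to the browser; soft-fail makes it an invisible (or EV-only) downgrade.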
Re: New wiki page on certificate revocation plans
DANE will never happen, let's just acknowledge that, if for no other reason than DNSSEC will never happen. It will take years to get enough support for DANE (by both browsers and websites) to even judge how well it works. And there is no guarantee it will work that well. OneCRL itself will be of limited impact because it does not cover end entities. We should acknowledge that there is no possible way to come up with a list of end entities, because in reality *all* entities need that protection. I know I rail on this a lot but it's because I've seen the damage it causes. Regarding the trustworthiness of CAs, there is room for improvement here in terms of how we choose to evaluate the CAs and enforce the ideas of security and privacy. We've talked about this before and while I think Kathleen is interested in doing something I'm not sure how much of an appetite the larger Mozilla (i.e. the lawyers) has. There's probably an issue of time and staff availability. Original Message From: Sebastian Wiesinger Sent: Thursday, August 7, 2014 2:28 AM To: dev-security-policy@lists.mozilla.org Subject: Re: New wiki page on certificate revocation plans * Ryan Sleevi ryan-mozdevsecpol...@sleevi.com [2014-08-07 08:33]: Hi Sebastian, While you raise an important issue, the problem(s) OneCRL sets out to solve are still problems that need solving, and we should not lose sight. I agree with that. And I also agree that we should not lose sight. The difference seems to be what we have set our sights on. :) Now, as for the problems you raise (trusting CAs until you can prove that they have done wrong, and that this does not keep CAs from issuing certificates that can enable certain entities to mount MITM attacks), it's important to realize and remember that DNSSEC/DANE do not solve these, and in fact, in many ways, make it easier.
DNSSEC is still a single hierarchy of trust, much like CAs, and there's still ample opportunity for malfeasance, and there's even more opportunity for key mismanagement and insecure cryptographic practices. DNSSEC also builds a chain of trust but it's a different chain. You can easily tell if someone is manipulating the chain (because it breaks) and the root is under the control of multiple parties in multiple countries. If someone tried to rig the system there would be immediately noticeable effects. Key mismanagement and insecure practices are a problem for certificates as well. As adoption of DNSSEC rises, people will get used to it and tools will mature to ease implementation and maintenance. It's happening already. I'm not trying to defend CAs or suggest the problem you raise isn't real, but there exist other solutions for this - like Public Key Pinning (which Firefox implements) and Certificate Transparency, which offer more substantial and meaningful benefits over a DANE-based solution. Public Key Pinning is something that is good for a handful of (big) sites but not really feasible for large-scale deployment. DNSSEC/DANE works for everyone. Certificate Transparency on the other hand requires additional infrastructure and active monitoring. With DNSSEC/DANE you need no additional infrastructure (DNS is required anyway) and it makes little difference if the browser checks the DANE record or audits the CT log. Reading the explanation on the CT site (http://www.certificate-transparency.org/how-ct-works) it looks like browsers are not even required to check every certificate, while DANE records would be checked for every site/cert. I don't see the substantial and meaningful benefits, to be honest. Though DANE seems like a simple solution, it's one filled with errors. And if Dan Veditz's and Patrick McManus's replies on the state of OCSP weren't depressing enough, DNSSEC is an order of magnitude more depressing.
Important to keep in mind when talking long-term vision: what is practically achievable, and what is really long-long-long term vision, which is more theoretical and academic. Like DANE. DANE is not a simple solution but it will solve the problems we have. As for it being academic: right now mail providers are starting to implement DANE as MTA software gets support for it. GnuTLS has DANE support and OpenSSL has at least begun implementation. DNSSEC itself is starting to get traction so I don't think it would be a very long-long-long term vision. Of course we need applications supporting DANE, and browser support would be a killer feature to increase deployment. Regards Sebastian -- GPG Key: 0x93A0B9CE (F4F6 B1A3 866B 26E9 450A 9D82 58A2 D94A 93A0 B9CE) 'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE. -- Terry Pratchett, The Fifth Elephant
Re: New wiki page on certificate revocation plans
Curious to know the process by which cert holders will get their certs added to these lists. How much of that flow and the necessary security measures have been worked out? Original Message From: Richard Barnes Sent: Thursday, August 7, 2014 3:59 PM To: Rob Stradling Cc: mozilla-dev-tech-cry...@lists.mozilla.org; mozilla-dev-security-pol...@lists.mozilla.org Subject: Re: New wiki page on certificate revocation plans On Aug 7, 2014, at 9:47 AM, Rob Stradling rob.stradl...@comodo.com wrote: http://dev.chromium.org/Home/chromium-security/crlsets says: The limit of the CRLSet size is 250KB Have Mozilla decided what the maximum OneCRL size will be? No, we haven't. The need for a limit largely depends on whether we cover EE certificates. If we cover only intermediate CAs, of which there are roughly 1,800, then there is no size issue -- we can include the full SHA-256 digest of every CA certificate and only come to around 56KB. (Or just use an 1800-bit bitmap!) If we choose to cover EE certificates (as CRLSets do), then we will have to impose a size limit. In some initial experiments in representing CRLs with Golomb compressed encoding, we've been able to get down to roughly N bits per entry for a 2^-N false positive rate. Since we'll still have OCSP as a fall-back, we can tolerate a high failure rate, maybe as high as 0.5% (2^-9). At that rate, a 250KB limit would fit around 220,000 CRL entries. So we would need to do some experimentation to see how that capacity compares to the size of CRLs in the wild. --Richard On 01/08/14 03:07, Richard Barnes wrote: Hi all, We in the Mozilla PKI team have been discussing ways to improve revocation checking in our PKI stack, consolidating a bunch of ideas from earlier work [1][2] and some maybe-new-ish ideas. I've just pressed save on a new wiki page with our initial plan: https://wiki.mozilla.org/CA:RevocationPlan It would be really helpful if people could review and provide feedback on this plan.
There's one major open issue highlighted in the wiki page. We're planning to adopt a centralized revocation list model for CA certificates, which we're calling OneCRL. (Conceptually similar to Chrome's CRLSets.) In addition to covering CA certificates, we're also considering covering some end-entity (EE) certificates with OneCRL too. But there are some drawbacks to this approach, so it's not certain that we will include this in the final plan. Feedback on this point would be especially valuable. Thanks a lot, --Richard [1] https://wiki.mozilla.org/CA:ImprovingRevocation [2] https://www.imperialviolet.org/2012/02/05/crlsets.html -- Rob Stradling Senior Research Development Scientist COMODO - Creating Trust Online
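Richard's size estimates in the quoted message check out, and are easy to reproduce. (One nit: 2^-9 is closer to a 0.2% false-positive rate than 0.5%, but the 9-bits-per-entry figure is what drives the capacity.)

```python
# 1,800 intermediate CAs at a full SHA-256 digest (32 bytes) each:
intermediates = 1800
digest_bytes = intermediates * 32
print(digest_bytes)          # 57600 bytes, i.e. ~56 KB

# Golomb-coded set at ~N bits per entry for a 2^-N false-positive
# rate; at 2^-9 that is ~9 bits per entry against a 250 KB limit:
limit_bits = 250 * 1024 * 8  # the CRLSet-style 250 KB cap, in bits
entries = limit_bits // 9
print(entries)               # 227555, i.e. "around 220,000" entries
```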
Re: Regarding Mozilla auditors' chosen standards
Hi Wallas, Setting aside Ryan's petulance, if I may, I think the simple answer to all your questions can be stated thusly: no one is in charge and we depend on people doing the right things. Mostly I think that works out OK but there's just no escaping that much of the PKI system relies on nothing more than "please don't do that" and "okay I promise I won't". Requirements and specifications and best practices and audits and open discussion forums such as this one all help, but if any given actor chooses to lean in a different direction there is little recourse we can take. What's worse is that the rationale for taking any such action is so narrow that only the most egregious cases are ever pursued. The obvious poster child for egregious cases is DigiNotar. Cases which are not so clear cut would have to include the CFCA request under discussion right now and the TeliaSonera situation of the recent past. In both cases the concerns are real and justified and yet the available options seem limited. I'd like to see us improve upon that, but that's a whole other conversation. In any case, I hope this helps answer your questions. From: Ryan Sleevi Sent: Tuesday, July 29, 2014 10:47 AM On Tue, July 29, 2014 2:01 am, Wallas Smith wrote: Thank you very much for your precise answers. This helped me to come to new questions: Which you will find already answered at https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/, as I suspected. 1) According to what I understand, when trying to express the chain of certificate trust starting from a Mozilla user, the upper trust is placed in governmental regulations and/or professional codes of conduct of auditors. Could you tell me more about the governmental regulations you were mentioning? Also, is there a global regulation which gathers all these governmental regulations, and who controls them?
In other words, who is on top of the chain of control? This was already answered in my previous email, which provided enough information for you to discover the relationship of ETSI and WebTrust (as audit frameworks) to the CA/Browser Forum's Baseline Requirements, and how those flow into the Mozilla requirements. Which is, of course, also answered by https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/ 2) If I still understand you well, Mozilla never really checks by themselves the "quality" of a given CA at a specific date (by quality I am not talking about the required content, which can be easily checked), but they defer their responsibility to auditors and governmental regulations. Does Mozilla still have some exceptional process for fully checking a CA by themselves, one that could lead to the removal of a CA from their products? This is also already answered by https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/ https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/enforcement/ 3) Finally, if Mozilla doesn't have contracts with auditors, does Mozilla have contract(s) with any stratum of what I called the trust chain (with the CA itself or governmental regulations, or above, depending on your answer) to discharge their responsibility in case of a failing CA? Who is responsible in front of the law in case of a failing/neglected/wrongly handled CA? Once again, already answered. https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/enforcement/ Also, read the CAs' CPs/CPSes to understand the liabilities and how they fit.
Re: CFCA Root Inclusion Request
I agree with Ryan: new audit by new auditor. Since PWC did a mediocre job last time, why would we expect a different result this time? Original Message From: Ryan Sleevi Sent: Tuesday, August 5, 2014 5:41 PM To: Kathleen Wilson Reply To: ryan-mozdevsecpol...@sleevi.com Cc: mozilla-dev-security-pol...@lists.mozilla.org Subject: Re: CFCA Root Inclusion Request On Tue, August 5, 2014 10:26 am, Kathleen Wilson wrote: On 7/29/14, 2:00 PM, Kathleen Wilson wrote: All, Thank you to those of you who have reviewed and commented on this inclusion request from CFCA. I will appreciate your opinions in response to my questions below regarding how to move forward with this request. Note that the CFCA GT CA root was included in Microsoft's program in December 2012, and the CFCA EV ROOT root was included in Microsoft's program in May 2013. On a matter of process/procedure: So, shall we proceed with approval/inclusion of the CFCA EV ROOT cert after verifying that CFCA has addressed the issues noted in this discussion? Or, shall we require another audit before we proceed with approval/inclusion of the CFCA EV ROOT cert? Kathleen Kathleen, Given the compliance issues that were identified, and the number of them, it's difficult to believe the auditor matches the criteria of a competent party, pursuant to sections 12 - 16 of the Mozilla Inclusion Policy. Per Section 16, it seems the burden is on the CA to establish the competence of the third party. This is somewhat distressing, since the auditor was PricewaterhouseCoopers, whose only other WebTrust audits (per https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/included/ ) are for the SECOM roots. It's worth noting that the suitability of this auditor has been discussed in the past ( https://groups.google.com/d/msg/mozilla.dev.security.policy/riLXu3ZJNso/HPOvC_5c0sUJ ), and that PricewaterhouseCoopers was also responsible for the DigiNotar audit.
While it is ultimately the decision of Mozilla, per the inclusion policy, as to whether the auditor meets the criteria, the evidence and experience gathered so far, I believe, casts a serious shadow. Respectfully, and individually, I think the issues here are egregious enough, and in sufficient number, to request a new audit by a new auditor, pursuant to Mozilla's policies of requiring the CA to establish the competence of the auditor.
Re: Turn on hardfail?
OK, sure. Short answer is that I'm not that concerned--at least I don't think I'm that concerned. Regarding single points of failure, I think we'll need to rely on domain owners and server admins to put pressure on their CAs to make sure system availability for the OCSP responders is 99.9% or higher. Some CAs have already done that and the others will have to follow suit. Regarding privacy, I come down on the cynical side and argue that there is no privacy anyway. Your ISP knows your habits, your government knows your habits, Google definitely knows your habits, hundreds of other sites try to identify your habits. From that standpoint, having a CA know your habits is not a significant erosion of privacy. For that matter I wouldn't be surprised if some CAs already collect and sell that information. (We could always ask them!) I would just add that the OCSP stapling approach does clearly help in both regards, so it's probably a good idea for some of the main Internet destinations to have that. However, even stapling is not a perfect answer to either the point-of-failure or the privacy concerns. Original Message From: Daniel Micay Sent: Wednesday, April 23, 2014 11:39 PM To: fhw...@gmail.com; dev-security-policy@lists.mozilla.org Subject: Re: Turn on hardfail? I'm talking about the DoS vulnerability opened up by making a few OCSP servers a single point of failure for *many* sites. It's also not great that you have to let certificate authorities know about your browsing habits.
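To put a number on the single-point-of-failure concern: under hard-fail, a site is reachable only when both the site and its CA's OCSP responder are up, so (assuming independent failures, which is optimistic) the availabilities multiply. A quick back-of-the-envelope with the 99.9% figure mentioned above:

```python
# Hard-fail availability: site AND responder must both be up.
site = 0.999
responder = 0.999
combined = site * responder
print(round(combined, 6))            # 0.998001

# Extra downtime per year attributable to the responder alone:
minutes_per_year = 365 * 24 * 60
extra = (site - combined) * minutes_per_year
print(round(extra))                  # roughly 525 extra minutes/year
```

So even a "three nines" responder roughly doubles the downtime a hard-failing client sees, which is why responder availability (or stapling, which moves the fetch to the server) matters so much here.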
Re: Convergence (not really MITM detection)
I like the general idea here. It's similar to how you download a file in the background while still giving it the name and directory you want. In this case you are downloading content while simultaneously deciding if it is trustworthy. That said, there are two issues to consider. The first is that any content you receive can be used against you, by installing malware and such. This is the primary (and easiest) way to pwn your device. This risk / attack vector is a far more menacing threat than MITM--for my money anyway. So what this means is that all files received would have to be treated with care before they can be used. This means not only html and javascript of course but also css files and images. Still, I think there are ways this can be managed such that you get the performance benefit without necessarily compromising security. The second issue is one of privacy. Anything you send, including the URL path, can be used to identify you and reveal information about you. This is how email marketers try to figure out if you've read their message. So, when it comes to communicating on a partially secured link, you want to be aware of how much you actually want to reveal. Personally I don't see this risk as being any worse than using the Internet generally, but it is still something to keep in mind. Like I said, I think this is a good idea and is worth developing further. Would be good to get feedback from some of the Mozilla devs though. Original Message From: John Nagle Sent: Friday, April 18, 2014 2:51 PM Subject: Re: Convergence (really MITM detection) ... One way to ameliorate the performance problem is to display the page before third-party validation has completed, but delay form input, the appearance of the lock icon, and the sending of any data from client to server until third-party validation checks out. So you can see a login page immediately, but the submit button won't take effect until validation checks out.
If it doesn't check out, the user gets an alert, of course, and nothing gets sent. This delay has to include any client-to-server communication initiated from the page, including cookie replies. Otherwise a fake page can steal credentials stored by the browser. This is probably worth putting into Firefox if any kind of third-party cert validation goes in. The alternative, stalling page load and display, would degrade performance as observed by users.
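Nagle's render-now, send-later proposal amounts to a small state machine: display is never blocked, but every client-to-server send is queued until validation resolves. A minimal sketch (class and method names are invented for illustration; this is not Firefox code):

```python
class DeferredPage:
    """Render immediately; hold all outbound sends until validation."""
    def __init__(self):
        self.validated = False
        self.pending_sends = []

    def render(self):
        return "page displayed"          # display is never delayed

    def send(self, data):
        if self.validated:
            return f"sent: {data}"
        self.pending_sends.append(data)  # held: form posts, cookie replies
        return "queued"

    def on_validation(self, ok):
        if not ok:
            dropped = len(self.pending_sends)
            self.pending_sends.clear()   # nothing ever reaches the server
            return f"alert user, dropped {dropped} sends"
        self.validated = True
        flushed = [f"sent: {d}" for d in self.pending_sends]
        self.pending_sends.clear()
        return flushed

page = DeferredPage()
page.render()
print(page.send("login=alice"))   # queued
print(page.on_validation(True))   # ['sent: login=alice']
```

The key property, matching the thread: on a failed validation the queue is discarded, so a fake page never receives the stored credentials.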
Re: Revocation Policy
This is an interesting issue, Kaspar, and I appreciate you raising it. I also personally appreciate you framing it in terms of trust, because that's really what is at issue here. The whole idea of revocation is a gaping hole in the PKI landscape. The ability to say "don't trust me" is so poorly implemented throughout PKI as to be effectively non-existent. If for some reason you need to revoke a cert, you should do so because it's the right thing to do, but the best you can hope for is that some anti-security person doesn't figure out a way to use it anyway. This means that theft and other compromises of private keys remain viable attack vectors for those who wish to use them (government-sponsored organizations and otherwise). Private keys and the certs that go with them could be usable well after people think they become invalid. This also means that we should not be surprised to see an underground market appear that seeks to sell revoked certs. Given that high-value internet destinations might have been impacted by the Heartbleed vulnerability, this could definitely become a concern. Should such a place appear I would think StartCom-issued certs would easily be included for sale. This also means that all pay-to-revoke policies should be viewed as anti-security and we need to strongly encourage that they be discontinued in short order. If a CA wishes to continue such policies I would question their trustworthiness. Further, I think we are reaching the point where browsers have to refuse SSL connections when OCSP validation fails. I think it's getting harder to argue otherwise, but I'll let the Mozilla folks speak to that. - Original Message - From: Kaspar Janßen Sent: Thursday, April 10, 2014 4:12 AM On 10/04/14 10:08, Peter Eckersley wrote: Kaspar, suppose that Mozilla followed your suggestion and removed StartCom's root certificates from its trust store (or revoked them!).
What would the consequences of that decision be for the large number of domains that rely on StartCom certs? I hope that an appropriate policy will force authorities to reconsider their revocation principle. I don't want to harm anyone, nor do I want to cause extra work in any way. The key is that anybody should be able to shout out "don't trust me anymore!" without a fee. Isn't that part of the trust-chain idea? I read a few times that Chrome doesn't even check if a certificate is revoked or not (at least not with the default settings). That leads me to the question: Is it mandatory for a CA in Mozilla's trust store to have the ability to revoke a certificate, or is it only an optional feature provided by some CAs?
Re: Exceptions to 1024-bit cert revocation requirement
Well let's be clear about one thing: in Firefox land (as in others) there is no such thing as revocation; there is only changing the code. I think what Kathleen is saying is that starting Jan 1, Mozilla would like to take out the code supporting certs with small keys. What needs to be negotiated then is when end-entity cert holders will be prepared for their small keys to no longer work on _future_ versions of Mozilla products. A 1024-bit cert will always work with FF 24, for example. It may or may not work on version 30. If a cert holder is ok with that, I don't think there is really a problem. PKI gymnastics anyone? From: Rob Stradling Sent: Wednesday, December 11, 2013 5:15 PM To: Kathleen Wilson; mozilla-dev-security-pol...@lists.mozilla.org Subject: Re: Exceptions to 1024-bit cert revocation requirement On 11/12/13 22:31, Kathleen Wilson wrote: snip According to https://wiki.mozilla.org/CA:MD5and1024 "All end-entity certificates with RSA key size smaller than 2048 bits must expire by the end of 2013." Kathleen, are you saying that "must expire by the end of 2013" is a "revocation requirement"? Expiration != Revocation. Is there actually a requirement that says "By the end of 2013, CAs MUST revoke all unexpired certificates with RSA keys smaller than 2048 bits"? If so, where is it written and when was it communicated to the CAs? (If it's not actually written anywhere, then can you actually enforce it?) -- Rob Stradling Senior Research Development Scientist COMODO - Creating Trust Online
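The policy being quoted from https://wiki.mozilla.org/CA:MD5and1024 is simple enough to state as a predicate: RSA keys under 2048 bits are acceptable only if the cert expires by the end of 2013. A sketch, with date handling simplified for illustration:

```python
from datetime import date

def key_policy_ok(rsa_bits, not_after):
    """True if a cert satisfies the quoted sunset rule: end-entity
    RSA keys smaller than 2048 bits must expire by end of 2013."""
    if rsa_bits >= 2048:
        return True
    return not_after <= date(2013, 12, 31)

print(key_policy_ok(1024, date(2013, 6, 30)))  # True: expires in time
print(key_policy_ok(1024, date(2014, 6, 30)))  # False: outlives the sunset
```

Note that, as the thread points out, this is an expiration rule, not a revocation rule: a client dropping small-key support in code is a third mechanism distinct from both.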
Re: Revoking Trust in one ANSSI Certificate
Let's start with the basics: what is the cert subject, serial number, date info? None of the four browser notices provided any of that. Surely there is no reason to keep it secret, is there? From: Jan Schejbal Sent: Monday, December 9, 2013 1:19 PM To: mozilla-dev-security-pol...@lists.mozilla.org Reply To: jan.schejbal_n...@gmx.de Subject: Re: Revoking Trust in one ANSSI Certificate Hi, could we please have the certificates/chains involved in this, and could the corresponding bug (I assume there is one) maybe be made public? Especially of interest would be the dates when the certificates were issued, when they were first used for MitM, when this was reported to the CA by Google, and when the CA revoked the certificate. From what I understood, the hierarchy was as follows:

ANSSI
+- Treasury Sub-CA
   +- MitM-CA (installed on MitM device)
      +- Fake endpoint certificates

Is this assumption correct? If so: Was the "Treasury Sub-CA" revoked, or only the "MitM-CA"? Which of these certs are the ones blacklisted by Mozilla? The publicly available information about this is currently quite limited. Having a meaningful debate on that basis is difficult. We already had a similar case once - Trustwave. The differences are that they admitted it before getting caught, and that since that incident, everyone remotely involved in PKI management should know that this is something you don't do. I would really love to see the explanation of how someone accidentally issues and deploys a MitM Sub-CA... Kind regards, Jan
Re: Revoking Trust in one ANSSI Certificate
Brian, I was thinking it would be beneficial if ANSSI would provide a host:port that would have the bad chain installed. This allows anyone to check if their browser has been updated to un-trust the intermediate. I make this suggestion in addition to the points you raise below, and I think it's fair to ask this of any CA that behaves badly. From: Brian Smith Sent: Monday, December 9, 2013 4:15 PM To: Eddy Nigg Cc: mozilla-dev-security-pol...@lists.mozilla.org Subject: Re: Revoking Trust in one ANSSI Certificate One thing that would really help would be an attempt to document which publicly-accessible websites are using certificates that chain (only) to the ANSSI root. I heard the claim that most French public government websites actually use certificates that chain to a different CA. That has led me to wonder how much the ANSSI root is actually used by public websites. Having a list of domains that use certs that chain to the ANSSI root is likely to have some significant bearing on the decisions about what to do. But, it will be a while before I would have time to compile such a list. I think it would also help to document in this thread the ways we know that ANSSI is not complying with our CA program. Lack of an OCSP AIA URI in the certificates is one example. Are there other ways that ANSSI is non-compliant? Cheers, Brian On Mon, Dec 9, 2013 at 1:18 PM, Eddy Nigg eddy_n...@startcom.org wrote: On 12/09/2013 11:12 PM, From Ryan Sleevi: According to https://wiki.mozilla.org/CA:Communications#January_10.2C_2013 (see the Responses section), this CA has indicated that they do not expect to begin operating in full compliance with the Baseline Requirements and with Mozilla's 2.1 Inclusion Policy until Dec 2015/January 2016. Thanks Ryan - then we probably should understand what Mozilla does or intends to do in such cases.
Maybe this shows that something must be done (given the assumption that by today every CA is already compliant, and that this should not be possible under the BRs and Mozilla's requirements).

--
Regards
Signer: Eddy Nigg, StartCom Ltd.
XMPP: start...@startcom.org
Blog: http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg

--
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
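The host:port suggestion at the top of this thread could be self-served too. A minimal sketch, assuming a hypothetical ANSSI-provided test endpoint (`bad-chain.example.net` is invented): attempt a TLS handshake against it with the platform's default trust store, and an updated client should see chain verification fail.

```python
import socket
import ssl

def chain_status(host, port=443, timeout=5):
    """Try a TLS handshake against host:port using the default trust store.

    Returns "trusted" if the handshake verifies, "untrusted" if chain
    verification fails (e.g. the intermediate has been distrusted), and
    None if the server could not be reached at all.
    """
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "trusted"
    except ssl.SSLCertVerificationError:
        # Catch this before OSError: it is an OSError subclass.
        return "untrusted"
    except OSError:
        return None  # unreachable, refused, timed out, ...

# Against a hypothetical test endpoint hosting the bad chain, an updated
# browser/library would be expected to report "untrusted":
# print(chain_status("bad-chain.example.net"))
```

Note this only tests the library the script runs under, not the browser itself; for Firefox the equivalent check is simply visiting the test URL.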
Re: Mozilla not compliant with RFC 5280
I would hope not! And yet... Firefox has no revocation checking right now (or, if you prefer, for the last 17 years). So what's a Firefox user to do... besides not use Firefox?

From: Phillip Hallam-Baker
Sent: Friday, November 8, 2013 11:51 AM
To: Jeremy Rowley
Cc: fhw...@gmail.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Mozilla not compliant with RFC 5280

I don't believe there are any parties who you would want as CAs that support the idea of getting rid of revocation checking.

On Fri, Nov 8, 2013 at 9:35 AM, Jeremy Rowley jeremy.row...@digicert.com wrote:

I imagine every CA would agree with you. OCSP stapling is a great idea, but the number of servers deploying it is very low. I don't believe any CAs support the idea of getting rid of revocation checking.

From: dev-security-policy [mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org] On Behalf Of fhw...@gmail.com
Sent: Friday, November 08, 2013 6:42 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Mozilla not compliant with RFC 5280

I was hoping to see more responses on this issue. Does that mean people agree it's a problem but aren't sure what to do about it? Is it a small problem because Firefox already does OCSP and all the CAs do too? Or...?

Thanks.

From: fhw...@gmail.com
Sent: Friday, November 1, 2013 5:50 PM
To: Matthias Hunstock; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Mozilla not compliant with RFC 5280

I think that is correct, Matthias. What's more, anyone who issues an end-entity cert will be unable to stop FF from using that cert in the future--without OCSP set up--until the expiration date. (I'll need someone to correct me on that.) I gotta believe there are people out there who issue(d) CRLs thinking that they are now protected when in reality they are not.
From: Matthias Hunstock
Sent: Friday, November 1, 2013 10:46 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Mozilla not compliant with RFC 5280

On 29.10.2013 19:37, Kathleen Wilson wrote: The goal is for the revocation-push mechanism to be used instead of traditional CRL checking, for reasons described in the wiki page and the research paper.

Everyone with a "self-made" CA will be completely cut off from revocation checking, unless there is an OCSP responder?

Matthias

--
Website: http://hallambaker.com/
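Matthias's worry can be sketched in a few lines. This is a hypothetical, simplified model of client-side revocation logic (the dict-based "certs" and "clients" are stand-ins for the real AIA/CRLDP extensions and browser capabilities): if the client implements only OCSP, a CA that publishes only a CRL gets no revocation checking at all, silently.

```python
# Hypothetical sketch of which revocation mechanism, if any, a client
# can use for a given cert. `cert` lists the revocation sources it
# advertises; `client` lists the mechanisms the client implements.

def revocation_source(cert, client):
    """Return the mechanism used, or None if revocation goes unchecked."""
    if "stapled_ocsp" in cert and client.get("ocsp"):
        return "stapled OCSP"       # best case: no extra fetch needed
    if "ocsp_url" in cert and client.get("ocsp"):
        return "OCSP fetch"
    if "crl_url" in cert and client.get("crl"):
        return "CRL fetch"
    return None  # silently unrevocable: the failure mode discussed above

# A client with OCSP support but no CRL fetching:
ocsp_only_client = {"ocsp": True, "crl": False}

ocsp_cert = {"ocsp_url": "http://ocsp.example/"}
crl_only_cert = {"crl_url": "http://ca.example/ca.crl"}  # "self-made" CA

print(revocation_source(ocsp_cert, ocsp_only_client))      # OCSP fetch
print(revocation_source(crl_only_cert, ocsp_only_client))  # None -> cut off
```

The CRL-only cert validates fine in such a client; it is just never checked for revocation, which is exactly the gap the thread is arguing about.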
Re: Mozilla not compliant with RFC 5280
This is good information, Kathleen, and I'm certainly in favor of making improvements. I do wish there were more info on the report author and any affiliations he might have.

That said, I can't find clear, unambiguous detail on what CRL capabilities are actually working in Firefox, and in which versions. There was some talk at one time that CRL never worked anyway, or some such thing, but I think we need clarification on that now. The worst case here is that some capabilities are missing from current (and future) versions, and the worst case for missing functionality could be very bad indeed.

Thanks.

From: Kathleen Wilson
Sent: Tuesday, October 29, 2013 1:38 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Mozilla not compliant with RFC 5280

On 10/29/13 5:20 AM, fhw...@gmail.com wrote: Changing the subject line because compliance is at the heart of this issue. I also would like to thank Brian for his comment below, because it seems we're discussing less the merits of CRLs and more rationalizing the cost to implement. [snip] So... if Mozilla can't implement CRL support because of staffing issues and priorities, that's fine. Actually it's completely understandable. In the meantime, Mozilla is not 5280 compliant--and that should be a big deal.

Please see https://wiki.mozilla.org/CA:ImprovingRevocation

There is also an interesting research paper attached to that page about revocation. Folks are working towards adding a revocation-push mechanism so that Firefox preloads certain revocation information for intermediate and end-entity certificates.
I started the discussion about which types of revocations should be included for intermediate certs here: https://groups.google.com/d/msg/mozilla.dev.security.policy/cNd16FZz6S8/t3GwjaFXx-kJ

There will be a similar discussion for end-entity cert revocations; I just haven't started it yet.

The goal is for the revocation-push mechanism to be used instead of traditional CRL checking, for reasons described in the wiki page and the research paper.

In my opinion, the sequence in which certain changes were made (like ripping out the CRL user interface) could have been better, such as happening after the revocation-push mechanism was in place. But, in my opinion, we are heading in the right direction -- there will be revocation checking, it just will be done in a better and more efficient way.

Kathleen
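The revocation-push mechanism Kathleen describes can be sketched as a locally preloaded set that is consulted during chain validation, with no fetch on the page-load critical path. This is an illustrative simplification, not Mozilla's actual implementation or data format; entries are invented.

```python
# Hypothetical sketch of a revocation-push ("preload") mechanism: the
# browser periodically receives small batches of revoked (issuer, serial)
# pairs and consults the merged set locally during every chain
# validation, so no CRL/OCSP fetch is needed at page-load time.

PRELOADED_REVOCATIONS = set()

def push_update(entries):
    """Merge a pushed batch of revocations into the local set."""
    PRELOADED_REVOCATIONS.update(entries)

def is_revoked(issuer, serial):
    """O(1) local lookup; no network round-trip during validation."""
    return (issuer, serial) in PRELOADED_REVOCATIONS

# A pushed update, e.g. distrusting one intermediate (made-up entry):
push_update([("Example Root CA", 0xDEAD)])

print(is_revoked("Example Root CA", 0xDEAD))  # True
print(is_revoked("Example Root CA", 0xBEEF))  # False
```

The trade-off versus traditional CRL checking is visible even in this toy: lookups are fast and work offline, but coverage is only as complete and as fresh as the last pushed batch.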
Mozilla not compliant with RFC 5280 (was: Netcraft blog, violations of CABF Baseline ...)
Changing the subject line because compliance is at the heart of this issue. I also would like to thank Brian for his comment below, because it seems we're discussing less the merits of CRLs and more rationalizing the cost to implement.

Regarding the merits, here's a simple case that I hope will illustrate the importance of CRLs:

- Site admin: someone hacked my server and probably took my private key and SSL certificate.
- CA: okay, generate a new key pair and send over the signing request and we'll get you a new certificate; in the meantime we'll issue a CRL so nobody uses the old cert anymore.
- Mozilla: meh, I don't see the big deal, I'm sure everything will be fine if I continue to allow the cert anyway.

So, to put it another way, the decision to use a revoked cert is not Mozilla's to make -- the decision to revoke has to be respected. Here's why:

- Cert thief: cool, all Firefox users will still recognize this cert, so now I can sell it on the black market! Since this cert is for a high-value target, I should be able to get some good money for it. I'll start the bidding at $50,000.

So... if Mozilla can't implement CRL support because of staffing issues and priorities, that's fine. Actually it's completely understandable. In the meantime, Mozilla is not 5280 compliant--and that should be a big deal.

... I hope you can understand how a software engineer would have trouble arguing in favor of such an expensive feature as CRL fetching (or even OCSP fetching) without a valid argument in favor of doing it. Right now we're lacking valid arguments for doing it.

Cheers,
Brian
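The key-compromise scenario above can be worked through in code. This is a toy model (all serials invented): the CA does its part by putting the stolen cert's serial on the CRL, but whether the thief's copy is still usable depends entirely on whether the client consults that CRL.

```python
# Worked version of the scenario above: the CA revokes the stolen cert's
# serial; a client that honors the CRL rejects it, a client that skips
# revocation checking keeps accepting it until the cert expires.

crl = set()  # serials the CA has revoked

def revoke(serial):
    """CA side: add the compromised cert's serial to the CRL."""
    crl.add(serial)

def accepts(cert_serial, checks_crl):
    """Client side: RFC 5280 semantics say revoked => reject."""
    if checks_crl and cert_serial in crl:
        return False
    return True  # otherwise the (possibly stolen) cert still validates

stolen_serial = 0x51E7
revoke(stolen_serial)  # CA responds to the reported key compromise

print(accepts(stolen_serial, checks_crl=True))   # False: revocation respected
print(accepts(stolen_serial, checks_crl=False))  # True: stolen cert still works
```

Which is the thread's point in miniature: the revocation decision is made on the CA side, but it only has effect if the client bothers to look.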