Re: Certificate with Debian weak key issued by Let's Encrypt
A report regarding this incident has been published on the Let's Encrypt community site: https://community.letsencrypt.org/t/2017-09-09-late-weak-key-revocation/42519 The text is copied here:

On July 16, 2017, it was reported to Let's Encrypt by researcher Hanno Böck that it was possible to get a certificate using a key known to be generated by the weak Debian random number generator. A specific certificate was given as an example. It so happens that Let's Encrypt was already working on enhanced weak key checking which would have prevented the issuance in question, and deployment was imminent. Those mitigations were deployed to our production infrastructure on July 27, 2017.

Let's Encrypt was already checking for some types of weak keys as required by the Baseline Requirements, but we were not checking for the particular type of weak key that was reported to us on July 16, 2017. The Baseline Requirements specify that weak key checking must be done, but they do not specify a particular algorithm; therefore Let's Encrypt's weak key checking was formally compliant both before and after the mitigations deployed on July 27, 2017. However, we are always happy to improve the quality of our weak key checker.

The Baseline Requirements do, however, require Let's Encrypt to ensure that certificates are revoked if the associated private key is known to be compromised. We should have revoked the certificate referenced in the report from July 16, 2017, within 24 hours of receiving the report. We did not revoke the certificate within 24 hours due to two contributing factors: the team was focused on improving weak key checking, and the certificate was issued to a security researcher for testing purposes only. It was revoked on September 9, 2017, at 23:49 UTC, after the reporter posted publicly about the issue. As a result of this late revocation we have reviewed and improved our processes for handling incoming reports.
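As a hedged illustration of the kind of check involved: because the Debian RNG bug (CVE-2008-0166) made key generation deterministic, every possible weak key can be enumerated in advance, so detection reduces to a fingerprint blocklist lookup at issuance time. The fingerprint scheme and the blocklist entry below are invented for illustration; this is not Let's Encrypt's actual implementation, which consumes the published weak-key blocklist data.

```python
import hashlib

def key_fingerprint(modulus_hex: str) -> str:
    """Fingerprint an RSA public key by hashing its normalized modulus.
    (Illustrative scheme; real blocklists define their own digest format.)"""
    return hashlib.sha256(modulus_hex.strip().lower().encode()).hexdigest()

# Made-up blocklist entry; a real deployment loads the full published lists.
WEAK_KEY_FINGERPRINTS = {key_fingerprint("c0ffee")}

def is_weak_key(modulus_hex: str) -> bool:
    """Return True if this public key must be rejected at issuance time."""
    return key_fingerprint(modulus_hex) in WEAK_KEY_FINGERPRINTS
```

The CA-side check is then a single set membership test per CSR, which is why the enhanced checking could be deployed without any measurable issuance slowdown.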
___ dev-security-policy mailing list dev-security-policy@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-security-policy
Let's Encrypt 2017.09.08 CAA Checking Algorithm Incident
On Friday, September 8, 2017, at 10:04pm US Pacific time, Let's Encrypt received a report pointing out a certificate that should not have been issued per CAA RFC 6844 [1]. When CAA checking became mandatory on September 8, 2017, only the CAA checking algorithm specified in RFC 6844 was allowed. Since our launch in late 2015, prior to any CAA checking requirements, Let's Encrypt had implemented the CAA checking algorithm specified in erratum 5065 [2]. Let's Encrypt did not move to the RFC 6844 algorithm on September 8, which meant we became non-compliant. It was possible to issue a certificate allowed under erratum 5065 but not allowed under RFC 6844.

We believe the algorithm specified in erratum 5065 is superior, and it's what should have been specified in RFC 6844. There appears to be near-consensus on this in the Web PKI community (at least among those who have discussed the issue), including the CAA IETF working group. There have been many discussions on this topic in the CA community, and it seems very likely that a ballot will pass soon which makes the erratum 5065 algorithm compliant.

Based on PKI community discussions, it was our understanding that implementing the erratum 5065 algorithm would be allowed by root programs after the September 8, 2017 Baseline Requirements deadline for CAA came into effect. Our understanding was incorrect, and we should have sought explicit public dispensation for our divergence from the Baseline Requirements before the deadline. CAs should not assume that divergences from the Baseline Requirements are allowed without explicit public permission from root programs. Anything less would set a bad precedent and open the door to abuse.

A change to bring our CAA checking algorithm into compliance was deployed to production shortly before 17:30 UTC on September 14, 2017. The certificate [3] cited by the reporter was revoked within 24 hours of the report.
We have publicly asked [4] the Mozilla and Google root programs for permission to deploy the erratum 5065 CAA checking algorithm immediately while we work on getting a ballot passed to change the CA/B Forum Baseline Requirements.

[1] https://tools.ietf.org/html/rfc6844
[2] https://www.rfc-editor.org/errata/eid5065
[3] https://crt.sh/?sha256=C396951C4C594897BE11B09494DD567B00A0A946735F3DECC01A9D966A179F41
[4] https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/9y-XTajmOCw/5hicEUHqAAAJ

We have made this information available on our community site as well: https://community.letsencrypt.org/t/2017-09-08-caa-checking-algorithm-incident/42516
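For illustration, the erratum 5065 algorithm roughly reduces to climbing from the queried FQDN toward the DNS root and returning the first non-empty CAA RRset, leaving CNAME handling to the resolver instead of restarting the climb inside a CNAME target's tree as RFC 6844's original text implied. The following is a toy sketch over an in-memory zone map, not a real resolver, and the record contents are invented:

```python
def caa_relevant_rrset(domain: str, caa_records: dict) -> list:
    """Erratum 5065 'relevant record set': walk from the queried FQDN up
    toward the root, returning the first non-empty CAA RRset found.
    `caa_records` is a toy stand-in for DNS: FQDN -> list of CAA values."""
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        rrset = caa_records.get(candidate, [])
        if rrset:
            return rrset
    return []  # no CAA anywhere on the path: issuance is not restricted

# Illustrative zone: only the registered domain publishes CAA.
zone = {"example.com": ['issue "letsencrypt.org"']}
```

With this zone, a lookup for `www.example.com` finds nothing at the leaf and returns the parent's RRset; the divergence from RFC 6844 only shows up when CNAMEs are involved, which the toy map does not model.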
Let's Encrypt 2017.09.08 Expired DNSSEC Response Incident
On September 8, 2017, Let's Encrypt received a report from researcher Andrew Ayer that we accepted an expired DNSSEC RRSIG during certificate issuance. The RRSIG was very recently expired (< 1 hour). This violates RFC 4033, Section 8.1 [1]: "The signatures associated with signed zone data are only valid for the time period specified by these fields in the RRSIG RRs in question." and RFC 4034, Section 3.1.5 [2]: "The RRSIG record MUST NOT be used for authentication prior to the inception date and MUST NOT be used for authentication after the expiration date."

This happened because the Let's Encrypt DNS resolver used a default "grace period" of 1 hour for DNSSEC RRSIGs to help with clock skew. The certificate [3] was revoked and a fix was deployed less than 24 hours after receiving the report. The grace period for RRSIG expiration was disabled.

We believe that the risk to relying parties from validating stale DNSSEC records was extremely low. A hypothetical attacker would have to take over an IP address pointed to by a previously signed zone, and the proper owner of that zone would have had to change the zone to point to a new IP address within less than an hour, for the stale signature to make any material difference in validation.

[1] https://tools.ietf.org/html/rfc4033#section-8.1
[2] https://tools.ietf.org/html/rfc4034#section-3.1.5
[3] https://crt.sh/?sha256=435F08B5A9536E2B8F91AB8970FF9F8D93A1A0A5529C2D8388A10FA59FF3758C

This information is also available on our community site: https://community.letsencrypt.org/t/2017-09-08-expired-dnssec-response-incident/42517
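The temporal check at issue can be sketched in one function: the incident amounted to validating with a one-hour skew allowance, and the fix was the strict check required by RFC 4034, Section 3.1.5. This is a minimal sketch, not the resolver's actual configuration mechanism:

```python
def rrsig_temporally_valid(now: int, inception: int, expiration: int,
                           skew_seconds: int = 0) -> bool:
    """A signature is usable only within [inception, expiration]
    (Unix timestamps), optionally widened by a clock-skew allowance
    on both ends. The fix was to require skew_seconds = 0."""
    return (inception - skew_seconds) <= now <= (expiration + skew_seconds)
```

A signature that expired one second ago is accepted under a 3600-second grace period but rejected by the strict check, which is exactly the gap the reporter demonstrated.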
Re: Old roots to new roots best practice?
Hi Ben,

While I wasn't trying to suggest the reasoning was the same, I was trying to highlight that for many implementations, the revocation of a single certificate (where there may exist multiple cross-signs) induces enough non-determinism to effectively constitute revoking all of them. That is, clients that encounter the revoked cert - which cannot reliably be predicted - may treat the entire chain as revoked even if alternative, unrevoked paths exist. This means CAs should be aware of, and cautious about, such revocations. The best mitigation is to avoid a large number of cross-signs and to rotate keys or names often.

On Tue, Sep 19, 2017 at 12:28 AM Ben Wilson wrote:
> Ryan,
> Could you please explain what you mean by saying that if you revoke a single
> certificate that it is akin to revoking all variations of that certificate?
> I don't think I agree. There are situations where the certificate is
> revoked for reasons (e.g. issues of certificate format/content) that have
> nothing to do with distrusting the underlying key pair.
> Thanks,
> Ben
>
> -----Original Message-----
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+ben=digicert@lists.mozilla.org] On
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Sunday, September 17, 2017 7:57 PM
> To: userwithuid
> Cc: mozilla-dev-security-policy
> Subject: Re: Old roots to new roots best practice?
>
> Hi there,
>
> I agree, Gerv's remarks are a bit confusing with respect to the concern.
> You are correct that the process of establishing a new root generally
> involves the creation of a self-signed certificate, and then any
> cross-signing that happens conceptually creates an 'intermediate' - so you
> have a key shared by a root and an intermediate.
>
> This is not forbidden; indeed, you can see in my recent suggestions to
> Symantec/DigiCert, it can and often is the best way for both compatibility
> and interoperability.
> Method #2 that you mentioned, while valid, can bring
> much greater compatibility challenges, and thus requires far more careful
> planning and execution (and collaboration both with servers and in
> configuring AIA endpoints).
>
> However, there is a criticism to be landed here - and that's using the same
> name/keypair for multiple intermediates and revoking one/some of them. This
> creates all sorts of compatibility problems in the ecosystem, and is thus
> unwise practice.
>
> As an example of a compatibility problem it creates, note that RFC 5280
> states how to verify a constructed path, but doesn't necessarily specify how
> to discover that path (RFC 4158 covers many of the strategies that might be
> used, but note, it's Informational). Some clients (such as macOS and iOS, up
> to I believe 10.11) construct a path first, and then perform revocation
> checking. If any certificate in the path is rejected, the leaf is rejected -
> regardless of other paths existing. This is similar to the behaviour of a
> number of OpenSSL and other (embedded) PKI stacks.
> Similarly, applications which process their own revocation checks may only
> be able to apply it to the constructed path (Chrome's CRLSets are somewhat
> like this, particularly on macOS platforms). Add in caching of intermediates
> (like mentioned in 4158), and it quickly becomes complicated.
>
> For this reason - if you have a same name/key pair, it should generally be
> expected that revoking a single one of those is akin to revoking all
> variations of that certificate (including the root!)
>
> Note that all of this presumes the use of two organizations here, and
> cross-signing. If there is a single organization present, or if the
> 'intermediate' *isn't* intended to be a root, it's generally seen as an
> unnecessary risk (for the reasons above).
>
> Does that help explain?
> On Sun, Sep 17, 2017 at 11:37 AM, userwithuid via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Forgot the links:
> >
> > [1] https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/hNOJJrN6WfE
> > [2] https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/RJHPWUd93xE/RqnC3brRBQAJ
> > [3] https://crt.sh/?spkisha256=fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2
> > [4] https://crt.sh/?spkisha256=82b5f84daf47a59c7ab521e4982aefa40a53406a3aec26039efa6b2e0e7244c1
> > [5] https://crt.sh/?spkisha256=706bb1017c855c59169bad5c1781cf597f12d2cad2f63d1a4aa37493800ffb80
> > [6] https://crt.sh/?spkisha256=f7cd08a27aa9df0918b4df5265580ccee590cc9b5ad677f134fc137a6d57d2e7
> > [7] https://crt.sh/?spkisha256=60b87575447dcba2a36b7d11ac09fb24a9db406fee12d2cc90180517616e8a18
> > [8] https://crt.sh/?spkisha256=d3b8136c20918725e848204735755a4fcce203d4c2eddcaa4013763b5a23d81f
> > [9] https://bugzilla.mozilla.org/show_bug.cgi?id=1311832
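The path-then-revocation client behavior described in this thread can be modeled in a few lines. This is a toy sketch with invented certificate names, not any real client's implementation: the client builds one chain first, then applies revocation to it, with no fallback to alternative paths.

```python
def first_path(cert, issued_by, roots):
    """Depth-first search for a *single* chain from cert up to a trusted root."""
    if cert in roots:
        return [cert]
    for issuer in issued_by.get(cert, []):
        rest = first_path(issuer, issued_by, roots)
        if rest:
            return [cert] + rest
    return None

def accepts(leaf, issued_by, roots, revoked):
    """Path-then-revocation: if the one constructed path contains a revoked
    cert, the leaf is rejected, even if an unrevoked path exists."""
    path = first_path(leaf, issued_by, roots)
    return path is not None and not any(c in revoked for c in path)

# Two cross-sign variants of the same name/key, both chaining to one root.
issued_by = {
    "leaf": ["cross-sign-A", "cross-sign-B"],
    "cross-sign-A": ["root"],
    "cross-sign-B": ["root"],
}
```

Revoking only `cross-sign-A` makes this client reject the leaf whenever path building happens to pick variant A first, even though the chain through `cross-sign-B` remains valid; that is the non-determinism that makes revoking one variant effectively revoke all of them.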
RE: Old roots to new roots best practice?
Ryan,

Could you please explain what you mean by saying that if you revoke a single certificate that it is akin to revoking all variations of that certificate? I don't think I agree. There are situations where the certificate is revoked for reasons (e.g. issues of certificate format/content) that have nothing to do with distrusting the underlying key pair.

Thanks,
Ben

-----Original Message-----
From: dev-security-policy [mailto:dev-security-policy-bounces+ben=digicert@lists.mozilla.org] On Behalf Of Ryan Sleevi via dev-security-policy
Sent: Sunday, September 17, 2017 7:57 PM
To: userwithuid
Cc: mozilla-dev-security-policy
Subject: Re: Old roots to new roots best practice?

Hi there,

I agree, Gerv's remarks are a bit confusing with respect to the concern. You are correct that the process of establishing a new root generally involves the creation of a self-signed certificate, and then any cross-signing that happens conceptually creates an 'intermediate' - so you have a key shared by a root and an intermediate.

This is not forbidden; indeed, you can see in my recent suggestions to Symantec/DigiCert, it can and often is the best way for both compatibility and interoperability. Method #2 that you mentioned, while valid, can bring much greater compatibility challenges, and thus requires far more careful planning and execution (and collaboration both with servers and in configuring AIA endpoints).

However, there is a criticism to be landed here - and that's using the same name/keypair for multiple intermediates and revoking one/some of them. This creates all sorts of compatibility problems in the ecosystem, and is thus unwise practice.

As an example of a compatibility problem it creates, note that RFC 5280 states how to verify a constructed path, but doesn't necessarily specify how to discover that path (RFC 4158 covers many of the strategies that might be used, but note, it's Informational).
Some clients (such as macOS and iOS, up to I believe 10.11) construct a path first, and then perform revocation checking. If any certificate in the path is rejected, the leaf is rejected - regardless of other paths existing. This is similar to the behaviour of a number of OpenSSL and other (embedded) PKI stacks. Similarly, applications which process their own revocation checks may only be able to apply it to the constructed path (Chrome's CRLSets are somewhat like this, particularly on macOS platforms). Add in caching of intermediates (like mentioned in 4158), and it quickly becomes complicated.

For this reason - if you have a same name/key pair, it should generally be expected that revoking a single one of those is akin to revoking all variations of that certificate (including the root!)

Note that all of this presumes the use of two organizations here, and cross-signing. If there is a single organization present, or if the 'intermediate' *isn't* intended to be a root, it's generally seen as an unnecessary risk (for the reasons above).

Does that help explain?

On Sun, Sep 17, 2017 at 11:37 AM, userwithuid via dev-security-policy < dev-security-policy@lists.mozilla.org> wrote:

> Forgot the links:
>
> [1] https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/hNOJJrN6WfE
> [2] https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/RJHPWUd93xE/RqnC3brRBQAJ
> [3] https://crt.sh/?spkisha256=fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2
> [4] https://crt.sh/?spkisha256=82b5f84daf47a59c7ab521e4982aefa40a53406a3aec26039efa6b2e0e7244c1
> [5] https://crt.sh/?spkisha256=706bb1017c855c59169bad5c1781cf597f12d2cad2f63d1a4aa37493800ffb80
> [6] https://crt.sh/?spkisha256=f7cd08a27aa9df0918b4df5265580ccee590cc9b5ad677f134fc137a6d57d2e7
> [7] https://crt.sh/?spkisha256=60b87575447dcba2a36b7d11ac09fb24a9db406fee12d2cc90180517616e8a18
> [8] https://crt.sh/?spkisha256=d3b8136c20918725e848204735755a4fcce203d4c2eddcaa4013763b5a23d81f
> [9] https://bugzilla.mozilla.org/show_bug.cgi?id=1311832
Re: FW: StartCom inclusion request: next steps
On Monday, 18 September 2017 15:50:16 UTC+1, Franck Leroy wrote:
> This control that StartCom was not allowed to use our path was technically in
> place by the fact that I was the only one to have the intermediate cross
> signed certificates, stored (retained) in my personal safe.

I see. Three (groups of) questions as someone who does not operate a public CA:

When the cross-signed certificate was signed, did this result in some sort of auditable record of the signing? A paper trail, or its electronic equivalent - so that any audit team would be aware that the certificate existed, regardless of whether they were present when it was created?

(If so) Was this record inadequate to reproduce the certificate itself, for example just consisting of a serial number and other facts?

Many important functions of a CA are protected by "no lone zone" type practices, but would it be possible for you to retrieve the certificate from this safe on your own, without oversight by other employees?

I suspect all the above questions have answers that would be obvious to me if I had worked for a public CA, but I hope you will humour me with answers anyway.
Re: FW: StartCom inclusion request: next steps
On Monday, 18 September 2017 14:52:27 UTC+2, Ryan Sleevi wrote:
> On Mon, Sep 18, 2017 at 8:12 AM, Inigo Barreira <> wrote:
> Then they misissued a CA certificate and failed to disclose it, and we
> should start an incident report into it.

Hello,

In April 2017 the Mozilla policy in force (v2.4) stated: "The CA with a certificate included in Mozilla's CA Certificate Program MUST disclose this information before any such subordinate CA is allowed to issue certificates."

Our understanding in April was that as long as StartCom was not allowed by Certinomis to issue EE certs, the disclosure was not mandated immediately. This control that StartCom was not allowed to use our path was technically in place by the fact that I was the only one to have the intermediate cross-signed certificates, stored (retained) in my personal safe.

As soon as Certinomis authorized StartCom to use the path to our root, I disclosed the certificates with the audit reports in the CCADB, and sent the certificates to Inigo.

Maybe I misunderstood the Mozilla requirements v2.4, and as I already said in a previous post, I do apologize for it. But it was not my intention not to enforce the policy; I personally took care that StartCom would not be able to use the path to our root until a full BR audit assessment report was provided.

Regards,
Franck Leroy
Re: PROCERT issues
On 11/09/17 12:03, Gervase Markham wrote:
> Thank you for this initial response. It is, however, far less detailed
> than we would like to see.

I have not had any further updates from PROCERT. I have tried to reflect their responses from this email here: https://wiki.mozilla.org/CA:PROCERT_Issues

We hope to conclude the discussion at the end of this week, although I am having minor surgery on Wednesday, so it may be next week.

Gerv
Re: FW: StartCom inclusion request: next steps
On Mon, Sep 18, 2017 at 8:12 AM, Inigo Barreira wrote:
> > We are not seeking to identify personal blame. We are seeking to
> > understand what, if any, improvements have been made to address such
> > issues. In reading this thread, I have difficulty finding any discussion
> > about the steps that StartCom has proposed to improve its awareness of the
> > expectations placed upon it as a potential participant in the Mozilla
> > store. Regardless of who bears responsibility for that, the absence of a
> > robust process - and, unfortunately, the absence of a deep understanding -
> > does mean that the re-establishing of trust can present a significant risk
> > to the community.
>
> I think I've posted everything we did to improve our systems. I replied to
> every error posted in crt.sh explaining what happened and what we did to fix
> it so the same issue would not recur, but will try to recap here again:
>
> - Test certificates. We issued test certificates in production to test the
> CT behaviour. After the checking, those certs were revoked within minutes.
> This was due to an incorrect configuration in the EJBCA roles, which was
> changed and updated accordingly to not allow anyone to issue certs from
> EJBCA directly.
>
> - Use of unallowed curves. We issued certificates with P-521, which is not
> allowed by Mozilla. We revoked all those certs and configured the system to
> not allow it. This remediation was put into production in mid-July; no
> certs have been issued with that curve since.
>
> - RSA parameters not included. We issued one certificate with no RSA
> parameters included. We revoked that certificate and started an
> investigation. The EJBCA system didn't check the keys, specifically for
> this issue. We developed a solution to check the CSR files properly before
> sending them to be signed.
>
> - Country code not allowed. We issued one certificate with country code ZR
> for Zaire, which does not exist officially. We revoked the cert and checked
> our internal country code database against the ISO one. We made the
> corresponding changes. The cert was reissued with the right code
> representing the Democratic Republic of the Congo.
>
> Furthermore, we have added x509lint and cablint to our issuance process. We
> have integrated the crt.sh tool into our CMS system. We have developed a
> CSR checking tool. We have updated the EJBCA system to the latest patch,
> 6.0.9.5, which also came with a key (RSA and ECC) validator, and we are
> also willing to integrate zlint once it becomes more stable. We have
> applied all these tools and we are not misissuing certificates.

Unfortunately, I am not sure how to more effectively communicate that this pattern of issues indicates an organizational failure in the review of, application of, and implementation of the Baseline Requirements. Through both coding practices and issuing practices, security and compliance are not being responded to as systemic objectives, but rather as 'one-offs', giving the impression of 'whack-a-mole' and ad-hoc response.

For example, the country code failure indicates a deeper failing - did you misunderstand the BRs? Were they not reviewed? Was the code simply not implemented correctly? With the RSA parameters, it similarly indicates a lack of attention. I greatly appreciate the use and deployment of x509lint and cablint, but those merely offer technical checking that, as an aspiring trusted root CA, you should have already been implementing - whether your own or using those available COTS. The continued approach of issue-and-revoke, rather than holistically reviewing the practices and taking every step possible to ensure compliance - particularly at a CA that was previously distrusted due to non-compliance - is a particularly egregious oversight that hasn't been responded to.
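As a hedged illustration of the country-code fix described in the recap, the pre-issuance check reduces to validating the CSR's C= field against the current ISO 3166-1 alpha-2 assignments. The tiny code set below is an excerpt for illustration, not StartCom's actual implementation; a real deployment loads the full ISO table.

```python
# Excerpt only; a real check uses the complete, current ISO 3166-1 table.
ISO_3166_1_ALPHA_2 = {"CD", "DE", "ES", "FR", "US", "VE"}

def valid_country_code(c_field: str) -> bool:
    """Reject issuance if the subject C= field is not a current code."""
    return c_field.upper() in ISO_3166_1_ALPHA_2

# "ZR" (Zaire) was withdrawn from ISO 3166-1 in 1997; the Democratic
# Republic of the Congo is "CD", so valid_country_code("ZR") is False.
```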
In every response, it still feels as if you're suggesting these are one-offs and coding errors, when the concern being raised is how deeply indicative they are of systemic failures from top to bottom, from policy to technology, from oversight to implementation. Rather than demonstrating how beyond reproach StartCom is, it feels like an excessive emphasis is being put on the ineffective revocation of these certificates, while ignoring the issues that led to them being issued in the first place - both from policy and from code.

> It's not like disagreements, but the example was about a root certificate
> private key in a USB stick, so IMO that example starts with a very
> problematic issue, because it's about a root private key in a USB stick
> left on the table, while the issues StartCom had were about end entity
> certificates, and nothing related to private keys and not in the root;
> that's what I meant with "quality".

I understand what you meant. I'm suggesting the community has a perception that the issues StartCom is presently faced with are as egregious and as serious. I understand you disagree that it's as significant. The suggestion is
Re: StartCom inclusion request: next steps
On Monday, September 18, 2017 at 11:38:57 AM UTC+1, Inigo Barreira wrote:
> > I want to give you some words from one of the "community side" (this is a
> > personal opinion and may vary from other opinions inside the community).
> >
> > Trust is not something that you get, it is something that you earn.
>
> True
>
> > StartCom was distrusted because of serious issues with their old PKI and
> > now had the chance to restart - there are serious issues again. I don't
> > think that the "community" wants rogue CAs on its list just because they
> > restarted with new certificates.
> >
> > - The fact that you were cross-signed by Certinomis before you had valid
> > WebTrust audits and the permission to issue trusted certificates again,
> > and that the only thing which prevented you from using the trust path is
> > a PUBLIC certificate? Is the only thing that prevents me from entering
> > your datacenter a sign which tells me not to do so and the fact that you
> > did not tell me where your datacenter is located?
> >
> > - StartCom operates/operated multiple CT log servers itself. There is
> > absolutely no reason to use trusted certificates for testing purposes if
> > it has a testing infrastructure. It would be easy for you to add one of
> > your testing roots to your CT logs and then test your CT behaviour. I
> > don't think that Google's CT logs are different from your own ones.
> > Though your certificates might not have been trusted at that time, they
> > would be now and, as Gerv said, test certificates are not allowed. If you
> > did not care about compliance at that time, why should you care about it
> > now?
>
> Those certificates were not trusted at that time and can't be now because
> they were revoked within minutes.
>
> > - There is a reason why Best Practices are called best practices. Why did
> > you reuse your key in root and intermediate certificates? Because there
> > is no money for additional HSMs? Because you don't know how to generate
> > new keys? An explanation would be great.
>
> A new thread has started about this. It's not forbidden.
>
> > - P-521 is forbidden by Mozilla. Even if there is a discussion to change
> > this, it does not allow you to take that as a permission to test it. The
> > fact that these certificates were reported as unrevoked at the time of
> > reporting (as far as I remember) does imply that you do not monitor your
> > certificate issuances for policy compliance at all. What do you do to
> > ensure that all of your certificates are compliant with all requirements
> > at all times?
>
> At the time of application, the certificates were revoked and
> countermeasures set, and since then there have been no more issues. We have
> implemented cablint, crt.sh, and some other tools into our issuance
> process, and are still trying to improve much more. I'm not trying to
> excuse the issues we had, but we corrected them.
>
> > - What internal audits have you done to ensure the integrity of your
> > systems? If something as critical as the permission to issue certificates
> > in EJBCA is only noticed after you explicitly looked for it, what happens
> > if someone removes all of your security mechanisms? Will you find that
> > out too, after you have misissued thousands of certificates? Quis
> > custodiet ipsos custodes.
>
> Despite all the terrible systems we have, etc., we haven't misissued
> thousands of certificates, nor hundreds. The issues we had have been fixed.
> Those test certs issued directly from EJBCA were a mistake, explained many
> times. I have nothing to add to what I've already said. It was not a good
> decision, not a good practice, and it's forbidden.
>
> > - The incidents with DigiNotar should have made clear that secure, well
> > audited and hardened code is absolutely necessary, as well as reliable
> > logs. The fact that these flaws were not found by your internal team and
> > only discovered after an external company tested your systems is deeply
> > concerning. What have you done now, and what will you do to ensure that
> > your systems won't be abused? How do you make sure that the code your
> > people write in the future is safe, and how do you detect security
> > problems if you were unable to do so the first time?
>
> This is a different example. DigiNotar was attacked and the attacker was
> able to get into their systems, and this is not what happened with
> StartCom. As said, the code that went live is not the same that was audited
> the first time and has been improved since then. The audits are just for
> that, and we will continue doing yearly security audits to improve our
> systems.

Why not open-source the code on GitHub and let us be the judge of the improvements made to your systems code? Let's Encrypt does this, and it works successfully.

> > Though I would love to see StartCom up and running again, I have to agree
> > with James that a
RE: StartCom inclusion request: next steps
> I want to give you some words from one of the "community side" (this is a
> personal opinion and may vary from other opinions inside the community).
>
> Trust is not something that you get, it is something that you earn.

True

> StartCom was distrusted because of serious issues with their old PKI and
> now had the chance to restart - there are serious issues again. I don't
> think that the "community" wants rogue CAs on its list just because they
> restarted with new certificates.
>
> - The fact that you were cross-signed by Certinomis before you had valid
> WebTrust audits and the permission to issue trusted certificates again, and
> that the only thing which prevented you from using the trust path is a
> PUBLIC certificate? Is the only thing that prevents me from entering your
> datacenter a sign which tells me not to do so and the fact that you did not
> tell me where your datacenter is located?
>
> - StartCom operates/operated multiple CT log servers itself. There is
> absolutely no reason to use trusted certificates for testing purposes if it
> has a testing infrastructure. It would be easy for you to add one of your
> testing roots to your CT logs and then test your CT behaviour. I don't
> think that Google's CT logs are different from your own ones. Though your
> certificates might not have been trusted at that time, they would be now
> and, as Gerv said, test certificates are not allowed. If you did not care
> about compliance at that time, why should you care about it now?

Those certificates were not trusted at that time and can't be now because they were revoked within minutes.

> - There is a reason why Best Practices are called best practices. Why did
> you reuse your key in root and intermediate certificates? Because there is
> no money for additional HSMs? Because you don't know how to generate new
> keys? An explanation would be great.

A new thread has started about this. It's not forbidden.

> - P-521 is forbidden by Mozilla. Even if there is a discussion to change
> this, it does not allow you to take that as a permission to test it. The
> fact that these certificates were reported as unrevoked at the time of
> reporting (as far as I remember) does imply that you do not monitor your
> certificate issuances for policy compliance at all. What do you do to
> ensure that all of your certificates are compliant with all requirements at
> all times?

At the time of application, the certificates were revoked and countermeasures set, and since then there have been no more issues. We have implemented cablint, crt.sh, and some other tools into our issuance process, and are still trying to improve much more. I'm not trying to excuse the issues we had, but we corrected them.

> - What internal audits have you done to ensure the integrity of your
> systems? If something as critical as the permission to issue certificates
> in EJBCA is only noticed after you explicitly looked for it, what happens
> if someone removes all of your security mechanisms? Will you find that out
> too, after you have misissued thousands of certificates? Quis custodiet
> ipsos custodes.

Despite all the terrible systems we have, etc., we haven't misissued thousands of certificates, nor hundreds. The issues we had have been fixed. Those test certs issued directly from EJBCA were a mistake, explained many times. I have nothing to add to what I've already said. It was not a good decision, not a good practice, and it's forbidden.

> - The incidents with DigiNotar should have made clear that secure, well
> audited and hardened code is absolutely necessary, as well as reliable
> logs. The fact that these flaws were not found by your internal team and
> only discovered after an external company tested your systems is deeply
> concerning. What have you done now, and what will you do to ensure that
> your systems won't be abused? How do you make sure that the code your
> people write in the future is safe, and how do you detect security problems
> if you were unable to do so the first time?

This is a different example. DigiNotar was attacked and the attacker was able to get into their systems, and this is not what happened with StartCom. As said, the code that went live is not the same that was audited the first time and has been improved since then. The audits are just for that, and we will continue doing yearly security audits to improve our systems.

> Though I would love to see StartCom up and running again, I have to agree
> with James that all of these issues do not inspire trust in you and instead
> produce more uncertainty as to whether StartCom is really able to run a PKI
> itself. But as I said before, this is a personal opinion :)
>
> On Friday, 15 September 2017 16:38:25 UTC+2, Inigo Barreira wrote:
> > Yes, you're right, that was on the table and also suggested by
> > Mozilla, but the issue was that people from 360 are used to coding
> > in PHP and the old one was in Java and som