Re: Incident report D-TRUST: syntax error in one tls certificate

2018-12-03 Thread Jakob Bohm via dev-security-policy
On 04/12/2018 05:38, Nick Lamb wrote:
> On Tue, 4 Dec 2018 01:39:05 +0100
> Jakob Bohm via dev-security-policy
>  wrote:
> 
>> A few clarifications below
>> Interesting.  What is that hole?
> 
> I had assumed that you weren't aware that you could just use these
> systems as designed. Your follow-up clarifies that you believe doing
> this is unsafe. I will endeavour to explain why you're mistaken.
> 

Which systems?

> But also I specifically endorse _learning by doing_. Experiment for
> yourself with how easy it is to achieve auto-renewal with something like
> ACME, try to request renewals against a site that's configured for
> "stateless renewal" but with a new ("bad guy") key instead of your real
> ACME account keys.
> 

I prefer not to experiment with live certificates.  Anyway, this was 
never intended to focus on the specifics of ACME, since OC issuance 
isn't ACME anyway.

So, returning to the typical, as-specified-in-the-BRs validation 
challenges: those generally either do not include the CSR in the 
challenge, or do so in a manner that would involve active checking 
rather than just trivial concatenation.  These are the kinds of 
challenges that require the site owner to consider IF they are in a 
certificate request process before responding.

> 
>> It certainly needs the ability to change private keys (as reusing
>> private keys for new certificates is bad practice and shouldn't be
>> automated).
> 
> In which good practice document can I read that private keys should be
> replaced earlier than their ordinary lifetime if new certificates are
> minted during that lifetime? Does this document explain how its authors
> imagine the new certificate introduces a novel risk?
> 
> [ This seems like breakthrough work to me, it implies a previously
> unimagined weakness in, at least, RSA ]
> 

Aligning key and certificate lifetime is generally good practice.

See for example NIST SP 1800-16B Prelim Draft 1, Section 5.1.4 which has 
this to say:

  "... It is possible to renew a certificate with the same public and 
  private keys (i.e., not rekeying during the renewal process). 
  However, this is only recommended when the private key is contained 
  with a hardware security module (HSM) validated to Federal Information 
  Processing Standards (FIPS) Publication 140-2 Level 2 or above"

And the operations I am discussing are unlikely to purchase an expensive 
HSM that isn't even future-proof.  (I have checked leading brands of 
end-site HSMs, and they barely go beyond currently recommended key 
strengths.)

> You must understand that bad guys can, if they wish, construct an
> unlimited number of new certificates corresponding to an existing key,
> silently. Does this too introduce an unacceptable risk ? If not, why is
> the risk introduced if a trusted third party mints one or more further
> certificates ?
> 
> No, I think the problem here is with your imaginary "bad practice".
> You have muddled the lifetime of the certificate (which relates to the
> decay in assurance of subject information validated and to other
> considerations) with the lifetime of the keys, see below.
> 
>> By definition, the strength of public keys, especially TLS RSA
>> signing keys used with PFS suites, involves a security tradeoff
>> between the time that attackers have to break/factor the public key
>> and the slowness of handling TLS connections with current generation
>> standard hardware and software.
> 
> This is true.
> 
>> The current WebPKI/BR tradeoff/compromise is set at 2048 bit keys
>> valid for about 24 months.
> 
> Nope. The limit of 825 days (not "about 24 months") is for leaf
> certificate lifetime, not for keys. It's shorter than it once was not
> out of concern about bad guys breaking 2048-bit RSA but because of
> concern about algorithmic agility and the lifetime of subject
> information validation, mostly the former.

825 days = 24 months plus ~94 days of slop; in practice CAs map this to 
payment for 2 years of validity plus some allowance for overlap during 
changeover.
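
For what it's worth, the arithmetic behind that figure, as a quick 
sketch in Python (assuming one of the two years is a leap year):

    # 825-day BR limit for leaf certificates vs. two calendar years
    # (365 + 366 days, i.e. one leap year in the pair).
    print(825 - (365 + 366))   # -> 94 days of slack for overlap/changeover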

> 
> Subscribers are _very_ strongly urged to choose shorter, not longer
> lifetimes, again not because we're worried about 2048-bit RSA (you will
> notice there's no exemption for 4096-bit keys) but because of agility
> and validation.
> 
> But choosing new keys every time you get a new certificate is
> purely a mechanical convenience of scheduling, not a technical necessity
> - like a fellow who schedules an appointment at the barber each time he
> receives a telephone bill, the one thing has nothing to do with the
> other.
> 

See above NIST quote.

> 
>> It requires write access to the private keys.  Even if the operators
>> might not need to see those keys, many real-world systems don't allow
>> granting "install new private key" permission without "see new
>> private key" permission and "choose arbitrary private key" permission.
>>
>> Also, many real-world systems don't allow installing a new
>> certificate for an existing key without reinstalling the matching
>> private key, simply because that's the interface.

Re: Incident report Certum CA: Corrupted certificates

2018-12-03 Thread Wojciech Trapczyński via dev-security-policy

Thank you. The answers to your questions are below.

On 04.12.2018 00:47, Jakob Bohm via dev-security-policy wrote:

On 03/12/2018 12:06, Wojciech Trapczyński wrote:

Please find our incident report below.

This post links to https://bugzilla.mozilla.org/show_bug.cgi?id=1511459.

---

1. How your CA first became aware of the problem (e.g. via a problem
report submitted to your Problem Reporting Mechanism, a discussion in
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit),
and the time and date.

10.11.2018 10:10 UTC + 0 – We received a notification from our internal
monitoring system concerning issues with publishing CRLs.

2. A timeline of the actions your CA took in response. A timeline is a
date-and-time-stamped sequence of all relevant events. This may include
events before the incident was reported, such as when a particular
requirement became applicable, or a document changed, or a bug was
introduced, or an audit was done.

(All times in UTC±00:00)

10.11.2018 10:10 – We received a notification from our internal
monitoring system for issuing certificates and CRLs concerning issues
with publishing CRLs. We started verification.
10.11.2018 12:00 – We established that one of about 50 CRLs had a 
corrupted digital signature value. We noticed that this CRL was much 
larger than the others. We verified that in a short period of time over 
30,000 certificates had been added to this CRL.
10.11.2018 15:30 – We confirmed that the signing module had trouble 
signing CRLs greater than 1 MB. We started working on it.
10.11.2018 18:00 – We disabled the automatic publication of this CRL. We 
verified that the other CRLs had correct signatures.
11.11.2018 07:30 – As part of the post-failure verification procedure, 
we started an inspection of the whole system, including all certificates 
issued at that time.
11.11.2018 10:00 – We verified that some of the issued certificates had 
corrupted digital signatures.
11.11.2018 10:40 – We established that one of several signing modules 
working in parallel was producing corrupted signatures. We turned it off.
11.11.2018 18:00 – We confirmed that the reason for the corrupted 
certificate signatures was a large CRL, which prevented further correct 
operation of that signing module.
11.11.2018 19:30 – We left only one working signing module, which 
prevented further mis-issuance.
19.11.2018 11:00 – We deployed to production an additional digital 
signature verification step in an external module, outside the signing 
module.
19.11.2018 21:00 – We deployed to production a new version of the 
signing module which correctly handles large CRLs.



Question 1: Was there a period during which this issuing CA had no
   validly signed non-expired CRL due to this incident?



Between 10.11.2018 01:05 (UTC±00:00) and 14.11.2018 07:35 (UTC±00:00) we 
were serving one CRL with a corrupted signature.
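
For reference, a corrupted CRL signature of this kind can be flagged by 
a simple external check. The following is only a generic sketch using 
Python's "cryptography" package with placeholder file names, not a 
description of our production setup:

    from cryptography import x509

    with open("issuing_ca.pem", "rb") as f:          # placeholder path
        issuer = x509.load_pem_x509_certificate(f.read())
    with open("downloaded.crl", "rb") as f:          # placeholder path
        crl = x509.load_der_x509_crl(f.read())

    # is_signature_valid() returns False for a corrupted signature rather
    # than raising, which makes it convenient for automated monitoring.
    print("CRL signature valid:", crl.is_signature_valid(issuer.public_key()))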



Question 2: How long were ordinary revocations (via CRL) delayed by
   this incident?



There was no delay in ordinary revocations. All CRLs were generated and 
published in accordance with the CABF BRs.



Question 3: Was Certum's OCSP handling for any issuing or root CA affected
   by this incident?  (For example, were any OCSP responses incorrectly
   signed?  Were OCSP servers not responding?  Were OCSP servers returning
   outdated revocation data until the large-CRL signing was operational on
   2018-11-19 21:00 UTC?)



No, OCSP was not impacted. We were serving correct OCSP responses all 
the time.
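
Purely as an illustration of how such a spot check can be made from 
outside (not a description of our internal tooling), an OCSP request can 
be built and submitted with Python's "cryptography" package plus 
"requests"; the file names and responder URL are placeholders:

    import requests
    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.hazmat.primitives import hashes, serialization

    cert = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuing_ca.pem", "rb").read())

    der_req = (
        ocsp.OCSPRequestBuilder()
        .add_certificate(cert, issuer, hashes.SHA1())
        .build()
        .public_bytes(serialization.Encoding.DER)
    )
    resp = requests.post(
        "http://ocsp.example.invalid",      # placeholder responder URL
        data=der_req,
        headers={"Content-Type": "application/ocsp-request"},
    )
    ocsp_resp = ocsp.load_der_ocsp_response(resp.content)
    print(ocsp_resp.response_status)        # expect SUCCESSFUL
    if ocsp_resp.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print(ocsp_resp.certificate_status) # GOOD / REVOKED / UNKNOWN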



3. Whether your CA has stopped, or has not yet stopped, issuing
certificates with the problem. A statement that you have will be
considered a pledge to the community; a statement that you have not
requires an explanation.

11.11.2018 17:47

4. A summary of the problematic certificates. For each problem: number
of certs, and the date the first and last certs with that problem were
issued.

355.

The first one: 10.11.2018 01:26:10
The last one: 11.11.2018 17:47:36

All certificates were revoked.

5. The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged
to CT and then list the fingerprints or crt.sh IDs, either in the report
or as an attached spreadsheet, with one list per distinct problem.

Full list of certificates in attachment.

6. Explanation about how and why the mistakes were made or bugs
introduced, and how they avoided detection until now.

The main reason for the corrupted operation of the signing module was
the lack of proper handling of large CRLs (greater than 1 MB). When the
signing module received such a large list for signing, it was not able
to sign it correctly. In addition, the signing module then started to
incorrectly sign the remaining objects it received for signing, i.e.
those received after the large CRL.

Because we were using several signing modules in parallel when the
problem occurred, the problem did not affect all certificates issued
during that period.

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-12-03 Thread Nick Lamb via dev-security-policy
On Tue, 4 Dec 2018 01:39:05 +0100
Jakob Bohm via dev-security-policy
 wrote:

> A few clarifications below
> Interesting.  What is that hole?

I had assumed that you weren't aware that you could just use these
systems as designed. Your follow-up clarifies that you believe doing
this is unsafe. I will endeavour to explain why you're mistaken.

But also I specifically endorse _learning by doing_. Experiment for
yourself with how easy it is to achieve auto-renewal with something like
ACME, try to request renewals against a site that's configured for
"stateless renewal" but with a new ("bad guy") key instead of your real
ACME account keys.
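
To make that experiment concrete, here is a minimal sketch (Python, with 
a placeholder account key) of everything a "stateless renewal" responder 
ever serves, per RFC 8555 (ACME) and RFC 7638 (JWK thumbprints):

    import base64, hashlib, json

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def jwk_thumbprint(jwk: dict) -> str:
        # RFC 7638: SHA-256 over the JSON of the required members only,
        # keys in lexicographic order, no whitespace.
        required = {k: jwk[k] for k in ("crv", "kty", "x", "y")}
        canonical = json.dumps(required, sort_keys=True, separators=(",", ":"))
        return b64url(hashlib.sha256(canonical.encode()).digest())

    # Placeholder public account key (not a real key).  Only the *public*
    # ACME account key is involved; no TLS private key appears anywhere.
    account_jwk = {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}

    def http01_response(token: str) -> str:
        # RFC 8555 key authorization: token "." thumbprint(account key)
        return token + "." + jwk_thumbprint(account_jwk)

If a "bad guy" requests issuance with their own ACME account, the CA 
expects a key authorization built from *their* account key's thumbprint; 
the stateless responder keeps serving the legitimate account's 
thumbprint, so the comparison fails and nothing is issued.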


> It certainly needs the ability to change private keys (as reusing
> private keys for new certificates is bad practice and shouldn't be
> automated).

In which good practice document can I read that private keys should be
replaced earlier than their ordinary lifetime if new certificates are
minted during that lifetime? Does this document explain how its authors
imagine the new certificate introduces a novel risk?

[ This seems like breakthrough work to me, it implies a previously
unimagined weakness in, at least, RSA ]

You must understand that bad guys can, if they wish, construct an
unlimited number of new certificates corresponding to an existing key,
silently. Does this too introduce an unacceptable risk ? If not, why is
the risk introduced if a trusted third party mints one or more further
certificates ?

No, I think the problem here is with your imaginary "bad practice".
You have muddled the lifetime of the certificate (which relates to the
decay in assurance of subject information validated and to other
considerations) with the lifetime of the keys, see below.

> By definition, the strength of public keys, especially TLS RSA
> signing keys used with PFS suites, involves a security tradeoff
> between the time that attackers have to break/factor the public key
> and the slowness of handling TLS connections with current generation
> standard hardware and software.

This is true.

> The current WebPKI/BR tradeoff/compromise is set at 2048 bit keys
> valid for about 24 months.

Nope. The limit of 825 days (not "about 24 months") is for leaf
certificate lifetime, not for keys. It's shorter than it once was not
out of concern about bad guys breaking 2048-bit RSA but because of
concern about algorithmic agility and the lifetime of subject
information validation, mostly the former.

Subscribers are _very_ strongly urged to choose shorter, not longer
lifetimes, again not because we're worried about 2048-bit RSA (you will
notice there's no exemption for 4096-bit keys) but because of agility
and validation.

But choosing new keys every time you get a new certificate is
purely a mechanical convenience of scheduling, not a technical necessity
- like a fellow who schedules an appointment at the barber each time he
receives a telephone bill, the one thing has nothing to do with the
other.


> It requires write access to the private keys.  Even if the operators
> might not need to see those keys, many real-world systems don't allow
> granting "install new private key" permission without "see new
> private key" permission and "choose arbitrary private key" permission.
> 
> Also, many real-world systems don't allow installing a new
> certificate for an existing key without reinstalling the matching
> private key, simply because that's the interface.
> 
> Traditional military encryption systems are built without these 
> limitations, but civilian systems are often not.

Nevertheless.

I'm sure there's a system out there somewhere which requires you to
provide certificates on a 3.5" floppy disk. But that doesn't mean
issuing certificates can reasonably be said to require a 3.5" floppy
disk, it's just those particular systems.

> This is why good CAs send out reminder e-mails in advance.  And why 
> one should avoid CAs that use that contact point for infinite spam 
> about new services.

They do say that insanity consists of doing the same thing over and
over and expecting different results.

> The scenario is "Bad guy requests new cert, CA properly challenges 
> good guy at good guy address, good guy responds positively without 
> reference to old good guy CSR, CA issues for bad guy CSR, bad guy 
> grabs new cert from anywhere and matches to bad guy private key, 
> bad guy does actual attack".

You wrote this in response to me explaining exactly why this scenario
won't work in ACME (or any system which wasn't designed by idiots -
though having read their patent filings, the commercial CAs on the whole
may, to my understanding, be taken as idiots).

I did make one error though, in using the word "signature" when this
data is not a cryptographic signature, but rather a "JWK Thumbprint".

When "good guy responds positively" that positive response includes
a Thumbprint corresponding to their ACME public key. When they're
requesting issuance this works fine because they use their ACME keys
for 

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-12-03 Thread Jakob Bohm via dev-security-policy
A few clarifications below

On 30/11/2018 10:48, Nick Lamb wrote:
> On Wed, 28 Nov 2018 22:41:37 +0100
> Jakob Bohm via dev-security-policy
>  wrote:
> 
>> I blame those standards for forcing every site to choose between two
>> unfortunate risks, in this case either the risks prevented by those
>> "pinning" mechanisms and the risks associated with having only one
>> certificate.
> 
> HTTP Public Key Pinning (HPKP) is deprecated by Google and is widely
> considered a failure because it acts as a foot-gun and (more seriously
> but less likely in practice) enables sites to be held to ransom by bad
> guys.
> 
> Mostly though, what I want to focus on is a big hole in your knowledge
> of what's available today, which I'd argue is likely significant in
> that probably most certificate Subscribers don't know about it, and
> that's something the certificate vendors could help to educate them
> about and/or deliver products to help them use.
> 

Interesting.  What is that hole?

>> Automating certificate deployment (as you often suggest) lowers
>> operational security, as it necessarily grants read/write access to
>> the certificate data (including private key) to an automated, online,
>> unsupervised system.
> 
> No!
> 
> This system does not need access to private keys. Let us take ACME as
> our example throughout, though nothing about what I'm describing needs
> ACME per se, it's simply a properly documented protocol for automation
> that complies with CA/B rules.

It certainly needs the ability to change private keys (as reusing private 
keys for new certificates is bad practice and shouldn't be automated).

This means that some part of the overall automated system needs the ability 
to generate fresh keys, sign CSRs, and cause servers to switch to those new 
keys.

And because this discussion entails triggering all that at an out-of-schedule 
time, having a "CSR pre-generation ceremony" every 24 months (the normal 
reissue schedule for EV certs) will provide limited ability to handle 
out-of-schedule certificate replacement (because it is also bad practice to 
have private keys with a design lifetime of 24 months lying around for 48 
months prior to planned expiry).
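
For concreteness, the sort of supervised, offline "pre-generation" step 
being discussed might look roughly like the sketch below (Python's 
"cryptography" package; the subject name, file names and passphrase are 
placeholders, not a recommendation of any particular tooling):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate the key pair offline, under supervision.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build the CSR once; the CSR itself is public and can be reused for
    # later (re)issuance requests until the key is retired.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                                    "www.example.com")]))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.example.com")]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("www.example.com.csr.pem", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))
    with open("www.example.com.key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.BestAvailableEncryption(b"offline-passphrase"),
        ))

The point of contention is not whether this can be scripted, but how 
often the key-generation half of it should be repeated, and under what 
controls.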


> 
> The ACME CA expects a CSR, signed with the associated private key, but
> it does not require that this CSR be created fresh during validation +
> issuance. A Subscriber can as they wish generate the CSR manually,
> offline and with full supervision. The CSR is a public document
> (revealing it does not violate any cryptographic assumptions). It is
> entirely reasonable to create one CSR when the key pair is minted and
> replace it only in a scheduled, predictable fashion along with the keys
> unless a grave security problem occurs with your systems.
> 
> ACME involves a different private key, possessed by the subscriber/
> their agent only for interacting securely with ACME, the ACME client
> needs this key when renewing, but it doesn't put the TLS certificate key
> at risk.
> 
> Certificates are public information by definition. No new risk there.
> 

By definition, the strength of public keys, especially TLS RSA signing 
keys used with PFS suites, involves a security tradeoff between the 
time that attackers have to break/factor the public key and the slowness 
of handling TLS connections with current generation standard hardware and 
software.

The current WebPKI/BR tradeoff/compromise is set at 2048 bit keys valid 
for about 24 months.



> 
>> Allowing multiple persons to replace the certificates also lowers
>> operational security, as it (by definition) grants multiple persons
>> read/write access to the certificate data.
> 
> Again, certificates themselves are public information and this does not
> require access to the private keys.

It requires write access to the private keys.  Even if the operators might 
not need to see those keys, many real-world systems don't allow granting 
"install new private key" permission without "see new private key" 
permission and "choose arbitrary private key" permission.

Also, many real-world systems don't allow installing a new certificate 
for an existing key without reinstalling the matching private key, simply 
because that's the interface.

Traditional military encryption systems are built without these 
limitations, but civilian systems are often not.


> 
>> Under the current and past CA model, certificate and private key
>> replacement is a rare (once/2 years) operation that can be done
>> manually and scheduled weeks in advance, except for unexpected
>> failures (such as a CA messing up).
>   
> This approach, which has been used at some of my past employers,
> inevitably results in systems where the certificates expire "by
> mistake". Recriminations and insistence that lessons will be learned
> follow, and then of course nothing is followed up and the problem
> recurs.
> 
> It's a bad idea, a popular one, but still a bad idea.

This is why good CAs send out reminder e-mails in advance.  And why 
one should avoid CAs that use that contact point for infinite spam 
about new services.

Re: Incident report Certum CA: Corrupted certificates

2018-12-03 Thread Jakob Bohm via dev-security-policy
On 03/12/2018 12:06, Wojciech Trapczyński wrote:
> Please find our incident report below.
> 
> This post links to https://bugzilla.mozilla.org/show_bug.cgi?id=1511459.
> 
> ---
> 
> 1. How your CA first became aware of the problem (e.g. via a problem 
> report submitted to your Problem Reporting Mechanism, a discussion in 
> mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), 
> and the time and date.
> 
> 10.11.2018 10:10 UTC + 0 – We received a notification from our internal 
> monitoring system concerning issues with publishing CRLs.
> 
> 2. A timeline of the actions your CA took in response. A timeline is a 
> date-and-time-stamped sequence of all relevant events. This may include 
> events before the incident was reported, such as when a particular 
> requirement became applicable, or a document changed, or a bug was 
> introduced, or an audit was done.
> 
> (All times in UTC±00:00)
> 
> 10.11.2018 10:10 – We received a notification from our internal 
> monitoring system for issuing certificates and CRLs concerning issues 
> with publishing CRLs. We started verification.
> 10.11.2018 12:00 – We established that one of about 50 CRLs had a 
> corrupted digital signature value. We noticed that this CRL was much 
> larger than the others. We verified that in a short period of time over 
> 30,000 certificates had been added to this CRL.
> 10.11.2018 15:30 – We confirmed that the signing module had trouble 
> signing CRLs greater than 1 MB. We started working on it.
> 10.11.2018 18:00 – We disabled the automatic publication of this CRL. We 
> verified that the other CRLs had correct signatures.
> 11.11.2018 07:30 – As part of the post-failure verification procedure, 
> we started an inspection of the whole system, including all certificates 
> issued at that time.
> 11.11.2018 10:00 – We verified that some of the issued certificates had 
> corrupted digital signatures.
> 11.11.2018 10:40 – We established that one of several signing modules 
> working in parallel was producing corrupted signatures. We turned it off.
> 11.11.2018 18:00 – We confirmed that the reason for the corrupted 
> certificate signatures was a large CRL, which prevented further correct 
> operation of that signing module.
> 11.11.2018 19:30 – We left only one working signing module, which 
> prevented further mis-issuance.
> 19.11.2018 11:00 – We deployed to production an additional digital 
> signature verification step in an external module, outside the signing 
> module.
> 19.11.2018 21:00 – We deployed to production a new version of the 
> signing module which correctly handles large CRLs.
> 

Question 1: Was there a period during which this issuing CA had no 
  validly signed non-expired CRL due to this incident?

Question 2: How long were ordinary revocations (via CRL) delayed by 
  this incident?

Question 3: Was Certum's OCSP handling for any issuing or root CA affected 
  by this incident?  (For example, were any OCSP responses incorrectly 
  signed?  Were OCSP servers not responding?  Were OCSP servers returning 
  outdated revocation data until the large-CRL signing was operational on 
  2018-11-19 21:00 UTC?)

> 3. Whether your CA has stopped, or has not yet stopped, issuing 
> certificates with the problem. A statement that you have will be 
> considered a pledge to the community; a statement that you have not 
> requires an explanation.
> 
> 11.11.2018 17:47
> 
> 4. A summary of the problematic certificates. For each problem: number 
> of certs, and the date the first and last certs with that problem were 
> issued.
> 
> 355.
> 
> The first one: 10.11.2018 01:26:10
> The last one: 11.11.2018 17:47:36
> 
> All certificates were revoked.
> 
> 5. The complete certificate data for the problematic certificates. The 
> recommended way to provide this is to ensure each certificate is logged 
> to CT and then list the fingerprints or crt.sh IDs, either in the report 
> or as an attached spreadsheet, with one list per distinct problem.
> 
> Full list of certificates in attachment.
> 
> 6. Explanation about how and why the mistakes were made or bugs 
> introduced, and how they avoided detection until now.
> 
> The main reason for the corrupted operation of the signing module was 
> the lack of proper handling of large CRLs (greater than 1 MB). When the 
> signing module received such a large list for signing, it was not able 
> to sign it correctly. In addition, the signing module then started to 
> incorrectly sign the remaining objects it received for signing, i.e. 
> those received after the large CRL.
> 
> Because we were using several signing modules in parallel when the 
> problem occurred, the problem did not affect all certificates issued 
> during that period. Our analysis shows that the problem affected about 
> 10% of all certificates issued at that time.
> 
> We have been using this signing module for the last few years, and at 
> the time of its implementation the tests did not include creating a 
> signature for such a large CRL.

Incident report Certum CA: Corrupted certificates

2018-12-03 Thread Wojciech Trapczyński via dev-security-policy

Please find our incident report below.

This post links to https://bugzilla.mozilla.org/show_bug.cgi?id=1511459.

---

1. How your CA first became aware of the problem (e.g. via a problem 
report submitted to your Problem Reporting Mechanism, a discussion in 
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), 
and the time and date.


10.11.2018 10:10 UTC + 0 – We received a notification from our internal 
monitoring system concerning issues with publishing CRLs.


2. A timeline of the actions your CA took in response. A timeline is a 
date-and-time-stamped sequence of all relevant events. This may include 
events before the incident was reported, such as when a particular 
requirement became applicable, or a document changed, or a bug was 
introduced, or an audit was done.


(All times in UTC±00:00)

10.11.2018 10:10 – We received a notification from our internal 
monitoring system for issuing certificates and CRLs concerning issues 
with publishing CRLs. We started verification.
10.11.2018 12:00 – We established that one of about 50 CRLs had a 
corrupted digital signature value. We noticed that this CRL was much 
larger than the others. We verified that in a short period of time over 
30,000 certificates had been added to this CRL.
10.11.2018 15:30 – We confirmed that the signing module had trouble 
signing CRLs greater than 1 MB. We started working on it.
10.11.2018 18:00 – We disabled the automatic publication of this CRL. We 
verified that the other CRLs had correct signatures.
11.11.2018 07:30 – As part of the post-failure verification procedure, 
we started an inspection of the whole system, including all certificates 
issued at that time.
11.11.2018 10:00 – We verified that some of the issued certificates had 
corrupted digital signatures.
11.11.2018 10:40 – We established that one of several signing modules 
working in parallel was producing corrupted signatures. We turned it off.
11.11.2018 18:00 – We confirmed that the reason for the corrupted 
certificate signatures was a large CRL, which prevented further correct 
operation of that signing module.
11.11.2018 19:30 – We left only one working signing module, which 
prevented further mis-issuance.
19.11.2018 11:00 – We deployed to production an additional digital 
signature verification step in an external module, outside the signing 
module.
19.11.2018 21:00 – We deployed to production a new version of the 
signing module which correctly handles large CRLs.


3. Whether your CA has stopped, or has not yet stopped, issuing 
certificates with the problem. A statement that you have will be 
considered a pledge to the community; a statement that you have not 
requires an explanation.


11.11.2018 17:47

4. A summary of the problematic certificates. For each problem: number 
of certs, and the date the first and last certs with that problem were 
issued.


355.

The first one: 10.11.2018 01:26:10
The last one: 11.11.2018 17:47:36

All certificates were revoked.

5. The complete certificate data for the problematic certificates. The 
recommended way to provide this is to ensure each certificate is logged 
to CT and then list the fingerprints or crt.sh IDs, either in the report 
or as an attached spreadsheet, with one list per distinct problem.


Full list of certificates in attachment.

6. Explanation about how and why the mistakes were made or bugs 
introduced, and how they avoided detection until now.


The main reason for the corrupted operation of the signing module was 
the lack of proper handling of large CRLs (greater than 1 MB). When the 
signing module received such a large list for signing, it was not able 
to sign it correctly. In addition, the signing module then started to 
incorrectly sign the remaining objects it received for signing, i.e. 
those received after the large CRL.


Because we were using several signing modules in parallel when the 
problem occurred, the problem did not affect all certificates issued 
during that period. Our analysis shows that the problem affected about 
10% of all certificates issued at that time.


We have been using this signing module for the last few years, and at 
the time of its implementation the tests did not include creating a 
signature for such a large CRL. None of our CRLs for SSL certificates 
had exceeded 100 KB until now. The significant increase in the size of 
one of the CRLs was associated with a mass revocation of certificates by 
one of our partners (the revocations were due to business reasons). In a 
short time, almost 30,000 certificates were added to the CRL, which is 
extremely rare.


All of the affected certificates were unusable due to their corrupted 
signatures.
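
To illustrate the kind of additional digital signature verification 
mentioned in the 19.11.2018 11:00 timeline entry, the sketch below shows 
a generic post-signing check using Python's "cryptography" package; the 
file names are placeholders and this is not a description of the actual 
module:

    from cryptography import x509
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import padding

    issued = x509.load_pem_x509_certificate(open("issued.pem", "rb").read())
    ca = x509.load_pem_x509_certificate(open("issuing_ca.pem", "rb").read())

    try:
        # Assumes an RSA CA key with PKCS#1 v1.5 signatures, as is typical
        # for TLS issuing CAs.
        ca.public_key().verify(
            issued.signature,
            issued.tbs_certificate_bytes,
            padding.PKCS1v15(),
            issued.signature_hash_algorithm,
        )
        print("signature OK - safe to publish")
    except InvalidSignature:
        print("corrupted signature - withhold and alert")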

7. List of steps your CA is taking to resolve the situation and ensure 
such issuance will not be repeated in the future, accompanied with a 
timeline of when your CA expects to accomplish these things.


We have deployed a new version of the signing module that correctly 
signs large CRLs. Fr