Jakob,

Online CA signing keys for something like OCSP signing are a bad idea, and don't worry, we won't do that. What we are looking at doing is using pre-produced OCSP responses for certificates issued by our issuing CAs; in this model, all cache misses and root responses would still be VA-delegated.
This protects the CA key material yet keeps the large majority of the responses much smaller, since the delegated responder certificate (and its key) can be left out of each response. For this to be a viable model, as you point out, the frequency of response generation is one factor, as is the validity period of the issuing CA. CAs are required to produce responses every 7 days; we comply with that, but as part of our new infrastructure investment we will be bringing that time down quite a bit, the largest issue here being time skew on the broader internet. This introduces practical limits that mean you can't be "too fresh" on your revocation times. It also means producing fresher responses 100s of times a day isn't of much value; you can of course update the cached response set as revocations occur, but fresher / shorter-lived responses end up breaking things for a reasonably large % of users.

I believe this approach addresses most of the concerns you mention below, with a few exceptions. You state that pre-produced OCSP responses can't (practically) span multiple certids; while this is true, we receive exactly zero such requests, and testing shows that major clients don't even handle the case correctly when such responses are returned (GeoTrust does this): they simply store the larger response many times, even if they already had a valid signed response covering that certid.

Ryan

-----Original Message-----
From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] On Behalf Of Jakob Bohm
Sent: Friday, June 14, 2013 3:10 PM
To: openssl-users@openssl.org
Subject: Re: Why CA-signed OCSP responders are a bad idea [WAS: Is it me or is ocsp.comodoca.com doing something wrong?]

On 6/13/2013 1:50 AM, Ryan Hurst wrote:
> They are doing a CA signed OCSP response, this is legitimate.
>
> We will do this in the not so distant future as well for many of our
> responses also.

Please don't!

As a knowledgeable GlobalSign customer I would prefer that you keep your root private keys as secure as possible.
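As a back-of-the-envelope illustration of the clock-skew limit on response freshness discussed above, here is a small Python sketch. All figures (the 10-minute worst-case client skew and the example validity windows) are assumptions for illustration, not numbers from this thread.

```python
# Sketch of why OCSP responses cannot be "too fresh": clients with
# skewed clocks reject responses whose thisUpdate appears to lie in
# the future, or whose nextUpdate appears already past.
from datetime import datetime, timedelta, timezone

CLIENT_SKEW = timedelta(minutes=10)  # assumed worst-case client clock error

def accepted_by_skewed_clients(this_update, next_update, now):
    """True if clients whose clocks are off by up to CLIENT_SKEW in
    either direction will all accept the response.

    A slow clock (now - skew) must not see thisUpdate in the future;
    a fast clock (now + skew) must not see nextUpdate in the past."""
    return this_update <= now - CLIENT_SKEW and next_update >= now + CLIENT_SKEW

now = datetime(2013, 6, 14, 12, 0, tzinfo=timezone.utc)

# A "very fresh" 5-minute window fails: it cannot be backdated enough.
fresh_ok = accepted_by_skewed_clients(now - timedelta(minutes=2),
                                      now + timedelta(minutes=3), now)
# A 7-day window with thisUpdate backdated 30 minutes works for everyone.
weekly_ok = accepted_by_skewed_clients(now - timedelta(minutes=30),
                                       now + timedelta(days=7), now)
# fresh_ok is False, weekly_ok is True
```

The point of the sketch: the shorter the validity window, the more of it is eaten by the skew margin on both ends, until nothing usable remains.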
Using CA-signed OCSP responses implies some serious and almost impossible to mitigate security risks:

-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

For a "CRL-style" CA-signed OCSP responder:

- I define this as an OCSP responder that returns ONLY pregenerated responses, which it receives as regular batches of responses, covering all issued certificates, from the offline CA.

1. The short validity times expected by OCSP clients require that new response batches be created 100 times per day or more, while the need to remain compatible with older (RFC2560) clients requires each response in the batch to cover only one certificate. Together, these two factors provide attackers with an easily harvested collection of millions of signatures over algorithmically predictable data, made with the root key. This is exactly the kind of data needed for most otherwise infeasible/theoretical attacks on a public/private key pair.

2. The only way to make the signed data in a pregenerated OCSP response unpredictable is to include a random salt in a non-standard extension in each response. To avoid leaking the state of the internal RNG of the CA's HSM, such salt must be generated by a trusted RNG unrelated to the one in the primary HSM.

3. Due to shortcomings of the OCSP protocol documents, response batches can only be pregenerated ahead of time by including false time information in them (lack of clarity means that some clients might reject producedAt values before thisUpdate). At best one could use the CRL reference extension to indicate the true generation time, but only if a real CRL is also generated as part of the batch. This makes it hard to take the root-key-holding HSM temporarily offline for security issues or any other good reason. This issue persists in RFC6960.

4.
Due to shortcomings and lack of foresight in the OCSP protocol documents, the generation and return of responses using the SHA-1 hash algorithm will probably remain necessary at least until June 2021 (the 10-year anniversary of RFC6277), and responses using the SHA-256 hash algorithm until an unknown date after 2023 (since no RFC has yet been issued specifying any other must-accept algorithms). Furthermore, there are no must-accept algorithms using any contemporary version of DSA, nor any algorithms outside the NSA-designed, MD4-derived old SHA family. There is not even a requirement that clients must accept the algorithm used to sign the certificate being checked. This issue persists in RFC6960.

5. There is no feasible way to pregenerate negative responses for never-issued certificates. Most notably, such negative responses cannot cover a range of unissued serial numbers, and standard practice uses serial numbers so long that it is virtually impossible to even store one bit of response data for each unused serial number, let alone transmit such responses. The closest potential solution would be to use a streaming algorithm to generate, but not store, a sequence of SingleResponse structures for all serial numbers from X to Y, compute and store the surrounding parts of a BasicOCSPResponse holding that sequence, then recreate it on the fly when sending this response to a requestor; however, the transmission bandwidth of such a monster response would still be prohibitive. This issue is present in RFC6960.

6. A "CRL-style" CA-signed OCSP responder cannot respond to requests covering more than one certificate, since doing so requires the generation and storage of responses for almost infinitely many sets of certificates.
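Quick arithmetic behind point 6: the number of certificate *sets* a pregenerating responder would have to cover explodes combinatorially. The 5,000,000-certificate population below is an assumed example figure, not one from this thread.

```python
# Even covering only the two-certificate requests is already hopeless:
# the count of pairs grows quadratically with the issued population,
# and triples, quadruples, ... grow far faster still.
from math import comb

issued = 5_000_000                      # assumed number of live certificates
two_cert_responses = comb(issued, 2)    # pre-signed responses covering exactly two certids
# two_cert_responses == 12_499_997_500_000 (~1.25e13) responses for the
# pairs alone, each of which would need signing on every batch refresh.
```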
If the client implements RFC6960, it is possible to simply send back a pre-signed response which is essentially the entire CRL in a different format; however, the protocol does not provide this information in the Request, so this cannot be done until June 2023 (the 10-year anniversary of RFC6960).

-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

For a truly online CA-signed OCSP responder, things are even worse:

- I define this as an OCSP responder which generates responses on the fly and signs them with the CA root key.

1. It requires at least one system (HSM or otherwise) with access to the CA private key to be online in a manner which completely prevents human vetting of requests (because the root key must be used at very short notice, 24x7). This provides an avenue for seriously motivated attackers to work their way through all the security layers one by one until they reach the private key. The GlobalSign root keys are such valuable targets to both criminal and rogue government groups that a STYX-style operation to capture them must always be expected.

2. It requires the HSM storing the root key to never be offline, at all. This makes it impossible to take the root-key-holding HSM temporarily offline for security issues or any other good reason.

3. Due to shortcomings and lack of foresight in the OCSP protocol documents, the generation and return of responses using the SHA-1 hash algorithm will probably remain necessary at least until June 2021 (the 10-year anniversary of RFC6277), and responses using the SHA-256 hash algorithm until an unknown date after 2023 (since no RFC has yet been issued specifying any other must-accept algorithms). Furthermore, there are no must-accept algorithms using any contemporary version of DSA, nor any algorithms outside the NSA-designed, MD4-derived old SHA family.
There is not even a requirement that clients must accept the algorithm used to sign the certificate being checked. This issue persists in RFC6960.

4. It allows, by design, anonymous remote users to request a huge number of sample signatures made with the root private key, which is exactly the unobtainable ingredient of many known theoretical attacks, and this is likely to be the case for many yet-unknown future attacks as well. The standard countermeasures against such attacks are likely to be ineffective in the case of a high-profile OCSP responder:

4.1 Limiting the request rate to a safe value is not possible: an OCSP responder for a large CA must, by definition, process millions of requests per week, each with subsecond response time.

4.2 Watching for suspect request patterns and taking the system offline until the attack fades away will not work either: taking an OCSP responder offline for even a few minutes will have devastating consequences that preclude taking such measures on a mere suspicion (and when it is no longer just a suspicion, it is probably too late).

4.3 Blocking IPs that issue suspiciously many requests will be ineffective (because anyone going after such a high-value target is likely to use botnets or other request-IP scattering techniques) and counterproductive (because it will most likely detect and falsely block major proxies, online virus scanning services, etc.).

4.4 Salting the responses with random or other attacker-uncontrolled data requires the ability to know which aspects of such salting will prevent which attacks, and high-profile CAs need to be prepared for unknown attacks on their private key. Random salts also provide attackers with excellent insight into the RNG employed, so any security weakness in the RNG design (even if it is backed by a true quantum-phenomenon hardware RNG) can be more readily exploited by attackers.
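Back-of-the-envelope numbers for point 4.1 above. The 10-million-requests-per-week load is an assumed example figure, not a measurement from this thread.

```python
# Why a per-key rate limit cannot protect the root key: the legitimate
# load alone forces the key to sign continuously, around the clock.
requests_per_week = 10_000_000          # assumed example load for a large CA
seconds_per_week = 7 * 24 * 3600        # 604800
avg_rps = requests_per_week / seconds_per_week
signatures_per_year = requests_per_week * 52
# avg_rps is roughly 16.5 signing operations per second on average
# (peaks far higher), and over half a billion root-key signatures per
# year handed out to anonymous requestors.
```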
4.5 Rejecting requests for the status of never-issued certificates at a front end without (indirect) root key access may reduce the attacker's flexibility in choosing the data to be signed, but there are still plenty of issued certificates that can be harvested from public certificate uses, then collated and sorted on their bit patterns relative to the needs of the attempted attack.

4.6 Caching responses, so each certificate only gets a freshly signed OCSP response once every few minutes, severely limits the ability of OCSP clients to verify that the response is current, and cannot cope with some of the protocol requirements (request nonces, multiple certificates in one request, negative responses for never-issued certificates), while still providing lots of data (100,000 unique signatures per issued cert per year of validity if you cache each response for 5 minutes). And note that the attacker will have the full root cert lifetime to gather up sample responses as input to his cryptanalysis.

-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Contrast this with the security properties of a delegated OCSP responder:

1. Delegated OCSP signing certs can be routinely revoked and replaced as often as needed (you will need to set up a second OCSP responder URL, with each URL the sole source of OCSP checking for certificates issued to responders at the other URL).

2. Spare and backup responders can be prepared ahead of time, complete with their own private keys and certificates.

3. Because their hardware lifetime is not tied to the (old) HSM holding a 30+ year key, delegated OCSP responders can deploy new security measures without having to work around hardware limitations of the existing HSMs.
In fact, even if no HSM implementations of a new security measure are available, an ordinary server with an ultra-short-lifespan OCSP-responder certificate can do the job with little overall risk until a more robust implementation is available.

4. Because there is no requirement that all responses be signed with the same key, delegated OCSP responders can be set up as redundant systems with multiple servers and sites handling the same OCSP URL, allowing maintenance to be done without taking all the clients offline.

5. In case of non-compromised root key loss (imagine if the root key facility had been in Dresden last week, or in New York during Sandy or the 2001 attack), a delegated OCSP responder with a pre-issued non-expiring certificate can continue to serve customers while a new root key is brought online, deployed into browser trust stores and finally used to issue new replacement certificates. For good measure, supplement this with a set of similar delegated CRL signing certs, although I suspect the latter would not be usable with many existing clients.

6. In case of root key compromise, a handful of almost-never-expiring OCSP certificates can be pregenerated and stored at secure off-site locations. In case of disaster, these can be installed on OCSP servers that return "key compromise, certificate revoked" for all requests that mention the lost CA. These (along with pre-signed, non-expiring revoke-all CRLs) can be used even if access to the compromised root private key was lost in the disaster, e.g. due to a too-late triggering of a self-destruct mechanism. Remember not to use all the pre-issued OCSP certificates at once; hold some of them back in case the online ones are compromised.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org