Joe Orton wrote:
> On Mon, Feb 25, 2008 at 10:54:58PM +0000, Dr Stephen Henson wrote:
>
>> If it could hold (potentially) larger objects or large numbers of small objects then it could help make the CRL code more usable.
>
> I'm not sure exactly what you're referring to there (caching CRL lookup results?), but it depends on what you mean by "large" and "small" in any case. shmcb might need to be tuned differently to be useful for caching small numbers of large objects; Google says memcache will handle objects up to 1MB by default, so quite "large".


Well, the current CRL strategy has a few problems. It ignores critical extensions, but that's a separate issue...

Many CRLs have short lifetimes and need to be updated fairly often, which is a problem if the server has to be restarted for each update.

Some CRLs can be quite large, running to several MB.

In a multi-process server it isn't very efficient to load a large CRL each time it is needed.

The encoded CRL could be shared between all processes, but that still incurs an overhead: the CRL would need to be parsed and verified repeatedly, once in each process.
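
(To make that cost concrete: even with the DER bytes sitting in a shared segment, each process still has to run something like the following, plus a signature verify, before it can use the CRL. A minimal sketch; decode_shared_crl() is a made-up name.)

#include <openssl/x509.h>

/* Hypothetical per-process step when only the encoded CRL is shared:
 * decode the DER out of the shared segment into a private in-memory
 * structure, which d2i_X509_CRL() rebuilds from scratch every time. */
static X509_CRL *decode_shared_crl(const unsigned char *der, long len)
{
    const unsigned char *p = der;   /* d2i_X509_CRL advances the pointer */

    return d2i_X509_CRL(NULL, &p, len);
}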

A better strategy is to load and verify the CRL *once* and then index the revoked certificate entries.

Then subsequent processes just need to check whether a revoked entry exists for the client certificate in question.
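
For illustration, a minimal sketch of that approach against the OpenSSL API (X509_CRL_get0_by_cert() is 1.0.0 and later; load_crl_once() and cert_is_revoked() are made-up names, and error reporting is omitted):

#include <openssl/pem.h>
#include <openssl/x509.h>

/* Load the CRL from disk and verify its signature against the issuer's
 * public key, once, at load time. */
static X509_CRL *load_crl_once(const char *path, X509 *issuer)
{
    BIO *in = BIO_new_file(path, "r");
    X509_CRL *crl;
    EVP_PKEY *pkey;

    if (in == NULL)
        return NULL;
    crl = PEM_read_bio_X509_CRL(in, NULL, NULL, NULL);
    BIO_free(in);
    if (crl == NULL)
        return NULL;

    pkey = X509_get_pubkey(issuer);
    if (pkey == NULL || X509_CRL_verify(crl, pkey) != 1) {
        EVP_PKEY_free(pkey);
        X509_CRL_free(crl);
        return NULL;
    }
    EVP_PKEY_free(pkey);
    return crl;
}

/* Per-connection check against the already-verified CRL: the revoked
 * entries are kept sorted by serial number, so the lookup is just a
 * binary search, with no further parsing or signature verification. */
static int cert_is_revoked(X509_CRL *crl, X509 *cert)
{
    X509_REVOKED *rev = NULL;

    return X509_CRL_get0_by_cert(crl, &rev, cert) == 1;
}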

Loading in a new CRL would, of course, need to lock the whole thing while it was updated.
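
Something like this, say (a sketch assuming a threaded server and a hypothetical current_crl slot; a multi-process cache would instead need the data in shared memory and a cross-process lock, e.g. an APR global mutex):

#include <pthread.h>
#include <openssl/x509.h>

static pthread_rwlock_t crl_lock = PTHREAD_RWLOCK_INITIALIZER;
static X509_CRL *current_crl;   /* hypothetical shared slot */

/* Readers hold the read lock for the duration of each lookup; the
 * updater takes the write lock so the swap appears atomic to them. */
static void replace_crl(X509_CRL *fresh)
{
    X509_CRL *old;

    pthread_rwlock_wrlock(&crl_lock);
    old = current_crl;
    current_crl = fresh;
    pthread_rwlock_unlock(&crl_lock);

    /* No reader can still hold a pointer to the stale copy here. */
    X509_CRL_free(old);
}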

Well, that's one strategy... another would be to use OCSP exclusively, with a local OCSP responder driven by CRLs.
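
The client side of that might start like this (a sketch only: the hypothetical local responder's address, transport, and response checking are all omitted):

#include <openssl/ocsp.h>

/* Build an OCSP request for the client certificate in question.
 * A NULL digest means OCSP_cert_to_id() defaults to SHA-1. */
static OCSP_REQUEST *make_ocsp_request(X509 *cert, X509 *issuer)
{
    OCSP_REQUEST *req = OCSP_REQUEST_new();
    OCSP_CERTID *id;

    if (req == NULL)
        return NULL;
    id = OCSP_cert_to_id(NULL, cert, issuer);
    if (id == NULL || OCSP_request_add0_id(req, id) == NULL) {
        OCSP_CERTID_free(id);
        OCSP_REQUEST_free(req);
        return NULL;
    }
    return req;   /* req now owns id */
}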

Steve.
--
Dr Stephen N. Henson. Senior Technical/Cryptography Advisor,
Open Source Software Institute: www.oss-institute.org
OpenSSL Core team: www.openssl.org
