> I can imagine that one might get the same certificate from several
> sources, but I'm pretty sure it could be resolved by applying a
> little bit of automagic intelligence and tossing all duplicates
> except for the copy that has the highest trust attached to it.
I was assuming this would be done by the application. It tries to store
a cert, and if it gets an EEXIST error (or the functional equivalent),
it's up to the application to decide what to do next. Of course, this
opens the whole can-o'-worms of "what constitutes a duplicate cert?"
Is it an exact match, a matching I+SN, or some other criterion?

BTW, this example also brings up the need for a "replace" operation. If
you want to replace an existing CA cert, e.g., because someone tried to
sneak through a time extension with the same SN, you would want an
atomic operation, since an intelligent backend may invalidate any certs
signed by a deleted CA cert.

> Trust, BTW, could rather easily be handled by attaching internal
> attributes to certificates with extra information. Those attributes
> are not part of the certificate itself, of course. Was that
> approximately the way you saw this being done as well?

What will this do to the whole-cert hash value? (I assume that the
whole-cert hash is computed as the SHA-1 hash of the ASN.1 encoding of
the cert... something that I can compute with ASN1_write_bio(), a mem
BIO and a sha1 BIO, or by another library crunching on a DER-encoded
certificate in the underlying database.)

> I would rather see that applications can request
> indexes on a rather flexible set of basic types, and request searches
> according to basically whatever.

The application can always make a request according to any criteria;
the only issue is the time required for the results. If it's on a
hashed key, you get results in O(1). If you have to scan through all
records in the database, your results take O(n). That's why I want to
use I+SN as the primary key - I can anticipate far more searches on
that key than on the whole-cert hash.

If you're talking about fewer than a few hundred certs, the extra
complexity required for secondary keys probably isn't worth the effort.
If you have more than a few thousand certs, you'll probably be looking
at an RDBMS for other reasons anyway. In those cases you would want to
create a secondary index on every searchable quantity. Even if most
sites fall into the middle, secondary indices (esp. of non-unique keys)
are always a pain to get right. It's too easy to overlook a single
error or race condition and end up with a flaky database.

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                        [EMAIL PROTECTED]
Automated List Manager                          [EMAIL PROTECTED]