Martin Bartosch wrote:

As far as I can see we have had a race condition between two certificate
issuance operations.

Setting:
1 User A requested a certificate via a "Basic Request"
2 User B requested a certificate via a "Server Request"
3 both requests were approved
4 Certificate A was issued
5 Issuance of certificate B failed; the CA complains that the public
  key has already been used.

I assume that 4 and 5 were happening simultaneously, causing a race
condition. What happened was that the certificate issuance fetched
the request, determined the (likely) serial number of the next
certificate (by looking at the serial file) and wrote the request
to a file with this number.
If this happens in two distinct instances before the first certificate
is issued, the first request file is overwritten by the second.
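The failure mode can be sketched in a few lines of Perl. This is an illustration, not OpenCA code: the file layout and helper name are made up, but it shows why two issuances that start before either finishes derive the same request file name.

```perl
#!/usr/bin/perl
# Sketch: two "processes" read the serial file before either issuance
# completes, so both derive the SAME request file name.
use strict;
use warnings;
use File::Temp qw(tempdir);

sub guess_next_serial {
    my ($serial_file) = @_;
    open my $fh, '<', $serial_file or die "cannot read $serial_file: $!";
    chomp(my $ser = <$fh>);
    close $fh;
    return $ser;    # nothing increments the file until OpenSSL actually runs
}

my $dir = tempdir(CLEANUP => 1);
open my $out, '>', "$dir/serial" or die $!;
print $out "0A\n";
close $out;

# both requests are processed before the first certificate is issued
my $req_a = guess_next_serial("$dir/serial") . '.req';
my $req_b = guess_next_serial("$dir/serial") . '.req';
print "A writes $req_a, B writes $req_b\n";   # both write 0A.req
```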

Let's have a look at the code (0.9.2 branch):

lib/functions/crypto-utils.lib:

sub crypto_write_tmp_csr
{
    our ($errno, $errval, $tools);
    my $csr = $_[0];

    my $tmpdir = getRequired ('TempDir');
    my $ser    = crypto_get_next_cert_serial ();

    ##// Let's save the request body to a temp file
    if (not $tools->saveFile( FILENAME => "$tmpdir/${ser}.req",
                              DATA     => $csr->getParsed()->{BODY}."\n" )) ...

and

sub crypto_create_cert
{
    our ($errno, $errval, $tools, $cryptoShell);
...
    my $ser         = crypto_get_next_cert_serial ();
...
    if ($days or $notafter or $notbefore) {
        ## Issue the Certificate with individual lifetime
        if ( not $token->issueCert(
                         REQFILE     => "$tmpdir/${ser}.req",
...


If I am not mistaken, it is dangerous to write the request to a file
whose name is based on the assumed serial number of the next certificate;
this extends the time frame in which a race condition with the OpenSSL
cert issuance can occur.
At the very least the request should be written to a unique temp file
here.
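The fix suggested here could look roughly like the following, using File::Temp. The variables stand in for getRequired('TempDir') and $csr->getParsed()->{BODY}; this is a sketch, not the actual crypto_write_tmp_csr replacement.

```perl
#!/usr/bin/perl
# Sketch: let File::Temp pick a unique name instead of "$tmpdir/${ser}.req".
use strict;
use warnings;
use File::Temp qw(tempfile tempdir);

my $tmpdir   = tempdir(CLEANUP => 1);   # stands in for getRequired('TempDir')
my $csr_body = "-----BEGIN CERTIFICATE REQUEST-----";

# tempfile() creates the file with O_EXCL, so it can never clobber a
# request file that a parallel issuance wrote a moment earlier
my ($fh, $req_file) = tempfile('csrXXXXXX',
                               DIR    => $tmpdir,
                               SUFFIX => '.req');
print $fh $csr_body, "\n";
close $fh or die "cannot write $req_file: $!";

# $req_file would then be passed as REQFILE to issueCert() and
# unlinked afterwards
```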

With the existing implementation it is very hard to avoid race
conditions, because the OpenSSL index.txt and serial files are
actually used by OpenCA. So there is little we can do in 0.9.2, I think.

openca_0_9_2 was not designed for parallel certificate issuing. The problem is that OpenSSL's ca tool is not designed for parallel access (see OpenSSL's docs). I tried to fix this in the HEAD some time ago.

I have had a look at the code in CVS HEAD and it seems to me that
the same or a similar problem is still there, but I think we can
improve the situation here:

The HEAD code gets the new serial from the database via sequence generators. This is safe even with parallel access to the CA. The only remaining problem are the OpenSSL files (serial, index.txt and index.txt.attr).
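A database-backed sequence of this kind can be emulated with a one-row table, as in the sketch below. It assumes DBD::SQLite is available, and the table and column names are illustrative, not OpenCA's actual schema; the point is that the database's locking hands out each value exactly once.

```perl
#!/usr/bin/perl
# Sketch: a serial "sequence" emulated with a one-row table.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do(q{CREATE TABLE cert_seq (name TEXT PRIMARY KEY, value INTEGER)});
$dbh->do(q{INSERT INTO cert_seq VALUES ('serial', 0)});

sub next_serial {
    my ($dbh) = @_;
    # increment and read back inside one transaction; the database's
    # write lock guarantees two parallel callers never get the same value
    $dbh->begin_work;
    $dbh->do(q{UPDATE cert_seq SET value = value + 1 WHERE name = 'serial'});
    my ($v) = $dbh->selectrow_array(
        q{SELECT value FROM cert_seq WHERE name = 'serial'});
    $dbh->commit;
    return $v;
}

printf "next serials: %d, %d\n", next_serial($dbh), next_serial($dbh);
```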

Always write temporary request files to true temporary files unrelated
to the assumed serial number (0.9.2 and HEAD)

Why should we do this? The serials are already safe because of the database sequences.

openssl.txt is already cleared before issuance, but this might
still lead to race conditions when issuing two certs with identical
DNs. (HEAD)

The major problem is not the identical DN. The major problem is the parallel write and read access to the files. The write result is undefined but this is not problematic. The real problem is the read access by two processes on index.txt and serial.

serial is read and written by OpenSSL and might also lead to race
conditions.

This is the biggest problem.

How about the following approach:

Before each cert issuance do the following:
1 write request file to a unique temp file
2 create an empty unique temp file (index.txt dummy)
3a set a mutex
3b get the next serial number from the *database*
3c create a unique temp file (serial dummy) and write the expected
   serial number to the file
4 copy the required openssl config file to a temp file and change
  the 'database' entry to the file created in step 2 and the 'serial'
  entry to the file created in step 3c.
5 issue the certificate with the temporary config file
6 clear the mutex
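The six steps above could be sketched as follows. Everything here is an assumption for illustration: db_next_serial() is a stub standing in for the database sequence, the lock file location is made up, and step 5 (the actual `openssl ca` invocation) is left as a comment rather than executed.

```perl
#!/usr/bin/perl
# Sketch of steps 1-6: unique temp files, a flock-based mutex, and a
# private OpenSSL config pointing at dummy 'database'/'serial' files.
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempfile);

my $serial_counter = 0;
sub db_next_serial { return ++$serial_counter }  # stub for the DB sequence

sub prepare_issuance {
    my ($csr_body, $tmpdir, $config) = @_;

    # 1: request body to a unique temp file
    my ($req_fh, $req_file) = tempfile(DIR => $tmpdir, SUFFIX => '.req');
    print $req_fh $csr_body;
    close $req_fh;

    # 2: empty index.txt dummy
    my (undef, $idx_file) = tempfile(DIR => $tmpdir, SUFFIX => '.idx');

    # 3a: mutex -- an advisory lock on a well-known file
    open my $lock, '>', "$tmpdir/ca.lock" or die "lock: $!";
    flock($lock, LOCK_EX) or die "flock: $!";

    # 3b/3c: expected serial from the "database" into a serial dummy
    my ($ser_fh, $ser_file) = tempfile(DIR => $tmpdir, SUFFIX => '.ser');
    printf $ser_fh "%02X\n", db_next_serial();
    close $ser_fh;

    # 4: private copy of the OpenSSL config pointing at the dummies
    open my $in, '<', $config or die "config: $!";
    my $cfg = do { local $/; <$in> };
    close $in;
    $cfg =~ s/^(\s*database\s*=).*/$1 $idx_file/m;
    $cfg =~ s/^(\s*serial\s*=).*/$1 $ser_file/m;
    my ($cfg_fh, $cfg_file) = tempfile(DIR => $tmpdir, SUFFIX => '.cnf');
    print $cfg_fh $cfg;
    close $cfg_fh;

    # 5: system('openssl', 'ca', '-config', $cfg_file, '-in', $req_file, ...)
    # 6: clear the mutex
    flock($lock, LOCK_UN);
    close $lock;

    return ($req_file, $cfg_file);
}
```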

I don't think we can do without the mutex without possibly creating
'holes' in the serial number sequence (if issuance for one certificate
fails but succeeds for a second that is processed simultaneously).
If we are ready to accept holes in the sequence, we can use a (database)
sequence for the serial number instead of the mutex. This is obviously
not an issue for CAs that use "concealed" (i.e. random) serial numbers.

Steps 1 and 3b are already implemented. The major issue is the implementation of the temporary OpenSSL files. I don't think a mutex is a good idea: it complicates things, and the databases do a nice job for us (by the way, SQLite does not allow parallel write access). I think the solution is good, but without the mutex. If we need a 100 percent safe unique subject index, then we must implement a mutex lock.

Anything we can do in this regard for 0.9.2?

How about a lock file? If the lock is present, the process is aborted with a warning. It is not nice, but it is 100 percent safe.
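Such a lock could look like the sketch below. The function names and lock path are illustrative, not 0.9.2 code; the key point is that O_EXCL makes the create atomic, so exactly one process wins and any second parallel issuance aborts with a warning.

```perl
#!/usr/bin/perl
# Sketch of the abort-on-lock idea for 0.9.2.
use strict;
use warnings;
use Fcntl qw(O_CREAT O_EXCL O_WRONLY);

sub acquire_issuance_lock {
    my ($lockfile) = @_;
    # O_EXCL makes creation atomic: the sysopen fails if the file exists
    if (sysopen(my $fh, $lockfile, O_CREAT | O_EXCL | O_WRONLY)) {
        print $fh "$$\n";    # record our PID for debugging
        close $fh;
        return 1;
    }
    warn "lock file $lockfile exists, another issuance is running -- aborting\n";
    return 0;
}

sub release_issuance_lock {
    my ($lockfile) = @_;
    unlink $lockfile;
}
```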

Michael
--
_______________________________________________________________

Michael Bell                    Humboldt-Universitaet zu Berlin

Tel.: +49 (0)30-2093 2482       ZE Computer- und Medienservice
Fax:  +49 (0)30-2093 2704       Unter den Linden 6
[EMAIL PROTECTED]   D-10099 Berlin
_______________________________________________________________
