Hello,

Does anyone know whether it's possible to create a multi-process HTTPS connection pool on Unix?

It is possible to create a multi-process HTTP connection pool by using Unix domain sockets to pass open file descriptors between processes. The problem is that the state of an HTTPS connection is not captured by the file descriptor alone.

I have a few ideas...

One idea:

(1) Get open file descriptor from pool (unix domain sockets)
(2) Call SSL_connect
(3) Use it...
(4) Call SSL_shutdown
(5) Return open file descriptor to pool (unix domain sockets)

HTTPS servers will probably just close the socket after SSL_shutdown. Is there any reason to think they would instead keep the TCP connection open and call SSL_accept again on it? Without that, this approach wouldn't work, would it?

Another idea:

(1) Get open file descriptor from pool (unix domain sockets)
(2) Get associated SSL* and SSL_CTX* from shared memory
(3) The file descriptor number from (1) may be different from the file descriptor number in (2), even though they both refer to the same underlying open file. Somehow change the SSL's BIO to use the new file descriptor number without disrupting any of its internal state.
(4) Use it...
(5) Return SSL* and SSL_CTX* to shared memory
(6) Return open file descriptor to pool (unix domain sockets)

The big concern here is that the SSL and SSL_CTX structs must only reference shared memory. Is there a way to override malloc/free in the OpenSSL library so that only shared memory would be used for the SSL and SSL_CTX structs? (I'm assuming I would write my own variable-length, reclaiming, shared-memory-backed allocator.)

Also, can the file descriptor number in the SSL connection's underlying BIO be changed without side effects?

Any other ideas?

Thanks

Josh

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    [email protected]
Automated List Manager                           [EMAIL PROTECTED]