Hello all.  I'm very confused by the following problem.

I have some server code that used blocking sockets and OpenSSL, and all 
worked well.  Then I converted the server to use non-blocking sockets 
and reimplemented the OpenSSL layer on top of them using memory BIOs. 
It relies on the transparent handshaking that OpenSSL can do, so in the 
non-blocking case I never call SSL_connect() or SSL_accept(); I just set 
things up by calling either SSL_set_connect_state() or 
SSL_set_accept_state().
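
For context, my server-side setup looks roughly like this (a simplified 
sketch, not my exact code; the file name is a placeholder and error 
checking is omitted):

    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <stdio.h>

    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
    SSL_CTX_use_certificate_file(ctx, "server.pem", SSL_FILETYPE_PEM);
    SSL_CTX_use_PrivateKey_file(ctx, "server.pem", SSL_FILETYPE_PEM);

    SSL *ssl  = SSL_new(ctx);
    BIO *rbio = BIO_new(BIO_s_mem());  /* network -> OpenSSL */
    BIO *wbio = BIO_new(BIO_s_mem());  /* OpenSSL -> network */
    SSL_set_bio(ssl, rbio, wbio);      /* ssl takes ownership of both */
    SSL_set_accept_state(ssl);         /* server role; no SSL_accept() */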

The non-blocking client side seems to work fine, but the server side 
never gets beyond the initial handshake from a client.  On each 
connection I shuttle bytes between the socket and the memory BIOs, 
roughly as in the sketch below.
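
This is simplified (fd, netbuf, and appbuf are illustrative; my real 
code handles partial writes and checks errors):

    /* Feed bytes read from the non-blocking socket into the read BIO
     * so OpenSSL can drive the handshake transparently. */
    int n = recv(fd, netbuf, sizeof netbuf, 0);
    if (n > 0)
        BIO_write(rbio, netbuf, n);

    /* Attempt an application read; mid-handshake this normally fails
     * with SSL_ERROR_WANT_READ, which I treat as "try again later". */
    int r = SSL_read(ssl, appbuf, sizeof appbuf);

    /* Drain whatever OpenSSL queued in the write BIO (handshake
     * records, alerts) and push it out on the socket. */
    while (BIO_ctrl_pending(wbio) > 0) {
        int m = BIO_read(wbio, netbuf, sizeof netbuf);
        if (m > 0)
            send(fd, netbuf, m, 0);
    }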

On the server, SSL_read() always fails with SSL_R_NO_SHARED_CIPHER on 
the error queue.
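
(I'm pulling the reason off the error queue roughly like this, using 
the ssl object from the setup above:)

    int r = SSL_read(ssl, appbuf, sizeof appbuf);
    if (r <= 0 && SSL_get_error(ssl, r) == SSL_ERROR_SSL) {
        /* fatal error; the reason string contains "no shared cipher" */
        fprintf(stderr, "SSL_read: %s\n",
                ERR_error_string(ERR_get_error(), NULL));
    }

To dig further, I turned on CIPHER_DEBUG in the OpenSSL source, and 
here is what it printed in each case: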

Blocking socket code (works):

...
rt=0 rte=0 dht=0 ecdht=0 re=1 ree=1 rs=0 ds=0 dhr=0 dhd=0
1:[00000001:00000001:00000101:00000085]0x1003a9698:AES256-SHA
...

(This is where the server successfully picks a cipher.)

Non-blocking socket code (fails):

....
rt=0 rte=0 dht=0 ecdht=0 re=0 ree=0 rs=0 ds=0 dhr=0 dhd=0
0:[00000001:00000001:00000100:00000084]0x1003cacb8:AES256-SHA
....


I built the self-signed certificate in exactly the same way in both 
cases.  The most obvious difference I can spot in the debug output is 
that re and ree flip from 1 to 0 in the non-blocking case, but I don't 
know what that means or why the handshake fails.

I'm no OpenSSL expert, so if anyone has any idea what I'm doing wrong in the 
non-blocking case, I'd be very happy to hear about it.

Thanks,

Scott