To answer my own question again:

The problem was solved when I realized that doing a select() on the socket file descriptor is not enough to know when there is data to read.

VERY IMPORTANT: You must also check SSL_pending(...) to see whether there is data that OpenSSL already has buffered and available for reading.

A note about this in the documentation for SSL_read() would be nice (or at *least* a "see also" entry).


So it wasn't OpenSSL hesitating to send out data, but *my* hesitation to read it.


-- Davy



Davy Durham wrote:

I'm using OpenSSL in some code that very much expects data to be sent when the write operation occurs.

I *think* I'm noticing OpenSSL hesitating to write data sometimes. I haven't ruled out that it's my own doing yet, but when I remove OpenSSL from the layers of code, I don't see the problem.

Nagle's algorithm is disabled.

Could this be happening? (Not that I am, but) if I wanted to write 1-byte messages, should they be sent as I write them? If not, is there a way to force this behavior? Is there a flush operation that I have to call? Is there a way to make it always do that?

I know data has to be written in records, but can a record be 1 byte?


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]
