Mark wrote:
There is one added complication in that the protocol is a datagram
protocol at a higher level (although it uses TCP). I am concerned that
the whole protocol could block if there is not enough data to encrypt a
whole outgoing message but the peer cannot continue until it gets the
message.

SSL_write() can be called with any length the API data type allows (I believe it is currently a C 'int'). If you call SSL_write() with 1-byte lengths you will get an encoded SSL protocol record sent over the wire with a single byte of application-data payload. That is not a very efficient use of SSL, since you pay many bytes of SSL overhead for each byte of application data.
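For example, here is a minimal sketch of writing one whole application message per SSL_write() call, so the record overhead is paid per message rather than per byte. The function name, 'msg' and 'msg_len' are illustrative names of mine, not from any existing API:

#include <openssl/ssl.h>

/* Write one whole application message with a single SSL_write() call.
 * With a blocking BIO this returns msg_len on success; with a
 * non-blocking BIO, check SSL_get_error() for WANT_READ/WANT_WRITE. */
int write_message(SSL *ssl, const unsigned char *msg, int msg_len)
{
    int n = SSL_write(ssl, msg, msg_len);
    if (n <= 0)
        return -1;   /* see SSL_get_error(ssl, n) for the reason */
    return n;
}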


The sending side is allowed to merge more application data together when forward flow-control prevents the "first attempt at transmission" of the data it is currently holding from happening immediately AND the user makes another API call to write more data. What is not allowed is for the stack to hold onto the data (possibly forever) in the hope that the user will make an API call to write more data.

I've tried to choose my words carefully in the above paragraph so that they apply equally to TCP and SSL. In the case of SSL, since it runs over a reliable streaming transport, there is no such thing as a "first attempt at transmission"; there is only a single action of committing data into the TCP socket. But it is possible for the TCP socket to not be accepting data just yet (due to flow-control), and it is that conceptual boundary the phrase relates to.

Another difference between TCP and SSL is that TCP has octet-boundary sequencing/acknowledgments, whereas in SSL all data is wrapped up into packetized chunks (records). This gives TCP additional optimizations it can make with regard to retransmission that make it more efficient; those do not apply to SSL.




If you use larger writes with SSL_write(), the data is chunked up into the largest records the protocol allows and those are sent over the wire.

It is presumed that every SSL_write() implies a flush (at the TCP level this mechanism is called a "push"). This basically means the data must be flushed through to the reading API at the far end on exactly the byte boundary (or beyond) of the data you sent. This means you have a guarantee that the receiving side cannot be starved of data that the sending API has already sent/committed. This is true at both the TCP and SSL levels.

If you think about it, the SSL level could not easily make this guarantee if the lower level did not also provide it.



Provided you use non-blocking APIs there is no way things can block (meaning no way for your application to not be in control at all times to make a decision). This means the socket<>SSL boundary uses the non-blocking paradigm, and so does the SSL<>your_datagram_protocol boundary.
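As a minimal sketch of what that looks like on the write side (the function name and return convention are mine, assuming the underlying socket is non-blocking):

#include <openssl/ssl.h>

/* Attempt a non-blocking SSL_write(): returns bytes written, 0 if the
 * call should simply be retried once the socket is readable/writable
 * again, or -1 on a fatal error. The application never loses control. */
int try_ssl_write(SSL *ssl, const void *buf, int len)
{
    int n = SSL_write(ssl, buf, len);
    if (n > 0)
        return n;

    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_WANT_READ:   /* e.g. renegotiation: wait for readability */
    case SSL_ERROR_WANT_WRITE:  /* TCP send buffer full: wait for writability */
        return 0;
    default:
        return -1;              /* fatal: consult the OpenSSL error queue */
    }
}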

The only issue you then need to look at is starvation (imagine the receiving side is in a loop that keeps reading until there is no more data, but, because of the CPU time needed for the data processing inside that loop, the sending peer can keep the receiving side stocked full of data). If you just looped until you had no more data from SSL_read() before servicing the SSL_write() side, then SSL_write() would be starved.

So you might want to loop like this only a limited number of times, or automatically break out of decoding/processing more data in order to service the other half a little, as in the sketch below.
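A rough sketch of that bounded read loop (the bound, the buffer size and process_datagram() are illustrative, not part of any real API):

#include <openssl/ssl.h>

#define MAX_READS_PER_PASS 8    /* arbitrary bound; tune for your workload */

/* Hypothetical upper-level handler for one decoded datagram. */
extern void process_datagram(const unsigned char *buf, int len);

/* Drain at most MAX_READS_PER_PASS records, then return so the caller
 * can service the SSL_write() side instead of being starved by a fast
 * sender. */
void service_reads(SSL *ssl)
{
    unsigned char buf[16384];
    int i;

    for (i = 0; i < MAX_READS_PER_PASS; i++) {
        int n = SSL_read(ssl, buf, sizeof(buf));
        if (n <= 0)
            break;   /* WANT_READ/WANT_WRITE means no more data for now */
        process_datagram(buf, n);
    }
}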



Now there is another issue which isn't really a blocking one; it is more of a "deadlock". This is where, due to your IO pump design and the interaction between the upper levels of your application and the datagram/SSL levels, you end up designing your application so that the same thread is used both to service the IO pump and to run the upper levels of the application (the data processing). This is possible but requires careful design: if for whatever reason the upper levels stall/block waiting for IO, your thread of execution loses control and the IO pump part is starved of the chance to do its work (because it is the same thread).

Everything that happens in the IO pump needs to be non-blocking. If you use the same thread to service the upper levels of your application then you must know for sure they are non-blocking too; otherwise you are best off separating the IO pump and the upper levels into different threads.

Often this is best because it removes the constraint on what you can do in an upper level: it no longer matters what you do there, and you can call/use whatever library you want without regard for blocking behaviour. You can also use a single IO pump thread to manage multiple connections if you want (and performance allows); then you need to think about per-'SSL *' IO starvation, i.e. make sure you service every connection a little as you go round-robin, as in the sketch below.
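A rough sketch of that round-robin idea (struct conn and the two service_*_bounded helpers are hypothetical, standing in for per-connection bounded read/write handling such as the loop above):

#include <openssl/ssl.h>

/* Hypothetical per-connection state for one non-blocking SSL connection. */
struct conn {
    SSL *ssl;
    /* ... outgoing queue, parser state, etc. ... */
};

/* Hypothetical bounded helpers: each does a limited amount of work and
 * returns as soon as the SSL/socket reports WANT_READ/WANT_WRITE. */
extern void service_reads_bounded(struct conn *c);
extern void service_writes_bounded(struct conn *c);

/* One pass of a single IO pump thread over every connection, giving each
 * a little service so no single 'SSL *' can starve the others. */
void pump_once(struct conn *conns, int nconns)
{
    int i;
    for (i = 0; i < nconns; i++) {
        service_reads_bounded(&conns[i]);
        service_writes_bounded(&conns[i]);
    }
}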



Darryl