Lutz Jaenicke wrote:
Thor Lancelot Simon wrote:
On Fri, Aug 01, 2008 at 03:49:01PM +0200, Lutz Jaenicke wrote:
This leads to another problem, actually:

A malicious peer which sends data as fast as it can can get _more_ data
into the socket buffer while the application is trying to "read to
completion".  This can deny service to _other_ peers.

This type of fairness has to be implemented by the application.
This will include modifying the event handling.

Exactly: simply process only so many I/Os per SSL handle at any one time.


I do this for multi-threaded, server-like applications; there are two basic cases:

* You received a unit-of-work (i.e. your application called SSL_read() and got everything it needed to carry out some further processing): you execute that unit-of-work and then always force yourself to look at the next stream of data, even if there is more data waiting on the one you just serviced. Sometimes you will also want to flush any data that was written during the unit-of-work processing before looking for the next unit of work; this prevents request/response write deadlocks (see the flush sketch after this list).

* You called SSL_read() three times but were not able to decode a unit-of-work, i.e. more data is still needed to assemble a valid unit-of-work, so again you force yourself to look at the next stream of data, even if there is more to read on this one. This stops someone starving the others with very small packet sizes and huge units of work (see the read sketch after this list).

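To make that concrete, here is a minimal read-side sketch, assuming a non-blocking SSL* driven by a select/poll-style event loop. conn_t, try_parse_unit() and process_unit() are made-up names for illustration, not OpenSSL API:

/*
 * Minimal read-fairness sketch. Per wakeup, a connection gets at
 * most MAX_READS_PER_WAKEUP calls to SSL_read() and at most one
 * completed unit-of-work; then control returns so the other
 * connections get their turn.
 */
#include <string.h>
#include <openssl/ssl.h>

#define MAX_READS_PER_WAKEUP 3

typedef struct conn_st {
    SSL *ssl;
    unsigned char buf[16384];  /* assembly buffer for an incoming unit */
    size_t used;               /* input bytes accumulated so far */
    unsigned char out[16384];  /* queued response bytes (flush sketch below) */
    size_t out_used;
} conn_t;

int try_parse_unit(conn_t *c);   /* hypothetical: 1 if buf holds a full unit */
void process_unit(conn_t *c);    /* hypothetical: executes one decoded unit */

/* Called when the event loop reports the socket readable.
 * Returns 0 to keep the connection, -1 to drop it. */
int on_readable(conn_t *c)
{
    int i;
    for (i = 0; i < MAX_READS_PER_WAKEUP; i++) {
        int n;
        if (c->used == sizeof(c->buf))
            return -1;                 /* full buffer, no unit: protocol error */
        n = SSL_read(c->ssl, c->buf + c->used,
                     (int)(sizeof(c->buf) - c->used));
        if (n <= 0) {
            int err = SSL_get_error(c->ssl, n);
            if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
                return 0;              /* no more data right now; yield */
            return -1;                 /* peer closed or fatal error */
        }
        c->used += (size_t)n;
        if (try_parse_unit(c)) {
            process_unit(c);           /* one unit-of-work... */
            return 0;                  /* ...then yield, even if more data waits */
        }
    }
    /* Read cap hit without a complete unit: yield anyway, so a peer
     * drip-feeding tiny records cannot starve the other connections. */
    return 0;
}

One caveat: after yielding, whole TLS records may already sit in OpenSSL's internal buffer rather than in the kernel socket buffer, so a level-triggered poll on the fd will not necessarily wake you again; a real loop should re-schedule the connection whenever SSL_pending() reports buffered data.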

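And the flush step from the first case, continuing the same hypothetical conn_t; again just a sketch, assuming the default SSL_write() semantics (on SSL_ERROR_WANT_WRITE you must retry with the same buffer and length, which this does):

/* Hypothetical flush step: run after process_unit() and before moving
 * on to the next connection, so a peer is not left waiting for its
 * response while we service the other streams. */
int flush_pending(conn_t *c)
{
    while (c->out_used > 0) {
        int n = SSL_write(c->ssl, c->out, (int)c->out_used);
        if (n <= 0) {
            int err = SSL_get_error(c->ssl, n);
            if (err == SSL_ERROR_WANT_WRITE || err == SSL_ERROR_WANT_READ)
                return 0;    /* socket full: ask the event loop for
                                writability and retry this exact call later */
            return -1;       /* fatal error */
        }
        memmove(c->out, c->out + n, c->out_used - (size_t)n);
        c->out_used -= (size_t)n;
    }
    return 0;
}
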
YMMV

Darryl