On Mon, Dec 19, 2011 at 11:00 PM, Mark Ellzey <mtho...@strcpy.net> wrote:
> On Mon, Dec 19, 2011 at 09:57:08PM -0500, Nick Mathewson wrote:
>> I agree with the part of your fix where we don't ignore the extra data
>> whose presence is indicated by a nonzero SSL_pending.
>>
>> But I think these fixes all have some problem(s) in common: They
>> ignore the reason for the logic that initially sets n_to_read, and
>> instead have the behavior for the filtering SSL bufferevent be "read
>> aggressively, as much as possible!"
>>
>> That isn't correct behavior. Because of rate-limiting[1],
>> watermarks[2], fairness/starvation[3], and loop-avoidance[4], we
>> *don't* want to read aggressively from a single underlying bufferevent
>> until it's out of data.
>>
>> [1][2] This behavior can make the SSL bufferevent aggressively overrun
>> rate limits and watermarks.
>>
>
> IMHO, SSL + rate-limiting is going to be screwy no matter what. AFAIK
> OpenSSL does not have a good way to inform the user just how much total
> data was read before decryption.
Actually, it does; the BIO_number_written and BIO_number_read functions
return the total number of bytes written/read on a BIO.

> We cannot assume the sizes either.
> (think SSL_MODE_RELEASE_BUFFERS)
>
> Watermarks, on the other hand, I agree. We should give a hard look at
> how we can fix this.

Here's what I think will work: Instead of always setting n_to_read based
on the number of bytes currently in the underlying bufferevent (as in
your fix), we only do so if reading is enabled, reading is not
suspended, and the watermark is not overrun.

Why is that safe? Because each of these conditions causes
consider_reading to get called again once it is no longer triggered. If
reading is enabled again, be_openssl_enable will call consider_reading.
If reading is unsuspended, bufferevent_unsuspend_read will call
be_openssl_enable, etc. And if the buffer is drained below its
watermark, then bufferevent_inbuf_wm_cb calls
bufferevent_wm_unsuspend_read, which then calls
bufferevent_unsuspend_read, etc.

So the only case where we want to immediately try reading more from the
underlying buffer is when there is more data to read, and nothing about
our current status prevents us from trying to read it. What's more, we
already have a function that returns a reasonable amount of data to
read, but returns 0 in the cases above: bytes_to_read().

I've committed a possible patch as 5b4b8126de52c; can folks please test?
I'd like to get a new stable release out soon if possible.

Now, this still has the problems below, but I think we can probably
survive without fixing them in 2.0.x. Doing SSL over a paired
bufferevent makes sense for testing, but is a little nutty. The resource
starvation issue is annoying, but the underlying socket-based
bufferevent should prevent stuff from getting _too_ bad there.

>> [3] If a bunch of data has arrived on the underlying bufferevent, we
>> probably don't want to aggressively do SSL on it until it's gone
>> without giving other events a chance to fire.
>>
>> [4] If the underlying bufferevent is a paired bufferevent, draining it
>> could cause it to refill itself from its partner, invoking callbacks
>> that might refill the partner, etc.

--
Nick
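
P.S. For anyone who wants to experiment with the BIO accounting
mentioned above, here is a minimal sketch of how the wire-level byte
counts could be sampled for rate-limiting purposes. The wire_counts
struct and the helper are made-up names for illustration; only the
OpenSSL calls themselves (SSL_get_rbio(), SSL_get_wbio(),
BIO_number_read(), BIO_number_written()) are real.

#include <openssl/bio.h>
#include <openssl/ssl.h>

/* Wire-level totals recorded the last time we charged the rate limiter. */
struct wire_counts {
    unsigned long last_read;
    unsigned long last_written;
};

/* Report how many encrypted bytes crossed the wire since the last call.
 * These are the numbers a rate limiter cares about, as opposed to the
 * decrypted byte counts that the SSL bufferevent's buffers see. */
static void
wire_bytes_since_last(SSL *ssl, struct wire_counts *wc,
                      unsigned long *n_read, unsigned long *n_written)
{
    unsigned long r = BIO_number_read(SSL_get_rbio(ssl));
    unsigned long w = BIO_number_written(SSL_get_wbio(ssl));

    *n_read = r - wc->last_read;
    *n_written = w - wc->last_written;
    wc->last_read = r;
    wc->last_written = w;
}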
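
P.P.S. And for anyone who wants the shape of the consider_reading()
change without reading the commit, the idea is roughly the sketch below.
It is not the literal patch: bytes_to_read() is the real helper
mentioned above, while pull_through_ssl() is just a stand-in for
whatever code actually feeds that many bytes of ciphertext through
SSL_read() and into the input buffer.

/* Rough shape of the loop: keep pulling from the underlying bufferevent
 * only for as long as bytes_to_read() says we may.  bytes_to_read()
 * returns 0 whenever reading is disabled, reading is suspended, or the
 * input buffer is over its watermark, so none of those states can keep
 * us reading aggressively. */
static void
consider_reading(struct bufferevent_openssl *bev_ssl)
{
    for (;;) {
        int n_to_read = bytes_to_read(bev_ssl);
        if (n_to_read == 0)
            break;  /* disabled, suspended, or watermark overrun */
        if (pull_through_ssl(bev_ssl, n_to_read) <= 0)
            break;  /* no more complete records in the underlying buffer */
    }
}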