[EMAIL PROTECTED] wrote:
> Well, my main concern is that if there are things down the line which buffer
> large portions of data before sending them out, it would generate "bursty"
> network traffic, which I want to avoid. Part of the reason I'm doing this is
> that I want smoother control of network utilization, so it doesn't impact
> other services or requests.
>
> I had seen some notes about the content-length filter, for example, setting
> aside the entire response until it has received the end of it, which, if my
> filter were placed before it, would completely defeat my rate limiting.
>
In the specific case of the content-length filter, the problem is now fixed:
as of a change I committed yesterday, it no longer tries to buffer the entire
response. In general, I think we should avoid letting any filter buffer an
unbounded amount of data the way the C-L filter used to.

>> If you are basing the rate limiting on something in the request, I would
>> suggest you write a request hook (somewhere after the request headers have
>> been read; I forget the name for the moment) and have it set a note in the
>> connection record. (Or maybe use the apr_pool_userdata_set(pool) call;
>> it's faster.)
>>

Another possibility would be to create a new metadata bucket type. In a
request-level hook or filter, insert a metadata bucket that describes the
appropriate bandwidth-throttling rules for the buckets that follow. Then you
can use a connection-level filter to do the actual throttling; that filter,
which won't otherwise have access to request-level information, can look at
the metadata buckets to figure out what bandwidth limit to apply.

Brian
