[EMAIL PROTECTED] wrote:
> On Thu, Aug 29, 2002 at 08:06:45AM -0700, Ian Holsman wrote:
> 
>>you're trying to limit traffic based on what kind of request/request path 
>>it has?
> 
> 
> Yes, actually based on vhost, URI, directory, file type, size, user, time of
> day, etc, etc.. pretty much anything you can think of.  It also supports
> multiple overlapping bandwidth restrictions and will restrict traffic based on
> what other requests are currently being served to ensure the cumulative rate
> for any given bandwidth limit is never exceeded.
hmm
you might run into trouble on filetype/size (anything for which you need 
the response), as there is no hook >after< the handler.
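to be concrete about where that info turns up: by the time an output 
filter runs, the handler has already filled in things like 
r->content_type and r->clength, so any file-type/size decision pretty 
much has to happen in the filter itself. a rough, untested sketch (the 
filter name is made up):

  #include "httpd.h"
  #include "util_filter.h"

  static apr_status_t ratelimit_out_filter(ap_filter_t *f,
                                           apr_bucket_brigade *bb)
  {
      request_rec *r = f->r;

      /* only known here, after the handler has run -- a pre-handler
       * hook can't see these */
      const char *type = r->content_type;   /* e.g. "image/png"     */
      apr_off_t len = r->clength;           /* may still be unset   */

      /* ... use type/len to decide which (if any) limit applies ... */

      return ap_pass_brigade(f->next, bb);
  }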
> 
> I've got the core code working in a test harness; now I just need to put it
> into an Apache module..
> 
> 
>>>I could implement this as an AP_FTYPE_CONTENT_SET filter, which would make the
>>>most sense from a configuration and decision-making standpoint (since I have
>>>access to request information), but one of the questions I have about this is
>>>whether other buffering and such later in the filter chain (such as with
>>>transcode/connection/etc filters) would render any attempts at rate control at
>>>this level moot, or at least seriously degraded.
>>
>>yes, but from what I can see, if you are trying to slow down the request 
>>with your filter, this should not be a major drama.
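registering at that ftype is the easy part -- roughly like this 
(untested; the "RATE_LIMIT" name and the functions are placeholders):

  #include "httpd.h"
  #include "http_config.h"
  #include "http_protocol.h"
  #include "util_filter.h"

  /* the filter callback itself (sketched above) */
  static apr_status_t ratelimit_out_filter(ap_filter_t *f,
                                           apr_bucket_brigade *bb);

  static void ratelimit_insert_filter(request_rec *r)
  {
      ap_add_output_filter("RATE_LIMIT", NULL, r, r->connection);
  }

  static void ratelimit_register_hooks(apr_pool_t *p)
  {
      ap_register_output_filter("RATE_LIMIT", ratelimit_out_filter,
                                NULL, AP_FTYPE_CONTENT_SET);
      ap_hook_insert_filter(ratelimit_insert_filter, NULL, NULL,
                            APR_HOOK_MIDDLE);
  }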
> 
> 
> Well, my main concern is that if there are things down the line which buffer
> large portions of data before sending them out, it would generate "bursty"
> network traffic, which I want to avoid.  Part of the reason I'm doing this is
> that I want smoother control of network utilization so it doesn't impact
> other services or requests..
> 
> I had seen some notes about the content-length filter, for example, setting
> aside the entire response until it reaches the end of it, which, if my filter
> were placed before it, would completely defeat my rate limiting..
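FWIW, if downstream buffering does turn out to be a problem, the usual 
trick is to tack a FLUSH bucket onto each chunk you release; the 
content-length filter and the core output filter shouldn't sit on data 
past a flush. untested sketch:

  #include "httpd.h"
  #include "util_filter.h"
  #include "apr_buckets.h"

  /* release one brigade-worth of data and tell everything downstream
   * not to hold on to it */
  static apr_status_t pass_chunk(ap_filter_t *f, apr_bucket_brigade *bb)
  {
      apr_bucket *flush = apr_bucket_flush_create(f->c->bucket_alloc);
      APR_BRIGADE_INSERT_TAIL(bb, flush);
      return ap_pass_brigade(f->next, bb);
  }

(the trade-off being that the C-L filter can no longer total up the 
response, so you'll usually end up with a chunked reply.)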
> 
> 
>>if you are basing the rate limiting on something in the request I would 
>>suggest you write a request hook (somewhere after the request headers 
>>have been read.. I forget the name for the moment) and make it set a note
>>in the connection record (or maybe use the apr_pool_userdata_set(pool) 
>>call; it's faster)
> 
> 
> Ah, thank you..  Yeah, I realize now I should have been thinking in terms of a
> request hook rather than a filter for the decision-making process, but aside
> from that little detail this is basically what I was envisioning.  I wasn't
> aware of apr_pool_userdata_set or connection record notes, I'll go look into
> that.  It sounds like it should do very much what I'm looking for.
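the hook I couldn't remember is most likely post_read_request, which 
runs right after the request headers have been parsed. a rough, 
untested sketch of the note/userdata idea (the "rl-limit" key and the 
lookup are made up):

  #include "httpd.h"
  #include "http_config.h"
  #include "http_protocol.h"

  static int ratelimit_post_read(request_rec *r)
  {
      /* decide, from vhost/URI/user/etc., which limit applies */
      const char *limit = "64k";            /* pretend lookup result */

      /* stash it where the output filter can see it: either the
       * connection notes table ... */
      apr_table_setn(r->connection->notes, "rl-limit", limit);

      /* ... or pool userdata, which skips the table lookup */
      apr_pool_userdata_set(limit, "rl-limit", apr_pool_cleanup_null,
                            r->connection->pool);

      return DECLINED;
  }

  /* hooked in alongside the filter registration with:
   *   ap_hook_post_read_request(ratelimit_post_read, NULL, NULL,
   *                             APR_HOOK_MIDDLE);
   */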
> 
> One last question:  Because it keeps track of what other requests are currently
> being served, my implementation needs to know when serving a request has been
> completed, as well.  Obviously, this could pose some problems with coordination
> between when request-processing is considered finished and when the data
> actually goes out over the net.  What I would really like to do is consider a
> request "finished" once the last of its data goes out.  Is there an appropriate
> hook or something for doing stuff when this happens, or should I just watch
> for an EOS or something to go through my limiting filter and do the processing
> there?
> 
I'm not sure if an EOS gets as far as CORE-OUT; if it does, that is what 
you'll need to check for.
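if you do end up checking in your own filter, it's just a walk over 
the brigade -- something like this (untested; same hypothetical filter 
as in the earlier sketch):

  #include "httpd.h"
  #include "util_filter.h"
  #include "apr_buckets.h"

  static apr_status_t ratelimit_out_filter(ap_filter_t *f,
                                           apr_bucket_brigade *bb)
  {
      apr_bucket *b;

      for (b = APR_BRIGADE_FIRST(bb);
           b != APR_BRIGADE_SENTINEL(bb);
           b = APR_BUCKET_NEXT(b))
      {
          if (APR_BUCKET_IS_EOS(b)) {
              /* the last of this request's data is about to go out --
               * release its slot in the shared accounting here */
          }
      }

      return ap_pass_brigade(f->next, bb);
  }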
> I'm still getting the hang of a lot of this architecture, but I'd like to do
> things The Correct Way(tm) if possible :)
> 
> 
>>The only potential downside I can see with implementing it this way is that 
>>if you have 2 small requests which get sent out together, you will get the 
>>rate limit of the 2nd one.
> 
> 
> As long as they're small, I don't think anybody will care that much, so I can
> live with that.
> 
> Thanks a lot for the help,
> 
> -alex
> 

