You definitely would not want to stall the client after you’ve made an origin 
connection. I.e. you only want to stall the UA.

— Leif 

> On Jun 24, 2019, at 14:42, Dk Jack <dnj0...@gmail.com> wrote:
> 
> Also, stalling will not be good if you have a CDN upstream from you that is 
> pipelining requests on the same connection for multiple clients. In our 
> plugin we tried delaying requests, but we couldn't solve the HTTP/1.1 
> pipelining issue. Hence we ended up going with the simple approach of denying. 
> 
> Dk. 
> 
>> On Jun 24, 2019, at 1:12 PM, Leif Hedstrom <zw...@apache.org> wrote:
>> 
>> 
>> 
>>> On Jun 24, 2019, at 13:12, Weixi Li (BLOOMBERG/ PRINCETON) 
>>> <wli...@bloomberg.net> wrote:
>>> 
>>> Hi Dk,
>>> 
>>> Thanks a lot for the example. It is really inspiring. I have a question 
>>> though: in case of "bucket->consume()" returning false, the code would 
>>> return a 429 response. In our use-case, we don't want to deny the client 
>>> with a 429 (but want to queue the request instead). How to handle the false 
>>> case then? Should addPlugin() add the plugin itself? (That's why I was 
>>> asking about queue and async timer). 
>> 
>> Oh, do you want to stall the client, not deny it? Depending on the plugin and 
>> the hook, you can probably just reschedule the HttpSM to come back in, say, 
>> 500ms and try again, rather than letting it continue like you normally would. 
>> You have to make sure they don't stall forever though, and perhaps have a 
>> priority such that older clients are given tokens before new ones.
>> 
>> Should be doable, but definitely more complex.
>> 
>> — Leif 
>>> 
>>> Thanks,
>>> Weixi 
>>> 
>>> From: dev@trafficserver.apache.org At: 06/24/19 01:28:40To:  Weixi Li 
>>> (BLOOMBERG/ PRINCETON ) ,  dev@trafficserver.apache.org
>>> Subject: Re: Implementing Rate-limiting in forward proxy mode
>>> 
>>> Not sure you need queues. Here's a sample implementation based on
>>> TokenBucket which should do what you want...
>>> 
>>> 
>>> void
>>> handleReadRequestHeadersPreRemap(Transaction& transaction)
>>> {
>>>   std::string host;
>>>   Headers& headers = transaction.getClientRequest().getHeaders();
>>>   Headers::iterator ii = headers.find("Host");
>>> 
>>>   if (ii != headers.end()) {
>>>     host = (*ii).values();
>>>   }
>>> 
>>>   if (!host.empty()) {
>>>     // Assumes you created a token bucket per site:
>>>     //   std::map<std::string, std::shared_ptr<TokenBucket>> map;
>>>     //   map.emplace("www.cnn.com",
>>>     //               std::make_shared<TokenBucket>(rate, burst));
>>>     // rate is the number of req.s/min; burst can be the same as rate.
>>>     auto bucket = map.find(host);
>>> 
>>>     if (bucket != map.end() && !bucket->second->consume(1)) {  // request 1 token
>>>       // See CustomResponseTransactionPlugin in the atscppapi examples.
>>>       transaction.addPlugin(new CustomResponseTransactionPlugin(
>>>           transaction, 429, "Too Many Requests", "Too Many Requests"));
>>>       return;
>>>     }
>>>   }
>>> 
>>>   transaction.resume();
>>> }
>>> 
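[Editor's note: the rate/burst semantics assumed by the sample above can be sketched with a deterministic, single-threaded token bucket. This is an illustration only, not the rigtorp/TokenBucket implementation the thread links to, which is lock-free and uses wall-clock time; here time is a parameter so the behavior is easy to verify.]

```cpp
#include <algorithm>

// Deterministic sketch of a token bucket: tokens refill continuously
// at `rate` per second up to a cap of `burst`; consume() succeeds
// only while tokens remain, i.e. while the caller is under its rate.
class SimpleTokenBucket {
public:
  SimpleTokenBucket(double rate, double burst)
      : rate_(rate), burst_(burst), tokens_(burst), last_s_(0.0) {}

  // Try to take `n` tokens at time `now_s` (seconds, monotonic).
  bool consume(double n, double now_s) {
    // Refill for the elapsed time, capped at the burst size.
    tokens_ = std::min(burst_, tokens_ + (now_s - last_s_) * rate_);
    last_s_ = now_s;
    if (tokens_ < n) {
      return false;  // over the rate: deny (or stall) the request
    }
    tokens_ -= n;
    return true;
  }

private:
  double rate_;    // tokens added per second
  double burst_;   // bucket capacity
  double tokens_;  // tokens currently available
  double last_s_;  // time of the last refill
};
```

Note that a rate given in requests/minute, as in the sample, would be divided by 60 before constructing a per-second bucket like this one.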
>>> 
>>> On Sun, Jun 23, 2019 at 8:04 PM Weixi Li (BLOOMBERG/ PRINCETON) <
>>> wli...@bloomberg.net> wrote:
>>> 
>>>> Hi Dk/Eric/Leif,
>>>> 
>>>> Thanks for the advice. Our use-case simply requires sending out the queued
>>>> requests at a fixed rate (although variable by site). My plan was simply
>>>> put all incoming transactions in separate queues by site, and then invoke
>>>> "transaction.resume()" at fixed interval via a timer for each queue. Would
>>>> the delay (calling "transaction.resume()" asynchronously) disrupt the ATS
>>>> event processing? If yes, what's the correct way to avoid that? If no, is there
>>>> any example plugin implementing such a timer (was thinking about AsyncTimer
>>>> in atscppapi)?
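[Editor's note: the per-site queue plan described above can be sketched as below. The class and names are illustrative, not atscppapi API, and the timer itself (e.g. AsyncTimer) is left out so the draining logic stands alone.]

```cpp
#include <map>
#include <queue>
#include <string>

// Sketch: one FIFO queue per site; a periodic timer tick per site
// releases one queued transaction, which fixes the per-site send rate
// (e.g. a tick every 6 seconds yields 10 requests per minute).
class PerSiteQueues {
public:
  void enqueue(const std::string& site, int txn_id) {
    queues_[site].push(txn_id);
  }

  // One timer tick for `site`. Returns the transaction to resume,
  // or -1 when nothing is queued for that site.
  int tick(const std::string& site) {
    auto it = queues_.find(site);
    if (it == queues_.end() || it->second.empty()) {
      return -1;
    }
    int txn_id = it->second.front();
    it->second.pop();
    return txn_id;
  }

  size_t pending(const std::string& site) const {
    auto it = queues_.find(site);
    return it == queues_.end() ? 0 : it->second.size();
  }

private:
  std::map<std::string, std::queue<int>> queues_;
};
```

In a real plugin `txn_id` would be whatever handle is needed to call `transaction.resume()` from the timer callback, and access to the map would need the appropriate locking for ATS's threading model.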
>>>> 
>>>> The TokenBucket is a very interesting data structure, but I've not figured
>>>> out how to apply it yet. Will look into it a bit more.
>>>> 
>>>> Thanks to Eric's point, TS_HTTP_SEND_REQUEST_HDR_HOOK will indeed be too
>>>> late. But for this use-case the "rate-limit" only applies to cache-misses
>>>> and revalidations (fresh cache-hits should be served as fast as possible).
>>>> Therefore the hook probably should be TS_HTTP_CACHE_LOOKUP_COMPLETE_HOOK.
>>>> The plugin then lets the transaction resume immediately if the cache hit is
>>>> fresh, or queues the transaction otherwise.
>>>> 
>>>> Still learning ATS, let me know if I'm wrong. Thanks!
>>>> 
>>>> 
>>>> From: dev@trafficserver.apache.org At: 06/21/19 23:11:03To:  Weixi Li
>>>> (BLOOMBERG/ PRINCETON ) ,  dev@trafficserver.apache.org
>>>> Subject: Re: Implementing Rate-limiting in forward proxy mode
>>>> 
>>>> Uhm! Why async timers? You'd want to implement a leaky/token bucket per
>>>> site. Check out...
>>>> 
>>>> https://github.com/rigtorp/TokenBucket
>>>> 
>>>> It's a single-header-file, lock-free token bucket implementation, and it
>>>> works very well...
>>>> 
>>>> 
>>>> On Fri, Jun 21, 2019 at 7:38 PM Weixi Li (BLOOMBERG/ PRINCETON) <
>>>> wli...@bloomberg.net> wrote:
>>>> 
>>>>> What a great community! So many good tips in such a short time!
>>>>> 
>>>>> Especially atscppapi, I would've never noticed it. The async examples
>>>>> look very promising.
>>>>> 
>>>>> It looks like the following might be necessary (let me know if I'm wrong):
>>>>> * A hook to TS_HTTP_SEND_REQUEST_HDR_HOOK
>>>>> * A map of queues (one queue per rate-limited site)
>>>>> * A map of async timers (one timer per queue)
>>>>> 
>>>>> I will study the ATS code more to understand the event and threading
>>>>> model better.
>>>>> 
>>>>> Thank you all.
>>>>> 
>>>>> From: dev@trafficserver.apache.org At: 06/21/19 19:52:44To:
>>>>> dev@trafficserver.apache.org
>>>>> Subject: Re: Implementing Rate-limiting in forward proxy mode
>>>>> 
>>>>> I have implemented rate-limiting in my plugin using atscppapi. We are using
>>>>> ATS in a security context for mitigation. If the request matches certain
>>>>> criteria (IP, method, host, URI, and header values), then we apply a
>>>>> rate-limit to that IP.
>>>>> 
>>>>> Dk.
>>>>> 
>>>>>> On Jun 21, 2019, at 3:15 PM, Leif Hedstrom <zw...@apache.org> wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Jun 21, 2019, at 16:09, Weixi Li (BLOOMBERG/ PRINCETON)
>>>>> <wli...@bloomberg.net> wrote:
>>>>>>> 
>>>>>>> Hi team,
>>>>>>> 
>>>>>>> We are experimenting with ATS in *forward* proxy mode. Our use-case
>>>>>>> requires a rate-limiting component that enforces rules based on the
>>>>>>> destination.
>>>>>>> 
>>>>>>> For example:
>>>>>>> 
>>>>>>> For all incoming requests targeting "www.cnn.com", we want to limit the
>>>>>>> outgoing rate to 10 requests per minute; for "www.reddit.com", we want
>>>>>>> the rate to be 20 requests per minute; and so on. If there are more
>>>>>>> requests than the limit specified, the requests must be queued before
>>>>>>> they can go out.
>>>>>> 
>>>>>> Seems very straightforward to implement as a plugin. For example, the
>>>>>> geo_acl plugin might be a good start, since it limits access based on
>>>>>> source IP.
>>>>>> 
>>>>>> Would be interesting to hear more about your use case too; it's always
>>>>>> exciting to hear about the different solutions that people use ATS for.
>>>>>> Maybe at the next ATS summit? :-)
>>>>>> 
>>>>>> Cheers,
>>>>>> 
>>>>>> — Leif
>>>>>>> 
>>>>>>> Is it possible to implement this requirement using a plugin?
>>>>>>> 
>>>>>>> If not, we wouldn't mind forking the code and modifying whichever parts
>>>>>>> would be necessary. But which are the potentially relevant components?
>>>>>>> 
>>>>>>> If any experts could give us some pointers on the design, that would be
>>>>>>> really appreciated.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Weixi
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
