Hello,
I didn't really get what was changed in this example, and why.
On Tue, Jun 9, 2020 at 9:46 AM Igor Cicimov <[email protected]>
wrote:
> Modify your frontend from the example like this and let us know what
> happens:
>
> frontend proxy
> bind *:80
> stick-table type ip size 100k expire 15s store http_req_rate(10s)
>
The stick table is now defined here.
> http-request track-sc0 src table Abuse
>
But this refers to the other table. Do I have to keep it? Is it better to
have the table here or shared?
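For reference, this is how I understand the shared-table variant would look (just a sketch; the dummy backend named Abuse and its settings are my assumption, based on the earlier example in this thread):

```haproxy
# Dedicated stick table living in its own dummy backend, so that
# several frontends can track against the same shared counters.
backend Abuse
    stick-table type ip size 100k expire 15s store http_req_rate(10s)

frontend proxy
    bind *:80
    # Track the client IP in the shared Abuse table instead of a
    # frontend-local one; sc_http_req_rate(0) then reads from it.
    http-request track-sc0 src table Abuse
    use_backend api_delay if { sc_http_req_rate(0) gt 30 }
    use_backend api
```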
> use_backend api_delay if { sc_http_req_rate(0) gt 30 }
>
This measures whether there have been more than 30 requests in the last 10s,
and it uses the table in this proxy here, not the Abuse one.
> use_backend api
>
> backend api
> server api01 api01:80
> server api02 api02:80
> server api03 api03:80
>
> backend api_delay
> tcp-request inspect-delay 500ms
> tcp-request content accept if WAIT_END
> server api01 api01:80
> server api02 api02:80
> server api03 api03:80
>
> Note that, as per the sliding-window rate limiting from the examples you
> said you read, this limits each source IP to 30 requests over the last
> 10-second window. That gives you 180 requests per 60 seconds.
>
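The arithmetic behind that note can be spelled out (assuming the 10s window from the stick table above):

```python
# Sliding-window rate limit: at most 30 requests per 10-second window.
window_seconds = 10
max_requests_per_window = 30

# Upper bound on what a single IP can push through in one minute.
per_minute = max_requests_per_window * (60 // window_seconds)
print(per_minute)  # 180
```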
In this example, as with what I did before, the behaviour seems the same (or
at least per my understanding): if a user makes more than 30 requests in 10
seconds, then the rest are slowed down by 500ms.
Right?
It does not really imply that there is a maximum number of calls per minute.
In fact, if a user makes 500 calls in parallel from the same IP:
- the first 30 are executed
- the other 470 are executed, but with a "penalty" of 500ms
But that's it. Did I get it correctly?
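In case a hard per-window cap is wanted instead of only a delay, I imagine a deny rule could be sketched like this (my illustration, not from the thread; `deny_status` is an assumption that a reasonably recent HAProxy version is in use):

```haproxy
frontend proxy
    bind *:80
    stick-table type ip size 100k expire 15s store http_req_rate(10s)
    http-request track-sc0 src
    # Reject outright instead of delaying: a client above 30 requests
    # per 10s window gets an immediate 429 until its rate drops.
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 30 }
    use_backend api
```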
--
Stefano Tranquillini, CTO/Co-Founder @ chino.io
Need to talk? book a slot <http://bit.ly/2LdXbZQ>
Please consider the environment before printing this email - keep it
short <http://five.sentenc.es/>