Returning to the topic: I'm trying a "smarter" solution, implementing a
leaky bucket with a sliding window, the way nginx does it.
What I have to do is store, per user, the number of requests in the current
minute and in the previous minute. I've done it in a Lua script with a
matrix, but I'm fairly sure it's not the best solution.
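Roughly, the sketch I have in mind looks like this (plain Lua with a table
keyed by client IP; the names are mine, and nothing here touches the HAProxy
API yet):

local counters = {}   -- counters[ip] = { minute = m, curr = n, prev = n }
local WINDOW = 60     -- bucket length in seconds

local function sliding_rate(ip, now)
    local minute = math.floor(now / WINDOW)
    local c = counters[ip]
    if c == nil or c.minute < minute - 1 then
        -- first request, or the last bucket is too old to matter
        c = { minute = minute, curr = 0, prev = 0 }
        counters[ip] = c
    elseif c.minute == minute - 1 then
        -- a new minute started: "current" becomes "previous"
        c.prev, c.curr, c.minute = c.curr, 0, minute
    end
    c.curr = c.curr + 1
    -- nginx-style estimate: weight the previous bucket by how much of it
    -- still overlaps the sliding window
    local elapsed = (now % WINDOW) / WINDOW
    return c.curr + c.prev * (1 - elapsed)
end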
I have a couple of questions that I can't get my head around.

- Is it possible, from a Lua script, to access/modify the stick table? If
yes, how can I do it?
- Can I pass a value to the action by reference? What's the way to do it?
Right now the only way I've found to get information from HAProxy into Lua
is http-request set-var and then txn:get_var('txn..').
- If a Lua script has a global matrix (matrix = { {} }), is it shared with
all the other instances/processes of HAProxy?
- How do Lua/HAProxy cope with sleeping threads?

Thanks



On Thu, Jun 11, 2020 at 8:21 AM Igor Cicimov <[email protected]>
wrote:

> Glad you found a solution that works for you. I personally don't see any
> issues with this, since Lua is lightweight and HAProxy is famous for
> efficient resource management. So all should be good under "normal" usage,
> and by normal I mean the traffic and usage pattern you expect from app
> users who non-maliciously overstep the limits you give them. I cannot say
> what will happen in case of a real DDoS attack and how much this buffering
> could hurt you :-/ , so you might want to wait for a reply from one of the
> more knowledgeable users or the devs.
>
> On Tue, Jun 9, 2020 at 10:38 PM Stefano Tranquillini <[email protected]>
> wrote:
>
>> I may have found a solution that's a bit more elegant (to me).
>>
>> The idea is to use a Lua script that applies a weighted sleep depending
>> on the data. The question is: is this idea good or bad? In particular,
>> will core.msleep have performance implications for everybody?
>> If someone uses up all the available connections, it will block all the
>> other users, right?
>>
>> That said, I should cap/limit the number of simultaneous connections per
>> user, but that's another story. (I guess I can create an ACL with an OR
>> condition: 30 requests in 10 seconds OR 30 open connections; see the
>> sketch below.)
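>>
>> Something like this, perhaps (untested; conn_cur is a standard stick-table
>> data type and sc_conn_cur(0) a standard fetch, but the thresholds and the
>> 429 status are my choice):
>>
>> stick-table type ip size 100k expire 10s store http_req_rate(10s),conn_cur
>> http-request track-sc0 src
>> http-request deny deny_status 429 if { sc_http_req_rate(0) gt 30 } or { sc_conn_cur(0) gt 30 }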
>> Going back to the beginning:
>>
>> My Lua file:
>>
>> -- Sleep 50 ms per tracked request in the window, so the delay grows the
>> -- harder a client pushes past the limit.
>> function delay_request(txn)
>>     local rate = tonumber(txn:get_var('txn.sc_http_req_rate'))
>>     if rate ~= nil then
>>         core.msleep(50 * rate)
>>     end
>> end
>>
>> core.register_action("delay_request", { "http-req" }, delay_request, 0)
>>
>> My frontend:
>>
>> frontend proxy
>>     bind *:80
>>     stick-table type ip size 100k expire 10s store http_req_rate(10s)
>>     http-request track-sc0 src
>>     http-request set-var(txn.sc_http_req_rate) sc_http_req_rate(0)
>>     http-request lua.delay_request if { sc_http_req_rate(0) gt 30 }
>>     use_backend api
>>
>> Basically, if there are more than 30 requests per 10 seconds, I make the
>> extra ones wait 50 ms * count: the 31st request in the window sleeps
>> 50 * 31 = 1550 ms, and the delay keeps growing as long as the client
>> keeps insisting.
>> Does it make sense?
>> Do you see performance problems?
>>
>> On Tue, Jun 9, 2020 at 11:12 AM Igor Cicimov <
>> [email protected]> wrote:
>>
>>> On Tue, Jun 9, 2020 at 6:48 PM Stefano Tranquillini <[email protected]>
>>> wrote:
>>>
>>>> Hello,
>>>> I didn't really get what was changed in this example, and why.
>>>>
>>>> On Tue, Jun 9, 2020 at 9:46 AM Igor Cicimov <
>>>> [email protected]> wrote:
>>>>
>>>>> Modify your frontend from the example like this and let us know what
>>>>> happens:
>>>>>
>>>>> frontend proxy
>>>>>     bind *:80
>>>>>     stick-table type ip size 100k expire 15s store http_req_rate(10s)
>>>>>
>>>>
>>>> The stick table is now here.
>>>>
>>>>
>>>>>     http-request track-sc0 src table Abuse
>>>>>
>>>> But this refers to the other table (Abuse); do I have to keep it? Is it
>>>> better to have the table here or shared?
>>>>
>>>>     use_backend api_delay if { sc_http_req_rate(0) gt 30 }
>>>>>
>>>>
>>>> This measures whether there were more than 30 requests in the last 10
>>>> seconds, using the table in this proxy, not the Abuse one.
>>>>
>>>>
>>>>>     use_backend api
>>>>>
>>>>> backend api
>>>>>     server api01 api01:80
>>>>>     server api02 api02:80
>>>>>     server api03 api03:80
>>>>>
>>>>> backend api_delay
>>>>>     # hold each request for 500 ms before passing it to a server
>>>>>     tcp-request inspect-delay 500ms
>>>>>     tcp-request content accept if WAIT_END
>>>>>     server api01 api01:80
>>>>>     server api02 api02:80
>>>>>     server api03 api03:80
>>>>>
>>>>> Note that, as per the sliding-window rate limiting from the examples
>>>>> you said you read, this limits each source IP to 30 requests over the
>>>>> last 30-second period. That gives you 180 requests per 60 seconds.
>>>>>
>>>>
>>>> Yes, sorry, that was a typo; it should have been:
>>>
>>> frontend proxy
>>>     bind *:80
>>>     stick-table type ip size 100k expire 15s store http_req_rate(10s)
>>>     http-request track-sc0 src
>>>     use_backend api_delay if { sc_http_req_rate(0) gt 30 }
>>>     use_backend api
>>>
>>>> In this example, and in what I did before, the behaviour seems the same
>>>> (at least per my understanding):
>>>> if a user makes more than 30 requests in 10 seconds, then the rest are
>>>> slowed down by 500 ms.
>>>> Right?
>>>>
>>>>
>>> Correct.
>>>
>>>
>>>> It does not really imply that there's a max number of calls per minute.
>>>> In fact, if the user makes 500 calls in parallel from the same IP:
>>>>
>>>
>>> It implies one indirectly: if there can be at most 30 per 10 seconds,
>>> then there can be at most 180 per minute.
>>>
>>>>
>>>> - the first 30 are executed
>>>> - the other 470 are executed, but with a "penalty" of 500 ms
>>>>
>>>> But that's it. Did I get it correctly?
>>>>
>>>
>>> Yes, if they get executed in the same 10-second period. You can play
>>> with the numbers and adjust them to your requirements. You can delay the
>>> requests as in your example, or drop them.
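>>>
>>> For the drop variant, a one-line sketch (the deny rule and the 429
>>> status are my choice, not something from this thread):
>>>
>>> http-request deny deny_status 429 if { sc_http_req_rate(0) gt 30 }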
>>>
>>> HAProxy has more examples in other articles, such as the bot-net
>>> protection one and the one about stick tables, which I also highly
>>> recommend reading. You might find some interesting info there that can
>>> help your case.
>>>
>>>>
>>>
>>
>
> --
>
> Igor Cicimov  | Senior DevOps Engineer
>
> t  +61 (1) 300-362-667
>
> e  [email protected]
>
> w www.encompasscorporation.com
>
> a  Level 10, 117 Clarence Street, Sydney, NSW, Australia 2000
>


-- 
Stefano Tranquillini, CTO/Co-Founder @ chino.io
Need to talk? book a slot <http://bit.ly/2LdXbZQ>
Please consider the environment before printing this email - keep it short
<http://five.sentenc.es/>
