On 08.06.20 14:28, Stefano Tranquillini wrote:
> Hi thanks for the reply
>
> Why is set-priority the better choice?
> Will it only delay connections when there is contention, rather than 
> limiting the connection rate per se?
> I mean, if the system is capable of serving 600 calls, with 
> set-priority it will still process all 600 calls rather than 
> capping the user at a max of 100 per minute.

Well, as far as I know HAProxy has no feature to "delay" a
connection other than moving it into the request queue.
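For completeness: the one queue HAProxy does have is the per-server one. A
minimal sketch (server name assumed) of that built-in queuing via `maxconn`:

```
backend api
    timeout queue 30s                    # how long a request may wait in the queue
    server api01 api01:80 maxconn 100    # above 100 concurrent requests, the rest wait in the queue
```

Note that this queue is shared by all clients, so on its own it gives exactly
the cross-IP behaviour you want to avoid; per-client ordering inside that
queue is what the priority directives are for.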

> On Mon, Jun 8, 2020 at 1:27 PM Aleksandar Lazic <[email protected]> wrote:
> 
>     On 08.06.20 09:15, Stefano Tranquillini wrote:
>     >
>     >
>     > On Sun, Jun 7, 2020 at 11:11 PM Илья Шипицин <[email protected]> wrote:
>     >
>     >
>     >
>     >     Sun, 7 Jun 2020 at 19:59, Stefano Tranquillini <[email protected]>:
>     >
>     >         Hello all,
>     >
>     >         I'm moving to HAProxy, replacing NGINX, and I have a question 
> about how to do rate limiting in HAProxy that queues excess requests instead 
> of closing them.
>     >
>     >         I was able to limit per IP following those examples: 
> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/ . 
> However, when the limit is reached, the users see the error and connection is 
> closed.
>     >
>     >         Since I come from NGINX, I'm used to this handy feature 
> https://www.nginx.com/blog/rate-limiting-nginx/ where connections that exceed 
> the threshold are queued. The user can still make the calls, just delayed, 
> without getting errors, and the overall number of requests stays within the 
> threshold.
>     >
>     >         Is there anything similar in HAProxy? It should limit/queue the 
> user by IP.
>     >
>     >         To explain with an example, we have two users |Alice|, with ip 
> |A.A.A.A| and |Bob| with ip |B.B.B.B| The threshold is |30r/minute|.
>     >
>     >         So in 1 minute:
>     >
>     >           * Alice does 20 requests. -> that's fine
>     >           * Bob does 60 requests. -> the system caps the requests to 30 
> and then processes the other 30 later on (maybe also adding a timeout/delay)
>     >           * Alice does 50 requests -> the first 30 are fine, the next 20 
> are queued.
>     >           * Bob does 20 requests -> they are queued after the ones above.
>     >
>     >         I saw that it can be done in general by limiting the 
> connections per host. But that limit is cross-IP, so if 500 is the limit:
>     >         - Alice does 1 call
>     >         - Bob does 1000 calls
>     >         - Alice does another call
>     >         - Alice will be queued, and that's not what I would like to have.
>     >
>     >         is this possible? Is there anything similar that can be done?
>     >
>     >
>     >     it is not cross IP.  I wish nginx docs would be better on that.
>     >
>     > What do you mean?
>     > in nginx i do
>     > limit_req_zone $binary_remote_addr zone=prod:10m rate=40r/m;
>     > and works
>     >
>     >     first, in nginx terms, requests are limited by zone key. you can 
> define the key using, for example, 
> $binary_remote_addr$http_user_agent$ssl_client_ciphers
>     >     that means each unique combination of those parameters will be 
> limited by its own counter (or you can use nginx maps to construct such a 
> zone key)
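>     >
>     >     (For reference, the queuing behaviour described above for nginx is 
> the `burst` parameter of `limit_req`; a minimal sketch, zone name assumed:)
>     >
>     >     ```
>     >     limit_req_zone $binary_remote_addr zone=perip:10m rate=30r/m;
>     >
>     >     server {
>     >         location / {
>     >             # up to 20 excess requests are delayed (queued), not rejected
>     >             limit_req zone=perip burst=20;
>     >             proxy_pass http://backend;
>     >         }
>     >     }
>     >     ```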
>     >
>     >     in haproxy you can see an example of
>     >
>     >     # Track client by base32+src (Host header + URL path + src IP)
>     >
>     >     http-request track-sc0 base32+src
>     >
>     >     which also means key definition may be as flexible as you can 
> imagine.
>     >
>     >
>     > The point is, how can I cap the number of requests for a single user to, 
> say, 40r/minute? Or any other number.
>     >
>     > What I was able to do is to slow requests down in this way, but it does 
> not enforce a per-request cap, it only adds 500ms to each call.
>     >
>     > frontend proxy
>     >     bind *:80
>     >     # feed the Abuse stick-table, otherwise the rate never increments
>     >     http-request track-sc0 src table Abuse
>     >     # ACL function declarations
>     >     acl is_first_level src_http_req_rate(Abuse) ge 30
>     >     use_backend api_delay if is_first_level
>     >     default_backend api
>     >
>     > backend api
>     >     server api01 api01:80  
>     >     server api02 api02:80
>     >     server api03 api03:80
>     >
>     > backend api_delay
>     >     tcp-request inspect-delay 500ms
>     >     tcp-request content accept if WAIT_END
>     >     server api01 api01:80  
>     >     server api02 api02:80
>     >     server api03 api03:80
>     >
>     > backend Abuse
>     >     stick-table type ip size 100k expire 15s store http_req_rate(10s)
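>     >
>     > One could extend this into rough tiers, escalating the artificial delay 
> with the measured rate (an untested sketch; the thresholds and the extra 
> backend name are made up):
>     >
>     > ```
>     > acl mid_rate  src_http_req_rate(Abuse) ge 30
>     > acl high_rate src_http_req_rate(Abuse) ge 60
>     > use_backend api_delay_1s if high_rate
>     > use_backend api_delay if mid_rate
>     >
>     > backend api_delay_1s
>     >     tcp-request inspect-delay 1s
>     >     tcp-request content accept if WAIT_END
>     >     server api01 api01:80
>     > ```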
> 
>     I would try to use "http-request set-priority-class" and/or
>     "http-request set-priority-offset" for this.
>     
> http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4.2-http-request%20set-priority-class
>     
> http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4.2-http-request%20set-priority-offset
> 
>     ```
>     acl is_first_level src_http_req_rate(Abuse) ge 30
>     http-request set-priority-class int(20) if is_first_level
>     ```
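> 
>     A slightly fuller, untested sketch (the class values are arbitrary; lower 
>     class values are served first, and priority only matters once requests 
>     actually wait in a server queue, i.e. with `maxconn` set on the servers):
> 
>     ```
>     http-request track-sc0 src table Abuse
>     acl is_heavy_user src_http_req_rate(Abuse) ge 30
>     # heavy users get a higher class number => served after everyone else
>     http-request set-priority-class int(10) if is_heavy_user
>     http-request set-priority-class int(0) unless is_heavy_user
>     ```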
> 
>     There is an example of how to use it in the mailing list archive:
>     https://www.mail-archive.com/[email protected]/msg29915.html
> 
>     Sorry that I can't give you a better solution, but I have never used this
>     myself, so it would be nice to get feedback on whether this option works
>     for your use case.
> 
>     >
>     >         Thanks
>     >         --
>     >         *Stefano*
>     > --
>     > Stefano
> 
>     Regards
>     Aleks
> 
> 
> 
> -- 
> *Stefano Tranquillini, *CTO/Co-Founder @ chino.io <http://chino.io/> 
> /Need to talk? book a slot <http://bit.ly/2LdXbZQ>/
> /Please consider the environment before printing this email - //keep it short 
> <http://five.sentenc.es/> /
> 
> 

