Hi Baptiste,
So I wanted to understand why sc0 should be used instead of src. The definition
says src would look up connections from the source IP address, whereas sc0
reads the counters this connection is already tracking?
sc0_conn_cur
Returns the current amount of concurrent connections tracking the same
tracked counters. This number is automatically incremented when tracking
begins and decremented when tracking stops. See also src_conn_cur.
src_conn_cur
Returns the current amount of concurrent connections initiated from the
current incoming connection's source address in the current proxy's
stick-table or in the designated stick-table. If the address is not found,
zero is returned. See also sc/sc0/sc1/sc2_conn_cur.
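If I read this right, then in a sketch like the one below (the thresholds are
just placeholders), both fetches would return the same value, because sc0 is
tracking the source IP anyway; sc0_conn_cur simply reads the counters this
connection is already tracking, while src_conn_cur performs a fresh lookup of
the source address in the table:

stick-table type ip size 100k expire 30s store conn_cur
tcp-request connection track-sc0 src
# reads the entry this connection is tracking (no extra lookup, and it keeps
# working even if the tracked key is not the source IP)
tcp-request connection reject if { sc0_conn_cur ge 40 }
# same number here, but resolved through a new lookup of the source address
#tcp-request connection reject if { src_conn_cur ge 40 }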
From: Baptiste <[email protected]>
To: Amol <[email protected]>
Cc: HAproxy Mailing Lists <[email protected]>
Sent: Monday, August 17, 2015 4:33 AM
Subject: Re: Regarding using HAproxy for rate limiting
Hi Amol,
For example, this one:
# Reject the new connection if the client already has 40 connections open
tcp-request connection reject if { src_conn_cur ge 40 }
Should be written
# Reject the new connection if the client already has 40 connections open
tcp-request connection reject if { sc0_conn_cur ge 40 }
Baptiste
On Mon, Aug 17, 2015 at 4:53 AM, Amol <[email protected]> wrote:
> Hi Baptiste,
> I tried to read about sc0 and src, but I am not quite sure what I would gain
> by changing src to sc0 in the ACL parameters. Do you have an example to
> explain?
>
> Thanks
>
> ________________________________
> From: Amol <[email protected]>
> To: Baptiste <[email protected]>
> Cc: HAproxy Mailing Lists <[email protected]>
> Sent: Friday, August 14, 2015 2:06 PM
>
> Subject: Re: Regarding using HAproxy for rate limiting
>
> Hi Baptiste,
> Yes, sorry, I might have confused you with some questions, but to answer your
> questions:
>
> "here, the question is: what kills your server exactly?
> A high number of queries from a single user, or the overall number of
> users?
> I'm trying to understand what you need..."
> Yes, I am trying to protect against a high number of requests from a single
> user who can use APIs, or even misconfigure APIs, to generate high load.
>
> Reposting the configuration:
>
> frontend www-https
> bind xx.xx.xx.xx:443 ssl crt xxxx.pem ciphers AES128+EECDH:AES128+EDH no-sslv3 no-tls-tickets
>
> # Table definition
> stick-table type ip size 100k expire 30s store gpc0,conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)
>
> # Allow clean known IPs to bypass the filter
> tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
>
> # this tracks the data defined in the stick-table and stores it in the
> # stick-table, since by default nothing is stored in it
> tcp-request connection track-sc0 src
>
> # Reject the new connection if the client already has 40 connections open
> tcp-request connection reject if { src_conn_cur ge 40 }
>
> # if someone has more than 40 connections over a period of 3 seconds, REJECT
> tcp-request connection reject if { src_conn_rate ge 40 }
>
> # track connections that are not rejected, from clients that don't have
> # 10 connections per 3 seconds
> #tcp-request connection reject if { src_get_gpc0 gt 0 }
>
> acl abuse_err src_http_err_rate ge 10
> acl flag_abuser_err src_inc_gpc0 ge 0
> acl abuse src_http_req_rate ge 250
> #acl flag_abuser src_inc_gpc0 ge 0
> #tcp-request content reject if abuse_err flag_abuser_err
> #tcp-request content reject if abuse flag_abuser
>
> use_backend backend_slow_down if abuse flag_abuser
> use_backend backend_slow_down if abuse_err flag_abuser_err
> default_backend www-backend
>
> backend www-backend
> balance leastconn
> cookie BALANCEID insert indirect nocache secure httponly
> option httpchk HEAD /xxx.php HTTP/1.0
> redirect scheme https if !{ ssl_fc }
> server A1 xx.xx.xx.xx:80 cookie A check
> server A2 yy.yy.yy.yy:80 cookie B check
>
> backend backend_slow_down
> timeout tarpit 2s
> errorfile 500 /etc/haproxy/errors/429.http
> http-request tarpit
>
>
> ------
>
> Yes, I will check out the difference between the sc0 and src parameters in
> the config.
>
> Regarding this...
>> What I am doing here is that if the http_req_rate > 250 then I want to send
>> them to another backend which gives them a rate-limiting message, or if the
>> number of concurrent connections is > 4, then I want to rate limit their
>> usage and allow only 40 connections to come in.
>
> I was trying to make 2 points; I guess I should have been more clear...
> So I was saying that, based on my config, I am trying to achieve 2 things:
>
> 1) rate limit a client with a high number of HTTP requests in a certain
> time span (http_req_rate)
> 2) rate limit a client with a high number of concurrent connections in a
> certain time span (src_conn_cur and src_conn_rate)
>
> Thanks once again for looking into this.
>
> ________________________________
> From: Baptiste <[email protected]>
> To: Amol <[email protected]>
> Cc: HAproxy Mailing Lists <[email protected]>
> Sent: Friday, August 14, 2015 1:40 PM
> Subject: Re: Regarding using HAproxy for rate limiting
>
> Hi Amol,
>
> On Fri, Aug 14, 2015 at 4:16 PM, Amol <[email protected]> wrote:
>> Hello,
>> I have been trying to configure my HAProxy for rate limiting our customers'
>> usage, and wanted to know/understand some of my options.
>> What I am trying to achieve is to throttle any client's requests/API calls
>> that can lead to high load and kill my servers.
>
> here, the question is: what kills your server exactly?
> A high number of queries from a single user, or the overall number of
> users?
> I'm trying to understand what you need...
>
>
>> First of all here is my configuration i have so far from reading a few
>> articles
>>
>> frontend www-https
>> bind xx.xx.xx.xx:443 ssl crt xxxx.pem ciphers AES128+EECDH:AES128+EDH no-sslv3 no-tls-tickets
>>
>> # Table definition
>> stick-table type ip size 100k expire 30s store gpc0,conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)
>> # Allow clean known IPs to bypass the filter
>> tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
>> # this tracks the data defined in the stick-table and stores it in the
>> # stick-table, since by default nothing is stored in it
>> tcp-request connection track-sc0 src
>> # Reject the new connection if the client already has 10 connections open
>> tcp-request connection reject if { src_conn_cur ge 40 }
>> # if someone has more than 100 connections over a period of 3 seconds,
>> REJECT
>> tcp-request connection reject if { src_conn_rate ge 40 }
>> # track connections that are not rejected, from clients that don't have
>> # 10 connections per 3 seconds
>> #tcp-request connection reject if { src_get_gpc0 gt 0 }
>>
>> acl abuse_err src_http_err_rate ge 10
>> acl flag_abuser_err src_inc_gpc0 ge 0
>> acl abuse src_http_req_rate ge 250
>> #acl flag_abuser src_inc_gpc0 ge 0
>> #tcp-request content reject if abuse_err flag_abuser_err
>> #tcp-request content reject if abuse flag_abuser
>>
>> use_backend backend_slow_down if abuse
>> #use_backend backend_slow_down if flag_abuser
>> use_backend backend_slow_down if abuse_err flag_abuser_err
>> default_backend www-backend
>>
>> backend www-backend
>> balance leastconn
>> cookie BALANCEID insert indirect nocache secure httponly
>> option httpchk HEAD /xxx.php HTTP/1.0
>> redirect scheme https if !{ ssl_fc }
>> server A1 xx.xx.xx.xx:80 cookie A check
>> server A2 yy.yy.yy.yy:80 cookie B check
>>
>> backend backend_slow_down
>> timeout tarpit 2s
>> errorfile 500 /etc/haproxy/errors/429.http
>> http-request tarpit
>
> you should use the sc0_conn_* functions instead of src_conn_*, since
> you're tracking over sc0.
> Also, please repost your configuration with the comments updated. For now,
> some comments don't match the statements you configured, which makes
> it hard to follow up.
>
>> What I am doing here is that if the http_req_rate > 250 then I want to send
>> them to another backend which gives them a rate-limiting message, or if the
>> number of concurrent connections is > 4, then I want to rate limit their
>> usage and allow only 40 connections to come in.
>
> Please be more precise about the context.
> Furthermore, you mix rate limiting and concurrent connections for the
> same purpose in your sentence, and I'm really confused about the real
> goal you want to achieve.
>
>
>> Please feel free to critique my config. Now on to questions,
>>
>> 1) Is rate limiting based on IP a good way to do this, or has anyone tried
>> other ways?
>
> The closer to the application layer, the better.
> If you have a cookie or whatever header we can use to perform rate
> limiting, then it would be much better than the source IP.
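> For instance, a sketch of tracking on a header instead of the source IP
> (the header name and limits here are hypothetical):
>
> stick-table type string len 64 size 100k expire 30s store http_req_rate(10s)
> tcp-request content track-sc0 req.hdr(X-API-Key) if { req.hdr(X-API-Key) -m found }
> tcp-request content reject if { sc0_http_req_rate ge 250 }
>
> This way each API key gets its own counters, even when several keys come
> from the same source IP.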
>
>> 2) Am I missing anything critical in the configuration?
>
> No idea, as long as I still don't know what your primary goal is.
>
>> 3) When does the src_inc_gpc0 counter really increment? Does it increment
>> for every subsequent request from the client in the given timeframe? I have
>> seen it go from 0 to 6 during my test but wasn't sure about it.
>
> Each event may update a counter, such as a new connection or a new
> HTTP request coming in.
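> For instance (a sketch; the thresholds are arbitrary): conditions are
> evaluated left to right, so if you put src_inc_gpc0 behind another ACL,
> gpc0 is only incremented when that first ACL already matched:
>
> acl abuse src_http_req_rate ge 250
> # src_inc_gpc0 increments gpc0 each time this ACL is evaluated
> acl flag_abuser src_inc_gpc0 ge 0
> tcp-request content reject if abuse flag_abuser
>
> With such a setup, gpc0 counts the requests that matched the abuse
> condition, which would explain a value like 6 during your test.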
>
>> 4) Can I not rate limit by just adding maxconn to the server in the
>> backend, or will that throttle everyone instead of the rogue IP...
>
>
> This will prevent your server from running too many requests in
> parallel and then being overloaded.
> You can mix both techniques: the server's maxconn to protect servers against
> a huge load generated by many clients each running 1 request, plus the
> configuration you set up above to prevent a single user from generating too
> many requests and taking up too many of the connections allowed by maxconn.
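> A sketch of mixing the two (the numbers are arbitrary):
>
> backend www-backend
> balance leastconn
> # cap what each server handles in parallel; excess connections are
> # queued by HAProxy, not rejected
> server A1 xx.xx.xx.xx:80 cookie A check maxconn 100
> server A2 yy.yy.yy.yy:80 cookie B check maxconn 100
>
> Combined with the per-IP rules in your frontend, a single abuser cannot
> eat the whole maxconn budget by himself.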
>
>
> Baptiste