On 28.09.2016 14:14, Dmitry Sivachenko wrote:
On 28 Sep 2016, at 10:49, Stephan Müller <[email protected]> wrote:
Hi,
I want to configure a rate limit (say 100 HTTP req/sec) for each backend server,
like this:
listen front
bind :80
balance leastconn
server srv1 127.0.0.1:8000 limit 100
server srv2 127.0.0.2:8000 limit 100
As far as I can see, rate limiting is only supported for frontends [1].
However, a long time ago someone asked the same question [2]. The proposed
solution was multi-tier load balancing with an extra proxy per backend
server, like this:
listen front
bind :80
balance leastconn
server srv1 127.0.0.1:8000 maxconn 100 track back1/srv
server srv2 127.0.0.2:8000 maxconn 100 track back2/srv
listen back1
bind 127.0.0.1:8000
rate-limit sessions 10
server srv 192.168.0.1:80 check
listen back2
bind 127.0.0.2:8000
rate-limit sessions 10
server srv 192.168.0.2:80 check
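If rejecting excess requests is acceptable instead of throttling session
acceptance, each back proxy could use a stick-table on HTTP request rate
rather than rate-limit sessions. A sketch, assuming HAProxy 1.6+; the
one-entry table keyed on the destination address and the threshold of 10
are illustrative, not from the thread:

listen back1
    bind 127.0.0.1:8000
    mode http
    # single-entry table counting HTTP requests over a 1-second window;
    # keyed on the local destination address, so all traffic shares one entry
    stick-table type ip size 1 expire 10s store http_req_rate(1s)
    http-request track-sc0 dst
    # refuse requests once the per-second rate exceeds 10
    http-request deny if { sc_http_req_rate(0) gt 10 }
    server srv 192.168.0.1:80 check

Note the trade-off: rate-limit sessions delays accepting new connections,
while http-request deny returns an error to the client immediately.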
Is there a better (newer) way to do that? The old thread mentioned it was on
the roadmap for 1.6.
As far as I understand, "track" only affects health checks. Otherwise, servers
with the same name in different backends work independently.
So the servers in your first frontend (:80) will have no rate limit.
Yes, those srv1 and srv2 are not limited themselves; they only exist for the
"loop". A request takes one of the two paths. Note that haproxy appears twice
in the chain; that's what I meant by two-tier balancing:
--(unlimited)-->[haproxy:front]--(max 10 req/s, max 100 con)-->[haproxy:back1]--(unlimited)-->[192.168.0.1]
--(unlimited)-->[haproxy:front]--(max 10 req/s, max 100 con)-->[haproxy:back2]--(unlimited)-->[192.168.0.2]
I don't want to track the [haproxy:back*] proxies but the real backend
servers; hence track "back1/srv".
In the end it would be nice to get rid of this loop, but I don't know how
to configure that.