> On 28 Sep 2016, at 10:49, Stephan Müller <muell...@math.hu-berlin.de> wrote:
>
> Hi,
>
> I want to configure a rate limit (say 100 HTTP req/sec) for each backend
> server, like this:
>
> listen front
>     bind :80
>     balance leastconn
>     server srv1 127.0.0.1:8000 limit 100
>     server srv2 127.0.0.2:8000 limit 100
>
> As far as I can see, rate limiting is only supported for frontends [1].
> However, a long time ago, someone asked the same question [2]. The
> proposed solution was multi-tier load balancing with an extra proxy per
> backend server, like this:
>
> listen front
>     bind :80
>     balance leastconn
>     server srv1 127.0.0.1:8000 maxconn 100 track back1/srv
>     server srv2 127.0.0.2:8000 maxconn 100 track back2/srv
>
> listen back1
>     bind 127.0.0.1:8000
>     rate-limit sessions 10
>     server srv 192.168.0.1:80 check
>
> listen back2
>     bind 127.0.0.2:8000
>     rate-limit sessions 10
>     server srv 192.168.0.2:80 check
>
> Is there a better (new) way to do that? The old thread mentioned it's on the
> roadmap for 1.6.
As far as I understand, "track" only affects health checks: the tracking server mirrors the tracked server's up/down state, nothing more. Otherwise, servers with the same name in different backends work independently. So the servers in your "front" proxy (:80) will have no rate limit of their own; only the intermediate "back1"/"back2" listeners enforce one. Note also that "rate-limit sessions" counts new sessions per second, not HTTP requests per second, so with keep-alive it is not quite the limit you asked for.
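If you want an actual per-server HTTP request limit, one workaround is to give each intermediate listener its own one-entry stick table keyed on a constant, so every request counts against the same bucket. A sketch (assumes HAProxy 1.6+ for the int() fetch; the 429-style deny and the 10s window are my choices, not anything from the original thread):

    listen back1
        bind 127.0.0.1:8000
        mode http
        # one-row table: every request tracks the same key, so the
        # counter is effectively "requests hitting this listener"
        stick-table type integer size 1 expire 20s store http_req_rate(10s)
        http-request track-sc0 int(1)
        # 100 req/sec = 1000 requests per 10s window
        http-request deny if { sc_http_req_rate(0) gt 1000 }
        server srv 192.168.0.1:80 check

Repeat the same block as "back2" for the second server. Unlike rate-limit sessions, this counts individual HTTP requests, so it behaves the same with and without keep-alive; the cost is that excess requests are rejected rather than queued.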