Hello,
I have a web hosting cluster, and I would like to rate limit by vhost
(i.e. no more than 50 connections per second on www.domain1.com, for
example). I found a way to do it, and I'd like to get your feedback
on it:
1) Create a frontend to receive HTTP connections, create one ACL per
domain, and route each domain (ACL) to a specific backend:
frontend http-in
    bind :80
    mode http
    ...
    acl dom1 hdr_end(host) -i .domain1.com
    acl dom2 hdr_end(host) -i .domain2.com
    ...
    acl domN hdr_end(host) -i .domainN.com
    use_backend b_dom1 if dom1
    use_backend b_dom2 if dom2
    ...
    use_backend b_domN if domN
2) Create all backends (one per domain), and in each one add a
be_sess_rate ACL to redirect excess requests away:
backend b_dom1
    mode http
    balance roundrobin
    acl too_much_requests be_sess_rate gt 50
    redirect location http://192.168.56.103/tryagainlater.html if too_much_requests
    server web1 192.168.56.102:80 check
    server web2 192.168.56.103:80 check
    server web3 192.168.56.104:80 check
    server web4 192.168.56.105:80 check
=> It works, and that's pretty cool. But I have two questions:
* Is there a cleverer way to do it? I mean: if I have 2000 domains
hosted on the cluster, that means 2000 ACLs and 2000 backend sections,
which is not really easy to maintain... Is there a generic way to
handle domains?
* Is that method really usable for so many domains? I guess this kind
of ACL will need a *lot* of CPU to handle several hundred requests
per second across the 2000 hosted domains (and also a lot of
memory?).
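For what it's worth, on the first question I wondered whether a single
stick-table keyed on the Host header could replace the per-domain ACLs.
The following is only an untested sketch, assuming an HAProxy version
recent enough to support stick-tables and http-request tracking (the
table sizing and the fallback backend name are made up):

frontend http-in
    bind :80
    mode http
    # one shared table keyed on the Host header; counts the per-host
    # HTTP request rate over a 1-second window (sizes are guesses)
    stick-table type string len 64 size 100k expire 10s store http_req_rate(1s)
    # track each request under its Host header value
    http-request track-sc0 hdr(host)
    acl too_much_requests sc0_http_req_rate gt 50
    redirect location http://192.168.56.103/tryagainlater.html if too_much_requests
    default_backend b_all

If something like this works, it would need only one frontend and one
shared backend instead of 2000 ACLs and 2000 backend sections, but I
have not verified it.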
Any idea or experience sharing would be highly appreciated :-)
Fabien