Actually my requirements are as follows: block all further connections from a client if it makes more than 100 API calls in a given minute.
So if they make 100 API calls in the span of 55 seconds, block all further calls for the next 5 seconds. Can I do this? And if I do, it should limit my bandwidth then, correct? (I've put a couple of rough config sketches below, after the quoted message.)

On Sun, Jan 8, 2012 at 2:24 AM, Willy Tarreau <[email protected]> wrote:

> On Sat, Jan 07, 2012 at 07:11:02PM -0500, S Ahmed wrote:
> > I was reading this: http://blog.serverfault.com/2010/08/26/1016491873/
> >
> > A bit confused, the link to the src is version 1.5 but version 1.4 seems to
> > have a modified date of Sept 2011 while 1.5 is August 2010.
>
> The most recent 1.5 dates from September 10th, 2011 (1.5-dev7). There is no
> need to compare 1.4 and 1.5 release dates, as 1.5 is the development
> branch and 1.4 is the stable branch. So new 1.4 versions are released
> when bugs are discovered, regardless of the 1.5 development cycle.
>
> > Is this an addon module or? Is it being maintained?
>
> 1.5 is still in active development, but since the development is not
> fast, we try to ensure good stability for each new development release
> so that people who rely on it can safely use it.
>
> > Ok my main question is this:
> >
> > When a given IP address is rate limited and connections are dropped, how
> > exactly is this happening? Will it drop the connection during the TCP
> > handshake, and as a result only the HTTP header will be sent from the
> > client and not the body?
>
> Connections are not necessarily dropped. You can drop them, you can send
> them to a specific backend, you can delay them, you have various options.
> You can match an IP's connection rate using an ACL and you can do many
> things using ACLs. What is not doable at the moment (delayed to 1.6) is
> to shape the traffic.
>
> > I'm using this where clients are sending HTTP requests and they don't
> > really have control over it, so I have to rate limit them as best as I can.
> > With this rate limiting, will this save me bandwidth also, since I don't
> > have to rate limit at the application level?
>
> Well, you have to understand that however you do it, rate limiting does not
> save bandwidth but *increases* bandwidth usage: whether you force to send
> smaller packets or you drop and force to retransmit, in the end, for the same
> amount of payload exchanged, more packets will have to be exchanged when the
> traffic is shaped. Rate limiting must not be used to save bandwidth, but to
> protect servers and to ensure a fair share of the available bandwidth between
> users.
>
> > i.e. because at the application level I
> > have to load the header and body to determine if I should reject the
> > request.
>
> If you want to act on connections, in my opinion the best solution would
> be to only delay those who are abusing. Basically you have two server farms
> (which can be composed of the very same servers), one farm for normal users
> and the other one for abusers. The abusers' farm has a very small limit on
> concurrent connections per server. Once a user is tagged as an abuser, send
> him to this farm and he will have to share his access with the other ones
> like him, waiting for an available connection from a small pool. This will
> also mechanically make his connection rate fall, and allow him to access
> via the normal pool next time.
>
> What is recommended is also to consider that above a certain rate, you're
> facing a really nasty user that you want to kick off your site, and then
> you drop the connection as soon as you see him on the doorstep.
>
> > (Ok as I wrote this it seems the link from the blog entry is to haproxy
> > itself, so this is built in the core?)
>
> Yes, this is built in 1.5-dev. You can download 1.5-dev7 or even the
> latest daily snapshot and you'll get that.
>
> Willy
>
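
Here is roughly what I was planning to try for the "100 calls per minute"
rule, based on the stick-table counters described in that blog post. This
is only a sketch against a recent 1.5-dev snapshot; the names (api_front,
api_servers, the server addresses) are made up, and the exact keywords may
need adjusting for whichever snapshot I end up running:

    frontend api_front
        bind :80
        mode http
        # per-source-IP table; entries expire a minute after last use,
        # counting HTTP requests over a sliding 60-second window
        stick-table type ip size 100k expire 1m store http_req_rate(60s)

        # start tracking every client IP in the table above
        tcp-request connection track-sc1 src

        # more than 100 requests in the last 60s -> refuse further requests
        # (the deny rule answers 403 to the extra requests)
        acl over_limit sc1_http_req_rate gt 100
        http-request deny if over_limit

        default_backend api_servers

    backend api_servers
        mode http
        server app1 10.0.0.1:8080 maxconn 50

If I read the docs right, http_req_rate(60s) is a sliding window, so the
block would lift as old requests age out of the window rather than exactly
5 seconds after the 100th call, which is close enough for my purposes.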
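
And if I understand the two-farms suggestion correctly, the routing variant
would look something like the sketch below instead of the deny rule. Again
just a sketch with made-up names; the idea is that slow_farm points at the
same servers but with a tiny maxconn, so abusers end up queued behind each
other:

    frontend api_front
        bind :80
        mode http
        stick-table type ip size 100k expire 10m store http_req_rate(60s)
        tcp-request connection track-sc1 src

        # clients above the threshold are routed to the slow farm
        acl abuser sc1_http_req_rate gt 100
        use_backend slow_farm if abuser
        default_backend normal_farm

    backend normal_farm
        mode http
        # normal users, normal concurrency
        server app1 10.0.0.1:8080 maxconn 50
        server app2 10.0.0.2:8080 maxconn 50

    backend slow_farm
        mode http
        # same physical servers, but abusers share a very small
        # connection pool, so their requests queue and their rate drops
        server app1 10.0.0.1:8080 maxconn 1
        server app2 10.0.0.2:8080 maxconn 1

If I read it right, once an abuser's request rate falls back below the
threshold, he would be routed to the normal farm again on his next request,
which matches the behaviour you described.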

