nginx would be more suitable for something like this. It even has a redis
plugin:
http://wiki.nginx.org/HttpRedis

Perhaps you can achieve your functionality with the redis_next_upstream
parameter.
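
A rough sketch of how that could look with the ngx_http_redis module (the
key naming scheme, addresses, and backend location here are my own made-up
assumptions, not something from the module docs):

```nginx
# Hypothetical gate: look up a per-client flag in redis before serving.
location / {
    set $redis_key "allow:$remote_addr";   # assumed key scheme
    redis_pass 127.0.0.1:6379;
    # With several redis servers in an upstream, try the next one on
    # error/timeout instead of failing the request outright.
    redis_next_upstream error timeout;
    # A missing key yields a 404 from the module; fall through to the app.
    error_page 404 502 504 = @app;
}

location @app {
    proxy_pass http://127.0.0.1:8080;      # made-up backend address
}
```

Whether a GET-plus-fallback like this matches the "0 = drop, 1 = allow"
semantics you want would need checking against the module's docs.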


Sergej

On Tue, May 8, 2012 at 4:39 AM, S Ahmed <[email protected]> wrote:

> I agree it will add overhead for each call.
>
> Well, would there be a way for me to somehow tell haproxy from my
> application to block a particular URL, and then send another API call to
> allow traffic from that URL again?
>
> That would be really cool to have an API where I could do this from.
>
> I know haproxy has rate limiting as per:
> http://blog.serverfault.com/2010/08/26/1016491873/
>
> But I'm wondering if one could have more control over it, say when you
> have multiple haproxy servers that you want to keep in sync, or when the
> application layer needs to decide whether to drop or accept a connection
> to a URL.
>
>
> On Mon, May 7, 2012 at 7:39 PM, Baptiste <[email protected]> wrote:
>
>> On Tue, May 8, 2012 at 12:26 AM, S Ahmed <[email protected]> wrote:
>> > I'm sure this isn't possible but it would be cool if it is.
>> >
>> > My backend services write to redis, and if a client reaches a certain
>> > threshold, I want to hard drop all further requests until x minutes have
>> > passed.
>> >
>> > Would it be possible, for each request, haproxy performs a lookup in
>> redis,
>> > and if a 0 is returned, drop the request completely (hard drop), if it
>> is 1,
>> > continue processing.
>> >
>> >
>>
>>
>> It would introduce latency into the request processing.
>> Why would you need such a way of serving your requests?
>>
>> By the way, this is not doable with HAProxy.
>> Well, at least, not out of the box :)
>> Depending on your needs, you could hack some dirty scripts which can
>> sync your redis DB with HAProxy server status through the stats
>> socket.
>>
>> cheers
>>
>
>
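Baptiste's "dirty script" idea above, syncing a redis flag with HAProxy
server state through the stats socket, might look roughly like this sketch
(the backend/server names, key, and socket path are all made-up
assumptions):

```python
import socket

def stats_command(enable, backend, server):
    """Build the HAProxy stats-socket command to enable/disable a server."""
    action = "enable" if enable else "disable"
    return "%s server %s/%s\n" % (action, backend, server)

def apply_flag(flag, backend="app", server="srv1",
               sock_path="/var/run/haproxy.sock"):
    """Send the command matching the redis flag (b'1' means allow traffic).

    The flag would come from something like: redis-cli GET allow:client42
    """
    cmd = stats_command(flag == b"1", backend, server)
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)            # requires "stats socket" in haproxy.cfg
    s.sendall(cmd.encode())
    s.close()
```

Run from cron or a loop, this keeps the coarse on/off state in redis while
HAProxy stays unaware of redis itself, which avoids the per-request lookup
latency Baptiste mentions.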
