Re: ip_nonlocal_bind=1 set but sometimes get "cannot bind socket" on reload (-sf)

2016-02-10 Thread Burhan S Ahmed
Chris Riley  writes:

> 
> Hello,
> I'm seeing some inconsistent/strange behavior with HAProxy (1.5.14 and 
1.6.1) not being able to bind to a socket despite 
'net.ipv4.ip_nonlocal_bind = 1' being set. HAProxy starts up without 
issue initially but after several reloads, the command 'service haproxy 
reload' starts failing and reports that HAProxy "cannot bind socket" for 
each of the listens/frontends, even for IPs that ARE on that server. The 
existing HAProxy process continues to run without picking up the new 
changes.
> After the reload initially fails all subsequent 'service haproxy 
reload' commands also fail. Running 'service haproxy restart' restarts 
and immediately binds to the IPs:ports specified in each listen/frontend 
that it just complained that it could not bind to.
> 
> Here's some background info. There are two servers (lb-01 and lb-02). 
Virtual IPs are managed by keepalived (v1.2.19) in two vrrp_instances. 
Each vrrp_instance contains half of the total virtual IPs. The first 
vrrp_instance has lb-01 defined as MASTER and lb-02 as BACKUP and the 
second vrrp_instance has lb-02 defined as MASTER and lb-01 as BACKUP. 
This allows each server to act as failover for the other server. This 
was tested extensively while I was adding support for ip rules to 
keepalived and works without issue. All of HAProxy's configuration is 
stored in consul (v0.5.2). consul-template (v0.11.1) writes out 
/etc/haproxy/haproxy.cfg using the data in consul and then consul-
template calls 'service haproxy reload'. The OS is CentOS 6.4 and the 
kernel version is 2.6.32-358.23.2.el6.x86_64.
> 
> Here is an example of what I'm seeing (actual IPs have been 
substituted). 192.168.10.0/24 IPs are assigned to eth0 and 
192.168.200.0/24 IPs are assigned to eth1. (output is from lb-02)
> 
> 
> Reloading haproxy: [ALERT] 301/141300 (25939) : Starting proxy 
haproxy-stats: cannot bind socket [192.168.10.27:80]
> [ALERT] 301/141300 (25939) : Starting proxy haproxy-fe1: cannot bind 
socket [192.168.200.100:80]
> [ALERT] 301/141300 (25939) : Starting proxy haproxy-fe2: cannot bind 
socket [192.168.200.120:80]
> [ALERT] 301/141300 (25939) : Starting proxy haproxy-fe3: cannot bind 
socket [192.168.200.110:80]
> 
> 
> What's strange is that HAProxy is already listening on these IPs:ports 
so it seems to be some kind of race condition. Of these IPs, 
192.168.10.27 is statically assigned to eth0 and is the only IP assigned 
to that interface. 192.168.200.110 and 192.168.200.120 are assigned to 
eth1 on lb-02. 192.168.200.100 is assigned to eth1 on lb-01. Without 
setting 'net.ipv4.ip_nonlocal_bind = 1' I would expect to see "cannot 
bind socket" for 192.168.200.100 but it doesn't make any sense that 
HAProxy also reports that it cannot bind on IPs:ports that are assigned 
to that server.
> 
> Does anyone have ideas as to why this might occur?
> 
> Best Regards,
> Chris Riley
> 


I had a similar issue. I was using keepalived between two HAProxy servers. 
The issue turned out to be SELinux. I had to modify /etc/selinux/config and 
change the value

from 
SELINUX=enforcing 
to 
SELINUX=permissive

After that I restarted the server and HAProxy started working.

Hope it helps.
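If you'd rather not drop SELinux to permissive globally, a narrower fix may be possible. The commands below are only a sketch (boolean names vary by distro and policy version; `haproxy_connect_any` exists on newer targeted policies but may not on older ones) for confirming and then allowing the denied bind:

```shell
# Check whether SELinux is actually blocking the bind (look for AVC denials):
ausearch -m avc -ts recent | grep haproxy

# Runtime toggle for testing only (does not survive reboot):
setenforce 0

# If the denial is confirmed, a targeted change is preferable to permissive
# mode; on policies that ship this boolean, it lets haproxy bind/connect
# to arbitrary ports:
setsebool -P haproxy_connect_any 1
```

If no suitable boolean exists, `audit2allow` can generate a custom module from the logged denials.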







advise on updating backends for 0-downtime

2013-07-03 Thread S Ahmed
Hi,

Say I have 3 backend servers running my website.

I want to update the servers, but do it in a way where I don't have any
downtime.

So say I have 3 new backend servers that I start and have the updated code
on it, how can I update the haproxy config file without having any downtime?

Is this possible?
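One common pattern (a sketch, not tested here; paths are illustrative) is to point the config at the new servers and do a graceful reload. With `-sf`, the new process takes over the listening sockets and tells the old one to stop accepting connections and exit once its in-flight requests complete, so no requests are dropped:

```shell
# 1. Edit /etc/haproxy/haproxy.cfg to add the 3 new backend servers
#    (and/or mark the old ones "disabled").
# 2. Graceful reload: the new process signals the old one (-sf) to finish
#    existing connections and then exit.
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)
```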


Re: how to update config w/o stopping the haproxy service

2013-06-15 Thread S Ahmed
Hi,

The goal was to swap the haproxy configuration file with a new updated one
w/o any downtime.


On Sun, Apr 28, 2013 at 1:54 PM, Christian Ruppert id...@qasl.de wrote:

 On 04/28/13 at 01:00PM -0400, S Ahmed wrote:
  Hi,
 
  1.  Is there a way to update the config file without having to stop/start
  the haproxy service? e.g. when I need to update the ip addresses of the
  backend servers (using ec2)
 
  2. During migrations, say I have 10 backend servers, what if I want to
 stop
  taking requests for 5 of the 10 servers, is the best way to update the
  config and just remove them?  Or is there a smoother transition somehow
  that won't causes errors during the transition?
  i.e. would it be possible to finish the requests, but stop responding to
  new requests for those 5 servers I want to take offline.

 See https://code.google.com/p/haproxy-docs/wiki/disabled

 You can restart HAProxy by e.g.:
     haproxy -D -p /var/run/haproxy.pid -f /etc/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
 Alternatively you could use the control socket by using socat:
 https://code.google.com/p/haproxy-docs/wiki/UnixSocketCommands

 So e.g. disable server backend1/server1
 Or even via the stats interface with stats admin if 

 --
 Regards,
 Christian Ruppert
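Putting the two suggestions above together, a drain-then-update sequence might look like this (untested sketch; the socket path and server names are illustrative, and a `stats socket` line must be enabled in the config):

```shell
# Stop sending *new* requests to the server; existing ones finish normally.
echo "disable server backend1/server1" | socat stdio /var/run/haproxy.sock

# ...wait for active sessions to drain, update the server, then:
echo "enable server backend1/server1" | socat stdio /var/run/haproxy.sock
```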



Re: urls in stick-table, any timeline?

2013-04-26 Thread S Ahmed
Is this in the latest stable release?


On Thu, Apr 25, 2013 at 11:38 AM, Baptiste bed...@gmail.com wrote:

 Hi,

 So basically, you want to rate limit on the URL including the query string.
 something like:

 frontend webservice
 [...]
  acl url_to_protect path /something/object /something_else/whatever
  stick-table type string len 128 size 1m expire 10m store gpc0
  tcp-request content track-sc1 url if url_to_protect
  tcp-request content reject if { sc1_get_gpc0 gt 0 }
 [...]

 backend webservice
 [...]
  acl url_to_protect path /something/object /something_else/whatever
  stick-table type string len 128 size 1m expire 10m store http_req_rate(1m)
  tcp-request content track-sc2 url if url_to_protect
  acl abuser sc2_http_req_rate gt 100
  acl mark_as_abuser sc1_inc_gpc0 gt 0
  tcp-request content reject if abuser mark_as_abuser
 [...]

 Basically, you're going to track only the URLs which matches the
 url_to_protect acl.
 On the frontend, you just store the URLs + a counter (gpc0) which will
 be used to track the banned URLs.
 In the backend, you store the URLs + the HTTP request rate over 1
 minute associated to each of them. A couple of ACLs monitor the
 request rate and increments gpc0 from frontend table if the req rate
 is over a limit (100 in this case).

 Note that it can be easy to forge a URL, so you could drop all the
 URLs which does not look like regular.
 I mean /other?id=123 and /other?id=123&foo=bar will be 2 URLs for the same
 user.
 Otherwise, instead of tracking on 'url' you can track a single url
 parameter like using 'urlp(id)' .

 I have not tested the configuration above, I made it out of my head,
 any issues, please let me know.

 And please let me know if it works in your case.

 Baptiste



 On Thu, Apr 25, 2013 at 4:49 PM, S Ahmed sahmed1...@gmail.com wrote:
  Each client (might be upto 100K of them) will have a unique URL, let me
  clarify the url:
 
  client#123
  api.example.com/some_thing/other?clientId=123
 
  client#124
  api.example.com/some_thing/other?clientId=124
 
  etc.
 
  So each client has a unique URL, but the source IP of the client might be
  different as they will be connecting to this service from multiple
 servers.
 
  I want to rate limit each client individually, giving them e.g. 100
 requests
  per minute, if they go over, just drop the connection as fast as
 possible.
 
  Also, is it possible to rate limit each client with a different rate
 limit?
  e.g. some may be 50 requests per minute, while others maybe be 100.
 
 
  On Thu, Apr 25, 2013 at 1:24 AM, Baptiste bed...@gmail.com wrote:
 
  Hi,
 
  Last question: Will you have one URL per client? I mean will the query
  string change with each client?
  Then do you want to rate limit each client individually or do you want
  to rate limit the number of call to the script named other as a
  whole in your example?
 
  Baptiste
 
  On Wed, Apr 24, 2013 at 7:49 PM, S Ahmed sahmed1...@gmail.com wrote:
   Nice!
  
   Is this in the latest 1.4 release or a dev release?
  
   I need to rate limit on a URL (that includes query string values)
 like:
  
    api.example.com/some_thing/other?id=asdf234asdf234&id2=asdf234234
  
   Multiple sources are possible, but I don't care of the source IP I
 just
   want
   to rate limit on the URL (customers will be using this endpoint from
   their
   servers, but I want to rate limit to x number of requests per minute,
 if
   they go over, just drop the request until the rate limit has expired).
  
  
   On Tue, Apr 23, 2013 at 1:39 AM, Baptiste bed...@gmail.com wrote:
  
   Hi Ahmed,
  
   Yes, it has been implemented.
   You can store a URL and rate limit on it.
  
   Baptiste
  
   On Mon, Apr 22, 2013 at 11:15 PM, S Ahmed sahmed1...@gmail.com
 wrote:
Hello,
   
 Has this feature been released yet by any chance? :)
   
Again my initial request was to do:
   
I was told that soon you will be able to store a URL in a
stick-table,
so I
could block a particular url and then remove the block by making a
request.
   
 My situation is I will be blocking up to 250K URLs (for rate limiting,
 hard drop any connection for a given period of time, then I need to
 un-block them).
   
Any rough timelines on when this might be released?
   
   Soon, you'll be able to store an URL in a stick-table, so you'll be
   able to update a gpc counter by setting up a particular header on
 the
   server side which tells HAProxy to block this request.
   For the cancellation of this blocking system, you could request the
   URL with a particular header to unblock it.
   
   
On Tue, Aug 21, 2012 at 2:04 PM, Baptiste bed...@gmail.com
 wrote:
   
Hey,
   
Nothing coming right now.
Maybe for Christmas :)
   
cheers
   
On Tue, Aug 21, 2012 at 5:19 PM, S Ahmed sahmed1...@gmail.com
wrote:
 Hello,

 Any updates or guesstimates on whether the stick-table feature will be
 released?

 Just haven't been watching

Re: urls in stick-table, any timeline?

2013-04-24 Thread S Ahmed
Nice!

Is this in the latest 1.4 release or a dev release?

I need to rate limit on a URL (that includes query string values) like:

api.example.com/some_thing/other?id=asdf234asdf234&id2=asdf234234

Multiple sources are possible, but I don't care of the source IP I just
want to rate limit on the URL (customers will be using this endpoint from
their servers, but I want to rate limit to x number of requests per minute,
if they go over, just drop the request until the rate limit has expired).


On Tue, Apr 23, 2013 at 1:39 AM, Baptiste bed...@gmail.com wrote:

 Hi Ahmed,

 Yes, it has been implemented.
 You can store a URL and rate limit on it.

 Baptiste

 On Mon, Apr 22, 2013 at 11:15 PM, S Ahmed sahmed1...@gmail.com wrote:
  Hello,
 
  Has this feature been released yet by any chance? :)
 
  Again my initial request was to do:
 
  I was told that soon you will be able to store a URL in a stick-table,
 so I
  could block a particular url and then remove the block by making a
 request.
 
  My situation is I will be blocking up to 250K URLs (for rate limiting,
  hard drop any connection for a given period of time, then I need to
  un-block them).
 
  Any rough timelines on when this might be released?
 
 Soon, you'll be able to store an URL in a stick-table, so you'll be
 able to update a gpc counter by setting up a particular header on the
 server side which tells HAProxy to block this request.
 For the cancellation of this blocking system, you could request the
 URL with a particular header to unblock it.
 
 
  On Tue, Aug 21, 2012 at 2:04 PM, Baptiste bed...@gmail.com wrote:
 
  Hey,
 
  Nothing coming right now.
  Maybe for Christmas :)
 
  cheers
 
  On Tue, Aug 21, 2012 at 5:19 PM, S Ahmed sahmed1...@gmail.com wrote:
   Hello,
  
   Any updates or guesstimates on whether the stick-table feature will be
   released?
  
   Just haven't been watching this list for a while and curious if there
   has
   been any progress.
  
   Appreciate it!
  
   On Sun, Jun 24, 2012 at 1:28 AM, Willy Tarreau w...@1wt.eu wrote:
  
   Hi,
  
   On Thu, Jun 21, 2012 at 05:16:22PM -0400, S Ahmed wrote:
I was told that soon you will be able to store a URL in a
stick-table,
so I
could block a particular url and then remove the block by making a
request.
   
I situation is I will be blocking up to 250K urls (for rate
 limiting,
hard
drop any connection.for a given period of time, then I need to
un-block
them).
   
Any rough timelines on when this might be released?
  
   Unfortunately, no, there is no timeline. Several subjects are being
   addressed
   at the same time so it's a matter of priority. Right now we have to
   rework
   all
   the low-level connection management to ensure proper integration of
   SSL,
   so we
   will see the stick tables after that.
  
   Regards,
   Willy
  
  
 
 



Re: urls in stick-table, any timeline?

2013-04-22 Thread S Ahmed
Hello,

Has this feature been released yet by any chance? :)

Again my initial request was to do:

I was told that soon you will be able to store a URL in a stick-table, so I
could block a particular url and then remove the block by making a request.

My situation is I will be blocking up to 250K URLs (for rate limiting, hard
drop any connection for a given period of time, then I need to un-block
them).

Any rough timelines on when this might be released?

Soon, you'll be able to store an URL in a stick-table, so you'll be
able to update a gpc counter by setting up a particular header on the
server side which tells HAProxy to block this request.
For the cancellation of this blocking system, you could request the
URL with a particular header to unblock it.


On Tue, Aug 21, 2012 at 2:04 PM, Baptiste bed...@gmail.com wrote:

 Hey,

 Nothing coming right now.
 Maybe for Christmas :)

 cheers

 On Tue, Aug 21, 2012 at 5:19 PM, S Ahmed sahmed1...@gmail.com wrote:
  Hello,
 
  Any updates or guesstimates on whether the stick-table feature will be released?
 
  Just haven't been watching this list for a while and curious if there has
  been any progress.
 
  Appreciate it!
 
  On Sun, Jun 24, 2012 at 1:28 AM, Willy Tarreau w...@1wt.eu wrote:
 
  Hi,
 
  On Thu, Jun 21, 2012 at 05:16:22PM -0400, S Ahmed wrote:
   I was told that soon you will be able to store a URL in a stick-table,
   so I
   could block a particular url and then remove the block by making a
   request.
  
   My situation is I will be blocking up to 250K URLs (for rate limiting,
   hard drop any connection for a given period of time, then I need to
   un-block them).
  
   Any rough timelines on when this might be released?
 
  Unfortunately, no, there is no timeline. Several subjects are being
  addressed
  at the same time so it's a matter of priority. Right now we have to
 rework
  all
  the low-level connection management to ensure proper integration of SSL,
  so we
  will see the stick tables after that.
 
  Regards,
  Willy
 
 



do I still need nginx for static file serving?

2013-04-22 Thread S Ahmed
My backend servers run jetty, and currently I am using nginx that runs on
port 80 to route traffic to the backend that runs on e.g. port 8081.

I am also using nginx to serve the static files for the folder:

/assets/

So all requests under this folder do not get proxied to jetty on port
8081; nginx serves the static files instead.

If I use haproxy now, do I still need to run nginx to serve static files,
or is this something haproxy can do just as efficiently?

I'd rather reduce the # of services I have to manage :)


Re: could a single ha proxy server sustain 1500 requests per second

2013-02-07 Thread S Ahmed
Thanks Willy.

On the same note, you said not to run anything else on the same machine, but
to lower costs I want to run other things on the haproxy front-end load
balancer.

What are the critical things to watch on the server so I can be notified
when running two things on it is becoming a problem?


On Wed, Dec 5, 2012 at 2:00 AM, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Dec 04, 2012 at 02:19:30PM -0500, S Ahmed wrote:
  Hi,
 
  So 500 Mbits is 1/2 usage of a 1 Gbps port (haproxy and the back-end
  servers will have 1 Gbps connections).

 No, the traffic goes in opposite directions and the link is full duplex,
 so you can effectively have 1 Gbps in and 1 Gbps out at the same time.

  How does latency change things? e.g. what if it takes 90% of clients 1
 second
  to send the 20K file, while some may take 1-3 seconds.

 it's easy, you said you were counting on 1500 req/s :

- 90% of 1500 req/s = 1350 req/s
- 10% of 1500 req/s =  150 req/s

 1350 req/s are present for one second = 1350 concurrent requests.
 150 req/s are present for 3 seconds = 450 concurrent requests.
 = you have a total of 1800 concurrent requests (with one connection
each, it's 1800 concurrent connections).

 What we can say with such numbers :
   - 1500 connections/s is light, even if conntrack is loaded and correctly
 tuned, you won't notice (we're doing twice this on a 500 MHz Geode
 running on 1 watt).

   - 1800 concurrent connections is light too, multiply that by 16 kB, it's
 30MB of RAM for the kernel-side sockets, and twice that at most for
 haproxy, so less than 100 MB of RAM.

   - 250 Mbps in both directions should not be an issue either, even my
 pc-card realtek NIC does it on my 8-years old pentium-M.

 At only 1800 concurrent connections, the latency will probably be mostly
 related to the NIC's interrupt rate. But we're speaking about hundreds of
 microseconds here.

 If you're concerned about latency, use a correct NIC, don't run any other
 software on the machine, and obviously don't run this in a VM !

 Hoping this helps,
 Willy
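Willy's arithmetic above is an application of Little's law (concurrency = arrival rate × residence time); a quick sanity check in Python:

```python
# Little's law: concurrent requests = arrival rate (req/s) * residence time (s)
total_rps = 1500
fast_concurrent = 0.90 * total_rps * 1.0   # 90% of requests held for 1 s -> 1350
slow_concurrent = 0.10 * total_rps * 3.0   # 10% held for 3 s -> 450
concurrent = fast_concurrent + slow_concurrent
print(int(concurrent))                     # 1800 concurrent connections

# ~16 KiB of kernel socket buffers per connection:
kernel_mem_mib = concurrent * 16 / 1024
print(round(kernel_mem_mib))               # ~28 MiB, matching the "30MB" above
```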




Re: urls in stick-table, any timeline?

2012-08-21 Thread S Ahmed
Hello,

Any updates or guesstimates on whether the stick-table feature will be released?

Just haven't been watching this list for a while and curious if there has
been any progress.

Appreciate it!

On Sun, Jun 24, 2012 at 1:28 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi,

 On Thu, Jun 21, 2012 at 05:16:22PM -0400, S Ahmed wrote:
  I was told that soon you will be able to store a URL in a stick-table,
 so I
  could block a particular url and then remove the block by making a
 request.
 
  My situation is I will be blocking up to 250K URLs (for rate limiting,
  hard drop any connection for a given period of time, then I need to
  un-block them).
 
  Any rough timelines on when this might be released?

 Unfortunately, no, there is no timeline. Several subjects are being
 addressed
 at the same time so it's a matter of priority. Right now we have to rework
 all
 the low-level connection management to ensure proper integration of SSL,
 so we
 will see the stick tables after that.

 Regards,
 Willy




Re: could haproxy call redis for a result?

2012-05-24 Thread S Ahmed
Baptiste,

Whenever this feature will be implemented, will it work for a specific url
like:

subdomain1.example.com

What about by query string?  like:

www.example.com/customer/12345

or

www.example.com/some/path?customerId=12345


Will it work for all the above?

On Tue, May 8, 2012 at 9:38 PM, S Ahmed sahmed1...@gmail.com wrote:

 Yes it is the lookup that I am worried about.


 On Tue, May 8, 2012 at 5:46 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 Willy has just released 1.5-dev9, but unfortunately the track
 functions can't yet track strings (and so URLs).
 I'll let you know once a nightly snapshot could do it and we could
 work on a proof of concept configuration.

 Concerning 250K URLs, that should not be an issue at all to store them.
 Maybe looking for one URL could have a performance impact, we'll see.

 cheers

 On Tue, May 8, 2012 at 10:00 PM, S Ahmed sahmed1...@gmail.com wrote:
  Great.
 
  So any ideas how many URLs one can store in these stick tables before
  it becomes a problem?
 
  Would 250K be something of a concern?
 
 
  On Tue, May 8, 2012 at 11:26 AM, Baptiste bed...@gmail.com wrote:
 
  On Tue, May 8, 2012 at 3:25 PM, S Ahmed sahmed1...@gmail.com wrote:
   Ok that sounds awesome, how will that work though?  i.e. from say
 java,
   how
   will I do that?
  
   From what you're saying it sounds like I will just have to modify the
   response and add a particular header.  And on the flip side, if I want
   to unblock, I'll make an http request with something in the header that
   will unblock it?
  
 
  That's it.
  You'll have to track these headers with ACLs in HAProxy and to update
  the stick table accordingly.
  Then based on the value setup in the stick table, HAProxy can decide
  whether it will allow or reject the request.
 
   When do you think this will go live?
  
 
  In another mail, Willy said he will release 1.5-dev9 today.
  So I guess it won't be too long now. Worst case would be later in the
  week or next week.
 
  cheers
 
 





Re: could haproxy call redis for a result?

2012-05-08 Thread S Ahmed
Ok that sounds awesome, how will that work though?  i.e. from say java, how
will I do that?

From what you're saying it sounds like I will just have to modify the
response and add a particular header.  And on the flip side, if I want to
unblock, I'll make an http request with something in the header that will
unblock it?

When do you think this will go live?

On Tue, May 8, 2012 at 4:26 AM, Baptiste bed...@gmail.com wrote:

 On Tue, May 8, 2012 at 4:39 AM, S Ahmed sahmed1...@gmail.com wrote:
  I agree it will add overhead for each call.
 
  Well, would there be a way for me to somehow tell haproxy from my
  application to block a particular url, and then send another api call to
  allow traffic from that url?

 This is different.
 Soon, you'll be able to store an URL in a stick-table, so you'll be
 able to update a gpc counter by setting up a particular header on the
 server side which tells HAProxy to block this request.
 For the cancellation of this blocking system, you could request the
 URL with a particular header to unblock it.

 It might be doable with HAProxy nightly snapshot, but you should
 definitely wait for Willy to provide the 1.5-dev9 which allows strings
 in stick-tables.



Re: could haproxy call redis for a result?

2012-05-08 Thread S Ahmed
Great.

So any ideas how many URLs one can store in these stick tables before it
becomes a problem?

Would 250K be something of a concern?

On Tue, May 8, 2012 at 11:26 AM, Baptiste bed...@gmail.com wrote:

 On Tue, May 8, 2012 at 3:25 PM, S Ahmed sahmed1...@gmail.com wrote:
  Ok that sounds awesome, how will that work though?  i.e. from say java,
 how
  will I do that?
 
  From what you're saying it sounds like I will just have to modify the
  response and add a particular header.  And on the flip side, if I want
  to unblock, I'll make an http request with something in the header that
  will unblock it?
 

 That's it.
 You'll have to track these headers with ACLs in HAProxy and to update
 the stick table accordingly.
 Then based on the value setup in the stick table, HAProxy can decide
 whether it will allow or reject the request.

  When do you think this will go live?
 

 In another mail, Willy said he will release 1.5-dev9 today.
 So I guess it won't be too long now. Worst case would be later in the
 week or next week.

 cheers



Re: could haproxy call redis for a result?

2012-05-08 Thread S Ahmed
Yes it is the lookup that I am worried about.

On Tue, May 8, 2012 at 5:46 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 Willy has just released 1.5-dev9, but unfortunately the track
 functions can't yet track strings (and so URLs).
 I'll let you know once a nightly snapshot could do it and we could
 work on a proof of concept configuration.

 Concerning 250K URLs, that should not be an issue at all to store them.
 Maybe looking for one URL could have a performance impact, we'll see.

 cheers

 On Tue, May 8, 2012 at 10:00 PM, S Ahmed sahmed1...@gmail.com wrote:
  Great.
 
  So any ideas how many URLs one can store in these stick tables before it
  becomes a problem?
 
  Would 250K be something of a concern?
 
 
  On Tue, May 8, 2012 at 11:26 AM, Baptiste bed...@gmail.com wrote:
 
  On Tue, May 8, 2012 at 3:25 PM, S Ahmed sahmed1...@gmail.com wrote:
   Ok that sounds awesome, how will that work though?  i.e. from say
 java,
   how
   will I do that?
  
    From what you're saying it sounds like I will just have to modify the
    response and add a particular header.  And on the flip side, if I want
    to unblock, I'll make an http request with something in the header
    that will unblock it?
  
 
  That's it.
  You'll have to track these headers with ACLs in HAProxy and to update
  the stick table accordingly.
  Then based on the value setup in the stick table, HAProxy can decide
  whether it will allow or reject the request.
 
   When do you think this will go live?
  
 
  In another mail, Willy said he will release 1.5-dev9 today.
  So I guess it won't be too long now. Worst case would be later in the
  week or next week.
 
  cheers
 
 



could haproxy call redis for a result?

2012-05-07 Thread S Ahmed
I'm sure this isn't possible but it would be cool if it is.

My backend services write to redis, and if a client reaches a certain
threshold, I want to hard drop all further requests until x minutes have
passed.

Would it be possible, for each request, haproxy performs a lookup in redis,
and if a 0 is returned, drop the request completely (hard drop); if it is 1,
continue processing.


Re: could haproxy call redis for a result?

2012-05-07 Thread S Ahmed
I agree it will add overhead for each call.

Well, would there be a way for me to somehow tell haproxy from my
application to block a particular url, and then send another api call to
allow traffic from that url?

That would be really cool to have an API where I could do this from.

I know haproxy has rate limiting as per:
http://blog.serverfault.com/2010/08/26/1016491873/

But I'm wondering if one could have more control over it, like say you have
multiple haproxy servers and you want to sync them, or simply the
application layer needs to decide when to drop a url connection and when to
accept it.

On Mon, May 7, 2012 at 7:39 PM, Baptiste bed...@gmail.com wrote:

 On Tue, May 8, 2012 at 12:26 AM, S Ahmed sahmed1...@gmail.com wrote:
  I'm sure this isn't possible but it would be cool if it is.
 
  My backend services write to redis, and if a client reaches a certain
  threshold, I want to hard drop all further requests until x minutes have
  passed.
 
  Would it be possible, for each request, haproxy performs a lookup in
 redis,
  and if a 0 is returned, drop the request completely (hard drop), if it is
 1,
  continue processing.
 
 


 It would introduce latency into the request processing.
 Why would you need such a way of serving your requests?

 By the way, this is not doable with HAProxy.
 Well, at least, not out of the box :)
 Depending on your needs, you could hack some dirty scripts which can
 sync your redis DB with HAProxy server status through the stats
 socket.

 cheers
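Baptiste's "dirty script" idea (keeping a redis-held blocklist in sync with HAProxy through the stats socket) could start from something like this. Everything here is hypothetical: the redis read and the actual socket write (e.g. piping each command to socat) are left out, and the backend/server names are illustrative.

```python
def sync_commands(blocked, previously_blocked, backend="app"):
    """Build HAProxy stats-socket commands that reconcile server state
    with a blocklist (e.g. one read from redis)."""
    cmds = []
    for srv in sorted(blocked - previously_blocked):
        cmds.append(f"disable server {backend}/{srv}")   # newly blocked
    for srv in sorted(previously_blocked - blocked):
        cmds.append(f"enable server {backend}/{srv}")    # no longer blocked
    return cmds

# Example: s1 just became blocked, s3 just became unblocked.
print(sync_commands({"s1", "s2"}, {"s2", "s3"}))
# ['disable server app/s1', 'enable server app/s3']
```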



Re: could a single ha proxy server sustain 1500 requests per second

2012-05-04 Thread S Ahmed
how can I calculate if this will work in theory?

On Thu, May 3, 2012 at 5:30 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 You'll need one gig interface (two looks better, one for the frontend
 and one for the backend servers), but it should work without any
 issues.

 cheers

 On Thu, May 3, 2012 at 9:35 PM, S Ahmed sahmed1...@gmail.com wrote:
  I have a service where people will be http posting documents that will
 be 20
  KB in size.
  The rate at which these documents will be http posted is 1500 requests
 per
  second.
 
  There will be backend web servers that will be parsing and load the data
  into a database (ignore this part but just giving the whole picture).
 
  What I want to know is, could a single HA proxy server handle this high
 rate
  of requests, seeing as the payload is fairly large (it isn't like a
 simple
  http get request).



Re: redirect to backend server during initial tcp handshake

2012-01-30 Thread S Ahmed
You mentioned that people use this for media type environments, or if users
are uploading media.

But if users are uploading media (video/images), that will be a http post
request.

Didn't you say that this redir only works with http get? (http post will
still be forwarded and hence all traffic will flow through the haproxy
server which is the bottleneck in high throughput environments like image
uploading).

On Fri, Jan 6, 2012 at 1:58 AM, Willy Tarreau w...@1wt.eu wrote:

 On Wed, Jan 04, 2012 at 03:53:59PM -0500, S Ahmed wrote:
  In my situation clients will be sending 1K of data, and I believe I can
 do
  this with a http get request safely since the limit is I think 4k.

 It depends if your goal is to maintain low latency or to spread the load
 better. For 1k-requests, you probably have around 300 bytes of headers.
 Maybe it makes sense to leave that in POST so that haproxy directly sends
 the data to the server without increasing the client's latency ?

  Ok so the benefit of this approach is you get to spread the load when the
  solutions requires it, but at the cost of latency.
 
  i.e. if it take 30ms for my clients to make a http request, it will now
  take 60ms.

 Exactly. That's why it's more suited for launch pages or for large static
 files.

 Regards,
 Willy




Re: rate limiting addon

2012-01-25 Thread S Ahmed
Willy,

After thinking about it more, this is the behaviour I want.

If a user requests more than 100 requests per minute, I want to refuse
connections until the minute expires.

I don't want to delay traffic, slow it down, or ask them to resend; just
refuse and drop traffic.

The client applications are sending over status information, and this
information can be dropped/refused (as per requirements).

How do I do this with haproxy?

With these requirements, it shouldn't result in MORE network traffic,
correct? i.e. it will save bandwidth.
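A stick-table based sketch of this behaviour (untested; names are illustrative, and directive availability depends on the HAProxy version, roughly 1.5-dev onwards). Note that `http_req_rate(1m)` is a sliding one-minute window, which approximates rather than exactly matches a "reset at the top of the minute" rule:

```
frontend api_in
    bind :80
    # Track per-source-IP request rate over a 1-minute sliding window.
    stick-table type ip size 1m expire 1m store http_req_rate(1m)
    http-request track-sc0 src
    # Hard-refuse once the rate exceeds 100 requests per minute.
    http-request deny if { sc0_http_req_rate gt 100 }
    default_backend app
```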

On Sun, Jan 8, 2012 at 2:24 AM, Willy Tarreau w...@1wt.eu wrote:

 On Sat, Jan 07, 2012 at 07:11:02PM -0500, S Ahmed wrote:
  I was reading this:  http://blog.serverfault.com/2010/08/26/1016491873/
 
  A bit confused, the link to the src is version 1.5 but version 1.4 seems
 to
  have a modified date of sept 2011 while 1.5 is august 2010.

 The most recent 1.5 dates September 10th 2011 (1.5-dev7). There is no
 need to compare 1.4 and 1.5 release dates, as 1.5 is the development
 branch and 1.4 is the stable branch. So new 1.4 versions are released
 when bugs are discovered, regardless of the 1.5 development cycle.

  Is this an addon module or?  Is it being maintained?

 1.5 is still in active development, but since the development is not
 fast, we try to ensure good stability for each new development release
 so that people who rely on it can safely use it.

  Ok my main question is this:
 
  When a given ipaddress is rate limited and connections are dropped, how
  exactly is this happening?   Will it drop the connection during the TCP
  handshake and as a result only the http header will be sent from the
 client
  and not body?

 Connections are not necessarily dropped. You can drop them, you can send
 them to a specific backend, you can delay them, you have various options.
 You can match an IP's connection rate using an ACL and you can do many
 things using ACLs. What is not doable at the moment (delayed to 1.6) is
 to shape the traffic.

  I'm using this where clients are sending http requests and they don't
  really have control over it, so I have to rate limit them as best as I
 can.
  With this rate limiting, will this also save bandwidth since I don't
  have to rate limit at the application level?

 Well, you have to understand that however you do it, rate limiting does not
 save bandwidth but *increases* bandwidth usage : whether you force to send
 smaller packets or you drop and force to retransmit, in the end for the
 same
 amount of payload exchanged, more packets will have to be exchanged when
 the
 traffic is shaped. Rate limiting must not be used to save bandwidth, but to
 protect servers and to ensure a fair share of the available bandwidth
 between
 users.

  i.e. b/c at the application level I
  have to load the header and body to determine if I should reject the
  request.

 If you want to act on connections, in my opinion the best solution would
 be to only delay those who are abusing. Basically you have two server farms
 (which can be composed of the very same servers), one farm for normal users
 and the other one for abusers. The abusers' farm has a very small limit on
 concurrent connections per server. Once a user is tagged as an abuser, send
 him to this farm and he will have to share his access with the others like
 him, waiting for an available connection from a small pool. This will also
 mechanically make his connection rate fall, and allow him to access via the
 normal pool next time.

 It is also recommended to consider that above a certain rate you're facing
 a really nasty user that you want to kick off your site, and then you drop
 the connection as soon as you see him on the doorstep.
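
 [Editor's note: a sketch of this two-farm idea; backend and server names,
 addresses and thresholds are illustrative only:

    frontend fe_web
        bind :80
        stick-table type ip size 200k expire 5m store conn_rate(10s)
        tcp-request connection track-sc1 src
        acl abuser src_conn_rate gt 50
        use_backend bk_abusers if abuser
        default_backend bk_normal

    backend bk_normal
        server s1 10.0.0.1:80 check
        server s2 10.0.0.2:80 check

    backend bk_abusers
        # the very same servers, but with a tiny per-server concurrency
        # limit, so abusers queue up and share a small pool
        server s1 10.0.0.1:80 check maxconn 2
        server s2 10.0.0.2:80 check maxconn 2

 The small maxconn is what makes abusers wait, which in turn lowers their
 connection rate as described above.]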

  (Ok as I wrote this it seems the link from the blog entry is to haproxy
  itself, so this is built in the core?)

 Yes, this is built in 1.5-dev. You can download 1.5-dev7 or even the
 latest daily snapshot and you'll get that.

 Willy




Re: rate limiting addon

2012-01-08 Thread S Ahmed
Actually my requirements are as follows:

To block all further connections if they make more than 100 api calls in a
given minute.

So if they make 100 api calls in the span of 55 seconds, block all further
calls for the next 5 seconds.

Can I do this?  And if I do, it should limit my bandwidth use, correct?




Re: rate limiting addon

2012-01-08 Thread S Ahmed
I don't know the clients' IP addresses beforehand (in case that matters).

On Sun, Jan 8, 2012 at 5:24 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 You can do this with a stick-table and a conn_rate(60s) store.
 Then, with an ACL, you can trigger a decision based on the conn_rate value:
    acl abuser  src_conn_rate gt 100
    block if abuser
 So the 101st request and above in a minute would be blocked.
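
 [Editor's note: for src_conn_rate to return anything, the stick-table must
 be defined and the source tracked; a fuller hedged sketch (table size and
 server line are invented for the example):

    backend bk_api
        mode http
        stick-table type ip size 200k expire 2m store conn_rate(60s)
        tcp-request content track-sc1 src
        acl abuser src_conn_rate gt 100
        block if abuser
        server api1 10.0.0.10:8080 check

 Note that conn_rate(60s) is a sliding average over the period, so the
 exact "100 calls, then blocked for 5 seconds" behaviour is approximate.]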

 cheers


 On Sun, Jan 8, 2012 at 10:53 PM, S Ahmed sahmed1...@gmail.com wrote:
  Actually my requirements are as follows:
 
  To block all further connections if they make more than 100 api calls in
 a
  given minute.
 
  So if they make 100 api calls in the span of 55 seconds, block all
 further
  calls for the next 5 seconds.
 
  Can I do this?  And if I do, it should limit my bandwidth then correct?
 
 



simple rate limiting

2012-01-07 Thread S Ahmed
Can haproxy be used to rate limit clients by either ipaddress or
hostname(domain name)?

If yes, will it rate limit by simply streaming in the request header?

The reason I am asking: say the client's request contains 100K of HTTP POST
data. Currently, to rate limit at the application level, I have to read in
the entire 100K plus the request header. If it can be done earlier in the
request cycle, that's better.

So I want to understand: if it simply reads in the request header, will it
send some TCP-level message saying the request was denied, thus saving me
BOTH bandwidth and server load?
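
[Editor's note: HAProxy can indeed deny a request as soon as the headers
are parsed, without waiting for the body. A hedged sketch; all names,
sizes and thresholds are invented for illustration:

   frontend fe_web
       bind :80
       mode http
       stick-table type ip size 200k expire 2m store http_req_rate(60s)
       tcp-request content track-sc1 src
       acl over_limit src_http_req_rate gt 100
       # the 403 is emitted right after the headers are parsed; the POST
       # body is never forwarded to the application servers
       http-request deny if over_limit
       default_backend bk_app

This saves server load for sure; whether it saves much client-side
bandwidth depends on whether the client aborts the upload on seeing the
error.]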


rate limiting addon

2012-01-07 Thread S Ahmed
I was reading this:  http://blog.serverfault.com/2010/08/26/1016491873/

A bit confused: the link to the src is version 1.5, but version 1.4 seems to
have a modified date of Sept 2011 while 1.5 is August 2010.

Is this an addon module or?  Is it being maintained?

Ok my main question is this:

When a given IP address is rate limited and connections are dropped, how
exactly does this happen? Will it drop the connection during the TCP
handshake, so that only the HTTP header is sent from the client and not the
body?

I'm using this where clients are sending HTTP requests and they don't
really have control over it, so I have to rate limit them as best as I can.
With this rate limiting, will this also save bandwidth, since I don't have
to rate limit at the application level? I.e., because at the application
level I have to load the header and body to determine if I should reject
the request.


(Ok as I wrote this it seems the link from the blog entry is to haproxy
itself, so this is built in the core?)

Thanks for the clarifications!


Re: simple rate limiting

2012-01-07 Thread S Ahmed
Are stick tables the same thing as this?  http://haproxy.1wt.eu/download/


On Sat, Jan 7, 2012 at 7:45 AM, Baptiste bed...@gmail.com wrote:

 Hi,

 Using stick tables, you can limit a source IP address based on
 different criteria: bytes in/out, req rate, etc

 If you want to apply this to some domains or URLs, just use content
 switching and apply this limitation on a backend only.
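
 [Editor's note: a sketch of that content-switching approach; host names
 and backend names are invented for the example:

    frontend fe_main
        bind :80
        mode http
        # only requests for this host get the rate-limited backend
        acl is_api hdr(host) -i api.example.com
        use_backend bk_api_limited if is_api
        default_backend bk_web

 The stick-table and its limiting ACLs then live only in bk_api_limited,
 so traffic for other domains is unaffected.]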

 Cheers


 On Sat, Jan 7, 2012 at 1:16 PM, S Ahmed sahmed1...@gmail.com wrote:
  Can haproxy be used to rate limit clients by either ipaddress or
  hostname(domain name)?
 
  If yes, will it rate limit by simply streaming in the request header?
 
  The reason I am asking: say the client's request contains 100K of HTTP
  POST data. Currently, to rate limit at the application level, I have to
  read in the entire 100K plus the request header. If it can be done
  earlier in the request cycle, that's better.
 
  So I want to understand: if it simply reads in the request header, will
  it send some TCP-level message saying the request was denied, thus saving
  me BOTH bandwidth and server load?



Re: redirect to backend server during initial tcp handshake

2012-01-04 Thread S Ahmed
In my situation clients will be sending 1K of data, and I believe I can do
this with an HTTP GET request safely, since the URL length limit is, I
think, 4K.

Ok so the benefit of this approach is you get to spread the load when the
solutions requires it, but at the cost of latency.

I.e., if it takes 30 ms for my clients to make an HTTP request, it will now
take 60 ms.

 Great!
  BTW, will this increase latency at all? i.e. extra round trip?

 Yes, one GET from the client to haproxy, and another from the client to
 the server, then the client usually stays on the server.

  So if it is a POST, then all traffic will still have to go through the
  HAProxy server correct?

 Correct.

  How could I round robin (or other smarter load balancing) between 5
  servers using redir?

 You simply have to use the roundrobin balance algorithm :

   balance roundrobin
   server s1 1.1.1.1:80 redir http://image1.mydomain check
   server s2 1.1.1.2:80 redir http://image2.mydomain check
   server s3 1.1.1.3:80 redir http://image3.mydomain check

 Don't use leastconn as haproxy will not see the active connections on the
 servers. But roundrobin, source, url, hdr, ... are fine.

 Regards,
 Willy




how do people have multiple haproxy servers?

2012-01-04 Thread S Ahmed
How is it possible for a single domain like www.example.com, which maps to a
public IP of xx.xx.xx.xx, to work when there are multiple HAProxy servers?

The only way I can think of is if somehow your domain name maps to 2
different IP addresses (if you have 2 HAProxy servers).

I'm not talking about a single front-end haproxy which then further proxies
to another set of haproxy servers.

Is it possible to have multiple haproxy servers that are round robined?
If so, how?


Re: how do people have multiple haproxy servers?

2012-01-04 Thread S Ahmed
I see, thanks!

what's the fancier way? :)

On Wed, Jan 4, 2012 at 10:12 PM, David Birdsong david.birds...@gmail.com wrote:

 There are simple ways and big fancy ways.

 I'd recommend a simple way to start out. DNS can serve more than 1 ip
 address for a single name via 2 or more A records.
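
 [Editor's note: for example, a zone file with two A records for the same
 name; the addresses are illustrative:

    www.example.com.  300  IN  A  192.0.2.10   ; haproxy box 1
    www.example.com.  300  IN  A  192.0.2.11   ; haproxy box 2

 Most resolvers rotate the answer order, so clients roughly spread across
 both HAProxy servers.]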

 On Wed, Jan 4, 2012 at 6:48 PM, S Ahmed sahmed1...@gmail.com wrote:
  How is it possible for a single domain like www.example.com, which maps
  to a public IP of xx.xx.xx.xx, to work when there are multiple HAProxy
  servers?
 
  The only way I can think of is if somehow your domain name maps to 2
  different IP addresses (if you have 2 HAProxy servers).
 
  I'm not talking about a single front-end haproxy which then further
 proxies
  to another set of haproxy servers.
 
  Is it possible to have multiple haproxy servers that are round robined?
   If
  so, how?
 
 



redirect to backend server during initial tcp handshake

2012-01-03 Thread S Ahmed
Hi,

People access my web service using:

services.example.com/some/path

I am planning on having the url services.example.com map to my HAProxy
server.

Now I have 5 backend servers that I want HAProxy to redirect requests to.

Clients will make a single http GET request and their session will end.

So I don't want any traffic to actually go through the HAProxy server; I
want it to do this:

1. client makes a http request to services.example.com/some/path
2. haproxy then chooses a backend server, and tells the client to continue
the http request with:
   services4.example.com (or services{2,3,4,5})

Is this possible?

The 5 backend services would obviously have to be publicly accessible.


The reason is, I plan on having 1-gigabit connections to my servers, and if
all traffic goes through my HAProxy it won't be able to handle the
throughput, since it is only 1 gigabit. If what I outlined above is
possible, I can push 5 gigabits.

Thanks in advance!