Re: How to track 503's
I would want to route all traffic for a given domain (assuming filtering on the Host header).

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257

On 2/28/15, 12:22 AM, Baptiste <bed...@gmail.com> wrote:

On Fri, Feb 27, 2015 at 8:23 PM, Daniel Dubovik <ddubo...@godaddy.com> wrote:

Hello all! I am wanting to use HAProxy to detect if I receive a certain status code from a backend web server (say, a 503 error) while processing a request. (...)

Hi Daniel,

Something that isn't clear in your request: do you want to route ALL traffic after an error, or only the traffic from a single user?

You may use 'stick store-response' when an error is returned by the server, and check for it when traffic comes in with the in_table fetch. This may require you to switch to HAProxy 1.6-dev.

Baptiste
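Baptiste's 'stick store-response' idea could be sketched roughly as below. This is an untested illustration only: the proxy names, addresses, table sizes, and expiry are invented, and as Baptiste notes, the response-side store plus the in_table check may require HAProxy 1.6-dev.

```
frontend ft_web
    bind :80
    # Hosts that recently drew a 503 get sent to the fallback farm
    use_backend bk_fallback if { req.hdr(host),in_table(bk_primary) }
    default_backend bk_primary

backend bk_primary
    stick-table type string len 50 size 100k expire 5m
    # remember the Host header of any transaction the server answered with a 503
    stick store-response req.hdr(host) if { status 503 }
    server web1 10.0.0.1:80 check

backend bk_fallback
    balance roundrobin
    server spare1 10.0.0.2:80 check
    server spare2 10.0.0.3:80 check
```

The expire value bounds how long a domain stays on the fallback farm after its last error; tune it to how quickly you want traffic to flow back to the primary servers.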
How to track 503's
Hello all!

I am wanting to use HAProxy to detect if I receive a certain status code from a backend web server (say, a 503 error or some such) while processing a request. If I do receive it, I want to track the request so that subsequent requests to the same domain behave differently (specifically, go to a different backend with a different load balancing method, or to different servers that can handle the load).

Is there a way I can do this in HAProxy? Stick-tables don't let me track requests based on the response, only on the request information, so that doesn't seem like it would work, but it seems like the only place this would fit.

Thanks!
Dan
Re: 1.5.9 crashes every 4 hours, like clockwork
Did some digging, and I found this article: http://blog.tinola.com/?e=36

It could be related to the issue you are experiencing, especially since just before the SIGABRT the process is trying to do a hostname resolution but can't, because it's in a chroot (the reason you get all the "No such file or directory" responses is that those files don't exist in the chroot). If you remove the "chroot xxx" line from your haproxy config, does the problem go away?

With the time frames being exactly 4 hours apart, is it possible you have some external software (monitors or the like?) that hits the server every 4 hours?

Thanks!
Dan Dubovik
Senior Linux Systems Engineer

From: David Adams <dr...@yahoo.com>
Date: Thursday, December 11, 2014 at 7:08 PM
To: Lukas Tribus <luky...@hotmail.com>, Tait Clarridge <t...@clarridge.ca>
Cc: HAProxy Mailing Lists <haproxy@formilux.org>
Subject: Re: 1.5.9 crashes every 4 hours, like clockwork

I ran strace on it just before CRASHTIME. It stopped on cue, with an exit code of 134. The strace output is here: http://pastebin.com/VLxwDDwj

As you'll see, it looks very strange: immediately after a series of futex calls (I've no idea of their significance, only noting that they don't appear in the strace at other times), the system returns a number of "No such file or directory" errors on a variety of system files, despite them being present before and after.

Despite setting the ulimit in the session before starting haproxy, no coredump was generated. I notice I can deploy haproxy without futex support. Is that worth a try?

Many thanks to all those helping me sort this out.

On Thursday, 11 December 2014, 17:45, Lukas Tribus <luky...@hotmail.com> wrote:

I will do next time. And yes, I was planning to run strace. Do I need to recompile to enable coredumps?
No, you just adjust ulimit before you start, and make sure you didn't strip (as in the command strip) the executable. Then check the core with:

    gdb path/to/the/binary path/to/the/core

and do something like "bt" or "bt full" (within gdb).

Regards,
Lukas
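Lukas's core-dump recipe, written out as a shell sketch (the haproxy binary and core file paths are placeholders for your own):

```shell
# Allow core dumps in the shell that will launch haproxy.
ulimit -c unlimited
ulimit -c    # prints the new soft limit, to verify it took effect

# After the next crash, load the core into gdb and grab a backtrace:
#   gdb /usr/sbin/haproxy /path/to/core
#   (gdb) bt full
```

Note that the ulimit change only applies to the current shell and its children, so haproxy must be started from that same session.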
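If the chroot theory from earlier in the thread holds, one hedged workaround (besides dropping the chroot line entirely) is to give libc's resolver the files it expects inside the chroot. Everything below is an assumption: the real directory must match the path on your config's "chroot" line, and a temp directory is used here only as a stand-in.

```shell
# Stand-in for the directory on your "chroot" line (e.g. /var/lib/haproxy).
CHROOT="$(mktemp -d)"

# Resolver files glibc consults for hostname lookups; without them inside
# the chroot, resolution fails with "No such file or directory".
mkdir -p "$CHROOT/etc"
for f in /etc/resolv.conf /etc/hosts /etc/nsswitch.conf; do
    if [ -f "$f" ]; then
        cp "$f" "$CHROOT/etc/"
    fi
done
ls "$CHROOT/etc"
```

These copies go stale if the originals change, so configs that avoid runtime resolution altogether (IP addresses instead of hostnames) are the more robust fix.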
Re: 1.5.9 crashes every 4 hours, like clockwork
And my email apparently hadn't been updating all day :/

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257

From: Daniel Dubovik <ddubo...@godaddy.com>
Date: Friday, December 12, 2014 at 2:09 PM
To: David Adams <dr...@yahoo.com>
Cc: HAProxy Mailing Lists <haproxy@formilux.org>
Subject: Re: 1.5.9 crashes every 4 hours, like clockwork

Did some digging, and I did find this article: http://blog.tinola.com/?e=36 (...)
Re: Stick-tables with roundrobin backend
Hmmm, I pared the config down even more, and it seems to be working now. Let me play around with it a bit to see what the difference is. For the record, in my current config I do not have http-server-close set. I'm wondering if it's been working all along and my initial validation was wrong. I'll report back if I find anything amiss.

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257

On 11/25/14, 3:56 AM, Daniel Dubovik <ddubo...@godaddy.com> wrote:

I added option http-server-close to all backends (both the hdr(Host) balanced one and the roundrobin one), and the behavior is the same. (...)

On 11/24/14, 10:35 PM, Baptiste <bed...@gmail.com> wrote:

Hi Daniel,

Can you give a try to option http-server-close in your roundrobin backend?

Baptiste
Re: Stick-tables with roundrobin backend
To close the loop on this one, the issue was in part with my testing. Ultimately the fix was to use "stick store-request" everywhere, instead of "stick on".

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257

On 11/25/14, 2:05 PM, Daniel Dubovik <ddubo...@godaddy.com> wrote:

Hmmm, I pared the config down even more, and it seems to be working now. (...)
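The fix makes sense given how the two stick directives differ: "stick on" both stores and matches — a matched entry pins the request to the stored server_id, overriding the balance algorithm — while "stick store-request" only writes the entry and never matches it, so roundrobin still chooses the server. The working shape of the heavy backend therefore reduces to the sketch below (trimmed from the config in this thread; the track lines are omitted for brevity):

```
backend varnish_heavy
    balance roundrobin
    # Store the Host for rate tracking, but never match on it,
    # so roundrobin still picks the server.
    stick store-request hdr(Host) table ft_web
    http-response set-header X-HEAVYSITE 1
    server varnishserver01 10.11.12.13:80 weight 1
    server varnishserver02 10.11.12.14:80 weight 1
```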
Stick-tables with roundrobin backend
Hey all!

We have a cluster of HAProxy servers in front of a set of Varnish nodes. Currently, we have HAProxy set to load balance traffic to a given Varnish server based on the Host header. Some of our sites have enough traffic that it warrants round-robining their traffic to multiple Varnish servers. I've been looking into using stick-tables for connection tracking, and seem to have run into a wall, so I'm hoping someone here can help. Relevant config follows:

frontend ft_web
    option forwardfor
    stats enable
    stats scope .
    bind :80
    # Setup our stick-table and connection rate tracking
    stick-table type string len 50 size 1000k expire 5m peers loadbalancers store http_req_rate(10s)
    tcp-request inspect-delay 5s
    tcp-request content track-sc1 hdr(Host)
    acl heavy_hitters sc1_http_req_rate gt 5
    # Heavy site only
    use_backend varnish_heavy if heavy_hitters
    # Just a standard http request, with no special options
    default_backend varnish

backend varnish_heavy
    stats enable
    stats scope .
    balance roundrobin
    # Add connection tracking
    #stick on hdr(Host) table ft_web
    stick store-request hdr(Host) table ft_web
    http-response set-header X-HEAVYSITE 1
    server varnishserver01 10.11.12.13:80 weight 1 track check_servers/varnishserver01-track
    server varnishserver02 10.11.12.14:80 weight 1 track check_servers/varnishserver02-track

backend varnish
    stats enable
    stats scope .
    balance hdr(Host)          # Balance based on requested host
    hash-type consistent djb2
    # Add connection tracking
    #stick on hdr(Host) table ft_web
    stick store-request hdr(Host) table ft_web
    server varnishserver01 10.11.12.13:80 weight 1 track check_servers/varnishserver01-track
    server varnishserver02 10.11.12.14:80 weight 1 track check_servers/varnishserver02-track

Note: I tried both "stick on" and "stick store-request", and both had the same behavior.

What I've found is that the backend selection works: we trigger the rate limit, and I see the X-HEAVYSITE header, so I know that part is working. The trouble is that the balance algorithm is being ignored, and we are still pinning sites to the same server. Is there a way for me to have it honor the balance algorithm (roundrobin in this case) for requests in a stick-table, and not use the server_id value to auto-determine the server to use?

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257
Re: Stick-tables with roundrobin backend
I added option http-server-close to all backends (both the hdr(Host) balanced one and the roundrobin one), and the behavior is the same. Stats output showing the table contents is:

echo 'show table ft_web' | socat /var/run/haproxy.sock stdio
# table: ft_web, type: string, size:1024000, used:1
0x2547540: key=a.com use=0 exp=244775 server_id=2 http_req_rate(1)=0

Is there a way to unset the server_id? Or is that a required key for the table entry? I know the doc notes that field is enabled by default.

Thanks!
— Dan.

On 11/24/14, 10:35 PM, Baptiste <bed...@gmail.com> wrote:

On Mon, Nov 24, 2014 at 11:08 PM, Daniel Dubovik <ddubo...@godaddy.com> wrote:

Hey all! We have a cluster of HAProxy servers, in front of a set of Varnish nodes. (...)

Hi Daniel,

Can you give a try to option http-server-close in your roundrobin backend?

Baptiste
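On the server_id question: individual entries (server_id included) can also be inspected and dropped by hand over the same stats socket. A sketch, reusing the socket path and key from the output above; note that "clear table ... key ..." only removes entries whose use count is 0:

```
echo 'show table ft_web' | socat /var/run/haproxy.sock stdio
echo 'clear table ft_web key a.com' | socat /var/run/haproxy.sock stdio
```

This clears the whole entry rather than unsetting server_id alone; the stored server_id only matters if a "stick on" (match) rule consumes it.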
can't identify protocol after reload
Hello all! We have HAProxy up and running now, and I have a few questions I'm hoping someone can help me with. To start, we are running HAProxy 1.5.1 (will be updating soon to 1.5.3) on CentOS 6.5.

What is a safe limit to have maxconn set to? We have 10Gb NICs, currently hitting a max of ~60k connections and about 1.27Gb/s of throughput. Is there an upper limit I should avoid setting maxconn to? These servers are intended to be workhorses and are pretty much only running HAProxy and a few helper services, so I'm not too worried about HAProxy consuming all the resources on the machines.

Second question: when we reload HAProxy, the older process never seems to go away. Looking at lsof output, it gives the following:

[root@p3nlwpproxy001 ~]# lsof -p 26853
COMMAND   PID   USER    FD    TYPE DEVICE          SIZE/OFF NODE       NAME
haproxy 26853 nobody   cwd     DIR 253,0               4096 662171     /usr/share/haproxy
haproxy 26853 nobody   rtd     DIR 253,0               4096 662171     /usr/share/haproxy
haproxy 26853 nobody   txt     REG 253,0             724368 396188     /usr/sbin/haproxy
haproxy 26853 nobody   mem     REG 253,0             156928 2490761    /lib64/ld-2.12.so
haproxy 26853 nobody   mem     REG 253,0            1926800 2490762    /lib64/libc-2.12.so
haproxy 26853 nobody   mem     REG 253,0              22536 2490588    /lib64/libdl-2.12.so
haproxy 26853 nobody   mem     REG 253,0             145896 2490763    /lib64/libpthread-2.12.so
haproxy 26853 nobody   mem     REG 253,0              91096 2490771    /lib64/libz.so.1.2.3
haproxy 26853 nobody   mem     REG 253,0               9656 397429     /usr/lib64/libpcreposix.so.0.0.0
haproxy 26853 nobody   mem     REG 253,0             124624 2490772    /lib64/libselinux.so.1
haproxy 26853 nobody   mem     REG 253,0             113952 2490587    /lib64/libresolv-2.12.so
haproxy 26853 nobody   mem     REG 253,0            1953536 397603     /usr/lib64/libcrypto.so.1.0.1e
haproxy 26853 nobody   mem     REG 253,0              43392 2490776    /lib64/libcrypt-2.12.so
haproxy 26853 nobody   mem     REG 253,0             472064 2490775    /lib64/libfreebl3.so
haproxy 26853 nobody   mem     REG 253,0              17256 2490786    /lib64/libcom_err.so.2.1
haproxy 26853 nobody   mem     REG 253,0              46368 2490784    /lib64/libkrb5support.so.0.1
haproxy 26853 nobody   mem     REG 253,0             177520 2490785    /lib64/libk5crypto.so.3.1
haproxy 26853 nobody   mem     REG 253,0             944712 2490788    /lib64/libkrb5.so.3.3
haproxy 26853 nobody   mem     REG 253,0              12592 2490410    /lib64/libkeyutils.so.1.3
haproxy 26853 nobody   mem     REG 253,0             280520 2490789    /lib64/libgssapi_krb5.so.2.2
haproxy 26853 nobody   mem     REG 253,0             183816 2490701    /lib64/libpcre.so.0.0.1
haproxy 26853 nobody   mem     REG 253,0             444040 395978     /usr/lib64/libssl.so.1.0.1e
haproxy 26853 nobody    0u     REG 0,9                    0 3840       anon_inode
haproxy 26853 nobody    5u    unix 0x88081cb61680       0t0 1151268926 socket
haproxy 26853 nobody 3072u    sock 0,6                  0t0 2119378753 can't identify protocol
haproxy 26853 nobody 5538u    sock 0,6                  0t0 2796993928 can't identify protocol
haproxy 26853 nobody *651u    sock 0,6                  0t0 2881376906 can't identify protocol
haproxy 26853 nobody *250u    sock 0,6                  0t0 1594166923 can't identify protocol
haproxy 26853 nobody *096u    sock 0,6                  0t0 133637843  can't identify protocol
haproxy 26853 nobody *791u    sock 0,6                  0t0 2132297198 can't identify protocol
haproxy 26853 nobody *245u    sock 0,6                  0t0 4127967966 can't identify protocol
haproxy 26853 nobody *567u    sock 0,6                  0t0 2957000232 can't identify protocol
haproxy 26853 nobody *763u    sock 0,6                  0t0 215548265  can't identify protocol
haproxy 26853 nobody *505u    sock 0,6                  0t0 1894363619 can't identify protocol
haproxy 26853 nobody *513u    sock 0,6                  0t0 1763061211 can't identify protocol
haproxy 26853 nobody *377u    sock 0,6                  0t0 3158442296 can't identify protocol

Wondering if anyone has seen this, or knows what we can do about the seemingly hung connections. They seem to be what is keeping the old process around.

Thanks!
Dan Dubovik
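To a degree, the lingering old process is expected: on a soft reload (haproxy -sf <oldpid>) the previous process stops listening but keeps serving its established connections until they close, so leaked or idle sockets can keep it alive indefinitely. One hedged mitigation is to make sure every connection has a bounded lifetime; the sketch below uses illustrative values, not recommendations:

```
defaults
    timeout connect         5s
    timeout client          50s
    timeout server          50s
    # reap idle keep-alive connections so a drained old process can exit
    timeout http-keep-alive 10s
    timeout http-request    10s
```

With these in place, an old process after a reload should disappear within roughly the longest configured timeout once its last active connection finishes.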
Re: POST with x-www-form-urlencoded Content-Type
Hi Willy,

I built a new package with the patch, and my test cases are passing now. Just wanted to say thanks for the super quick turnaround on this issue!

Thanks!
Dan Dubovik
Senior Linux Systems Engineer
480-505-8800 x4257

On 7/10/14 10:34 AM, Willy Tarreau <w...@1wt.eu> wrote:

Hi Dan,

On Thu, Jul 10, 2014 at 05:20:18PM +0200, Willy Tarreau wrote:

On Wed, Jul 09, 2014 at 07:13:33PM +, Daniel Dubovik wrote:

Hello all, I am attempting to balance traffic to a number of backend instances. I am balancing based off the Host header, and for the most part everything is working. When testing a bit more today, I came across some weird behavior, and am hoping someone can help out. When POSTing to a site, if it is done using the Content-Type application/x-www-form-urlencoded, and has actual data, HAProxy falls back to a roundrobin balancing scheme. POSTing using a Content-Type of multipart/form-data, however, works just fine. Oddly, application/x-www-form-urlencoded with no actual data also works as expected. (...)

There's indeed a bug: the amount of data forwarded is not deduced correctly to rewind the buffer. I'm even wondering if it's expected that we let them pass at this point. I'm investigating, thanks for your report!

OK, I could fix it. The patch is very small, but it required some extra care because that's a sensitive area that I already fixed in dev23, but not enough. Other balancing algorithms are affected, and worse, http-send-name-header was bogus as well in this case. I've applied the fix; I'm attaching it here. It applies both to 1.5 and to 1.6.

Thanks for your report, that was a nasty one and I'm glad we got rid of it early!

Willy
POST with x-www-form-urlencoded Content-Type
Hello all,

I am attempting to balance traffic to a number of backend instances. I am balancing based off the Host header, and for the most part everything is working. When testing a bit more today, I came across some weird behavior, and am hoping someone can help out.

When POSTing to a site using the Content-Type application/x-www-form-urlencoded with actual data, HAProxy falls back to a roundrobin balancing scheme. POSTing using a Content-Type of multipart/form-data, however, works just fine. Oddly, application/x-www-form-urlencoded with no actual data also works as expected.

Log line I receive when posting with data, using multipart/form-data:

Jul 9 11:40:06 xxx haproxy[28084]: 172.19.46.89:52564 [09/Jul/2014:11:40:06.238] fromvarnish fromvarnish/port_10945 0/0/0/45/45 200 426 - - 2/2/0/1/0 0/0 {web.2014-07-09-08-28-39.xxx.com|multipart/form-data; boundary=155819574760c61e} "POST /posttome.php HTTP/1.1"

Note: it picked the backend I would expect. Curl command used to generate the above log:

curl -i -F submit=submit -F firstname= -H "Host: web.2014-07-09-08-28-39.egyitnews.mobi" http://10.224.67.9/posttome.php

Log line I receive when posting without data, using application/x-www-form-urlencoded:

Jul 9 11:41:11 xxx haproxy[28084]: 172.19.46.89:52572 [09/Jul/2014:11:41:11.457] fromvarnish fromvarnish/port_10945 0/0/0/2/2 200 401 - - 2/2/0/1/0 0/0 {web.2014-07-09-08-28-39.xx.com|application/x-www-form-urlencoded} "POST /posttome.php HTTP/1.1"

Note: the same backend is picked. This is the ideal behavior. Curl command used to generate the above log:

curl -i --data-urlencode '' -H "Host: web.2014-07-09-08-28-39.egyitnews.mobi" http://10.224.67.9/posttome.php

Log lines I receive when posting data, using application/x-www-form-urlencoded:

Jul 9 11:40:39 xxx haproxy[28084]: 172.19.46.89:52569 [09/Jul/2014:11:40:39.635] fromvarnish fromvarnish/port_10004 0/0/0/1/1 200 330 - - 2/2/0/1/0 0/0 {web.2014-07-09-08-28-39.xx.com|application/x-www-form-urlencoded} "POST /posttome.php HTTP/1.1"
Jul 9 11:46:29 xxx haproxy[28084]: 172.19.46.89:52597 [09/Jul/2014:11:46:29.703] fromvarnish fromvarnish/port_10005 0/0/0/1/1 200 330 - - 2/2/0/1/0 0/0 {web.2014-07-09-08-28-39.xx.com|application/x-www-form-urlencoded} "POST /posttome.php HTTP/1.1"
Jul 9 11:46:36 xxx haproxy[28084]: 172.19.46.89:52600 [09/Jul/2014:11:46:36.829] fromvarnish fromvarnish/port_10006 0/0/0/1/1 200 330 - - 2/2/0/1/0 0/0 {web.2014-07-09-08-28-39.xx.com|application/x-www-form-urlencoded} "POST /posttome.php HTTP/1.1"

Note: in this case it picked a different backend each time, in a roundrobin manner. Curl command used to generate the above log:

curl -i --data-urlencode a=b -H "Host: web.2014-07-09-08-28-39.egyitnews.mobi" http://10.224.67.9/posttome.php

My configuration is as follows:

global
    log /dev/log local0 debug
    log 127.0.0.1 local1 notice
    maxconn 4096
    uid 99
    gid 99
    #daemon
    debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    #retries 3
    #option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 5
    timeout server 5

listen stats
    bind :81
    stats enable
    stats uri /

listen fromvarnish
    capture request header Host len 63
    capture request header Content-Type len 100
    option http-keep-alive
    #frontend fromvarnish
    bind *:80
    # default_backend toapache
    #backend toapache
    option forwardfor
    balance hdr(Host)
    hash-type map-based djb2
    server port_1     10.224.67.9:1
    server port_10001 10.224.67.9:10001
    server port_10002 10.224.67.9:10002
    server port_10003 10.224.67.9:10003
    server port_10004 10.224.67.9:10004
    . . .
    server port_10996 10.224.67.9:10996
    server port_10997 10.224.67.9:10997
    server port_10998 10.224.67.9:10998
    server port_10999 10.224.67.9:10999
    http-response set-header X-Port %s

I disabled retries and redispatch, thinking that there was some issue on the backend. I tried it using either a listen section or a frontend-backend pair, and receive the same results. I attempted to look at an strace of the two x-www-form-urlencoded behaviors (with and without content); the results of that are:

12:01:42.011403 recvfrom(2, "POST /posttome.php HTTP/1.1\r\nUser-Agent: curl/7.35.0\r\nAccept: */*\r\nHost: web.2014-07-09-08-28-39.egyitnews.mobi\r\nContent-Length: 3\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\na=b", 8192, 0, NULL, NULL) = 186
12:01:42.011804 sendto(3, "POST /posttome.php HTTP/1.1\r\nUser-Agent: curl/7.35.0\r\nAccept: */*\r\nHost: web.2014-07-09-08-28-39.egyitnews.mobi\r\nContent-Length: 3\r\nContent-Type:
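The "hash-type map-based djb2" line in the config above maps each Host header to a fixed server slot, which is why the same Host should always reach the same port_NNNNN backend. A rough illustration of what map-based djb2 hashing does (not HAProxy's exact internals — HAProxy may additionally apply an avalanche step, and it only counts usable servers):

```shell
# Illustrative djb2 hash of a Host header value, reduced modulo the
# number of servers, the way a map-based hash selects a slot.
djb2_mod() {
    printf '%s\n' "$1" | awk -v n="$2" '
        BEGIN { for (i = 1; i < 256; i++) ord[sprintf("%c", i)] = i }
        {
            h = 5381
            for (i = 1; i <= length($0); i++)
                h = (h * 33 + ord[substr($0, i, 1)]) % 4294967296
            print h % n
        }'
}

djb2_mod "web.2014-07-09-08-28-39.egyitnews.mobi" 1000   # same input, same slot
```

With map-based hashing the slot is the hash modulo the number of usable servers, so the mapping shifts whenever servers come or go; the "hash-type consistent" variant seen in the stick-table thread avoids most of that remapping.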