Re: Understanding how to use rise and fall settings on a dead server
Hey,

In my opinion it's easier to replace the NIC, since your intermittent issues may happen only under load. I can't see anything you can currently do in HAProxy to avoid this behavior. I mean that the NIC's intermittent issues may happen under load, so haproxy would consider the server down; once the load decreases, the NIC works properly again and haproxy would consider the server operational, and so on for a long time. Don't try to hide a hardware issue behind a load-balancer...

cheers

On Thu, Feb 2, 2012 at 4:28 AM, John Clegg j...@dashtickets.co.nz wrote:
Hi, I'm trying to understand how to ensure that a backend server which is failing and classified as dead stays dead. I've just had an incident on another server, behind another load-balancer, where the NIC was failing intermittently and caused the load-balancer to flap constantly. I would like to set a threshold so that if the back-end service fails and is marked dead, it stays dead and has to be manually re-added to the load-balancer. I'm trying to understand how the rise and fall settings (plus other config settings) can achieve this, or whether there is another approach. Any ideas would be appreciated.
Regards, John
--
John Clegg
Dash Tickets
http://www.dashtickets.co.nz
Re: Multiple haproxy instances and server connection limit
Hi,

There is no way to do this. Even stick tables wouldn't help, since only the entries of a table are synchronized, not the counters associated with each of them. sorry.

cheers

On Thu, Feb 2, 2012 at 10:15 AM, Mariusz Gronczewski xani...@gmail.com wrote:
Hi, we have 2 haproxy servers in an active-active configuration and I'm wondering if it's possible to set them up so that the maxconn for each backend server is shared between them, i.e.:
1. If both balancers are up, each server has maxconn 50.
2. If only one is up (the other dead / in maintenance), the limit changes to 100.
Is there any way to do this without restarting haproxy and loading a new config?
Regards,
--
Mariusz Gronczewski
questions
Hello, I'm trying to set up an apache cluster with haproxy. The question is: is it necessary for the apache nodes to be in the same subnet as the loadbalancer nodes?
--
Sincerely yours,
Fraj KALLEL.
Re: questions
Hi,

There is no such requirement. HAProxy can be installed either in the same subnet or in a different one with no issues at all: it works as a reverse proxy in either case.

cheers

On Thu, Feb 2, 2012 at 3:52 PM, Fraj KALLEL frajkal...@gmail.com wrote: [...]
Re: questions
2012/2/2 Fraj KALLEL frajkal...@gmail.com wrote: [...]

No, it isn't necessary. It is enough that haproxy can connect to the apache servers.

--
Türker Sezer
TS Design Informatics LTD.
http://www.tsdesign.info/
Re: Help determining where the bottleneck is
Thanks for the response. The stats were lagging, actually; we determined that the bottleneck was before HAProxy (it ended up being the IPS in front of the network). However, our linux guy suggested the following sysctl changes to enhance throughput, which I will share here:

net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_synack_retries = 2
net.core.somaxconn = 6
net.core.netdev_max_backlog = 1

On Sun, Jan 29, 2012 at 5:26 AM, Willy Tarreau w...@1wt.eu wrote:
Hi Steve,
On Tue, Jan 24, 2012 at 08:55:15AM -0800, Steve V wrote:
Good morning, Much love for haproxy and many thanks to all who have worked on and contributed to it. We have been using it for several years without issue. However, we have been doing load testing lately and there appears to be a bottleneck. It may not even have anything to do with haproxy (I don't think it does), but I need to double-check anyway just to be thorough and cover all our bases.
Hardware: VM running on ESXi, with 2 GB of RAM and 2 CPUs allocated.
Guest OS: CentOS 5.
Haproxy version: 1.4.8 (however, we just upgraded to 1.4.19 last night).
Problem: second_proxy is getting hammered by a load test, and site performance decreases to the point where the site is barely usable and the majority of pages time out. However, go to a different site in the same haproxy config, listening on http_proxy and going to the same backend server, and that site comes up fine and fast. It seems like something is being throttled or queued somewhere. It's possible that it could be an issue behind haproxy on the app servers, but I just want to make sure there is nothing I need to tweak in my config.
Here is a snapshot of the haproxy stats page for the slow pool second_proxy: http://tinypic.com/r/15887qf/5
Did you tune any sysctl on your system?
Your snapshot reports a peak of 1600 conns/second, but the default kernel settings (somaxconn 128 and tcp_max_syn_backlog 1024) make this hard to reach, so it's very possible that the socket queue is simply full. I'm used to setting both between 10000 and 20000 with good success.

There is something you can try to detect whether haproxy still accepts connections fine: simply try to connect to the stats URL on the unresponsive port. If the stats display properly, then you're stuck on the servers. If the stats do not respond either, then the connection is not being accepted.

Be careful: you have no maxconn setting in the defaults section, and by default a listen uses 2000. I see that your snapshot indicates this limit was not reached; still, I wanted to let you know it's going to be the next issue once this one is resolved.

here is my haproxy.cfg
global
maxconn 8096
daemon
nbproc 1
stats socket /var/run/haproxy.stat
defaults
clitimeout 600000
srvtimeout 600000

Do you realize that this is 10 minutes (we're speaking HTTP here)?

Regards,
Willy
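Willy's queue-size suggestion above would land in /etc/sysctl.conf as something like the fragment below. The exact figures are illustrative (anything comfortably above the observed 1600 conns/s peak), not values taken from the thread:

```
# raise the accept queue and the SYN backlog well above their small
# defaults (128 and 1024); figures are examples only
net.core.somaxconn = 10000
net.ipv4.tcp_max_syn_backlog = 10000
```

These take effect after `sysctl -p`. Note that a listener only benefits if it also requests a large backlog in its listen() call; haproxy derives this from its own maxconn/backlog settings, so somaxconn alone changes nothing.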
[no subject]
Hi, I'm a student in school and your proxy sites are really useful. Can you please send me updates on the sites, because they seem to block all the ones we knew. Thank you
haproxy converts non-standard/extended 5xx error codes to 502
Our web servers (behind haproxy) issue 5xx error codes to pass information back to our client application on how to handle future requests. We use standard codes like 500, 501 and 502, but also some non-standard ones, 512 and 513. It seems that haproxy converts these non-standard error codes to 502 instead of passing them through as-is. Is there a way to configure/tell haproxy to pass error codes like these through without converting them? TIA
-Mark
Re: Understanding how to use rise and fall settings on a dead server
That's the problem. We did have a hardware issue on the NIC, and the load balancer kept putting the server live / dead all the time. Unfortunately the NIC wasn't monitored, so we had no idea it was flapping. (It is now monitored.)

My issue is that the NIC flapping caused the load balancer to flap, and that caused a lot of customer issues which impacted revenue for the business :-( It would have been better for that server to have been marked dead, with the other servers under the load balancer taking over. My reasoning is that if there is a weird hardware / server fault, or a server flaps for too long / too many times, I want the load balancer to mark the server dead. I can then pick that up with my monitoring. Or is there a better strategy?

John

On Fri, Feb 3, 2012 at 1:43 AM, Baptiste bed...@gmail.com wrote: [...]

--
John Clegg
Chief Technical Officer
Dash Tickets
Phone: +64 4 831 5480 x805
http://www.dashtickets.co.nz
Re: Understanding how to use rise and fall settings on a dead server
Hi,

Your request is legitimate, but I don't see how to do this with HAProxy itself: you need a third-party script. What I would do is enable HAProxy logging and configure my syslog to separate legitimate traffic from server status changes. That way, a third-party script can easily tell whether a server is flapping, and can then disable the flapping server through the haproxy stats socket.

cheers

On Thu, Feb 2, 2012 at 9:24 PM, John Clegg j...@dashtickets.co.nz wrote: [...]
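The third-party script Baptiste sketches could look roughly like this. Everything here (the log-line pattern, the flap limit, the file and socket paths, the backend/server names) is an assumption for illustration, not something taken from the thread:

```shell
#!/bin/sh
# flap_guard.sh: rough sketch of a flap detector for haproxy logs.
# Reads haproxy syslog lines on stdin, counts "is DOWN" events per
# server, and prints a stats-socket "disable server" command for any
# server that went down more often than FLAP_LIMIT.
FLAP_LIMIT=3

flap_check() {
  # isolate "Server <backend>/<name> is DOWN" events, count per server,
  # and emit one disable command per server above the limit
  grep -o 'Server [^ ]* is DOWN' |
    sort | uniq -c |
    awk -v limit="$FLAP_LIMIT" '$1 > limit { print "disable server " $3 }'
}

# Example wiring (paths are assumptions; the stats socket must be
# configured at an admin level for "disable server" to be accepted):
#   flap_check < /var/log/haproxy.log |
#     while read -r cmd; do
#       echo "$cmd" | socat stdio /var/run/haproxy.stat
#     done
```

Run from cron over the last few minutes of log, this implements the "detect flapping, then disable via the socket" loop; the server then stays out of the farm until someone re-enables it by hand.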
Issue with client connections hanging to haproxy
Hello. I'm having an issue with connections to haproxy hanging at the connect stage. My setup is 2 nginx/php-fpm backends with haproxy load balancing them (3 servers total). When connecting directly to the app servers everything is great, but through haproxy my browser sometimes hangs at connecting for 1-30 seconds, or just never connects, requiring a hard refresh.

Sysctl and haproxy settings on the load balancer are:

net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 36
net.ipv4.tcp_fin_timeout = 20
net.ipv4.ip_local_port_range = 2000 64000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
kernel.shmmax = 4294967296
fs.file-max = 1048576
net.core.netdev_max_backlog = 10
net.core.somaxconn = 10
net.core.rmem_max = 8388608
net.ipv4.tcp_rmem = 4096 1048576 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_wmem = 4096 1048576 8388608
net.ipv4.tcp_mem = 8388608 8388608 8388608
net.core.optmem_max = 40960

global
log 127.0.0.1 local0
user haproxy
group haproxy
daemon
maxconn 10
#stats socket /opt/haproxy/etc/sock.haproxy uid 0 gid 0 mode 700 level admin

defaults
log global
option dontlognull
balance leastconn
retries 3
option redispatch
timeout connect 2ms
timeout server 3ms
timeout client 2ms

listen stats hidden:47880
mode http
stats enable
stats uri /stats
stats realm HAProxy\ Statistics
stats auth hidden

listen www A.B.C.D:80
mode http
option httpclose
option nolinger
timeout client 2ms
option httpchk HEAD / HTTP/1.0
option forwardfor
cookie SERVERID insert
balance leastconn
maxconn 5
server app1 10.240.0.2:80 cookie a1 maxconn 25000 check
server app2 10.240.0.3:80 cookie a2 maxconn 25000 check
#server app3 10.240.0.4:80 cookie a3 maxconn 15000 check backup

tcpdump output when this happens: http://pastebin.com/cXE8kWew (note the 8 second hang)

20:46:11.923871 IP MY-CLIENT.55494 > MY-SERVER.www: Flags [P.], seq 706215489:706216213, ack 3030317681, win 4280, length 724
20:46:12.118792 IP MY-CLIENT.55494 > MY-SERVER.www: Flags [.], ack 2635, win 4280, length 0
20:46:12.303468 IP MY-CLIENT.55494 > MY-SERVER.www: Flags [.], ack 5269, win 4280, length 0
20:46:12.303483 IP MY-CLIENT.55494 > MY-SERVER.www: Flags [.], ack 7659, win 4280, length 0
20:46:12.303492 IP MY-CLIENT.55494 > MY-SERVER.www: Flags [F.], seq 724, ack 7659, win 4280, length 0
20:46:20.111011 IP MY-CLIENT.55493 > MY-SERVER.www: Flags [P.], seq 3064322029:3064322755, ack 1531281668, win 4280, length 726
20:46:20.140771 IP MY-CLIENT.55495 > MY-SERVER.www: Flags [P.], seq 2914190645:2914191371, ack 2823463340, win 4280, length 726
20:46:20.305019 IP MY-CLIENT.55493 > MY-SERVER.www: Flags [.], ack 320, win 4200, length 0
20:46:20.305032 IP MY-CLIENT.55493 > MY-SERVER.www: Flags [F.], seq 726, ack 320, win 4200, length 0
20:46:20.324906 IP MY-CLIENT.55495 > MY-SERVER.www: Flags [.], ack 709, win 4103, length 0
20:46:20.334540 IP MY-CLIENT.55495 > MY-SERVER.www: Flags [F.], seq 726, ack 709, win 4103, length 0
20:46:22.087034 IP MY-CLIENT.55496 > MY-SERVER.www: Flags [F.], seq 2675337998, ack 678907057, win 4280, length 0
20:46:22.277682 IP MY-CLIENT.55496 > MY-SERVER.www: Flags [R.], seq 1, ack 188, win 0, length 0

Also of interest is the fact that there are about 3k SYN_RECV, 20k TIME_WAIT and 200 ESTABLISHED connections in netstat. Under the errors / resp column of the haproxy stats page for the backend, hovering over the numbers shows "connection resets during transfer: 300k+ client, 0 server".

Any help is greatly appreciated, I am stumped.
Ivan
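A quick way to watch the SYN_RECV / TIME_WAIT counts mentioned above is to bucket netstat output by connection state. This is a generic diagnostic sketch, not something from the thread:

```shell
# Count TCP sockets per state from "netstat -ant" output, so spikes in
# SYN_RECV or TIME_WAIT are easy to spot at a glance.
state_counts() {
  # netstat -ant data lines start with "tcp"; the state is field 6
  awk '/^tcp/ { print $6 }' | sort | uniq -c | sort -rn
}

# usage: netstat -ant | state_counts
# (watch it live with: watch -n 2 'netstat -ant | state_counts')
```

A steadily growing SYN_RECV bucket would point at the listen/SYN queue filling up rather than at the backend servers.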
Re: Issue with client connections hanging to haproxy
Hi,

You should tune net.ipv4.ip_local_port_range as well, to increase the number of source ports available for connections to the servers. Your maxconn values seem too high, but I doubt this is the source of your issue.

cheers

On Thu, Feb 2, 2012 at 10:16 PM, Ivan Ator ivanat...@gmail.com wrote: [...]
Re: Issue with client connections hanging to haproxy
As per my existing sysctl.conf:

net.ipv4.ip_local_port_range = 2000 64000

I don't think I can do much more :(

On 2/2/2012 1:32 PM, Baptiste wrote: [...]
RE: Understanding how to use rise and fall settings on a dead server
I tend to use a small fall and a really large rise, like fall 2 and rise 9 (or 99 or higher would be good if you want to ensure it stays down long enough to trigger an alert). That way servers go down quickly but stay dead for a while.

Anyway, so that it shows up in my monitoring system, I have this in my zabbix cfg on all my load balancers, and trigger an alert if it is ever > 0:

UserParameter=proxysrvrsdown,echo show stat | /usr/local/bin/socat /var/lib/haproxy-stat stdio | grep -c DOWN

So, if a frontend is flapping (and it could be the web server and not the NIC), I will get the flapping as alerts from my network monitoring.

Personally, if you think a backend should stay down once it goes down, I would recommend having the backend do its own self-checks and shoot itself in the head if it detects problems, so that it stays down. That said, if you have enough backends, a high rise could be a good idea. However, be warned that if one machine is really bad, or the problem is on the load balancer side, or there is a global network hiccup, all backends could incorrectly be marked as down. So you really don't want them to stay down for too long.

From: j...@dashtickets.com [mailto:j...@dashtickets.com] On Behalf Of John Clegg
Sent: Wednesday, February 01, 2012 10:29 PM
To: haproxy@formilux.org
Subject: Understanding how to use rise and fall settings on a dead server
[...]
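On an haproxy server line, those fall/rise numbers are expressed with the health-check parameters, roughly like this (the address and the inter value are made-up examples, not from the thread):

```
# fall 2: the server is marked DOWN after 2 consecutive failed checks
# rise 99: it needs 99 consecutive successful checks before going back up;
# with inter 2000 (ms between checks) that is over 3 minutes of
# continuous health before the server re-enters the farm
server app1 10.0.0.1:80 check inter 2000 fall 2 rise 99
```

A very large rise approximates "stays dead until an operator intervenes", with the caveat discussed above: after a transient network blip on the load-balancer side, every server would sit out the whole rise window.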