Re: Rate Limiting Blog Link
Excellent point, Jonathan. So, would having HAProxy support HTTPS natively be the only way to allow HTTPS rate limiting (in HTTPS-only and in mixed HTTP/HTTPS environments)?

As for my other point: have you looked at the sample configuration at http://blog.serverfault.com/post/1016491873/ ? It's a lot of configuration, and even that post describes part of the configuration as "more cryptic" (though not too complicated). I don't know many people who could configure their server to do rate limiting without that blog post, using just the documentation. Moreover, if you took over a project and saw this configuration, it'd take you a while to figure out what's going on. There are also statements in that post such as: "the expire argument is how long to keep an entry in the table (in this case it just needs to be twice the length of the longest rate argument for a smoothed average). The time arguments for connection rate and bytes out rate are how long to calculate the average over."

I just want a rate-limit reserved word that lets me control connection rate per second (and bytes out rate), where I can send traffic to some additional backend if the limit is violated.

On Mon, Apr 11, 2011 at 5:47 AM, Jonathan Matthews cont...@jpluscplusm.com wrote:
On 6 April 2011 16:42, bradford fingerm...@gmail.com wrote:
Also, in a previous email I mentioned something about X-Forwarded-For IP addresses being comma delimited. This table would have to take that into consideration, I guess.

No it shouldn't. If you rate-limit based on information that you find in the XFF header, you allow malicious users to a) bypass the rate limit by faking up different XFF headers each time, or b) DoS legitimate users by faking up the same, matching, XFF header each time and letting haproxy do the DoS for them.

Also, above and beyond the parts I haven't understood yet, the rest of your email was rather light on *detail*.
If other people are comprehending and happily using the functionality based on the existing config requirements and documentation, then perhaps the flaw doesn't lie with the config and/or documentation. My 2-pence, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
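For context, the kind of configuration that blog post walks through (and that a hypothetical "rate-limit" keyword might condense) can be sketched roughly like this. This is an illustrative fragment only: the frontend/backend names, table sizes and thresholds are made up, and the syntax is the 1.5-dev-era stick-table counter style.

```
# Sketch: track per-source connection and byte rates in a stick-table
# and divert sources over a threshold to a throttled backend.
frontend fe_web
    bind :80
    # expire is kept at roughly twice the longest rate window,
    # as the blog post recommends for a smoothed average
    stick-table type ip size 200k expire 2m store conn_rate(10s),bytes_out_rate(60s)
    tcp-request connection track-sc1 src
    # illustrative threshold: more than 50 connections per 10s window
    acl over_limit sc1_conn_rate gt 50
    use_backend bk_throttled if over_limit
    default_backend bk_normal
```

A built-in shorthand would essentially have to generate the table, the tracking rule and the ACL shown above.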
http-check send-state / no header
Hi, I set send-state, but the backends didn't receive the header. I can't find the error.

listen XX80 99.12.24.5:80
    mode http
    source 192.168.1.155:0
    balance roundrobin
    timeout server 4000
    timeout client 4000
    timeout connect 8000
    maxconn 3000
    cookie SERVERID insert indirect
    errorfile 400 /etc/haproxy/errorfiles.400
    errorfile 403 /etc/haproxy/errorfiles.403
    errorfile 408 /etc/haproxy/errorfiles.408
    errorfile 500 /etc/haproxy/errorfiles.500
    errorfile 502 /etc/haproxy/errorfiles.502
    errorfile 504 /etc/haproxy/errorfiles.504
    errorfile 503 /etc/haproxy/errorfiles.503
    server XX160XX01 192.168.1.103:80 check inter 1500 rise 1 fall 1 maxqueue 0 maxconn 1000 id 8201 cookie XX01
    server XX160XX02 192.168.1.104:80 check inter 1600 rise 1 fall 1 maxqueue 0 maxconn 1000 id 8202 cookie XX02
    server XX160XX03 192.168.1.105:80 check inter 1700 rise 1 fall 1 maxqueue 0 maxconn 1000 id 8203 cookie XX03
    retries 2
    option httpchk HEAD /haproxy/haproxy-xx.jsp
    http-check send-state
    option forwardfor
    option httpclose

Regards,
Bernhard
Re: using haproxy for https
On Tue, Apr 12, 2011 at 12:15 AM, Joseph Hardeman jwharde...@gmail.com wrote:
Hi, considering these are for a customer and they have already purchased their certs, I don't want to go through the hassle of converting them and causing them any issues.

I don't see how this would inconvenience anybody; it is a pretty straightforward operation. It is done server-side and won't impact the customer, the CA, etc. https://support.servertastic.com/entries/323869-moving-ssl-certificate-from-iis-to-apache You are simply exporting the cert/key from IIS, which will insist on encrypting them. Then you decrypt them using openssl into a PEM-format file so they can be used by software other than IIS.

Now we can stick with the examples on the haproxy site using mode tcp, but I was wondering: is there a way via ACLs or something to read the requested domain name and send that traffic to a specific server or set of servers?

Of course not. If you are doing TCP mode with SSL traffic, how are you going to inspect the traffic at the proxy? Remember, it is encrypted.
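For anyone worried about that conversion step, the export/decrypt dance is short. A sketch, with placeholder filenames and passphrase, and a throwaway self-signed pair standing in for the real key/cert that would normally come out of IIS:

```shell
# Stand-in for the customer's existing cert/key (normally exported from IIS).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=www.example.test" -keyout key.pem -out cert.pem

# What the IIS export produces: an encrypted PKCS#12 (.pfx) bundle.
openssl pkcs12 -export -inkey key.pem -in cert.pem \
    -passout pass:secret -out exported.pfx

# Decrypt the bundle into a plain PEM file usable by stunnel/Apache/etc.
openssl pkcs12 -in exported.pfx -nodes -passin pass:secret -out site.pem
```

After the last step, site.pem holds both the private key and the certificate in PEM form; the customer's original certificate is untouched.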
Tproxy with multiple interfaces
Hi,

I'm trying to set up an HAProxy instance to transparently load balance a group of web servers. The HAProxy server and web servers each have two interfaces: eth0 as the public interface and eth1 the private. I'm trying to configure the load balancer to accept requests on port 80 on eth0 and transparently proxy the connections to the web servers over the private interfaces on eth1. I've configured the load balancer in the normal way for tproxy, and have the web servers routing out through it, i.e.:

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 dev eth0 lookup 100
ip rule add fwmark 1 dev eth1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

root@haproxy:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

root@web:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
xxx.xxx.97.0    0.0.0.0         255.255.255.0   U     0      0    0   eth0
192.168.0.0     0.0.0.0         255.255.252.0   U     0      0    0   eth1
0.0.0.0         xxx.xxx.97.155  0.0.0.0         UG    0      0    0   eth0

...
server web1 192.168.3.65:80 source 192.168.3.64 usesrc clientip
...

When testing this setup, the connection is correctly proxied to the web server through eth1 and a response is sent back to the HAProxy server on eth0, but it's ignored and the connection hangs. Here's some netstat output during the connection:

HAProxy:
Proto Recv-Q Send-Q Local Address       Foreign Address     State       PID/Program name
tcp   0      1      xxx.xxx.48.1:42424  192.168.3.65:80     SYN_SENT    1386/haproxy
tcp   0      0      xxx.xxx.97.155:80   xxx.xxx.48.1:42424  ESTABLISHED 1386/haproxy

Web Server:
Proto Recv-Q Send-Q Local Address       Foreign Address     State       PID/Program name
tcp   0      0      192.168.3.65:80     xxx.xxx.48.1:42424  SYN_RECV    -

And the relevant entries from a tcpdump on each interface on each server:

HAProxy/eth0: IP xxx.xxx.48.1.42424 > xxx.xxx.97.155.80: Flags [S], seq 1021535895, win 5840, options [mss 1418,sackOK,TS val 2448718 ecr 0,nop,wscale 7], length 0
HAProxy/eth0: IP xxx.xxx.97.155.80 > xxx.xxx.48.1.42424: Flags [S.], seq 504489330, ack 1158274356, win 5792, options [mss 1460,sackOK,TS val 230477 ecr 1407043,nop,wscale 7], length 0
HAProxy/eth0: IP xxx.xxx.48.1.42424 > xxx.xxx.97.155.80: Flags [.], ack 1, win 46, options [nop,nop,TS val 1407043 ecr 230477], length 0
HAProxy/eth1: IP xxx.xxx.48.1.42424 > 192.168.3.65.80: Flags [S], seq 391399045, win 5840, options [mss 1460,sackOK,TS val 230550 ecr 0,nop,wscale 7], length 0
Web/eth1:     IP xxx.xxx.48.1.42424 > 192.168.3.65.80: Flags [S], seq 391399045, win 5840, options [mss 1460,sackOK,TS val 230550 ecr 0,nop,wscale 7], length 0
Web/eth0:     IP 192.168.3.65.80 > xxx.xxx.48.1.42424: Flags [S.], seq 4033028970, ack 391399046, win 5792, options [mss 1460,sackOK,TS val 6751967 ecr 230550,nop,wscale 7], length 0
HAProxy/eth0: IP 192.168.3.65.80 > xxx.xxx.48.1.42424: Flags [S.], seq 4033028970, ack 391399046, win 5792, options [mss 1460,sackOK,TS val 6751967 ecr 230550,nop,wscale 7], length 0

NB: in this example I had set the usesrc setting to client, so the client's port was used for readability, but the same occurs with clientip.

I'm sure this is occurring because the response connection is arriving on a different interface to the one HAProxy originated the connection to the web server on. Does anyone know of a way around this? Is there an iptables or ip rule that can be set to switch the return traffic from eth0 to eth1?

I have tried testing the setup by proxying the connections to the web server's public interface on eth0 instead:

...
server web1 xxx.xxx.97.156:80 source xxx.xxx.97.155 usesrc clientip
...

And the transparent proxying works perfectly. Any thoughts or suggestions appreciated.

Many thanks,
REW
HAproxy 1.5-dev SSL-ID troubles
Hello!

I have configured Cisco CSS devices and have some experience with them. I thought I would try the HAProxy development version that supports sticky SSL, so I installed Debian 6.0.1 x86_64 in a VMware ESXi virtual machine and installed HAProxy 1.5-dev6. I then tried to create an HAProxy configuration that uses sticky SSL sessions, but when I try to start haproxy I get the following error message:

root@haproxy:# /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.conf
[ALERT] 101/163223 (1993) : Proxy 'https': type of pattern not usable with type of stick-table 'https'.
[ALERT] 101/163223 (1993) : Proxy 'https': type of pattern not usable with type of stick-table 'https'.
[ALERT] 101/163223 (1993) : Fatal errors found in configuration.

and the haproxy daemon does not start. If I understand correctly, I do not need to use a tunnel to use a sticky SSL session configuration. My sticky SSL session configuration comes from the example in the HAProxy 1.5-dev documentation folder, file configuration.txt ("Learn SSL session ID from both request and response and create affinity"). I would be pleased if anyone could explain whether this is a bug in the HAProxy development version or a problem with my configuration.

Lauri Adamson
AS Andmevara

My haproxy.conf content is the following:

global
    user haproxy
    group haproxy
    stats socket /tmp/haproxy
    daemon

defaults
    contimeout 500
    clitimeout 500
    srvtimeout 500

listen stats :1936
    mode http
    stats enable
    stats hide-version
    stats scope .
    stats realm Haproxy\ Statistics
    stats uri /stats
    stats auth Username:Password

listen http 10.1.0.44:80
    mode tcp
    balance leastconn
    maxconn 1
    server web1 10.244.129.1:80 check
    server web2 10.244.129.2:80 check

listen https 10.1.0.44:443
    mode tcp
    balance leastconn
    maxconn 1
    # maximum SSL session ID length is 32 bytes.
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    # use tcp content accepts to detect ssl client and server hellos.
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    # no timeout on response inspect delay by default.
    tcp-response content accept if serverhello
    # SSL session ID (SSLID) may be present on a client or server hello.
    # Its length is coded on 1 byte at offset 43 and its value starts
    # at offset 44.
    # Match and learn on request if client hello.
    stick on payload_lv(43,1) if clienthello
    # Learn on response if server hello.
    stick store-response payload_lv(43,1) if serverhello
    server web1 10.244.129.1:443 check
    server web2 10.244.129.2:443 check
Subscribe
RE: Tproxy with multiple interfaces
Randy,

The problem is that the gateway on the backend webservers needs to be set to a VIP (or the eth1 interface) on the HAProxy servers' private interface (assuming you have two HAProxy servers and are using heartbeat for failover). It looks from your routing table like eth0 on the webservers is pointed at the eth0 interface on haproxy; this is why it works perfectly when you configure haproxy to use the public IPs of the webservers. Once you change the default gateway on the backend webservers to use eth1 on the haproxy server (or a VIP which lives on eth1, using heartbeat for failover between two haproxy servers), it will work.

Brian Carpio
Senior Systems Engineer
Office: +1.303.962.7242
Mobile: +1.720.319.8617
Email: bcar...@broadhop.com

From: Randy Wilson [mailto:randyedwil...@gmail.com]
Sent: Tuesday, April 12, 2011 8:29 AM
To: haproxy@formilux.org
Subject: Tproxy with multiple interfaces
[...]
Re: Tproxy with multiple interfaces
Hi Brian,

Thanks for the response. I had previously tried this, but setting the default gateway on the web servers to point to the HAProxy server's eth1 results in the web servers losing all external connectivity, as the source address is always a private address.

root@web:~# ping -c 5 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4008ms

root@haproxy:~# tcpdump -n -i eth1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
16:33:57.024999 IP 192.168.3.65 > 8.8.8.8: ICMP echo request, id 6180, seq 1, length 64
16:33:58.034080 IP 192.168.3.65 > 8.8.8.8: ICMP echo request, id 6180, seq 2, length 64
16:33:59.034036 IP 192.168.3.65 > 8.8.8.8: ICMP echo request, id 6180, seq 3, length 64
16:34:00.034104 IP 192.168.3.65 > 8.8.8.8: ICMP echo request, id 6180, seq 4, length 64
16:34:01.034266 IP 192.168.3.65 > 8.8.8.8: ICMP echo request, id 6180, seq 5, length 64

Any other ideas? I'm currently running a similar setup to load balance a mail cluster that's been in place for almost 4 years. The HAProxy servers use an older kernel with the cttproxy patch. The mail servers all receive connections from the HAProxy boxes on their eth1 interfaces and route back out to them on their eth0s, without any iptables rules or routing tables.

Thanks,
REW

On Tue, Apr 12, 2011 at 4:24 PM, Brian Carpio bcar...@broadhop.com wrote:
[...]
RE: Tproxy with multiple interfaces
Randy,

I can't speak to how your other environment works; it seems suspicious that it works the way you describe in fully transparent mode, but I also can't speak to the cttproxy patch, as I've never used it.

When you set the default gateway on the webservers to the haproxy eth1 interface, you would then need to set up IP masquerading in iptables to make the haproxy server properly route the packets out to the internet and masquerade the source IP as the haproxy eth0 IP.

If the other environment is truly working, then possibly you need to check two other settings on the non-working environment. On the haproxy environment, make sure the below are set to 1. Possibly this will resolve your problems; if it does, can you let me know, because I can't seem to wrap my head around the fact that it would work.

cat /proc/sys/net/ipv4/conf/all/send_redirects
cat /proc/sys/net/ipv4/conf/eth0/send_redirects
cat /proc/sys/net/ipv4/conf/eth1/send_redirects

Brian Carpio
Senior Systems Engineer
Office: +1.303.962.7242
Mobile: +1.720.319.8617
Email: bcar...@broadhop.com

From: Randy Wilson [mailto:randyedwil...@gmail.com]
Sent: Tuesday, April 12, 2011 9:40 AM
To: haproxy@formilux.org
Subject: Re: Tproxy with multiple interfaces
[...]
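The masquerading Brian describes would look something like the fragment below on the haproxy box. This is a sketch only, not a tested setup: the private subnet is taken from the routing table earlier in the thread, and the interface names are the ones used in this discussion.

```shell
# Allow the haproxy box to forward packets at all.
sysctl -w net.ipv4.ip_forward=1

# Rewrite the web servers' own outbound traffic (their default gateway
# is now haproxy's eth1) so replies from the internet come back to the
# haproxy box. The tproxy'd client traffic is unaffected, because its
# source addresses are not in the private 192.168.0.0/22 range.
iptables -t nat -A POSTROUTING -s 192.168.0.0/22 -o eth0 -j MASQUERADE
```

These are host network-configuration commands and need root, so treat them as a config fragment to adapt rather than something to paste verbatim.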
x-forwarded-for and server side keep alive
Hi there, I browsed the list to look for an answer to this question, without success, so I hope you can help me on this. I want to use HAProxy in front of Tomcat. I need to get the client's IP, so I logically activated 'option forwardfor', which works fine. I also want server-side keep-alive. And this is when I discovered that HAProxy sends the X-Forwarded-For header with the first request of the kept-alive connection only. It seems that Tomcat 6.0.32 (which we use) cannot remember the X-Forwarded-For value across multiple requests, so we would need to send the header with every request. My first question is: does anybody see anything wrong with those assumptions? Then: is there a way to have X-Forwarded-For added to each request without giving up on server-side keep-alive? Thanks, Julien
RE: x-forwarded-for and server side keep alive
From the documentation:

It is important to note that as long as HAProxy does not support keep-alive connections, only the first request of a connection will receive the header. For this reason, it is important to ensure that "option httpclose" is set when using this option.

Examples:
    # Public HTTP address also used by stunnel on the same machine
    frontend www
        mode http
        option forwardfor except 127.0.0.1  # stunnel already adds the header

    # Those servers want the IP Address in X-Client
    backend www
        mode http
        option forwardfor header X-Client

See also: option httpclose

Brian Carpio
Senior Systems Engineer
Office: +1.303.962.7242
Mobile: +1.720.319.8617
Email: bcar...@broadhop.com

-----Original Message-----
From: Julien Vehent [mailto:jul...@linuxwall.info]
Sent: Tuesday, April 12, 2011 1:55 PM
To: Haproxy
Subject: x-forwarded-for and server side keep alive
[...]
Re: RE: x-forwarded-for and server side keep alive
option http-server-close is sufficient and allows client-side keep-alive. Moreover, to achieve good load balancing, server-side keep-alive NEEDS to be disabled (with the http-server-close option), since multiple requests inside one keep-alive connection are not balanced. Client-side keep-alive does not matter here.

On Tuesday 12 April 2011 13:53:49 Brian Carpio wrote:
[...]

--
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org
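Guillaume's suggestion, put into config form, would look roughly like this. A minimal sketch: the backend name and server address are illustrative, not from the thread.

```
defaults
    mode http
    # Close the server-side connection after each request, so every
    # request gets its own X-Forwarded-For header. The client-side
    # connection can still be kept alive.
    option http-server-close
    option forwardfor

backend bk_tomcat
    server tomcat1 192.168.1.10:8080 check
```

With http-server-close instead of httpclose, clients keep their keep-alive benefit while each request to Tomcat rides its own server-side connection and carries the header.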
[GIT PULL horms-rebased7] pidfile fixes
Hi Willy,

While looking over the horms-rebased7 branch of http://git.1wt.eu/git/haproxy.git/ I noticed a few problems in the pidfile handling. I think that the first patch may well fix a bug that I introduced, while the second two seem to be artifacts of your subsequent refactoring to remove my hold-the-pidfile-open-forever-in-the-master logic. I'm happy for you to squash these changes into your horms-rebased7 branch as you see fit. I do not have any other outstanding problems with your horms-rebased7 branch.

The following changes since commit 12ed6f925aced0c1a07cd0937ebee004a092d83c:

  Teach socket_cache about wildcard addresses (2011-04-05 23:01:51 +0200)

are available in the git repository at:

  git://github.com/horms/haproxy.git horms-rebased7

Simon Horman (3):
  Only write pid file once on startup
  Close pidfile when it is no longer needed.
  Always use the pidfile returned by prepare()

 src/haproxy.c |   30 +-
 1 files changed, 17 insertions(+), 13 deletions(-)
[PATCH 2/2] Always use the pidfile returned by prepare()
prepare() will open and truncate pidfile if a pid file is to be used. This is a bug which is a hangover from factoring out changes to keep the pidfile open in master processes.

Signed-off-by: Simon Horman ho...@verge.net.au
---
 src/haproxy.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 46e4ae9..9c6ce46 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1509,9 +1509,9 @@ int main(int argc, char **argv)
 	while (1) {
 		if (!replacing_workers) {
-			FILE *newpidfile = prepare(argc, argv);
-			if (!is_master)
-				pidfile = newpidfile;
+			if (pidfile)
+				fclose(pidfile);
+			pidfile = prepare(argc, argv);
 			mode = global.mode & (MODE_QUIET|MODE_VERBOSE);
 		} else
 			/* Restore value of mode before it was
--
1.7.4.1