Re: HAP 1.5.11 and SSL
Add 'ssl' to your server line so it uses SSL for the backend connection.

Sent from my iPhone

On Apr 16, 2015, at 12:12 PM, Phil Daws ux...@splatnix.net wrote:

Hello all! Long time no post, but I have lost some of my old notes and am hitting an issue with SSL. In my haproxy.conf I have:

    frontend frontend-zimbra-zwc-http
        mode http
        bind 10.1.8.73:80
        redirect scheme https if !{ ssl_fc }

    frontend frontend-zimbra-zwc-https
        bind 10.1.8.73:443 ssl crt /etc/haproxy/certs/mydomain.pem ciphers RC4:HIGH:!aNULL:!MD5
        option tcplog
        reqadd X-Forwarded-Proto:\ https
        default_backend backend-zimbra-zwc

    backend backend-zimbra-zwc
        mode http
        server zwc01 10.1.8.40:443 maxconn 1000 check-ssl verify none
        server zwc02 10.1.8.41:443 maxconn 1000 check-ssl verify none backup

The HTTP connections are being redirected to HTTPS as desired, but when it hits the backend I see:

    (NGINX) The plain HTTP request was sent to HTTPS port

If I have redirected at the frontend, then why is plain HTTP being sent to the backend?

Thanks, Phil
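The suggested fix is to add the `ssl` keyword to each `server` line so HAProxy encrypts the proxied traffic itself; `check-ssl` only affects health checks. A minimal sketch, keeping Phil's addresses:

```haproxy
backend backend-zimbra-zwc
    mode http
    # 'ssl' makes the proxied traffic use TLS to the backend server;
    # 'verify none' skips certificate validation (fine for a lab,
    # not recommended in production)
    server zwc01 10.1.8.40:443 ssl verify none maxconn 1000 check
    server zwc02 10.1.8.41:443 ssl verify none maxconn 1000 check backup
```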
Re: haproxy / mysql can't bind to socket
You want to reconfigure your MySQL server to bind only to the IP address you want it on, rather than to *:3306, so your haproxy instance can bind to 3306 on the VIP.

On 4/16/15 4:19 PM, Tim Dunphy wrote:

Hello, I'm trying to get haproxy to work with two database servers, but I'm getting stuck on an error when trying to start haproxy, saying that it can't bind to the socket:

    [root@aoadbld00036la haproxy]# service haproxy start
    Starting haproxy: [ALERT] 105/160506 (29040) : Starting proxy mysql-cluster: cannot bind socket [FAILED]

MySQL is running and listening on port 3306 on all interfaces:

    [root@aoadbld00036la haproxy]# lsof -i :3306
    COMMAND   PID  USER  FD   TYPE DEVICE  SIZE/OFF NODE NAME
    mysqld  28711 mysql  22u  IPv4 6614552 0t0      TCP  *:mysql (LISTEN)
    mysqld  28711 mysql  44u  IPv4 6614952 0t0      TCP  aoadbld00036la:mysql-aoadbld00036lb.stg-tfayd.com:56669 (ESTABLISHED)

I have a virtual IP being provided by keepalived, and a MySQL database listening on this IP. (I'm not using the real IP for this post.) I can log into the database using this virtual IP. I have non-local binds set up in sysctl.conf:

    [root@aoadbld00036la ~]# grep ipv4 /etc/sysctl.conf | grep bind
    net.ipv4.ip_nonlocal_bind = 1

But for some reason this configuration still isn't giving me any luck!

    global
        log 127.0.0.1 local0 notice
        user haproxy
        group haproxy

    defaults
        log global
        retries 2
        timeout connect 3000
        timeout server 5000
        timeout client 5000

    listen mysql-cluster
        bind 10.10.10.163:3306
        mode tcp
        option mysql-check user haproxy_check
        balance roundrobin
        server mysql-1 10.10.10.248:3306 check
        server mysql-2 10.10.10.249:3306 check

    listen stats *:80
        mode http
        stats enable
        stats uri /
        stats realm Strictly\ Private
        stats auth admin:secret

Can someone please help me out with the solution here? I think the answer may be to get MySQL to listen on a different port locally, and have the service provided on the load-balanced VIP address. Please correct me if I'm wrong there.

Thanks, Tim

-- GPG me!! gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
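The suggested fix, binding mysqld to the machine's real address so haproxy is free to take port 3306 on the VIP, is a one-line my.cnf change. A sketch, assuming 10.10.10.248 is the real (non-VIP) address of this server:

```ini
# /etc/my.cnf -- hypothetical address; use this server's real, non-VIP IP
[mysqld]
bind-address = 10.10.10.248
```

After restarting mysqld, haproxy can bind 10.10.10.163:3306 without the address conflict.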
Re: HProxy - HTTPS for Stats
What platform are you running, and what version of haproxy are you using? There are probably precompiled binaries for 1.5, which is needed for SSL.

Sent from my iPad

On Dec 29, 2014, at 11:01 AM, Yosef Amir amir.yo...@comverse.com wrote:

I would like HAProxy to use the OpenSSL already installed on my Linux; I don't want to bring the SSL libs with HAProxy. Assuming I compiled HAProxy using USE_OPENSSL=1: Does that mean HAProxy will link to the local OpenSSL on my Linux? Will the stats configuration with SSL (as you sent in the previous mail) work? (listen stats / bind :8050 ssl crt /path/to/crt)

-----Original Message-----
From: Baptiste [mailto:bed...@gmail.com]
Sent: Monday, December 29, 2014 4:02 PM
To: Yosef Amir; HAProxy
Subject: Re: HProxy - HTTPS for Stats

Hi Yosef, Please keep the ML in Cc. You first need to compile HAProxy to support SSL. Use the USE_OPENSSL compilation directive. Baptiste

On Mon, Dec 29, 2014 at 2:25 PM, Yosef Amir amir.yo...@comverse.com wrote:

Hi, I get the following error:

    # haproxy -f /etc/haproxy/haproxy.cfg
    [ALERT] 362/160119 (16836) : parsing [/etc/haproxy/haproxy.cfg:49] : 'bind :8050' unknown keyword 'ssl'. Registered keywords :
        [ TCP] defer-accept
        [ TCP] interface arg
        [ TCP] mss arg
        [ TCP] v4v6
        [ TCP] v6only
        [ TCP] transparent (not supported)
        [STAT] level arg
        [UNIX] gid arg
        [UNIX] group arg
        [UNIX] mode arg
        [UNIX] uid arg
        [UNIX] user arg
        [ ALL] accept-proxy
        [ ALL] backlog arg
        [ ALL] id arg
        [ ALL] maxconn arg
        [ ALL] name arg
        [ ALL] nice arg
        [ ALL] process arg
    [ALERT] 362/160119 (16836) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
    [ALERT] 362/160119 (16836) : Fatal errors found in configuration.
Thanks, Amir Yosef

-----Original Message-----
From: Baptiste [mailto:bed...@gmail.com]
Sent: Monday, December 29, 2014 12:59 PM
To: Yosef Amir
Cc: haproxy@formilux.org; Cohen Galit
Subject: Re: HProxy - HTTPS for Stats

On Mon, Dec 29, 2014 at 11:00 AM, Yosef Amir amir.yo...@comverse.com wrote:

Hi, I would like to configure stats in the haproxy.cfg file. For HTTP it is working great. How can I configure the HAProxy stats to use HTTPS? Is it supported? My current lab configuration for stats is:

    listen stats :8050
        mode http
        stats admin if TRUE # LOCALHOST
        stats show-legends
        stats uri /admin?stats # default is /haproxy?stats
        stats refresh 5s
        stats realm HAProxy\ Statistics # the \ sign stands for space

    userlist stats-auth
        group readonly users haproxy
        user haproxy insecure-password haproxy

Thanks, Amir Yosef

This e-mail message may contain confidential, commercial or privileged information that constitutes proprietary information of Comverse Inc. or its subsidiaries. If you are not the intended recipient of this message, you are hereby notified that any review, use or distribution of this information is absolutely prohibited and we request that you delete all copies and contact us by e-mailing to: secur...@comverse.com. Thank You.

Hi Yosef, You can simply bind the port using SSL and point to your certificate:

    listen stats
        bind :8050 ssl crt /path/to/crt
        [...]

Baptiste
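Combining the pieces from this thread, a complete stats-over-HTTPS section might look like the following sketch (certificate path and credentials are hypothetical):

```haproxy
listen stats
    # Serve the stats page over TLS on port 8050
    bind :8050 ssl crt /etc/haproxy/certs/stats.pem
    mode http
    stats enable
    stats uri /admin?stats
    stats realm HAProxy\ Statistics
    stats auth admin:secret
```

This requires a build compiled with USE_OPENSSL=1, as Baptiste notes above.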
Re: Just had a thought about the poodle issue....
You mean like this? http://blog.haproxy.com/2014/10/15/haproxy-and-sslv3-poodle-vulnerability/

On 10/18/14, 10:34 AM, Malcolm Turnbull wrote:

I was thinking HAProxy could be used to block any non-TLS connection, like you can with iptables: https://blog.g3rt.nl/take-down-sslv3-using-iptables.html However, it would be nice, for users trying to connect via IE6/7 etc. on XP, to display a friendly message like "please upgrade to a secure browser such as Chrome or Firefox". Is that easy to do?
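One possible approach to Malcolm's "nice message" idea, not from the thread itself and offered only as a hedged sketch: temporarily keep SSLv3 negotiable on the bind line, detect it with the `ssl_fc_protocol` fetch, and redirect those clients to an upgrade page (the ACL name, cert path, and page URL are all hypothetical):

```haproxy
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    # Clients that negotiated SSLv3 get a friendly upgrade page
    # instead of a hard connection failure
    acl legacy_ssl ssl_fc_protocol SSLv3
    redirect location /please-upgrade.html if legacy_ssl
    default_backend web
```

Note the trade-off: this still completes an SSLv3 handshake, so hard blocking (no-sslv3) remains the safer default.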
Re: CDN IP Address capturing
My environment uses Akamai for CDN and I've never heard of this requirement. We get an X-Forwarded-For header, along with some other Akamai-specific stuff, and I've never had issues with our Report on Compliance running it this way. I would push back on your provider. Is using TCP option header 22 even a standard thing? I really don't see how it helps with compliance status at all.

Sent from my iPad

On Jul 25, 2014, at 7:38 AM, Kobus Bensch kobus.ben...@trustpayglobal.com wrote:

Hi, We use HAProxy extensively and, until a few days ago, had no problem capturing the IP address of clients in the X-Forwarded-For portion of the HAProxy config. We have now, due to requirements in other countries, taken a service with a CDN provider. As the specific service is required to be PCI compliant, the only way they can provide us with the client IP address is to put it in TCP option header 22. The last 32 bits of this header will contain the client IP address in hex format. How, if at all possible, can this be transferred from that header into the X-Forwarded-For header in HAProxy, so we can capture it in our application for further analysis in our back-end systems? We use HAProxy 1.5.1, soon to be 1.5.2, on CentOS 6.5. Our HAProxy sits in front of Apache HTTPD.

Thanks in advance, Kobus

Trustpay Global Limited is an authorised Electronic Money Institution regulated by the Financial Conduct Authority, registration number 900043. Company No 07427913, registered in England and Wales with registered address 130 Wood Street, London, EC2V 6DL, United Kingdom.
Re: Loadbalancing with ssl on www only
A wildcard cert is helpful for some things, but domain.com will not validate against a cert issued for *.domain.com.

On 10/29/13, 10:52 AM, Bhaskar Maddala wrote:

If it is any help, you can get a certificate for *.domain.com

On Oct 28, 2013 9:37 PM, Felix fe...@ferchland.org wrote:

Hello, I am using haproxy to load-balance my web application, but I have run into a problem with our SSL certificate. haproxy is also serving the SSL certificate to the clients, and this works quite well. We only have a certificate for www as the subdomain, so all traffic hitting haproxy should be redirected to https://www. If the visitor comes from non-SSL, the domain can be rewritten without a problem, but if the visitor types the domain with SSL and without the subdomain, the URL can't be rewritten before the (in this case invalid) SSL certificate is served by haproxy. Is there a way to redirect an SSL request before serving the certificate?

    global
        maxconn 4096
        daemon
        log 128.0.0.1 local0

    defaults
        log global
        mode http
        contimeout 5000
        clitimeout 5
        srvtimeout 5
        option forwardfor
        retries 3
        option redispatch
        option http-server-close

    frontend http *:80
        mode http
        redirect location https://www.url.com if !{ ssl_fc }

    frontend https
        # reqadd X-Forwarded-Proto:\ https
        # www redirect
        mode http
        acl non-www hdr(host) url.com
        redirect prefix https://www.url.com if non-www
        bind *:443 ssl crt /crt/ssl.pem no-sslv3
        default_backend web
        option forwardfor
Re: AW: Loadbalancing with ssl on www only
No way it worked with Apache. SSL verification happens before HTTP can do anything.

Sent from my iPad

On Oct 29, 2013, at 12:39 PM, Felix Ferchland fe...@ferchland.org wrote:

So it's simply impossible to redirect the request? I was using nginx as a reverse proxy before, and even Apache can do that with a redirection. I'm a little surprised that this is simply impossible and I need another SSL certificate. But thanks for the quick answers!

From: Bhaskar Maddala [mailto:madda...@gmail.com]
Sent: Dienstag, 29. Oktober 2013 16:07
To: David Coulson
Cc: Felix; haproxy@formilux.org
Subject: Re: Loadbalancing with ssl on www only

Ahh, thank you -Bhaskar

On Tue, Oct 29, 2013 at 10:56 AM, David Coulson da...@davidcoulson.net wrote:

A wildcard cert is helpful for some things, but domain.com will not validate against a cert issued for *.domain.com

[earlier quoted messages and configuration trimmed]
Re: AW: AW: Loadbalancing with ssl on www only
Please post your Apache configuration. There is seriously no way it worked. Redirection is redirection, and assuming it's all using SSL, the certificate will impact the redirection.

Sent from my iPad

On Oct 29, 2013, at 1:11 PM, Felix Ferchland fe...@ferchland.org wrote:

I can tell you, it worked. I think the difference is the kind of redirect (URL vs. header redirect). But I'm not an expert in proxy URL rewriting, so I simply have to deal with that. I can't order a new certificate for the domain because it's an EV cert and these are quite expensive.

From: David Coulson [mailto:da...@davidcoulson.net]
Sent: Dienstag, 29. Oktober 2013 17:58
To: Felix Ferchland
Cc: Bhaskar Maddala; haproxy@formilux.org
Subject: Re: AW: Loadbalancing with ssl on www only

No way it worked with Apache. SSL verification happens before HTTP can do anything.

[earlier quoted messages and configuration trimmed]
Re: Loadbalancing with ssl on www only
No. You need to get a cert with both www.domain.com and domain.com in it, so both are valid in a browser.

Sent from my iPad

On Oct 28, 2013, at 9:33 PM, Felix fe...@ferchland.org wrote:

Hello, I am using haproxy to load-balance my web application, but I have run into a problem with our SSL certificate. haproxy is also serving the SSL certificate to the clients, and this works quite well. We only have a certificate for www as the subdomain, so all traffic hitting haproxy should be redirected to https://www. If the visitor comes from non-SSL, the domain can be rewritten without a problem, but if the visitor types the domain with SSL and without the subdomain, the URL can't be rewritten before the (in this case invalid) SSL certificate is served by haproxy. Is there a way to redirect an SSL request before serving the certificate?

[configuration trimmed]
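For reference, once a certificate covering the bare domain is in place, the www redirect itself is straightforward at the HTTPS frontend. A sketch using the hostnames from Felix's configuration:

```haproxy
frontend https
    bind *:443 ssl crt /crt/ssl.pem no-sslv3
    mode http
    # Redirect bare-domain requests to www; the cert must already be
    # valid for url.com, or the browser warns before this rule runs
    acl non-www hdr(host) -i url.com
    redirect prefix https://www.url.com code 301 if non-www
    default_backend web
```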
Re: Haproxy SSL certificat exception with root
You can't just add mydomain.com to the *.mydomain.com certificate? Not much you can do with HAProxy here. Since the cert is invalid for https://mydomain.com/, users are going to get an SSL error when they connect.

On 10/1/13 6:51 AM, Matthieu Boret wrote:

Hi, I've set up Haproxy 1.5-dev19 to handle my HTTP and HTTPS traffic. All works fine except when I request the root URL in HTTPS: https://mydomain.com. My certificate is a wildcard, *.mydomain.com. What is the solution to remove this error? A URL rewrite to add www? My Haproxy configuration:

    frontend https-requests
        mode http
        bind :80
        bind :443 ssl crt ./mydomain.pem force-sslv3
        acl is_webfront path_reg ^www||^/$(.*)
        acl is_api hdr(host) -i api.mydomain.com
        use_backend bk_webfront if is_webfront
        use_backend bk_api if is_api
        default_backend bk_webfront

Thanks, Matthieu
Re: Can HAProxy Reverse Proxy SSL to Backend?
On 7/1/13 7:10 PM, Qingshan Xie wrote:

Willy, to explain my last question 3 ("Can HAProxy set a default frontend service?"), I list a possible configuration below:

    frontend PUBLIC
        bind :80
        acl rec_w7 path_beg /A
        acl rec_w7 path_beg /B
        acl rec_w7 path_beg /C
        ..
        use_backend W7-Backend if rec_w7
        # Default
        # acl rec_w6 path_beg /*
        use_backend W6-Backend if rec_w6

What I want HAProxy to do is: if the request does not match any of the patterns /A, /B, /C, ..., can the traffic be sent to the default, W6-Backend? Is that doable?

https://code.google.com/p/haproxy-docs/wiki/default_backend
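The answer behind the linked page is the `default_backend` directive: no catch-all ACL is needed. A sketch applied to the configuration above:

```haproxy
frontend PUBLIC
    bind :80
    acl rec_w7 path_beg /A /B /C
    use_backend W7-Backend if rec_w7
    # Any request not matched by a use_backend rule falls through here
    default_backend W6-Backend
```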
Re: HAProxy latest on SSL
On 6/10/13 11:55 AM, Lukas Tribus wrote: Frontend SSL and backend SSL traffic have nothing to do with each other, if that's what you mean. So both backends would be used, independently of whether the frontend connection is SSL or not.

Maybe that should be made clear in the example. Since you have a frontend accepting both SSL and clear traffic, and a backend with SSL and clear servers, it is confusing to someone who doesn't know how it works.
Re: HAProxy latest on SSL
On 6/10/13 7:18 PM, Lukas Tribus wrote: Do you have a concrete suggestion how to make this clearer?

I think just make it clear that if you want SSL frontend traffic to go to an SSL backend, you need this:

    use-server backend:80 if !{ ssl_fc }
    use-server backend:443 if { ssl_fc }

IMHO, it's confusing having clear and SSL backends defined while they are equivalent from a traffic-routing perspective. I realize that's one of the great things HAProxy does, but it might not be clear to a person new to the app.

David
Re: Haproxy issues with rspirep
What version? I had a similar issue with dev17.

Sent from my iPad

On May 29, 2013, at 3:12 PM, s...@siezeconsulting.com wrote:

Hello,

    rspirep ^Location:\ http://(.*):80(.*) Location:\ https://\1:443\2 if { ssl_fc }

The above works, but the following doesn't (the Location URL is unchanged). Why?

    rspirep ^Location:\ http://(.*):80(.*) Location:\ http://172.17.25.100:8080\2 if { ssl_fc }

Reference: http://blog.exceliance.fr/2013/02/26/ssl-offloading-impact-on-web-applications/

Regards, Syed
Re: Haproxy issues with rspirep
Does rspirep work with TCP? Doesn't it need to be using HTTP mode?

David

On May 29, 2013, at 4:28 PM, s...@siezeconsulting.com wrote:

Hi Cyril, sorry for the brevity.

    Haproxy IP = 172.17.25.100 (fictional IP for clarity)
    Application server hostname = openamHost
    Application server IP = 172.17.25.101
    URL for SSL-offload access: https://192.168.0.1/sso/Login

I configured haproxy to SSL-offload a Tomcat-based application running on port 8080 (OpenAM, specifically). The SSL offload happens and traffic is sent to port 8080, but the application sends a redirect URL in return like the following:

    Problematic URL: http://172.17.25.99:80/sso/Login

I used the following directive in the frontend of the haproxy configuration:

    rspirep ^Location:\ http://(.*):80(.*) Location:\ http://172.17.25.100:8080\2 if { ssl_fc }

Generic problem: I assumed haproxy would capture the problematic URL and replace it with whatever happens to be my custom URL. Specific requirement: the application is wrongly sending the redirect URL; I would ideally want to capture any HTTP URL and convert it into HTTPS, so that haproxy can re-route it to port 8080 after decryption each time. Finally, my simple requirement is to be able to control rewriting URLs at haproxy.

haproxy.cfg:

    frontend secured *:443
        mode tcp
        # SSL CERT BLAH BLAH
        rspirep ^Location:\ http://(.*):80(.*) Location:\ http://172.17.25.100:8080\2 if { ssl_fc }
        default_backend app

    #---------------------------------------------------------
    # round robin balancing between the various backends
    #---------------------------------------------------------
    backend app
        mode tcp
        balance roundrobin
        server app1 172.17.25.101:8080 check

Hope I haven't complicated the problem this time :-)

Regards, Syed

From: Cyril Bonté cyril.bo...@free.fr
Sent: Thu, 30 May 2013 01:15:45
To: s...@siezeconsulting.com
Cc: haproxy@formilux.org
Subject: Re: Haproxy issues with rspirep

Hi Syed,

Le 29/05/2013 21:12, s...@siezeconsulting.com a écrit :

[rspirep examples trimmed]

There's a lack of detail: one configuration line is not enough to understand what you want to achieve, and it will be hard to help you. Can you explain your needs and provide your whole configuration (please remove any sensitive data, such as passwords, IPs, ...)? Are you sure you really want the ssl_fc condition here?

-- Cyril Bonté
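As David's question implies, rspirep only works in HTTP mode, and the configuration above uses mode tcp, so the Location header is never parsed. A hedged sketch of how the config might look instead (certificate path is hypothetical):

```haproxy
frontend secured
    bind *:443 ssl crt /etc/haproxy/certs/site.pem   # hypothetical path
    mode http
    # Rewrite the app's http:// redirects back to https so clients
    # stay on the SSL-offloaded frontend
    rspirep ^Location:\ http://(.*):80(.*) Location:\ https://\1\2
    default_backend app

backend app
    mode http
    balance roundrobin
    server app1 172.17.25.101:8080 check
```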
Re: SSL offloading configuration
HAProxy 1.5-dev can do this already.

Sent from my iPhone

On Apr 30, 2013, at 8:47 AM, Chris Sarginson ch...@sargy.co.uk wrote:

Hi, Are there any plans to allow HAProxy to take the traffic that it can now SSL-offload, perform header analysis on it, and then use an SSL-encrypted connection to the backend server? I have a situation where I need to be able to use ACLs against SSL-encrypted traffic, but then continue passing the traffic to the backend over an encrypted connection. This is specifically a security concern, rather than an issue with poor code.

Cheers, Chris
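A sketch of what 1.5-dev supports here: terminate SSL at the frontend, inspect the decrypted request with ACLs, then re-encrypt toward the backend. Addresses, cert path, and backend names are assumptions:

```haproxy
frontend offload
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    # ACLs can inspect the decrypted request here
    acl is_admin path_beg /admin
    use_backend admin-pool if is_admin
    default_backend app-pool

backend app-pool
    mode http
    # 'ssl' re-encrypts the traffic toward the server
    server app1 10.0.0.10:443 ssl verify none check
```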
Re: Keeping LB pools status in sync
On 4/26/13 8:09 PM, Ahmed Osman wrote:

Hello everyone, I'm wondering if anyone can tell me whether this is default behavior or if I need to configure it. In a nutshell, I have this setup:

    LB_Pool1: Server1:6060, Server2:6060
    LB_Pool2: Server1:80, Server2:80

I can do a check pretty easily on LB_Pool2, but I don't have a method for doing so on LB_Pool1. If something goes wrong with Server1, the check in LB_Pool2 will detect it immediately and remove it from the pool until it's back up. Will Server1 be removed from LB_Pool1 at the same time? And if not, how would I set it up so that happens?

No, but you can set the check port to 80 on pool 1. Is there some reason why you can't check port 6060?
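The suggested check-port override might look like this sketch (server names from the thread; bind address assumed):

```haproxy
listen LB_Pool1
    bind :6060
    # Health-check port 80 even though traffic goes to 6060, so both
    # pools take a server out of rotation at the same time
    server Server1 Server1:6060 check port 80
    server Server2 Server2:6060 check port 80
```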
Re: Client ip gets lost after a request being passed through two haproxies?
On 4/25/13 2:12 PM, PiBa-NL wrote:

Hey Wei Kong, you're probably using option forwardfor (http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20forwardfor), right? Think for a second about how that option works:
- HAProxyB receives a connection from the client IP and adds a header in the HTTP traffic, telling X-Forwarded-For: c.l.i.ent
- Then HAProxyA receives a connection from HAProxyB and adds another X-Forwarded-For: b.b.b.b header.
- Now nginx receives the connection from HAProxyA, and the message might contain two X-Forwarded-For headers, of which only the last header is used (as it should be).

Is this really what HAProxy does? I didn't test it yet, but most other reverse proxies just append IPs to the X-Forwarded-For header:

    First proxy:  X-Forwarded-For: 1.1.1.1
    Second proxy: X-Forwarded-For: 1.1.1.1, 2.2.2.2

David
Re: Client ip gets lost after a request being passed through two haproxies?
On 4/25/13 2:12 PM, PiBa-NL wrote: You're probably using option forwardfor, right?

I checked this: HAProxy does append to the X-Forwarded-For header. In this example, 10.2.3.40 is the HAProxy instance and 192.168.1.1 was the X-Forwarded-For header passed to it:

    client ip/xforwardfor: 10.250.52.241/192.168.1.1, 10.2.3.40

Maybe you are just parsing X-Forwarded-For wrong?

David
Re: Balancing SIP
On Apr 12, 2013, at 11:26 AM, Jonathan Matthews wrote: Does anyone have anything they could share about using HAProxy for load-balancing SIP? Positive /or/ negative, of course! :-) HAProxy doesn't support UDP traffic, so SIP won't work very well. Maybe look at LVS, or one of the numerous SIP proxies which already exist.
Re: HAProxy and Zimbra
On Apr 10, 2013, at 2:36 PM, Phil Daws wrote:

Hello, I have just started to explore HAProxy and am finding it amazing! As a long-time Zimbra user I wanted to see how one could balance the front-end web client, so I had a play around. What I have at present is the following configuration:

    frontend zimbra-zwc-frontend-https
        bind 172.30.8.21:443 ssl crt /etc/haproxy/certs/zimbra.pem
        mode tcp
        option tcplog
        reqadd X-Forwarded-Proto:\ https
        default_backend zimbra-zwc-backend-http

What version of HAProxy are you running? If you are on 1.5-dev12 or newer, you should be using 'mode http' for this frontend.

    backend zimbra-zwc-backend-http
        mode http
        balance roundrobin
        stick-table type ip size 200k expire 30m
        stick on src
        server zwc1 zm1:80 check port 80
        server zwc2 zm2:80 check port 80

I admit that the configuration has been cobbled together from other people's thoughts and ideas, though it does actually work! I did try to go the route of HTTPS to HTTPS, but that completely fell apart due to Zimbra using nginx and automatically re-routing HTTP to HTTPS. The other stumbling block was that I could not see how to check that the backend HTTPS (443) port was available; I have seen check port and check id used, but neither worked as expected. So at present I have HAProxy acting as the SSL terminator and backing off the requests to an HTTP backend. I can take one backend node down, upgrade it, and restart it without affecting any new connections, again with a single destination IP address. NICE! :)

If you are using 1.5, SSL backends are easy:

    backend https-backend
        mode http
        balance source
        server server1 server1:443 check ssl
        server server2 server2:443 check ssl
Re: Question on parsing request body, URL re-writing
On Apr 9, 2013, at 1:53 PM, Connelly, Zachary (CGI Federal) wrote: HAProxy Mail List, I am a new user of the HAProxy software. I am attempting to set it up for the first time and am interested to see if the tool is able to parse the body of a request. I saw in the configuration document that “…HAProxy never touches data contents, it stops analysis at the end of headers,” but was curious if there is any way to parse the body contents, not to change the info but to use for routing the request to an appropriate endpoint. My guess from this statement and reviewing the configuration document is no but wanted to see if I’m missing anything. I was actually going to ask the same thing - I have a 'security' application with hard-coded absolute URLs in the body that need to be rewritten. We do some funky stuff with HAProxy 1.5-dev to get it to work at all, but the absolute URLs are breaking a few use cases. David
Re: Two HAProxy instances with a shared IP
On 4/9/13 5:27 PM, Jeff Zellner wrote: Hey Phil, I've recently been evaluating all of the above. Wackamole + Spread have so far worked the best for me (distributing a number of VIPs across a cluster of HAProxy machines, allowing failover). Heartbeat didn't seem to work well in my environment, and I had a lot of trouble getting ucarp configured just right.

Not sure what environment you're in, but we run HAProxy with Pacemaker on RHEL, and it works pretty much flawlessly. In some cases we do active/passive VIPs the way most HA systems do, but for others we use BGP to advertise /32 routes into the network and use that to get traffic to our load balancers. Works nicely, especially since not everything has to be on the same VLAN to support it.

David
Re: HAProxy crashing on start
On 4/8/13 6:19 AM, Will Glass-Husain wrote: Hi, I've set up two identical instances of haproxy, using a peer table. I know they are identical because I cloned them from the same EC2 image. (I edited the config file by hand, but ran a diff to be sure it's the same). The problem is that while one instance starts up fine, the other starts, but immediately crashes. This one shows normal start up messages in the logs (but no errors). It creates the /var/run/haproxy.pid file and the /tmp/haproxy socket, and then the process apparently exits. Can you run the second instance with the '-d' switch from the command line? Basically take the existing command line and add '-d' after haproxy and capture the output. David
Re: Stickiness lost after failover
On 4/3/13 5:36 AM, Baptiste wrote: Better to use stick tables with store-response and store-request to replace your appsession configuration.

Is there a configuration example of this method somewhere? Google didn't turn up much for me.

David
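Baptiste's suggestion (stick tables with store-response/store-request in place of appsession) might look like this sketch; the cookie name and server addresses are hypothetical:

```haproxy
backend app
    # Learn the session cookie from the server's response, then stick
    # subsequent requests carrying the same cookie to that server
    stick-table type string len 32 size 200k expire 30m
    stick store-response res.cook(JSESSIONID)
    stick match req.cook(JSESSIONID)
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
```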
Re: htaccess in haproxy config
On 3/28/13 6:45 AM, Wolfgang Routschka wrote: Hello everybody, today a question about htaccess in the haproxy config directly: is it possible to configure htaccess-style protection in the haproxy config, similar to an Apache htpasswd file? Greetings

htaccess can do a lot of things, so I'm assuming you just want HTTP basic authentication:

    userlist httpusers
        user username insecure-password password

    frontend restricted_cluster
        acl auth_acl http_auth(httpusers)
        http-request auth realm basicauth unless auth_acl
Re: Intermittent success of rspirep
On 3/13/13 7:59 AM, Cyril Bonté wrote: For now, I don't know where to look, but maybe it can be useful to find and fix the issue. I also tried with:

    v1.5-dev8  : it works
    v1.5-dev9  : segfault
    v1.5-dev10 : segfault
    v1.5-dev11 : couldn't compile
    v1.5-dev12 : couldn't compile
    v1.5-dev13 : it doesn't work anymore

Maybe we'll have to study the commits between dev8 and dev9?

Does anyone have any suggestions on how to further troubleshoot this bug, or a potential workaround?

David
Re: Intermittent success of rspirep
Looks good so far. Will do more testing tomorrow. Thanks Willy! Sent from my iPhone On Mar 25, 2013, at 8:19 PM, Willy Tarreau w...@1wt.eu wrote: Hi guys, On Mon, Mar 25, 2013 at 06:54:24AM -0400, David Coulson wrote: On 3/13/13 7:59 AM, Cyril Bonté wrote: For now, I don't know where to look but maybe it can be useful to find and fix the issue. I also tried with : v1.5-dev8 : it works v1.5-dev9 : segfault v1.5-dev10 : segfault v1.5-dev11 : couldn't compile v1.5-dev12 : couldn't compile v1.5-dev13 : it doesn't work anymore Maybe we'll have to study the commits between dev8 and dev9 ? Does anyone have any suggestions how to further troubleshoot this bug, or a potential workaround? Yes, please use this patch, that I'm about to merge into master. And thanks to Cyril for pinging me again on this subject which I initially missed! Cheers, Willy 0001-BUG-MEDIUM-http-fix-another-issue-caused-by-http-sen.patch
Re: Active/active HAProxy
On Mar 19, 2013, at 9:52 AM, Jérôme Benoit wrote: cheap hosting with no control on their backbone and network load on one box reach the max. So what happens when you lose a system? If you are doing active/active and either/both systems are above 50% utilized, you're going to have an issue when a failure occurs. You know you can run LVS on the same boxes as HAProxy? That wouldn't require a topology change, at least from a network perspective. David
Re: Intermittent success of rspirep
On 3/11/13 9:18 PM, David Coulson wrote: Configuration is below. Short story is my rspirep Location header replacement is successful only ~20% of the time - I'm just testing w/ curl over and over. I saw mixed information about http-server-close and http-pretend-keepalive, but it didn't seem to make much difference. I am running 1.5-dev17 - I'm going to try to build and test the latest snapshot shortly.
I built the 20130311 snapshot, and still experience the same issue - I thinned the configuration down to a single backend server and tried the rspirep in both the frontend and backend portions of the config. Running haproxy in debug mode, I see on a failing request that the backend.srvhdr and backend.srvrep lines are missing from the output. Seems to be consistent, although there isn't an error that might indicate why those are missing. Is there an easy way to get more detailed debug output than this?

This works:

0001:app.accept(0007)=0008 from [10.2.3.40:58527]
0001:app.clireq[0008:]: GET /console-selfservice/ExistingUser/Links.do?action=myAccount HTTP/1.1
0001:app.clihdr[0008:]: User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
0001:app.clihdr[0008:]: Host: apptest.domain.com
0001:app.clihdr[0008:]: Accept: */*
0001:app-console-selfservice.srvrep[0008:0009]: HTTP/1.1 302 Moved Temporarily
0001:app-console-selfservice.srvhdr[0008:0009]: Cache-Control: no-cache=set-cookie
0001:app-console-selfservice.srvhdr[0008:0009]: Connection: close
0001:app-console-selfservice.srvhdr[0008:0009]: Date: Tue, 12 Mar 2013 10:49:33 GMT
0001:app-console-selfservice.srvhdr[0008:0009]: Transfer-Encoding: chunked
0001:app-console-selfservice.srvhdr[0008:0009]: Location:
https://rhesprodapp01.domain.com:7004/IMS-AA-IDP/sso/logon?RequestID=5e3830f4d834fa0a1d479e49f73a2b7dMajorVersion=1MinorVersion=2IssueInstant=2013-03-12T10%3A49%3A33ProviderID=urn%3Acom%3Arsasecurity%3A2004%3A10%3Asso%3Aprovider%3Aconsole-selfservice-providerIsPassive=falseAuthnContextClassRef=urn%3Acom%3Arsasecurity%3A2004%3A08%3Aauthn%3Apolicy%3Ac56399a2749110ac00d44d644862f5b2%20urn%3Acom%3Arsasecurity%3A2006%3A08%3Aauthn%3Asessionlifetime%3A1000c0027099AuthnContextComparison=exactRelayState=aHR0cHM6Ly9yaGVzcHJvZHJzYTAxLnN0ZXJsaW5nLmNvbTo3MDA0L2NvbnNvbGUtc2VsZnNlcnZpY2UvRXhpc3RpbmdVc2VyL0xpbmtzLmRvP2FjdGlvbj1teUFjY291bnQ%3Drsa%3AClientAddress=10%2E250%2E52%2E241SigAlg=http%3A%2F%2Fwww%2Ew3%2Eorg%2F2000%2F09%2Fxmldsig%23app-sha1Signature=PcVtsFezCv2FFj8YkYBHL2S5ji6r5uqKXJMu4MwtP9YQt3f7SZ4nyCp5dyt7Kq01OEgGn4JMLruSM644xHV2YMNI3PWq4U1D3%2FMsAJubWq9PDNAmT3mlZ3zFYwi%2Fy5ja4ukgeK9FN9EA0XtUqxZWP%2Fy4K%2B9eSMe50JpYrstXrpQ%3D
0001:app-console-selfservice.srvhdr[0008:0009]: Set-Cookie: ims-aa-idp-jsessionid=W6TkR1LdNs1mZG12vRZV2KVP5tvBhJqFMY1F5WPM1cNpkHnltPx8!-2128245804; path=/console-selfservice; secure
0001:app-console-selfservice.srvhdr[0008:0009]: X-Powered-By: Servlet/2.5 JSP/2.1
0001:app-console-selfservice.srvcls[0008:0009]
0001:app-console-selfservice.clicls[0008:0009]
0001:app-console-selfservice.closed[0008:0009]

rspirep doesn't work here:

0003:app.accept(0007)=0008 from [10.2.3.40:58554]
0003:app.clireq[0008:]: GET /console-selfservice/ExistingUser/Links.do?action=myAccount HTTP/1.1
0003:app.clihdr[0008:]: User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
0003:app.clihdr[0008:]: Host: apptest.domain.com
0003:app.clihdr[0008:]: Accept: */*
0003:app-console-selfservice.srvcls[0008:0009]
0003:app-console-selfservice.clicls[0008:0009]
0003:app-console-selfservice.closed[0008:0009]
David
Re: Intermittent success of rspirep
On 3/12/13 7:31 AM, Cyril Bonté wrote: I'm sorry to say that you've certainly met a bug while combining http-send-name-header (which is a bit tricky in the code) and ssl ciphering on servers. This is a case that has not been tested, I think. I can also reproduce this with the configuration you provided (also with the last snapshot). Once I remove ssl ciphering, it works well. I'm not sure I'll have time to investigate this before next week, so I hope Willy or someone at Exceliance can have a look at it. Do you really need to cipher outgoing connections? Disabling it could be a quick fix to your issue. The appliances being proxied only support HTTPS, so I can't do plain HTTP on the backend unfortunately. Fortunately, they're not production yet so nothing is 'broken'. At least I know it wasn't something I did :) David
Intermittent success of rspirep
Configuration is below. Short story is my rspirep Location header replacement is successful only ~20% of the time - I'm just testing w/ curl over and over. I saw mixed information about http-server-close and http-pretend-keepalive, but it didn't seem to make much difference. I am running 1.5-dev17 - I'm going to try to build and test the latest snapshot shortly. Thanks- David

global
    user haproxy
    group haproxy
    log 127.0.0.1 local2
    daemon
    stats socket /var/run/haproxy.stat mode 600 level admin
    maxconn 4
    ulimit-n 81000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    crt-base /etc/haproxy/ssl

defaults
    log global
    mode http
    option dontlognull
    #option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen stats :::7000
    mode http
    stats enable
    stats hide-version
    stats realm haproxy
    stats uri /
    stats auth haproxy:test

backend app-operations-console
    server rhesprodapp01 10.250.52.216:7072 check ssl
#    server rhesprodapp02 10.250.52.217:7072 check ssl

backend app-console-selfservice
    http-send-name-header Host
    #cookie JSESSIONID prefix
    balance source
    server rhesprodapp01.domain.com:7004 rhesprodapp01.domain.com:7004 check ssl
    server rhesprodapp02.domain.com:7004 rhesprodapp02.domain.com:7004 check ssl

frontend app
    timeout client 8640
    mode http
    option httpclose
    option forwardfor
#    option http-server-close
#    option http-pretend-keepalive
    bind :443 ssl crt domain.pem ciphers ECDHE-RSA-AES256-SHA:RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM
    default_backend app-console-selfservice
    acl url_app_ops path_beg /operations-console
    use_backend app-operations-console if url_app_ops
    rspirep ^Location: Location:\ foobar
    #rspirep ^Location:\s*https://([^/]+)/(.*)$ Location:\ https://apptest.domain.com/\2
Re: Problems with 1.5-dev17 and bind to interface
On 2/12/13 7:32 AM, Cornelius Riemenschneider wrote: The server is configured to listen to all traffic on eth1 on a specific port (12340), so either traffic sent to its normal internal IP address, or to its VIP address in case keepalived assigned it to us, will result in haproxy receiving traffic on 12340. I think that is the key issue - HAProxy will listen on all IPs assigned to the interface when it is started. I do not believe it will bind to new IPs as they are added by keepalived or some other clustering tool.
Re: Problems with 1.5-dev17 and bind to interface
On 2/12/13 7:38 AM, Cornelius Riemenschneider wrote: Ah okay, I expected "bind *:12340 interface eth1" to listen to traffic coming to the interface, not to bind to all IPs which are bound to the interface at the moment of starting haproxy. If that's really the case, the documentation of bind interface could be improved. There isn't a concept of 'bind to port 12345 on interface eth1'. It's either bind to *:12345 or x.x.x.x:12345. Another thing we have done at times is to just have haproxy bind to the box's IP address, then drop in an iptables rule to NAT inbound traffic to the VIP into that IP/port.
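The iptables workaround mentioned above can be sketched like this (the VIP, local address and port are placeholders):

```
# NAT traffic arriving for the VIP to the address haproxy is actually bound to
iptables -t nat -A PREROUTING -d 192.0.2.100 -p tcp --dport 12340 \
    -j DNAT --to-destination 10.0.0.5:12340
```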
Re: HAProxy basic setup question
No. HAProxy does not care if the systems are on the same subnet. Whatever you are using for VIP failover probably will though. Most people use bonded interfaces and multiple switches. Nothing to do with HAProxy. David On 12/8/12 8:20 AM, Hermes Flying wrote: Hi, I wanted to ask: If I have linux-1 and linux-2 running HAProxy instances each and let's say that HAProxy in linux-1 is active and does load balancing between linux-1 and linux-2 (both run Tomcat instance) is there a requirement that the 2 linux machines be connected to the same hub for load balancing/forwarding to succeed? If yes, isn't this a single point of failure? If not, what is the standard setup for what I describe? Thank you!
Re: HAProxy basic setup question
Then you need to plan your environment such that you have enough systems/capacity so you can lose some of it and not impact your ability to meet demand. Presumably if one system can't handle your workload, then having two in a cluster isn't going to be very effective after an outage. In general you figure out how many systems you need (N), then add another one (N+1). In some cases if the application is critical, you do N+2 so you can lose hardware and still have redundancy. On 12/8/12 12:44 PM, Hermes Flying wrote: But if one server is lost then the client requests will all be served by 1 of the 2 servers instead of distributing the requests among the 2 servers. So due to overload we could have from degradation to e.g. SW crash due to OutOfMemory exceptions. I mean doesn't this avalanche into a SPOF? *From:* Willy Tarreau w...@1wt.eu *To:* Hermes Flying flyingher...@yahoo.com *Cc:* David Coulson da...@davidcoulson.net; haproxy@formilux.org haproxy@formilux.org *Sent:* Saturday, December 8, 2012 7:25 PM *Subject:* Re: HAProxy basic setup question On Sat, Dec 08, 2012 at 09:14:48AM -0800, Hermes Flying wrote: Hi Willy, thanks for this. 1)I wanted to ask does the oblique lines indicates 2-port NIC on each server? Exactly. Note that almost all servers nowadays come with 2 onboard ports. 2) If I remove the oblique line as you note and have the 2 switches interconnected this is still considered a SPOF right? As loss of either switch1 or switch 2 brings the design down, right? No it's not a spof because if you lose a switch, you only lose the attached server and the attached router, so your architecture works in degraded mode but you still have one component of each kind to provide the service. Willy
Re: HAproxy and detect split-brain (network failures)
Again, you are mixing everything up. HAProxy has its own configuration - It defines what nodes your port 80 traffic (or whatever) is routed to. Haproxy does periodic health checks of these backend services to make sure they are available for requests. If you have multiple haproxy instances they will all independently do health checks and not share any of that information with each other. HAProxy will route traffic to all systems defined as a backend for a particular service based upon whatever criteria is in the haproxy config. You can run a two-node environment that is active/backup from a VIP perspective, but active/active from a haproxy service perspective - Each node would run Apache (or whatever your service is) and haproxy would distribute requests across both based on your haproxy config. But, at any point in time only one node would actually be routing requests through its local instance of haproxy. I can't make it any simpler than that. Draw a diagram of what you are trying to do if it doesn't make sense. On 11/29/12 2:06 PM, Hermes Flying wrote: You are saying that one instance of HAProxy runs in each system and one instance is assigned the VIP that clients hit (out of scope for HAProxy). But this HAProxy distributes the requests according to the load, either on system-A or system-B, which you seem to refer to as the backup system. In what way are you now referring to it as a backup system? Because I am interested in distributing the load to all the nodes. *From:* David Coulson da...@davidcoulson.net *To:* Hermes Flying flyingher...@yahoo.com *Cc:* Baptiste bed...@gmail.com; haproxy@formilux.org *Sent:* Thursday, November 29, 2012 8:57 PM *Subject:* Re: HAproxy and detect split-brain (network failures) You can do that, but haproxy doesn't have anything to do with the failover process, other than you run an instance of haproxy on one server, and another instance on your backup system. 
As I said, neither of the haproxy instances communicate anything, so all you need to do is move the IP clients are using from one server to the other in order to handle a failure. Moving the IP around is something keepalived, pacemaker, etc. handle - Look at their documentation for specifics and challenges in a two-node config. HAProxy doesn't have a concept of primary and backup in terms of its own instances. Each of them is standalone. It's up to you, based on your network/IP config, which one has traffic routed to it. David On 11/29/12 1:53 PM, Hermes Flying wrote: But if I install 2 HAProxy as load balancers, doesn't one act as the primary load balancer directing the load to the known servers while the secondary takes over load distribution as soon as the heartbeat fails? I remember reading this. Is this wrong? *From:* David Coulson da...@davidcoulson.net *To:* Hermes Flying flyingher...@yahoo.com *Cc:* Baptiste bed...@gmail.com; haproxy@formilux.org *Sent:* Thursday, November 29, 2012 8:39 PM *Subject:* Re: HAproxy and detect split-brain (network failures) You are mixing two totally different things together. 1) HAProxy will do periodic health checks of backend systems you are routing to. Whether you configure something as 'backup' or 'not backup' will determine if/how traffic is routed to it. The backend systems do not 'take over'. Haproxy just routes traffic to systems based on your configuration. The backend systems don't know/care about the other backend nodes, unless your application requires it, which is a different story and nothing to do with haproxy. HAproxy only cares about a single instance of itself - If you have more than one haproxy instance, they do NOT communicate anything between each other. 
2) In terms of keepalived, pacemaker, etc, it makes no difference which you use with haproxy - all they do is manage the IP address(es) which haproxy is listening on, and perhaps restart haproxy if it dies. Their configuration and how you maintain quorum in a two-node configuration is a question for one of their mailing lists, or just read their documentation. I personally use pacemaker. On 11/29/12 1:35 PM, Hermes Flying wrote: Well I don't follow: You can have a pool of primary that it routes across, then backup systems that are only used when all primary systems are unavailable. When you are saying that the backup systems that are used when primary systems are unavailable, how do they decide to take over? How do they know that the other systems are unavailable? Are you saying that they depend on third party components like the ones you mentioned (Keepalived etc)? In this case, what is the most suitable tool to be used along with HAProxy? Is there a reference manual for this somewhere? *From:* David Coulson mailto:da...@davidcoulson.net *To:* Hermes Flying mailto:flyingher...@yahoo.com *Cc:* Baptiste mailto:bed...@gmail.com; mailto:haproxy
Re: HAproxy and detect split-brain (network failures)
Both haproxy instances have the same config, with the tomcat instances with the same weight, etc. Run something like keepalived or pacemaker to manage a VIP between the two boxes. That's it. Not sure about keepalived, but pacemaker can make sure haproxy is running, then either restart it or move the VIP if it is not running. David On 11/29/12 2:27 PM, Hermes Flying wrote: Something like the following: [ASCII diagram: HAProxy1 and HAProxy2, each connected to both Tomcat1 and Tomcat2] HAProxy1 is on the same machine as Tomcat1. HAProxy2 is on the same machine as Tomcat2. HAProxy1 distributes the load among Tomcat1 and Tomcat2. I erroneously thought that HAProxy2 would take over when HAProxy1 crashed to distribute the load among Tomcat1/Tomcat2. So if both are independent, what can I do? *From:* David Coulson da...@davidcoulson.net *To:* Hermes Flying flyingher...@yahoo.com *Cc:* Baptiste bed...@gmail.com; haproxy@formilux.org *Sent:* Thursday, November 29, 2012 9:12 PM *Subject:* Re: HAproxy and detect split-brain (network failures) Again, you are mixing everything up. HAProxy has its own configuration - It defines what nodes your port 80 traffic (or whatever) is routed to. Haproxy does periodic health checks of these backend services to make sure they are available for requests. If you have multiple haproxy instances they will all independently do health checks and not share any of that information with each other. HAProxy will route traffic to all systems defined as a backend for a particular service based upon whatever criteria is in the haproxy config. You can run a two-node environment that is active/backup from a VIP perspective, but active/active from a haproxy service perspective - Each node would run Apache (or whatever your service is) and haproxy would distribute requests across both based on your haproxy config. But, at any point in time only one node would actually be routing requests through its local instance of haproxy. 
I can't make it any simpler than that. Draw a diagram of what you are trying to do if it doesn't make sense. On 11/29/12 2:06 PM, Hermes Flying wrote: You are saying that one instance of HAProxy runs in each system and one instance is assigned the VIP that clients hit (out of scope for HAProxy). But this HAProxy distributes the requests according to the load, either on system-A or system-B, which you seem to refer to as the backup system. In what way are you now referring to it as a backup system? Because I am interested in distributing the load to all the nodes. *From:* David Coulson da...@davidcoulson.net *To:* Hermes Flying flyingher...@yahoo.com *Cc:* Baptiste bed...@gmail.com; haproxy@formilux.org *Sent:* Thursday, November 29, 2012 8:57 PM *Subject:* Re: HAproxy and detect split-brain (network failures) You can do that, but haproxy doesn't have anything to do with the failover process, other than you run an instance of haproxy on one server, and another instance on your backup system. As I said, neither of the haproxy instances communicate anything, so all you need to do is move the IP clients are using from one server to the other in order to handle a failure. Moving the IP around is something keepalived, pacemaker, etc. handle - Look at their documentation for specifics and challenges in a two-node config. HAProxy doesn't have a concept of primary and backup in terms of its own instances. Each of them is standalone. It's up to you, based on your network/IP config, which one has traffic routed to it. David On 11/29/12 1:53 PM, Hermes Flying wrote: But if I install 2 HAProxy as load balancers, doesn't one act as the primary load balancer directing the load to the known servers while the secondary takes over load distribution as soon as the heartbeat fails? I remember reading this. Is this wrong? 
*From:* David Coulson mailto:da...@davidcoulson.net *To:* Hermes Flying mailto:flyingher...@yahoo.com *Cc:* Baptiste mailto:bed...@gmail.com; mailto:haproxy@formilux.org mailto:haproxy@formilux.org *Sent:* Thursday, November 29, 2012 8:39 PM *Subject:* Re: HAproxy and detect split-brain (network failures) You are mixing two totally different things together. 1) HAProxy will do periodic health checks of backend systems you are routing to. Depending if you configure something as 'backup' or 'not backup' will determine if/how traffic is routed to it. The backend systems do not 'take over'. Haproxy just routes traffic to systems based on your configuration. The backend systems don't know/care about the other backend nodes, unless your application requires it which is a different story and nothing to do with haproxy. HAproxy only cares about a single instance of itself - If you have more than one haproxy instance, they do
Re: HAproxy and detect split-brain (network failures)
Again, you need to talk to the pacemaker people for actual clustering information. The ping was so a node could detect it lost upstream connectivity and move the VIP; otherwise the VIP may continue to run on a system which does not have access to your network. This has nothing at all to do with split brain. If you want to deal with split brain, add a third node. Period. You also want to have redundant heartbeat communication paths. You also want STONITH/fencing so if one node detects the other is down it'll power it off or crash it. I've not had issues with a two-node cluster with two diverse backend communication links and fencing enabled. David On 11/29/12 3:58 PM, Hermes Flying wrote: You can have pacemaker ping an IP (gateway for example) and migrate the VIP based on that How does this help for splitbrain? If I understand what you say, pacemaker will ping an IP and if successful will assume that the other node has crashed. But what if the other node hasn't, and it is just their communication link that failed? Won't both become primary? How does the ping help? *From:* David Coulson da...@davidcoulson.net *To:* Hermes Flying flyingher...@yahoo.com *Cc:* Baptiste bed...@gmail.com; haproxy@formilux.org *Sent:* Thursday, November 29, 2012 10:26 PM *Subject:* Re: HAproxy and detect split-brain (network failures) On 11/29/12 3:11 PM, Hermes Flying wrote: I see now! One last question since you are using Pacemaker. Do you recommend it for splitbrain so that I look into that direction? Any two-node cluster has risk of split brain. If you implement fencing/STONITH, you are in a better place. If you have a third node, that's even better, even if it does not actually run any services beyond the cluster software. I mean when you say that pacemaker restarts HAProxy, does it detect network failures as well? Or only SW crashes? 
I assume pacemaker will be aware of both HAProxy1 and HAProxy2 in my described deployment You can have pacemaker ping an IP (gateway for example) and migrate the VIP based on that. In my config I have haproxy configured as a cloned resource in pacemaker, so all nodes have the same pacemaker config for haproxy and it keeps haproxy running on all nodes all of the time.
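A rough sketch of that setup in crm shell syntax (the resource names, gateway IP and VIP here are placeholders, not taken from this thread):

```
# haproxy cloned so it runs on every node, all of the time
primitive p_haproxy lsb:haproxy op monitor interval=30s
clone cl_haproxy p_haproxy
# ping the gateway; a node that loses connectivity gets a zero pingd attribute
primitive p_ping ocf:pacemaker:ping \
    params host_list="192.0.2.1" multiplier=1000 \
    op monitor interval=10s
clone cl_ping p_ping
# keep the VIP off any node that cannot reach the gateway
primitive p_vip ocf:heartbeat:IPaddr2 params ip=192.0.2.100 cidr_netmask=24
location loc_vip_connected p_vip \
    rule -inf: not_defined pingd or pingd lte 0
```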
Re: HAproxy and detect split-brain (network failures)
In general, yes, Pacemaker is reliable. If your config is wrong, you may still have an outage in the event of a failure. That said, if you are a business and need support, you probably want to use whatever clustering software ships with the distribution you use. I believe SuSE uses pacemaker, but RedHat still uses rgmanager. Pacemaker is tech preview in RHEL6 but will be mainline in 7. I believe RedHat employs some core developers of pacemaker. David On 11/29/12 4:10 PM, Hermes Flying wrote: Thank you for your help. I take it that you find Pacemaker reliable in your experience? Should I look into it? *From:* David Coulson da...@davidcoulson.net *To:* Hermes Flying flyingher...@yahoo.com *Cc:* Baptiste bed...@gmail.com; haproxy@formilux.org *Sent:* Thursday, November 29, 2012 11:04 PM *Subject:* Re: HAproxy and detect split-brain (network failures) Again, you need to talk to the pacemaker people for actual clustering information. The ping was so a node could detect it lost upstream connectivity and move the VIP; otherwise the VIP may continue to run on a system which does not have access to your network. This has nothing at all to do with split brain. If you want to deal with split brain, add a third node. Period. You also want to have redundant heartbeat communication paths. You also want STONITH/fencing so if one node detects the other is down it'll power it off or crash it. I've not had issues with a two-node cluster with two diverse backend communication links and fencing enabled. David On 11/29/12 3:58 PM, Hermes Flying wrote: You can have pacemaker ping an IP (gateway for example) and migrate the VIP based on that How does this help for splitbrain? If I understand what you say, pacemaker will ping an IP and if successful will assume that the other node has crashed. But what if the other node hasn't, and it is just their communication link that failed? Won't both become primary? How does the ping help? 
*From:* David Coulson mailto:da...@davidcoulson.net *To:* Hermes Flying mailto:flyingher...@yahoo.com *Cc:* Baptiste mailto:bed...@gmail.com; mailto:haproxy@formilux.org mailto:haproxy@formilux.org *Sent:* Thursday, November 29, 2012 10:26 PM *Subject:* Re: HAproxy and detect split-brain (network failures) On 11/29/12 3:11 PM, Hermes Flying wrote: I see now! One last question since you are using Pacemaker. Do you recommend it for splitbrain so that I look into that direction? Any two node cluster has risk of split brain. if you implement fencing/STONITH, you are in a better place. If you have a third node, that's even better, even if it does not actually run any services beyond the cluster software I mean when you say that pacemaker restart HAProxy, does it detect network failures as well? Or only SW crashes? I assume pacemaker will be aware of both HAProxy1 and HAProxy2 in my described deployment You can have pacemaker ping an IP (gateway for example) and migrate the VIP based on that. In my config I have haproxy configured as a cloned resource in pacemaker, so all nodes have the same pacemaker config for haproxy and it keeps haproxy running on all nodes all of the time.
Re: Load Balalncing Anycast DNS using Round Robin and HAproxy
On 9/6/12 4:59 AM, ril.kidd wrote: Hello, I have set up anycast DNS using BIND as the DNS server and the BIRD routing daemon. I have 1 route server and 5 route clients. If you are using anycast, why not just let the routers install multiple routes to the destination IP and do the 'load balancing' there, rather than adding another layer of complexity? Not sure how you do that with BIRD, but you can use the 'maximum-paths' option on a Cisco router to allow more than one BGP route to a particular destination subnet to end up in the routing table.
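On Cisco IOS the multipath option looks roughly like this (the AS number is a placeholder):

```
router bgp 65001
 ! allow up to 2 equal-cost BGP paths into the routing table
 maximum-paths 2
```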
Re: HAProxy in High Availability
On 6/28/12 7:15 PM, Willy Tarreau wrote: That's already what keepalived does, and it goes a bit further in that you can monitor the service for real, not just the process presence, and even decide several failover scenarios using floating VRRP priorities. For instance, I usually assign a weight of 4 to my haproxy process and 2 to sshd. That way, if haproxy dies, the other one takes the VIP. However, if both haproxy work and one sshd dies, the one taking the VIP is the one still reachable for administration. If one of each dies, the one with the running haproxy wins. You can get similar functionality with Pacemaker - Not sure if it is more complex, the same, or simpler to support than keepalived. It's just 'different'. I've used keepalived twice in the last 10 years, and pacemaker pretty much every day - Probably a little biased. Pacemaker is tech preview in RHEL6, and SuSE 11 uses it as its standard resource manager for clustering. Would be nice if there was documentation, or at least config snippets, from users who have implemented it in the field. Some more complex setups involve switching a VIP depending on the number of application servers that haproxy sees, using 'monitor-fail if' rules. This is handy for multi-layer architectures with an inter-DC LAN for instance. Do you have a configuration example of this? I don't think there is a custom HAProxy OCF for Pacemaker yet - I've just been using the init.d script, but it sounds like it could use a 'real' Pacemaker script to support it properly, especially if you can feed availability information of backend systems as attributes to influence where resources are placed. Is parsing the HTTP stats page (in CSV, XML, or whatever) the simplest way to get the current 'state' of HAProxy and the systems it is routing to, or is there a better/cleaner way to do it?
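Willy's weighting scheme can be sketched in keepalived configuration like this (the interface, router ID and VIP are placeholders):

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while the process exists
    interval 2
    weight 4                      # haproxy counts for more than sshd
}
vrrp_script chk_sshd {
    script "killall -0 sshd"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress { 192.0.2.100 }
    track_script {
        chk_haproxy
        chk_sshd
    }
}
```

With both scripts passing, each node's effective priority is its base priority plus 6; losing haproxy costs 4 and losing sshd costs 2, which reproduces the failover ordering Willy describes.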
Re: HAProxy in High Availability
You're better off running haproxy via pacemaker, so if haproxy dies the VIP won't stay on that host. We've been doing this for a while and it works nicely. Simple to configure too. On 6/28/12 6:39 AM, Türker Sezer wrote: Hi, We use HAProxy in all our high availability setups. We set up HAProxy instances as active-active or active-backup instances. We use DNS round-robin to distribute requests to active haproxy instances and use keepalived for failover.
Re: HAProxy in High Availability
primitive re-haproxy-lsb lsb:haproxy \
    meta failure-timeout=60 \
    op monitor interval=30 timeout=5s \
    op start interval=0 timeout=5s \
    op stop interval=0 timeout=5s
primitive re-adproxy-ip ocf:heartbeat:IPaddr \
    meta failure-timeout=60 \
    params ip=172.31.0.5 cidr_netmask=32 nic=lo \
    op monitor interval=30s
group gr-haproxy re-haproxy-lsb re-adproxy-ip

On 6/28/12 7:01 AM, Thomas Manson wrote: I don't know Pacemaker, do you have a sample config to share?
Re: HAProxy in High Availability
They fail over IPs between hosts running haproxy using keepalived - The 2 (or more) IPs referenced by the DNS record will always be 'alive'. On 6/28/12 7:00 AM, Thomas Manson wrote: Usually a client will cache the IP served by the DNS server, in order to not query the DNS system each time. So how can the client switch to another server once it has resolved one? Regards, Thomas.
Re: HAProxy in High Availability
Multiple IP addresses are used, and managed by keepalived. On 6/28/12 7:11 AM, Thomas Manson wrote: Ok, but then I don't get where DNS Round Robin is used, if only one IP is used. (it may be obvious, sorry ;) Regards, Thomas. On Thu, Jun 28, 2012 at 1:08 PM, Türker Sezer turkerse...@tsdesign.info wrote: On Thu, Jun 28, 2012 at 11:59 AM, Manson Thomas m...@mansonthomas.com wrote: Usually a client will cache the IP served by the DNS server, in order to not query the DNS system each time. So how can the client switch to another server once it has resolved one? Clients don't switch IP addresses; they connect to the same IP address. But we move that IP address to a backup or another active instance using keepalived, so they reach another server using the same IP address. -- Türker Sezer TS Design Informatics LTD. http://www.tsdesign.info/
Re: haproxy - varnish - backend server
Is haproxy adding X-Forwarded-For to the request it sends to Varnish? If so, just don't have Varnish manipulate X-Forwarded-For and your app will use the header added by HAProxy. David On 6/5/12 9:04 PM, hapr...@serverphorums.com wrote: Hi guys Originally we had haproxy in front and connecting to the backend server (haproxy - backend server), and the applications and backend server saw the real client IP fine without any issues. But we decided to try adding Varnish cache in between (haproxy - varnish - backend server). Problem now is the backend server and apps are seeing the client IP of the haproxy server and not the real visitor client IPs. Varnish has the appropriate forwarding of client IPs (remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip;) and it works if Varnish is the only thing in front of the backends. So what setting, if any, in haproxy would I need to add or check for, to get the proper client IP from haproxy through varnish into the backend? Using haproxy v1.3 here with Varnish 3.0.2. thanks --- posted at http://www.serverphorums.com/read.php?10,508289,508289#msg-508289
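On the haproxy side this is just option forwardfor; a minimal sketch (the frontend/backend names and the Varnish port are assumptions):

```
frontend fe_http
    bind :80
    # append the real client IP as X-Forwarded-For before handing off to Varnish
    option forwardfor
    default_backend be_varnish

backend be_varnish
    server varnish1 127.0.0.1:6081 check
```

With that in place, delete the "remove req.http.X-Forwarded-For;" / "set req.http.X-Forwarded-For = client.ip;" lines from vcl_recv so the header haproxy sets passes through untouched.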
Re: ha proxy Nagios plugin
We had the same issue with NagiosXI - I just updated check_haproxy to append ;csv to the URL that it does a GET against. Seems like less work than modifying all your HAProxy instances :-) On 6/4/12 2:54 AM, Esteban Torres Rodríguez wrote: 2012/6/2 Willy Tarreau w...@1wt.eu: On Fri, Jun 01, 2012 at 11:28:00AM +0200, Esteban Torres Rodríguez wrote: 2012/6/1 Laurent DOLOSOR laur...@dinhosting.fr: Hello, But when I set the Centreon interface command, everything that is behind the ; is deleted. It seems that the ; is a character not recognized or invalid. Try with a \ before the ;, we had the same problem and it resolved it. Nothing. Not working. This is the command: check_haproxy -u http://ipserver:/\;csv; -U admin -P pass This is the output of the command: check_haproxy -u 'http://10.239.212.26:/\ That's bad, it is possible that they strip everything past this point to avoid accidentally running commands on poorly written scripts :-/ I'm seeing a possible workaround: use another character instead of ';' in your requests, and have haproxy replace it with a ';' in the frontend, with the backend processing the stats. For instance: check_haproxy -u http://ipserver:/=csv

frontend stats_frt
    reqrep ^([^\ :]*\ )(/=)(.*) \1/;\3
    default_backend stats_bck

backend stats_bck
    stats uri /
    ...

Willy Thanks!!! It works perfectly