Re: problem with sort of caching of use_backend with socket.io and apache

2012-11-29 Thread david rene comba lareu
Hi,

Many thanks, your link was exactly what I needed! :D

Regards,
Shadow.

2012/11/29 Baptiste bed...@gmail.com:
 Hi David,

 For more information about HAProxy and websockets, please have a look at:
 http://blog.exceliance.fr/2012/11/07/websockets-load-balancing-with-haproxy/

 It may give you some hints and point you in the right direction.

 cheers


 On Wed, Nov 28, 2012 at 6:34 PM, david rene comba lareu
 shadow.of.sou...@gmail.com wrote:
 Thanks Willy, I solved it as soon as you answered me, but I'm still
 struggling with the configuration to make it work as I need:

 my last question was this:
 http://serverfault.com/questions/451690/haproxy-is-caching-the-forwarding
 and I got it working, but for some reason, after the authentication is
 made and some commands are sent, the connection is dropped and a
 new connection is made, as you can see here:

   info  - handshake authorized 2ZqGgU2L5RNksXQRWuhi
   debug - setting request GET /socket.io/1/websocket/2ZqGgU2L5RNksXQRWuhi
   debug - set heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
   debug - client authorized for
   debug - websocket writing 1::
   debug - websocket received data packet
 5:3+::{name:ferret,args:[tobi]}
   debug - sending data ack packet
   debug - websocket writing 6:::3+[woot]
   info  - transport end (socket end)
   debug - set close timeout for client 2ZqGgU2L5RNksXQRWuhi
   debug - cleared close timeout for client 2ZqGgU2L5RNksXQRWuhi
   debug - cleared heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
   debug - discarding transport
   debug - client authorized
   info  - handshake authorized WkHV-B80ejP6MHQTWuhj
   debug - setting request GET /socket.io/1/websocket/WkHV-B80ejP6MHQTWuhj
   debug - set heartbeat interval for client WkHV-B80ejP6MHQTWuhj
   debug - client authorized for
   debug - websocket writing 1::
   debug - websocket received data packet
 5:4+::{name:ferret,args:[tobi]}
   debug - sending data ack packet
   debug - websocket writing 6:::4+[woot]
   info  - transport end (socket end)

 i tried several configurations, something like this:
 http://stackoverflow.com/questions/4360221/haproxy-websocket-disconnection/

 and also declaring 2 backends, and using an ACL to forward to a backend
 that has
   option http-pretend-keepalive
 when the request is a websocket request, and to a backend that has
 http-server-close when the request is only for socket.io static files
 or any other type of request that is not a websocket.

 To clarify: http-server-close is set only on the nginx backend and on
 the static-files backend, while http-pretend-keepalive is set on the
 'all' frontend and on the websocket backend.
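
 As a sketch, that two-backend split could look like the following (the
 backend names, the Upgrade-header ACL, and the port are illustrative,
 not the exact config above; 'timeout tunnel' requires a 1.5-dev HAProxy,
 so verify directives against your version):

 frontend all 0.0.0.0:80
 acl is_websocket hdr(Upgrade) -i WebSocket
 acl is_static path_beg /socket.io/socket.io.js
 use_backend ws_backend if is_websocket
 use_backend static_backend if is_static
 default_backend www_backend

 backend ws_backend
 option http-pretend-keepalive
 timeout tunnel 1h # keep idle websocket connections open
 server node1 localhost:5558 maxconn 1024 check

 backend static_backend
 option http-server-close
 server node1 localhost:5558 maxconn 1024 check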

 Could anyone point me in the right direction? I tried several
 combinations and none worked so far :(

 thanks in advance for your time and patience :)

 2012/11/24 Willy Tarreau w...@1wt.eu:
 Hi David,

 On Sat, Nov 24, 2012 at 09:26:56AM -0300, david rene comba lareu wrote:
 Hi everyone,

 I'm a little stuck on a problem I'm having while trying to configure
 HAProxy the way I need, so I need a little help from you guys,
 who know a lot more than me about this; I reviewed all the
 documentation and tried several things but nothing worked :(.

 basically, my structure is:

 HAProxy as frontend, on port 80 - forwards by default to a webserver
 (in this case apache; on other machines it could be nginx)
  - depending on the domain
 and the request, forwards to a Node.js app

 so i have something like this:

 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn 4096
 user haproxy
 group haproxy
 daemon

 defaults
 log global
 mode    http
 maxconn 2000
 contimeout  5000
 clitimeout  5
 srvtimeout  5


 frontend all 0.0.0.0:80
 timeout client 5000
 default_backend www_backend

 acl is_soio url_dom(host) -i socket.io #if the request contains socket.io

 acl is_chat hdr_dom(host) -i chaturl #if the request comes from chaturl.com

 use_backend chat_backend if is_chat is_soio

 backend www_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout server 5000
 timeout connect 4000
 server server1 localhost:6060 weight 1 maxconn 1024 check # forwards to apache2

 backend chat_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout queue 5
 timeout server 5
 timeout connect 5
 server server1 localhost:5558 weight 1 maxconn 1024 check # forwards to node.js app

 my application uses socket.io, so anything that match the domain and
 has socket.io in the request, should forward to the chat_backend.
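
 For matching "anything on that domain with socket.io in the request",
 a path-based ACL is probably closer to the intent than url_dom(host);
 a sketch (the domain is the example one above; check fetch availability
 against your HAProxy version's documentation):

 acl is_chat hdr_dom(host) -i chaturl.com
 acl is_soio path_beg /socket.io
 use_backend chat_backend if is_chat is_soio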

 The problem is that if I load directly from the browser, let's say, the
 socket.io file (it will be something like
 http://www.chaturl.com/socket.io/socket.io.js), it loads perfectly, but
 then when I try to load index.html (as
 

Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Baptiste
Hi,

This is not HAProxy's role; that is the job of the tool you use to
ensure high availability.

I could see a way where HAProxy reports one interface failing; maybe
that could help you detect whether you're in a split-brain situation.

cheers



On Thu, Nov 29, 2012 at 11:51 AM, Hermes Flying flyingher...@yahoo.com wrote:
 Hi,
 I am looking into using HAProxy as our load balancer.
 I see that you are using a primary/backup approach. I was wondering how
 HAProxy (if it does) addresses split-brain situations. Do you have a mechanism
 to detect and avoid it? Do you have a standard recommendation for all
 those using your solution?

 Thanks



Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Robert Snyder
Hi,

We use Keepalived (http://www.keepalived.org/index.html) to manage the Virtual IP 
addresses between our two physical HAProxy servers. It maintains a 
heartbeat between the servers, and in the event of a failure ensures that 
the VIPs are migrated and the service is brought up. It also handles migration 
back after a restart of our primary, so that, if available, that is the server 
that owns the IPs. 

We use Mercurial to manage the configuration files between the two servers to 
maintain consistency, so that we are prepared for consistent failovers. 
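
A minimal Keepalived sketch of that arrangement (the interface name,
virtual_router_id, priority, and VIP are placeholders, not our actual
values; the killall probe is one common way to track the local haproxy
process):

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds only while an haproxy process exists
    interval 2
}

vrrp_instance VI_1 {
    state MASTER          # the peer box is configured as BACKUP
    interface eth0
    virtual_router_id 51
    priority 101          # peer uses a lower priority, e.g. 100
    virtual_ipaddress {
        192.168.0.100/24
    }
    track_script {
        chk_haproxy
    }
}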

Robert

On Nov 29, 2012, at 8:02 AM, Baptiste bed...@gmail.com wrote:

 Hi,
 
 This is not HAProxy's role; that is the job of the tool you use to
 ensure high availability.
 
 I could see a way where HAProxy reports one interface failing; maybe
 that could help you detect whether you're in a split-brain situation.
 
 cheers
 
 
 
 On Thu, Nov 29, 2012 at 11:51 AM, Hermes Flying flyingher...@yahoo.com 
 wrote:
 Hi,
 I am looking into using HAProxy as our load balancer.
 I see that you are using a primary/backup approach. I was wondering how
 HAProxy (if it does) addresses split-brain situations. Do you have a mechanism
 to detect and avoid it? Do you have a standard recommendation for all
 those using your solution?
 
 Thanks
 





Robert Snyder
Outreach Technology Services
The Pennsylvania State University
The 329 Building, Suite 306E
University Park  PA  16802
Phone: 814-865-0912
E-mail: rsny...@psu.edu








Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Hermes Flying
Hi Robert,
But with Keepalived you can only detect that the 2 nodes cannot contact each 
other (network failure). How do you know whether the other node/process actually 
crashed, so that the secondary can become the primary? 
 



Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Robert Snyder
David,

Exactly.

Robert


On Nov 29, 2012, at 1:57 PM, David Coulson da...@davidcoulson.net wrote:

 You can do that, but haproxy doesn't have anything to do with the failover 
 process, other than you run an instance of haproxy on one server, and another 
 instance on your backup system. As I said, neither of the haproxy instances 
 communicate anything, so all you need to do is move the IP clients are using 
 from one server to the other in order to handle a failure. Moving the IP 
 around is something keepalived, pacemaker, etc handles - Look at their 
 documentation for specifics and challenges in a two-node config.
 
 HAProxy doesn't have a concept of primary and backup in terms of its own 
 instances. Each of them is standalone. It's up to you, based on your 
 network/IP config, which one has traffic routed to it. 
 
 David
 
 
 On 11/29/12 1:53 PM, Hermes Flying wrote:
 But if I install 2 HAProxy instances as load balancers, doesn't one act as the primary 
 load balancer, directing the load to the known servers, while the secondary 
 takes over load distribution as soon as the heartbeat fails? I remember 
 reading this. Is this wrong?
 
 From: David Coulson da...@davidcoulson.net
 To: Hermes Flying flyingher...@yahoo.com 
 Cc: Baptiste bed...@gmail.com; haproxy@formilux.org 
 haproxy@formilux.org 
 Sent: Thursday, November 29, 2012 8:39 PM
 Subject: Re: HAproxy and detect split-brain (network failures)
 
 You are mixing two totally different things together.
 
 1) HAProxy will do periodic health checks of the backend systems you are routing 
 to. Whether you configure something as 'backup' or not will 
 determine if/how traffic is routed to it. The backend systems do not 'take 
 over'. HAProxy just routes traffic to systems based on your configuration. 
 The backend systems don't know/care about the other backend nodes, unless 
 your application requires it, which is a different story and nothing to do 
 with haproxy. HAProxy only cares about a single instance of itself - if you 
 have more than one haproxy instance, they do NOT communicate anything 
 between each other.
 
 2) In terms of keepalived, pacemaker, etc, it makes no difference which you 
 use with haproxy - all they do is manage the IP address(es) which haproxy is 
 listening on, and perhaps restart haproxy if it dies. Their configuration 
 and how you maintain quorum in a two-node configuration is a question for 
 one of their mailing lists, or just read their documentation. I personally 
 use pacemaker.
 
 On 11/29/12 1:35 PM, Hermes Flying wrote:
 Well I don't follow:
 You can have a pool of primary that it routes across, then backup systems 
 that are only used when all primary systems are unavailable.
 When you are saying that the backup systems that are used when primary 
 systems are unavailable, how do they decide to take over? How do they know 
 that the other systems are unavailable?
 Are you saying that they depend on third party components like the ones you 
 mentioned (Keepalived etc)? In this case, what is the most suitable tool to 
 be used along with HAProxy? Is there a reference manual for this somewhere?
 
 From: David Coulson mailto:da...@davidcoulson.net
 To: Hermes Flying mailto:flyingher...@yahoo.com 
 Cc: Baptiste mailto:bed...@gmail.com; mailto:haproxy@formilux.org 
 mailto:haproxy@formilux.org 
 Sent: Thursday, November 29, 2012 8:21 PM
 Subject: Re: HAproxy and detect split-brain (network failures)
 
 HAProxy only does primary and backup in terms of active backend systems - 
 You can have a pool of primary that it routes across, then backup systems 
 that are only used when all primary systems are unavailable.
 
 There is no concept of a cluster in terms of haproxy instances, although 
 you can run more than one and manage them via something like pacemaker, 
 keepalived or rgmanager.
 
 On 11/29/12 1:19 PM, Hermes Flying wrote:
 Hi,
 From a quick look into HAProxy, I see that it is a primary/backup 
 architecture. So isn't ensuring that both nodes don't become primary 
 part of HAProxy's primary/backup protocol?
 
 From: Baptiste mailto:bed...@gmail.com
 To: Hermes Flying mailto:flyingher...@yahoo.com 
 Cc: mailto:haproxy@formilux.org mailto:haproxy@formilux.org 
 Sent: Thursday, November 29, 2012 3:02 PM
 Subject: Re: HAproxy and detect split-brain (network failures)
 
 Hi,
 
 This is not HAProxy's role; that is the job of the tool you use to
 ensure high availability.
 
 I could see a way where HAProxy reports one interface failing; maybe
 that could help you detect whether you're in a split-brain situation.
 
 cheers
 
 
 
 On Thu, Nov 29, 2012 at 11:51 AM, Hermes Flying flyingher...@yahoo.com 
 wrote:
  Hi,
  I am looking into using HAProxy as our load balancer.
  I see that you are using a primary/backup approach. I was wondering how
  HAProxy (if it does) addresses split-brain situations. Do you have a
  mechanism to detect and avoid it? Do you have a standard recommendation for all
  those 

Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread David Coulson

Again, you are mixing everything up.

HAProxy has its own configuration - it defines which nodes your port 80 
traffic (or whatever) is routed to. HAProxy does periodic health checks 
of these backend services to make sure they are available for requests. 
If you have multiple haproxy instances, they will all independently do 
health checks and not share any of that information with each other. 
HAProxy will route traffic to all systems defined as a backend for a 
particular service based upon whatever criteria is in the haproxy config.


You can run a two-node environment that is active/backup from a VIP 
perspective, but active/active from a haproxy service perspective - each 
node would run Apache (or whatever your service is) and haproxy would 
distribute requests across both based on your haproxy config. But, at 
any point in time only one node would actually be routing requests 
through its local instance of haproxy.


I can't make it any simpler than that. Draw a diagram of what you are 
trying to do if it doesn't make sense.



On 11/29/12 2:06 PM, Hermes Flying wrote:
You are saying that one instance of HAProxy runs in each system and 
one instance is assigned the VIP that clients hit on (out of scope for 
HAProxy).
But this HAProxy distributes the requests according to the load, 
either on system-A or system-B, which you seem to refer to as the 
backup system. In what way are you now referring to it as a backup 
system? Because I am interested in distributing the load to all the nodes.



Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Hermes Flying
Something like the following:

 HAProxy1 --+--> Tomcat1
            |
            +--> Tomcat2

 HAProxy2 --+--> Tomcat1
            |
            +--> Tomcat2

HAProxy1 is on the same machine as Tomcat1.
HAProxy2 is on the same machine as Tomcat2.
HAProxy1 distributes the load among Tomcat1 and Tomcat2.
I erroneously thought that HAProxy2 would take over when HAProxy1 crashed to 
distribute the load among Tomcat1/Tomcat2.
So if both are independent, what can I do?
 
 


 From: David Coulson da...@davidcoulson.net
To: Hermes Flying flyingher...@yahoo.com 
Cc: Baptiste bed...@gmail.com; haproxy@formilux.org haproxy@formilux.org 
Sent: Thursday, November 29, 2012 9:12 PM
Subject: Re: HAproxy and detect split-brain (network failures)
  

Again, you are mixing everything up.

HAProxy has its own configuration - it defines which nodes your port
80 traffic (or whatever) is routed to. HAProxy does periodic health
checks of these backend services to make sure they are available for
requests. If you have multiple haproxy instances they will all
independently do health checks and not share any of that information
with each other. HAProxy will route traffic to all systems defined
as a backend for a particular service based upon whatever criteria
is in the haproxy config. 

You can run a two-node environment that is active/backup from a VIP
perspective, but active/active from a haproxy service perspective -
Each node would run Apache (or whatever your service is) and haproxy
would distribute requests across both based on your haproxy config.
But, at any point in time only one node would actually be routing
requests through its local instance of haproxy.

I can't make it any simpler than that. Draw a diagram of what you
are trying to do if it doesn't make sense.




Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread David Coulson
Both haproxy instances have the same config, with the tomcat instances 
with the same weight, etc. Run something like keepalived or pacemaker to 
manage a VIP between the two boxes. That's it. Not sure about 
keepalived, but pacemaker can make sure haproxy is running, then either 
restart it or move the VIP if it is not running.
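
For reference, a Pacemaker sketch of that arrangement using the crm
shell (resource names, IP, and netmask are illustrative; on 2012-era
distributions haproxy is typically managed as an LSB init script):

primitive vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.0.100" cidr_netmask="24" \
    op monitor interval="10s"
primitive haproxy lsb:haproxy \
    op monitor interval="15s"
# keep haproxy on the node that holds the VIP, and start the VIP first
colocation haproxy-with-vip inf: haproxy vip
order vip-before-haproxy inf: vip haproxy

With this, Pacemaker restarts haproxy if the process dies, and (with
Corosync or Heartbeat underneath) moves the VIP when the node itself
fails.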


David

On 11/29/12 2:27 PM, Hermes Flying wrote:

Something like the following:

 HAProxy1 --+--> Tomcat1
            |
            +--> Tomcat2

 HAProxy2 --+--> Tomcat1
            |
            +--> Tomcat2

HAProxy1 is on the same machine as Tomcat1.
HAProxy2 is on the same machine as Tomcat2.
HAProxy1 distributes the load among Tomcat1 and Tomcat2.
I erroneously thought that HAProxy2 would take over when HAProxy1 
crashed to distribute the load among Tomcat1/Tomcat2.

So if both are independent, what can I do?


Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Hermes Flying
I see now!
One last question, since you are using Pacemaker. Do you recommend it for 
split-brain, so that I look in that direction?
I mean, when you say that Pacemaker restarts HAProxy, does it detect network 
failures as well? Or only software crashes?
I assume Pacemaker will be aware of both HAProxy1 and HAProxy2 in my described 
deployment.

 


 From: David Coulson da...@davidcoulson.net
To: Hermes Flying flyingher...@yahoo.com 
Cc: Baptiste bed...@gmail.com; haproxy@formilux.org haproxy@formilux.org 
Sent: Thursday, November 29, 2012 9:29 PM
Subject: Re: HAproxy and detect split-brain (network failures)
  

Both haproxy instances have the same config, with the tomcat instances with the 
same weight, etc. Run something like keepalived or pacemaker to manage a VIP 
between the two boxes. That's it. Not sure about keepalived, but pacemaker can 
make sure haproxy is running, then either restart it or move the VIP if it is 
not running.

David


On 11/29/12 2:27 PM, Hermes Flying wrote:
 
Something like the following: 
  
 HAProxy1  Tomcat1  
 |    +/\
 |   +
 |+---Tomcat2
+    /+\
    + + 
HAProxy2+++ 
  
HAProxy1 is in the same machine as Tomcat1 
HAproxy2 is in the same machine as Tomcat2 
HAProxy1 distributes the load among Tomcat1 and Tomcat2. 
I erroneously thought that HAProxy2 would take over when HAProxy1 crashed to 
distribute the load among Tomcat1/Tomcat2.   
So if both are independent what can I do? 
  

 
From: David Coulson mailto:da...@davidcoulson.net
To: Hermes Flying mailto:flyingher...@yahoo.com 
Cc: Baptiste mailto:bed...@gmail.com; mailto:haproxy@formilux.org 
mailto:haproxy@formilux.org 
Sent: Thursday, November 29, 2012 9:12 PM
Subject: Re: HAproxy and detect split-brain (network failures)
  

Again, you are mixing everything up.

HAProxy has it's own configuration - It defines what
nodes your port 80 traffic (or whatever) is routed to.
Haproxy does periodic health checks of these backend
services to make sure they are available for requests.
If you have multiple haproxy instances they will all
independently do health checks and not share any of that
information with each other. HAProxy will route traffic
to all systems defined as a backend for a particular
service based upon whatever criteria is in the haproxy
config. 

You can run a two-node environment that is active/backup
from a VIP perspective, but active/active from a haproxy
service perspective - Each node would run Apache (or
whatever your service is) and haproxy would distribute
requests across both based on your haproxy config. But,
at any point in time only one node would actually be
routing requests through it's local instance of haproxy.

I can't make it any simpler than that. Draw a diagram of
what you are trying to do if it doesn't make sense.



On 11/29/12 2:06 PM, Hermes Flying wrote:
 
You are saying that one instance of HAProxy runs on each system and one
instance is assigned the VIP that clients hit (out of scope for HAProxy).
But this HAProxy distributes the requests according to the load, either to
system-A or system-B, which you seem to refer to as the backup system. In what
way are you now referring to it as a backup system? Because I am interested in
distributing the load to all the nodes.
 

 
From: David Coulson da...@davidcoulson.net
To: Hermes Flying flyingher...@yahoo.com
Cc: Baptiste bed...@gmail.com; haproxy@formilux.org
Sent: Thursday, November 29, 2012 8:57 PM
Subject: Re: HAproxy and detect split-brain (network failures)
  

You can do that, but haproxy doesn't have anything to do with the failover
process, other than that you run an instance of haproxy on one server and
another instance on your backup system. As I said, neither of the haproxy
instances communicates anything, so all you need to do is move the IP clients
are using from one server to the other in order to handle a failure. Moving
the IP around is something keepalived, pacemaker, etc. handle - look at their
documentation for specifics and challenges in a two-node config.

HAProxy doesn't have a concept of primary and backup in terms of its own
instances. Each of them is standalone. It's up to you, based on your
network/IP config, which one has traffic routed to it.
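[Editor's note: to illustrate the VIP handover described above, a minimal keepalived sketch might look like the following. The interface name, router ID, priority, and addresses are placeholders, not taken from this thread.]

```
# /etc/keepalived/keepalived.conf -- sketch only, placeholder values
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while an haproxy process exists
    interval 2
}

vrrp_instance VI_1 {
    state MASTER              # the peer node uses "state BACKUP"
    interface eth0            # placeholder NIC name
    virtual_router_id 51      # placeholder; must match on both nodes
    priority 101              # peer uses a lower value, e.g. 100
    virtual_ipaddress {
        192.0.2.10/24         # example VIP that clients connect to
    }
    track_script {
        chk_haproxy           # give up the VIP if haproxy dies locally
    }
}
```

Both nodes keep identical haproxy configs the whole time; keepalived only decides which node currently owns the client-facing IP.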

David



On 11/29/12 1:53 PM, Hermes Flying wrote:
 
But if I install 2 HAProxy as load balancers, doesn't one act as the primary 
loadbalancer directing the load to the known servers while the secondary 

Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Owen MArinas

we have exactly that setup with heartbeat, and 2 floating IPs.
Working in production for 3 years now

Owen



On 29/11/2012 3:26 PM, David Coulson wrote:


On 11/29/12 3:11 PM, Hermes Flying wrote:

I see now!
One last question since you are using Pacemaker. Do you recommend it 
for splitbrain so that I look into that direction?


Any two-node cluster has a risk of split brain. If you implement 
fencing/STONITH, you are in a better place. If you have a third node, 
that's even better, even if it does not actually run any services 
beyond the cluster software.
I mean, when you say that pacemaker restarts HAProxy, does it detect 
network failures as well? Or only SW crashes?
I assume pacemaker will be aware of both HAProxy1 and HAProxy2 in my 
described deployment.
You can have pacemaker ping an IP (gateway for example) and migrate 
the VIP based on that. In my config I have haproxy configured as a 
cloned resource in pacemaker, so all nodes have the same pacemaker 
config for haproxy and it keeps haproxy running on all nodes all of 
the time.
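[Editor's note: a rough sketch of the cloned-haproxy-plus-ping arrangement described above, in pcs syntax (the 2012-era crm shell has equivalents). Resource names, the gateway address, and the VIP are illustrative only.]

```
# Sketch only -- adapt names and addresses to your cluster
pcs resource create haproxy lsb:haproxy op monitor interval=10s
pcs resource clone haproxy                 # run haproxy on every node

pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 op monitor interval=5s

# Ping the gateway from every node and record reachability in "pingd"
pcs resource create gw-ping ocf:pacemaker:ping host_list=192.0.2.1 op monitor interval=10s
pcs resource clone gw-ping

# The VIP may only run on a node that can still reach the gateway
pcs constraint location vip rule score=-INFINITY pingd lt 1 or not_defined pingd
```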




Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Hermes Flying
Hi Owen,
How does heartbeat help with split-brain?
With heartbeat the nodes only know that they can't talk to each other. They
don't know if the other is down. If there is a different communication path
between the nodes and the incoming requests, both can become primary, each
assuming the other is down due to a network failure of the communication link.
So how does this work for your system?
 



Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread David Coulson
Again, you need to talk to the pacemaker people for actual clustering 
information.


The ping was so a node could detect it lost upstream connectivity, and 
move the VIP, otherwise the VIP may continue to run on a system which 
does not have access to your network. This has nothing at all to do with 
split brain.


If you want to deal with split brain, add a third node. Period. You also 
want to have redundant heartbeat communication paths. You also want 
STONITH/fencing so if one node detects the other is down it'll power it 
off or crash it. I've not had issues with a two-node cluster with two 
diverse backend communication links and fencing enabled.
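[Editor's note: as a hedged illustration of the fencing recommended above, a pair of IPMI fence devices might be declared roughly like this. Hostnames, management IPs, and credentials are all placeholders.]

```
# Sketch only -- each device fences the *other* node via its IPMI interface
pcs stonith create fence-node1 fence_ipmilan pcmk_host_list=node1 \
    ipaddr=192.0.2.101 login=admin passwd=secret lanplus=1
pcs stonith create fence-node2 fence_ipmilan pcmk_host_list=node2 \
    ipaddr=192.0.2.102 login=admin passwd=secret lanplus=1

# A node must never run the device that powers itself off
pcs constraint location fence-node1 avoids node1
pcs constraint location fence-node2 avoids node2

pcs property set stonith-enabled=true
```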


David

On 11/29/12 3:58 PM, Hermes Flying wrote:
You can have pacemaker ping an IP (gateway for example) and migrate 
the VIP based on that

How does this help with split-brain?
If I understand what you say, pacemaker will ping an IP and if 
successful will assume that the other node has crashed. But what if 
the other node hasn't crashed and it is just their communication link that 
failed? Won't both become primary?

How does the ping help?








Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Hermes Flying
Thank you for your help.
I take it that you find Pacemaker reliable in your experience? Should I 
look into it? 
 





Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread David Coulson
In general, yes, Pacemaker is reliable. If your config is wrong, you may 
still have an outage in the event of a failure.


That said, if you are a business and need support, you probably want to 
use whatever clustering software ships with the distribution you use. I 
believe SuSE uses pacemaker, but RedHat still uses rgmanager. Pacemaker 
is a tech preview in RHEL6 but will be mainline in 7. I believe RedHat 
employs some core developers of pacemaker.


David

On 11/29/12 4:10 PM, Hermes Flying wrote:

Thank you for your help.
I take it that you find Pacemaker reliable in your experience? 
Should I look into it?













Re: HAproxy and detect split-brain (network failures)

2012-11-29 Thread Hermes Flying
Great help! Thank you for your time! Much appreciated!

 





RE: stunnel + haproxy + ssl + ddns + multiple domains

2012-11-29 Thread Rob Cluett
Thank you Baptiste. I am implementing this now. The procedure I was looking
at had me making it more complicated than it needed to be.

-Original Message-
From: Baptiste [mailto:bed...@gmail.com]
Sent: Thursday, November 29, 2012 2:29 AM
To: Rob Cluett
Cc: haproxy@formilux.org
Subject: Re: stunnel + haproxy + ssl + ddns + multiple domains

Hi Rob,

Just make your stunnel point to your frontend on port 80, and you're
done.

cheers
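[Editor's note: a minimal stunnel fragment matching this suggestion could look like the following. The certificate path is a placeholder, and the xforwardedfor option comes from the x-forwarded-for patch Rob mentions, not stock stunnel.]

```
; /etc/stunnel/stunnel.conf -- sketch only
[https]
accept  = 443                    ; terminate TLS from the router
connect = 127.0.0.1:80           ; hand decrypted traffic to the haproxy frontend
cert    = /etc/stunnel/site.pem  ; placeholder certificate path
xforwardedfor = yes              ; provided by the x-forwarded-for patch
```

haproxy then applies the same hdr_dom(host) ACLs to the decrypted HTTPS traffic, so one frontend serves all three domains over both protocols.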

On Thu, Nov 29, 2012 at 1:05 AM, Rob Cluett r...@robcluett.com wrote:
 All, wondering if you can point me in the right direction. I have
 stunnel installed with the x-forwarded-for patch. I also have haproxy
 working so all incoming http requests are forwarded from my router to
 haproxy, which then determines where to route the request based on the
domain name.
 Configs below. I'd like to implement something similar with stunnel
 and haproxy so that all inbound requests can be routed in the same
 manner for https.



 global

 log 127.0.0.1 local2

 chroot  /var/lib/haproxy

 pidfile /var/run/haproxy.pid

 maxconn 4000

 user    haproxy

 group   haproxy

 daemon

 # turn on stats unix socket

 stats socket /var/lib/haproxy/stats



 defaults

 mode    http

 log global

 option  httplog

 option  dontlognull

 option http-server-close

 option forwardfor   except 127.0.0.0/8

 option  redispatch

 retries 3

 timeout http-request10s

 timeout queue   1m

 timeout connect 10s

 timeout client  1m

 timeout server  1m

 timeout http-keep-alive 10s

 timeout check   10s

 maxconn 3000



 frontend http_proxy

   bind *:80

   acl is_rbc-com hdr_dom(host) -i robcluett.com

   acl is_rbc-net hdr_dom(host) -i robcluett.net

   acl is_iom-com hdr_dom(host) -i iomerge.com

   use_backend cluster1 if is_rbc-com

   use_backend cluster2 if is_rbc-net

   use_backend cluster3 if is_iom-com



 backend cluster1

   server web2 10.10.10.51:80

   #server web5 192.168.1.128



 backend cluster2

   server web3 10.10.10.52:80

   #server web6 192.168.1.129:80



 backend cluster3

   server web4 10.10.10.53:80



 Rob Cluett

 r...@robcluett.com

 978.381.3005



 *Please use this address for all email correspondence. The phone
 number listed in the signature above replaces any other phone number
 you may have for me.



 This email contains a digitally signed certificate authenticating the
 sender. This certificate prevents others from posing as or spoofing
 the sender, guarantees that it was sent from the named sender and when
 necessary encrypts the email such that only the sender and
 recipient(s) can read its contents. If you receive an email from
 this sender without the digitally signed certificate it is not from
 the sender and therefore its contents should be disregarded.



 This e-mail, and any files transmitted with it, is intended solely for
 the use of the recipient(s) to whom it is addressed and may contain
 confidential information. If you are not the intended recipient,
 please notify the sender immediately and delete the record from your
 computer or other device as its contents may be confidential and its
 disclosure, copying or distribution unlawful.



