Re: IP binding and standby health-checks

2020-10-20 Thread Lukas Tribus
Hello,

On Tue, 20 Oct 2020 at 05:36, Dave Hall  wrote:
> HAProxy Active/Standby pair using keepalived and a virtual IP.
> Load balance SSH connections to a group of user access systems (long-running 
> Layer 4 connections).
> Using Fail2Ban to protect against password attacks, so using send-proxy-v2 
> and go-mmproxy to present client IP to target servers.
>
> Our objective is to preserve connections through a fail-over.

This is not possible today, and I doubt it ever will be.

HAProxy terminates the Layer 4 sessions on both ends, and would thus
have to migrate the sockets from one box to another. While Linux does
have "TCP connection repair", I'm not sure it's actually usable in the
load-balancer scenario, where the active box may just suddenly die (as
opposed to a graceful, planned failover).

You need to look at a solution that does not involve socket
termination, such as IPVS connection synchronization.
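
For reference, IPVS state synchronization is enabled through the
ipvsadm sync daemons; a rough sketch from memory (untested, interface
name made up):

  # on the active director: multicast connection state to the standby
  ipvsadm --start-daemon master --mcast-interface eth0
  # on the standby director: receive and mirror the connection table
  ipvsadm --start-daemon backup --mcast-interface eth0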

Or look at what the hyperscalers do nowadays: Google's Maglev,
GitHub's glb-director, and Facebook's Katran can probably provide some
inspiration.


Lukas



Re: IP binding and standby health-checks

2020-10-20 Thread Gibson, Brian (IMS)
I think what you need is a stick-table and peers setup.

https://www.haproxy.com/blog/emulating-activepassing-application-clustering-with-haproxy/
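
Roughly like this (names and IPs made up; each peer name must match
that box's hostname, or be set with -L). Note this synchronizes
stick-table entries between the boxes, not live TCP sockets, so
established SSH sessions still won't survive a failover:

  peers lb_pair
      peer lb01 10.240.36.11:10000
      peer lb02 10.240.36.12:10000

  backend ssh_uas
      mode tcp
      stick-table type ip size 200k expire 30m peers lb_pair
      stick on src
      server uas1 10.240.36.21:22 check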




IP binding and standby health-checks

2020-10-19 Thread Dave Hall

Hello,

I'm new to this list and somewhat new to HAProxy.  Before posting I 
scanned the archives and found a thread from 2015 that seems to apply to 
my situation:


IP binding and standby health-checks 
https://www.mail-archive.com/haproxy@formilux.org/msg18728.html


The specifics of our setup:

 * HAProxy Active/Standby pair using keepalived and a virtual IP.
 * Load-balance SSH connections to a group of user access systems
   (long-running Layer 4 connections).
 * Using Fail2Ban to protect against password attacks, so using
   send-proxy-v2 and go-mmproxy to present the client IP to the target
   servers (a rough sketch of this layout follows below).
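
Roughly (addresses made up; go-mmproxy flags from memory, please check
go-mmproxy -h):

  frontend ssh_in
      mode tcp
      bind 10.240.36.13:22
      default_backend ssh_uas

  backend ssh_uas
      mode tcp
      balance leastconn
      server uas1 10.240.36.21:2222 check send-proxy-v2

  # on each target server, go-mmproxy unwraps PROXY v2 and presents
  # the real client IP to the local sshd:
  go-mmproxy -l 0.0.0.0:2222 -4 127.0.0.1:22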

Our objective is to preserve connections through a fail-over.  It would 
seem that it is necessary to use the virtual IP as the source address 
for connections to the target servers.  The problem, though, is how to 
get HAProxy not to use the virtual IP for health checks.  Since the 
HAProxy code base has likely evolved since 2015, I'd like to know the 
current recommended approach for this situation.


Thanks.

-Dave

--
Dave Hall
Binghamton University



Re: IP binding and standby health-checks

2015-07-17 Thread Baptiste
Hi Nathan,

The 'usesrc' keyword triggers this error; it needs root privileges.
(I just checked in the source code.)

Baptiste



Re: IP binding and standby health-checks

2015-07-16 Thread Nathan Williams
Oh, I think this comment thread explains it:
http://comments.gmane.org/gmane.comp.web.haproxy/20366. I'll see about
assigning CAP_NET_ADMIN.
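
If that pans out, granting the capability on the binary would look
something like this (untested; some builds may still refuse to start
as non-root):

  # allow a non-root haproxy to use usesrc/transparent binding
  setcap cap_net_admin+ep /usr/sbin/haproxy
  getcap /usr/sbin/haproxy    # verify the capability took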



Re: IP binding and standby health-checks

2015-07-15 Thread Nathan Williams
Hi Baptiste,

Sorry for the delayed response, had some urgent things come up that
required more immediate attention... thanks again for your continued
support.

 Why not use the PROXY protocol between HAProxy and nginx?

Sounds interesting; I'd definitely heard of it before, but hadn't looked
into it, since what we've been doing has been working. My initial
impression is that it's a pretty big change from what we're currently
doing (it looks like it would at least require a brief maintenance
window to roll out, since it requires a coordinated change between
client and load-balancer), but I'm not fundamentally opposed if there
are significant advantages. I'll definitely take a look to see if it
satisfies our requirements.

 I disagree; it would be only 2: just the 'real' IP addresses of the
load-balancers.

OK, fair point. Maybe it's just paranoid to think that unless we're
explicitly setting the source, we should account for *all* possible
sources. The VIP wouldn't be the default route, so we could probably get
away with ignoring it. Come to think of it... maybe having keepalived
change the default route on the primary, and skipping hardcoding the
source in HAProxy, would address what we're aiming for? Seems worth
further investigation, as I'm not sure whether it supports this out of
the box.
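
From a quick look, keepalived does seem to have a virtual_routes block
that moves routes along with the VRRP state; untested sketch, gateway
made up:

  vrrp_instance VI_1 {
      ...
      virtual_routes {
          # applied on MASTER, withdrawn on BACKUP; syntax follows "ip route"
          default via 10.240.36.1 dev eth0
      }
  }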

 Is there no 0.0.0.0 magic value, or subnet value, accepted in the nginx
XFF module?

I wouldn't use 0.0.0.0 whether there is or not, as I wouldn't want it to
be that open. It might be a different case for a subnet value, if we were
able to put the load-balancer cluster in a separate subnet, but our
current situation (managed private OpenStack deployment) doesn't give us
quite that much network control. Maybe someday soon with VXLAN or another
overlay (of course, that comes with performance penalties, so maybe not).

 Then, instead of using a VIP, you can reserve 2 IPs in your subnet for
this purpose, whichever LB is using them.

Pre-allocating network IPs from the subnet that aren't permitted to be
assigned to anything other than whatever instance is currently filling
the load-balancer role would certainly work (I like this idea!); that's
actually pretty similar to what we're doing for the internal VIP
currently (the external VIP is just an OpenStack floating IP, aka a DNAT
in the underlying infrastructure): booking the IP and then adding it as
an allowed address for the instance-associated network port in Neutron's
allowed-address-pairs... It'd be an extra step when creating an LB node,
but a pretty reasonable one I think, and we're already treating them
differently from generic instances anyway... definitely food for thought.

 HAProxy rocks!

+1 * 100. :)

 Can you start it up with strace?

Yep! https://gist.github.com/nathwill/ea52324867072183b695

So far, I still like the 'source 0.0.0.0 usesrc 10.240.36.13' solution
the best, as it seems the most direct and easily understood. Fingers
crossed the permissions issue is easily overcome.

Cheers,

Nathan W


Re: IP binding and standby health-checks

2015-07-14 Thread Jarno Huuskonen
Hi,

On Mon, Jul 13, Nathan Williams wrote:
 It seems like the easiest way to sort it out would be if the health checks
 weren't also bound to the VIP, so that the standby could complete them
 successfully. I do still want the proxied requests bound to the VIP though,
 for the benefit of our backends' real-ip configuration.

Maybe with 'addr' in the backend server config:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-addr
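
Something like this (addresses made up):

  backend web
      # health checks are sent to this address/port instead of the
      # server's own address
      server web01 10.240.36.21:80 check addr 10.240.36.99 port 8080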

-Jarno

-- 
Jarno Huuskonen



Re: IP binding and standby health-checks

2015-07-14 Thread Baptiste

Hi Nathan,

Maybe you could share your configuration.
Please also let us know the real and virtual IPs configured on your
master and slave HAProxy servers.

Baptiste



Re: IP binding and standby health-checks

2015-07-14 Thread Nathan Williams
Hi Baptiste/Jarno,

Thanks so much for responding.

'addr' does indeed look like a promising option (though the explanation
in the docs is strangely lacking: it describes what the option makes
possible while leaving the reader to deduce what it actually does);
thanks for pointing that out.

Here's our config: https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f
(believe it or not, this is the trimmed-down version of what we used to
have :) but backends, how they propagate in this microservice-oriented
world of ours...).

As for addresses, the VIP is 10.240.36.13, and the active/standby local
addresses are .11 and .12.

The problem is basically that, the way it's currently configured, when
.11 is active and holds the .13 address, health checks from HAProxy on
the .12 host also originate from the .13 address (guessing due to the
'source' line), and so never return, and are (rightfully) marked by
HAProxy as L4CON network timeouts.
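
In short, the relevant bit of the config looks something like this
(paraphrased; server address made up):

  backend web
      # binds health checks as well as proxied traffic to the VIP
      source 10.240.36.13
      server web01 10.240.36.21:80 check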

I'm going to try the 'addr' config and report back; fingers crossed!

cheers,

Nathan W




Re: IP binding and standby health-checks

2015-07-14 Thread Baptiste
 As for details, it's advantageous for us for a couple of reasons... the
 realip module in nginx requires that you list trusted hosts which are
 permitted to set the X-Forwarded-For header before it will set the source
 address in the logs to the X-Forwarded-For address. As a result, using
 anything other than the VIP means:

Why not use the PROXY protocol between HAProxy and nginx?
http://blog.haproxy.com/haproxy/proxy-protocol/

That way you can get rid of the X-Forwarded-For limitation in nginx (I
don't know whether the proxy-protocol implementation in nginx suffers
from the same limitations).
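
On the nginx side that would look roughly like this (untested sketch,
subnet made up):

  server {
      # accept PROXY protocol from HAProxy and recover the client address
      listen 80 proxy_protocol;
      set_real_ip_from 10.240.36.0/24;
      real_ip_header proxy_protocol;
  }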

 - not using the VIP means we have to trust 3 addresses instead of 1 to set
 X-Forwarded-For

I disagree; it would be only 2: just the 'real' IP addresses of the
load-balancers.

 - we have to update the list of allowed hosts on all of our backends any
 time we replace a load-balancer node. We're using config management, so it's
 automated, but that's still more changes than should ideally be necessary to
 replace a no-data node that we ideally can trash and replace at will.

Is there no 0.0.0.0 magic value, or subnet value, accepted in the
nginx XFF module?
If not, it deserves a patch!

 - there's a lag between the time of a change (e.g. node replacement) and the
 next converge cycle of the config mgmt on the backends, so for some period
 the backend config will be out of sync, incorrectly trusting IP(s) that may
 now be associated with another host, or wrongly refusing to set the source
 IP to the X-Forwarded-For address. This is problematic for us, since we have
 a highly-restricted internal environment, due to our business model (online
 learn-to-code school) being essentially running untrusted code as a
 service.

Then, instead of using a VIP, you can reserve 2 IPs in your subnet for
this purpose, whichever LB is using them.
That way you don't rely on the VIP: whatever the HAProxy box's real IP
is, you configure one of the reserved IPs as an alias and use it as the
source from HAProxy.
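
Something like this (addresses made up):

  # on whichever box currently fills the LB role:
  ip addr add 10.240.36.14/24 dev eth0

  # haproxy.cfg: always source from the reserved IP rather than the VIP
  backend web
      source 10.240.36.14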

 Happily, your suggested solution seems to achieve what we're aiming for
 (thanks!). The health-checks are coming from the local IP, and proxied
 requests from clients are coming from the VIP. The standby is seeing
 backends as UP since they're able to pass the health-checks. Progress!

Finally we made it :)
HAProxy rocks!

 Unfortunately, this seems to cause another problem with our config... though
 HAProxy passes the config validation (haproxy -c -f /etc/haproxy.cfg), it
 fails to start up, logging an error like "Jul 14 20:22:48
 lb01.stage.iad01.treehouse haproxy-systemd-wrapper[25225]: [ALERT]
 194/202248 (25226) : [/usr/sbin/haproxy.main()] Some configuration options
 require full privileges, so global.uid cannot be changed.". We can get it to
 work by removing the user and group directives from the global section and
 letting HAProxy run as root, but having to escalate privileges is also less
 than ideal... I almost hate to ask for further assistance, but do you have
 any suggestions related to the above? FWIW, we're using HAProxy 1.5.4 and
 kernel 4.0.4 on CentOS 7.

Some features require root privileges; that said, from a documentation
point of view, it doesn't seem that the 'source' keyword, as I asked you
to set it up, is one of them.

Can you start it up with strace?
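
For example (paths made up):

  strace -f -o /tmp/haproxy.trace /usr/sbin/haproxy -f /etc/haproxy.cfg -d
  # then look for the failing privileged call, e.g.:
  grep -n EPERM /tmp/haproxy.trace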

Baptiste



Re: IP binding and standby health-checks

2015-07-14 Thread Baptiste
Nathan,

The question is: why do you want to use the VIP to connect to your
backend servers?

Please give the following 'source' line a try, instead of your current one:
  source 0.0.0.0 usesrc 10.240.36.13
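
On the standby, which doesn't hold the VIP at that moment, binding to
it presumably also needs non-local binding enabled (untested):

  sysctl -w net.ipv4.ip_nonlocal_bind=1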

Baptiste





Re: IP binding and standby health-checks

2015-07-14 Thread Nathan Williams
Hi Baptiste,

That's a fair question :) I understand it's a rather particular request;
it's just the first time we've really hit something that we weren't
easily able to address with HAProxy (really marvelous software, thanks
y'all), so I figured we'd ask before accepting an inferior solution...

As for details, it's advantageous for us for a couple of reasons... the
realip module in nginx requires that you list trusted hosts which are
permitted to set the X-Forwarded-For header before it will set the source
address in the logs to the X-Forwarded-For address. As a result, using
anything other than the VIP means:

- not using the VIP means we have to trust 3 addresses instead of 1 to set
X-Forwarded-For
- we have to update the list of allowed hosts on all of our backends any
time we replace a load-balancer node. We're using config management, so
it's automated, but that's still more changes than should ideally be
necessary to replace a no-data node that we ideally can trash and replace
at will.
- there's a lag between the time of a change (e.g. node replacement) and
the next converge cycle of the config mgmt on the backends, so for some
period the backend config will be out of sync, incorrectly trusting IP(s)
that may now be associated with another host, or wrongly refusing to set
the source IP to the X-Forwarded-For address. This is problematic for us,
since we have a highly-restricted internal environment, due to our
business model (online learn-to-code school) being essentially running
untrusted code as a service.

Happily, your suggested solution seems to achieve what we're aiming for
(thanks!). The health-checks are coming from the local IP, and proxied
requests from clients are coming from the VIP. The standby is seeing
backends as UP since they're able to pass the health-checks. Progress!

Unfortunately, this seems to cause another problem with our config...
though HAProxy passes the config validation (haproxy -c -f
/etc/haproxy.cfg), it fails to start up, logging an error like "Jul 14
20:22:48 lb01.stage.iad01.treehouse haproxy-systemd-wrapper[25225]: [ALERT]
194/202248 (25226) : [/usr/sbin/haproxy.main()] Some configuration options
require full privileges, so global.uid cannot be changed.". We can get it
to work by removing the user and group directives from the global section
and letting HAProxy run as root, but having to escalate privileges is also
less than ideal... I almost hate to ask for further assistance, but do you
have any suggestions related to the above? FWIW, we're using HAProxy 1.5.4
and kernel 4.0.4 on CentOS 7.

Regards,

Nathan W


Re: IP binding and standby health-checks

2015-07-14 Thread Nathan Williams
OK, that did not seem to work, so I think the correct interpretation of
the 'addr' option must be as an override for what address/port to perform
the health check *against*, rather than from (which makes more sense in
the context of it being a server option).

I was hoping for an option like 'health-check-source' or similar, if that
makes sense; I also tried removing the 'source' directive and binding the
frontend to the VIP explicitly, hoping that would cause the proxied
requests to originate from the bound IP, but that didn't seem to do it
either. While the standby was then able to see the backends as up, the
proxied requests to the backends came from the local IP instead of the VIP.

Regards,

Nathan W





IP binding and standby health-checks

2015-07-13 Thread Nathan Williams
Hi all,

I'm hoping I can get some advice on how we can improve our failover setup.

At present, we have an active-standby setup. Failover works really well,
but on the standby, none of the backend servers are marked as up, since
HAProxy is bound to the VIP that is currently on the active member
(managed with keepalived). As a result, there's an initial period of a
second or two, between the moment the failover triggers and the standby
claims the VIP, during which the backend servers have not yet passed a
health check on the new active member.
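
For reference, the VIP is managed by a plain keepalived VRRP instance,
along these lines (addresses made up):

  vrrp_instance VI_1 {
      state MASTER              # BACKUP on the standby
      interface eth0
      virtual_router_id 51
      priority 101              # lower on the standby
      virtual_ipaddress {
          10.240.36.13/24
      }
  }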

It seems like the easiest way to sort it out would be if the health checks
weren't also bound to the VIP, so that the standby could complete them
successfully. I do still want the proxied requests bound to the VIP though,
for the benefit of our backends' real-ip configuration.

Is that doable? If not, is there some way to have the standby follow the
active member's view of the backends, or another way I haven't seen yet?

Thanks!

Nathan W