
Re: ssl offloading

2016-03-31 Thread Nathan Williams
stunnel's what we used before Haproxy had it built in, which worked fine,
but SSL offloading in Haproxy's been excellent in our experience, so my
guess would be that you could make it work with some config tuning.

On Thu, Mar 31, 2016, 12:45 PM Lukas Tribus  wrote:

> > Hi list,
> >
> > what are your ideas about offloading of ssl? ssl inside haproxy is nice
> > but is very expensive.
>
> Why would you think that?
>
>
> Lukas
>
>
>


Re: Easy haproxy redundancy

2015-08-27 Thread Nathan Williams
Yeah, keepalived handles the gratuitous arp on failover, it works nicely. I
do miss the admin tools for pacemaker though. I'm AFK, but I'll write up a
full explanation of our HA setup when I'm back at a PC.

Cheers,

Nathan

On Thu, Aug 27, 2015, 6:11 PM Shawn Heisey hapr...@elyograg.org wrote:

 On 8/27/2015 6:52 PM, Nathan Williams wrote:
  There's a sysctl for that, net.ipv4.ip_nonlocal_bind.

 Interesting.  That's one I had never seen before.  I would assume that
 the OS does this intelligently so that when the IP address *does*
 suddenly appear at a later time, the application works seamlessly.
 That's something I will have to test.

 I might need to rethink my redundancy design with this new bit of
 knowledge.  I have seen a number of incidents where pacemaker froze and
 I had no idea there was a problem until I did maintenance on the standby
 server (either rebooting it or stopping pacemaker) and the online host
 didn't notice it going down.  Everything kept working, but the systems
 were in a state where no failover would have occurred in the event of a
 failure.  I bet keepalived is a lot more stable than pacemaker.

 Thanks,
 Shawn





Re: Easy haproxy redundancy

2015-08-27 Thread Nathan Williams
On Fri, 2015-08-28 at 01:25 +, Nathan Williams wrote:
 Yeah, keepalived handles the gratuitous arp on failover, it works
 nicely. I do miss the admin tools for pacemaker though. I'm AFK, but
 I'll write up a full explanation of our HA setup when I'm back at a
 PC.
 Cheers,
 Nathan
 

Okay, here are the details on how we're doing this with keepalived.

We have 2 OpenStack VMs with IPs on the internal network, a
keepalived-managed VIP on the internal network that's added to each VM's
allowed-address-pairs in neutron, and a floating IP from the external
network mapped to the internal VIP (an OpenStack floating IP is just a
SNAT/DNAT). Depending on your environment, that's probably not super
relevant, but it's essential to being able to have a public VIP under
neutron, so I thought I'd mention it.

We set the sysctl to enable binding to an address that we don't have.
There are some other sysctl settings we tune on the LB, but they're
tunings, not essential to the HA configuration.

`sysctl net.ipv4.ip_nonlocal_bind=1`
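To make that survive a reboot, the equivalent sysctl drop-in would look
something like this (the file path is an assumption; adjust for your distro):

```
# /etc/sysctl.d/90-haproxy-ha.conf (assumed path)
# let haproxy on the standby bind the VIP it doesn't currently hold
net.ipv4.ip_nonlocal_bind = 1
```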

keepalived.conf[0]: keepalived manages the VIP and runs scored health
checks; the master is the node with the highest score. keepalived also
handles VIP migration and the gratuitous ARP that announces the change on
failover.
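For a sense of shape, the config in [0] boils down to something like the
following sketch (the interface name, priorities, weights, and auth secret
here are placeholder assumptions, not our real values):

```
vrrp_script role_check {
    script "/usr/local/bin/role-check.sh"   # assumed install path
    interval 5
    weight -20          # a failing check drops this node's score
}

vrrp_instance haproxy_vip {
    state BACKUP            # both nodes start BACKUP; the score decides
    interface eth0          # placeholder interface
    virtual_router_id 51    # must be unique per cluster on the segment
    priority 100            # give the preferred master a higher base score
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder secret
    }
    virtual_ipaddress {
        10.240.36.13/24     # the VIP
    }
    track_script {
        role_check
    }
    notify /usr/local/bin/keepalive-notify.sh   # assumed install path
}
```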

role-check.sh[1]: fails the master if it doesn't have the VIP. Not
strictly necessary, but we're paranoid.

keepalive-notify.sh[2]: records a history of state changes; the last row
is used by role-check.sh to determine the current state.
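As a rough sketch of what role-check.sh amounts to (the real script is in
[1]; the VIP, log path, and log format here are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch of role-check.sh: fail the check when the last
# recorded state claims MASTER but the VIP isn't bound on any local
# interface. VIP and log path are assumptions, not our real values.
VIP="${VIP:-10.240.36.13}"
STATE_LOG="${STATE_LOG:-/var/run/keepalived.state}"

check_role() {
    # $1 = last recorded keepalived state, $2 = "yes" if the VIP is bound
    if [ "$1" = "MASTER" ] && [ "$2" != "yes" ]; then
        return 1    # keepalived subtracts this check's weight from our score
    fi
    return 0
}

state=$(tail -n 1 "$STATE_LOG" 2>/dev/null || true)
if ip -4 addr show 2>/dev/null | grep -q "$VIP"; then
    has_vip=yes
else
    has_vip=no
fi
check_role "$state" "$has_vip"
```

keepalived runs a script like this on an interval via a vrrp_script block;
a non-zero exit subtracts the script's weight from the node's score.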

It's been really stable over the 8 months we've been running it; failover
works cleanly, as you'd expect, and so far we've run into no unexplained
failovers and no failures to fail over when it should have. Unless you use
LVS to do the actual load-balancing, there are no keepalived-associated
tools I'm aware of for inspecting the cluster state (as compared to
pacemaker's `crm_mon -Af`). It doesn't really matter with a cluster
configuration this simple, though; the master is the one with the VIP, so
`ip addr show` tells you which node is the master, and the log file
generated by the notify script confirms it.

You can manually force a failover by stopping the keepalived service on
the master, or by adding an `exit 1` at the top of the role-check.sh script
(which adjusts the scoring). The same goes for putting the standby into
maintenance mode: you just ensure it can't end up with a higher score than
the master.

One thing to be aware of is that keepalived uses multicast by default,
so you want to make sure every cluster uses a unique router-id, or your
clusters might interfere with each other.

Anyways, hope that helps! Feel free to ask if you have any add'l
questions :)

Regards,

Nathan

[0]: https://gist.github.com/nathwill/2463002f342cc75ae6b0
[1]: https://gist.github.com/nathwill/5475ff3b891c7f2b44b3
[2]: https://gist.github.com/nathwill/ac75957052bd75597780

On Thu, Aug 27, 2015, 6:11 PM Shawn Heisey hapr...@elyograg.org
 wrote:
  On 8/27/2015 6:52 PM, Nathan Williams wrote:
   There's a sysctl for that, net.ipv4.ip_nonlocal_bind.
  
  Interesting.  That's one I had never seen before.  I would assume
  that
  the OS does this intelligently so that when the IP address *does*
  suddenly appear at a later time, the application works seamlessly.
  That's something I will have to test.
  
  I might need to rethink my redundancy design with this new bit of
  knowledge.  I have seen a number of incidents where pacemaker froze
  and
  I had no idea there was a problem until I did maintenance on the
  standby
  server (either rebooting it or stopping pacemaker) and the online
  host
  didn't notice it going down.  Everything kept working, but the
  systems
  were in a state where no failover would have occurred in the event
  of a
  failure.  I bet keepalived is a lot more stable than pacemaker.
  
  Thanks,
  Shawn
  
  
  



Re: Easy haproxy redundancy

2015-08-27 Thread Nathan Williams
There's a sysctl for that, net.ipv4.ip_nonlocal_bind.

On Thu, Aug 27, 2015, 5:49 PM Shawn Heisey hapr...@elyograg.org wrote:

 On 8/24/2015 12:06 PM, Dennis Jacobfeuerborn wrote:
  There is no need to run a full Pacemaker stack. Just run HAProxy on both
  nodes and manage the virtual ips using keepalived.

 All of my bind statements are applied to specific ip addresses, not
 0.0.0.0.

 If you try to start haproxy on a machine that is missing the address(es)
 that you are binding to (which describes the standby server in a
 redundant pair), it won't start.  Public IP addresses redacted in the
 following partial log:

 root@lb4:~# service haproxy start ; service haproxy stop
  * Starting haproxy haproxy
 [ALERT] 238/183842 (32404) : Starting frontend fe-spark-80: cannot bind
 socket [RE.DAC.TED.78:80]
 [ALERT] 238/183842 (32404) : Starting frontend fe-spark-443: cannot bind
 socket [RE.DAC.TED.78:443]

 This is why I run redundant haproxy with a full pacemaker stack that
 starts haproxy and the gratuitous arps *after* the address resources
 have started.

 Thanks,
 Shawn





Re: IP address ACLs

2015-08-15 Thread Nathan Williams
We use a file for about 40 CIDR blocks and don't have any problems with
load speed. Presumably "large" means more than that, though.

We use comments as well, but they have to be at the beginning of their own
line, not tagged on after the address.
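For illustration, such a file might look like this (the path and networks
are hypothetical):

```
# /etc/haproxy/trusted_nets.lst (hypothetical path)
# office networks -- comments go on their own line, not after an address
10.1.0.0/16
192.168.10.0/24
```

It would then be referenced from the config with something like
`acl trusted src -f /etc/haproxy/trusted_nets.lst`.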

On Fri, Aug 14, 2015, 9:09 PM CJ Ess zxcvbn4...@gmail.com wrote:

 When doing a large number of IP based ACLs in HAProxy, is it more
 efficient to load the ACLs from a file with the -f argument? Or is just as
 good to use multiple ACL statements in the cfg file?

 If I did use a file with the -f parameter, is it possible to put comments
 in the file?




Re: IP binding and standby health-checks

2015-07-16 Thread Nathan Williams
Oh, I think this comment thread explains it:
http://comments.gmane.org/gmane.comp.web.haproxy/20366. I'll see about
assigning CAP_NET_ADMIN.
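If that pans out, the idea would presumably be something like the following
(untested sketch; requires root, and the binary path is assumed):

```shell
# grant haproxy the capability that "usesrc" needs, so it can still drop
# root via the user/group directives (binary path assumed)
setcap 'cap_net_admin+ep' /usr/sbin/haproxy
getcap /usr/sbin/haproxy    # verify the capability took
```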


On Wed, Jul 15, 2015 at 4:56 PM Nathan Williams nath.e.w...@gmail.com
wrote:

 Hi Baptiste,

 Sorry for the delayed response, had some urgent things come up that
 required more immediate attention... thanks again for your continued
 support.


  Why not using proxy-protocol between HAProxy and nginx?

 Sounds interesting; I'd definitely heard of it before, but hadn't looked
 into it since what we've been doing has been working. My initial impression
 is that it's a pretty big change from what we're currently doing (looks
 like it would at least require a brief maintenance to roll out since it
 requires coordinated change between client and load-balancer), but I'm not
 fundamentally opposed if there's significant advantages. I'll definitely
 take a look to see if it satisfies our requirements.


  I disagree, it would be only 2: the 'real' IP addresses of the
 load-balancers only.

 OK, fair point. Maybe it's just being paranoid to think that unless we're
 explicitly setting the source, we should account for *all* possible
 sources. The VIP wouldn't be the default route, so we could probably get
 away with ignoring it. Come to think of it... maybe having keepalived
 change the default route on the primary and skipping hardcoding the source
 in haproxy would address what we're aiming for? Seems worth further
 investigation, as I'm not sure whether it supports this out of the box.


 there are no 0.0.0.0 magic values, and no subnet values, accepted in the
 nginx XFF module?

 I wouldn't use 0.0.0.0 whether there is or not, as I wouldn't want it to
 be that open. It might be a different case for a subnet value, if we were
 able to put the load-balancer cluster in a separate subnet, but our current
 situation (a managed private OpenStack deployment) doesn't give us quite
 that much network control. Maybe someday soon with VXLAN or another overlay
 (of course, that comes with performance penalties, so maybe not).


  Then instead of using a VIP, you can book 2 IPs in your subnet that
 could be used, whatever the LB is using.

 Pre-allocating network IPs from the subnet that aren't permitted to be
 assigned to anything other than whatever instance is currently filling the
 load-balancer role would certainly work (I like this idea!). That's
 actually pretty similar to what we're doing for the internal VIP currently
 (the external VIP is just an OpenStack floating IP, aka a DNAT in the
 underlying infrastructure), plus adding it as an allowed address for the
 instance-associated network port in Neutron's allowed-address-pairs...
 It'd be an extra step when creating an LB node, but a pretty reasonable
 one I think, and we're already treating them differently from generic
 instances anyways... definitely food for thought.

  HAProxy rocks !

 +1 * 100. :)


  Can you start it up with strace ??

 Yep! https://gist.github.com/nathwill/ea52324867072183b695

 So far, I still like the source 0.0.0.0 usesrc 10.240.36.13 solution the
 best, as it seems the most direct and easily understood. Fingers crossed
 the permissions issue is easily overcome.

 Cheers,

 Nathan W

 On Tue, Jul 14, 2015 at 2:58 PM Baptiste bed...@gmail.com wrote:

  As for details, it's advantageous for us for a couple of reasons... the
  realip module in nginx requires that you list trusted hosts which are
  permitted to set the X-Forwarded-For header before it will set the
 source
  address in the logs to the x-forwarded-for address. as a result, using
  anything other than the VIP means:

 Why not using proxy-protocol between HAProxy and nginx?
 http://blog.haproxy.com/haproxy/proxy-protocol/

 So you can get rid of X-FF header limitation in nginx. (don't know if
 proxy-protocol implementation in nginx suffers from the same
 limitations).

  - not using the vip means we have to trust 3 addresses instead of 1 to
 set
  x-forwarded-for

 I disagree, it would be only 2: the 'real' IP addresses of the
 load-balancers only.

  - we have to update the list of allowed hosts on all of our backends any
  time we replace a load-balancer node. We're using config management, so
 it's
  automated, but that's still more changes than should ideally be
 necessary to
  replace a no-data node that we ideally can trash and replace at will.

 there are no 0.0.0.0 magic values, and no subnet values, accepted in
 the nginx XFF module?
 If not, it deserves a patch !

  - there's a lag between the time of a change(e.g. node replacement)
 and the
  next converge cycle of the config mgmt on the backends, so for some
 period
  the backend config will be out of sync, incorrectly trusting IP(s) that
 may
  now be associated with another host, or wrongly refusing to set the
 source
  ip to the x-forwarded-for address. this is problematic for us, since we
 have
  a highly-restricted internal environment, due to our business model

Re: IP binding and standby health-checks

2015-07-15 Thread Nathan Williams
 one of the IP above as an alias and you use it from HAProxy.

  Happily, your suggested solution seems to achieve what we're aiming for
  (thanks!). The health-checks are coming from the local IP, and proxied
  requests from clients are coming from the VIP. The standby is seeing
  backends as UP since they're able to pass the health-checks. Progress!

 Finally we made it :)
 HAProxy rocks !

  Unfortunately, this seems to cause another problem with our config...
 though
  haproxy passes the config validation (haproxy -c -f /etc/haproxy.cfg), it
  fails to start up, logging an error like Jul 14 20:22:48
  lb01.stage.iad01.treehouse haproxy-systemd-wrapper[25225]: [ALERT]
  194/202248 (25226) : [/usr/sbin/haproxy.main()] Some configuration
 options
  require full privileges, so global.uid cannot be changed.. We can get
 it to
  work by removing the user and group directives from the global section
 and
  letting haproxy run as root, but having to escalate privileges is also
 less
  than ideal... I almost hate to ask for further assistance, but do you
 have
  any suggestions related to the above? FWIW, we're using haproxy 1.5.4 and
  kernel 4.0.4 on CentOS 7.

 Some features require root privileges, that said, from a documentation
 point of view, It doesn't seem the 'source' keyword like I asked you
 to set it up is one of them.

 Can you start it up with strace ??

 Baptiste


  Regards,
 
  Nathan W
 
  On Tue, Jul 14, 2015 at 12:31 PM Baptiste bed...@gmail.com wrote:
 
  Nathan,
 
  The question is: why do you want to use the VIP to get connected on
  your backend server?
 
  Please give a try to the following source line, instead of your current
  one:
source 0.0.0.0 usesrc 10.240.36.13
 
  Baptiste
 
 
  On Tue, Jul 14, 2015 at 9:06 PM, Nathan Williams nath.e.w...@gmail.com
 
  wrote:
   OK, that did not seem to work, so I think the correct interpretation
 of
   that
   addr option must be as an override for what address/port to perform
   the
   health-check *against* instead of from (which makes more sense in
   context of
   it being a server option).
  
   i was hoping for an option like health-check-source or similar, if
   that
   makes sense; I also tried removing the source directive and binding
   the
   frontend to the VIP explicitly, hoping that would cause the proxied
   requests
   to originate from the bound IP, but that didn't seem to do it either.
   While
   the standby was then able to see the backends as up, the proxied
   requests
   to the backends came from the local IP instead of the VIP.
  
   Regards,
  
   Nathan W
  
   On Tue, Jul 14, 2015 at 8:58 AM Nathan Williams 
 nath.e.w...@gmail.com
   wrote:
  
   Hi Baptiste/Jarno,
  
   Thanks so much for responding.
  
   addr does indeed look like a promising option (though a strangely
   lacking explanation in the docs, which explains what it makes
 possible
   while
   leaving the reader to deduce what it actually does), thanks for
   pointing
   that out.
  
   Here's our config:
   https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f
   (believe it or not this is the trimmed down version from what we used
   to
   have :), but backends, how they propagate in this
 microservice-oriented
   world of ours... ).
  
   As for addresses, the VIP is 10.240.36.13, and the active/standby
 local
   addresses are .11 and .12.
  
    the problem is basically that the way it's currently configured,
 when
   the
   .11 is active and has the .13 address, health-checks from haproxy on
   the .12
   host also originate from the .13 address (guessing due to the
 source
   line), and so never return and are (rightfully) marked by haproxy as
   L4CON
   network timeouts.
  
   i'm going to try the addr config and report back; fingers crossed!
  
   cheers,
  
   Nathan W
  
   On Tue, Jul 14, 2015 at 5:21 AM Baptiste bed...@gmail.com wrote:
  
   On Mon, Jul 13, 2015 at 6:03 PM, Nathan Williams
   nath.e.w...@gmail.com
   wrote:
Hi all,
   
I'm hoping I can get some advice on how we can improve our
 failover
setup.
   
At present, we have an active-standby setup. Failover works really
well, but
on the standby, none of the backend servers are marked as up
 since
haproxy
is bound to the VIP that is currently on the active member
 (managed
with
keepalived). as a result, there's an initial period of a second or
two
after
the failover triggers and the standby claims the VIP where the
backend
servers have not yet passed a health-check on the new active
 member.
   
It seems like the easiest way to sort it out would be if the
health-checks
weren't also bound to the VIP so that the standby could complete
them
successfully. i do still want the proxied requests bound to the
 VIP
though,
 for the benefit of our backends' real-ip configuration.
   
is that doable? if not, is there some way to have the standby
follow
the
active-member's view

Re: IP binding and standby health-checks

2015-07-14 Thread Nathan Williams
Hi Baptiste/Jarno,

Thanks so much for responding.

addr does indeed look like a promising option (though its explanation in
the docs is strangely lacking: it describes what the option makes possible
while leaving the reader to deduce what it actually does), thanks for
pointing that out.

Here's our config: https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f
(believe it or not this is the trimmed down version from what we used to
have :), but backends, how they propagate in this microservice-oriented
world of ours... ).

As for addresses, the VIP is 10.240.36.13, and the active/standby local
addresses are .11 and .12.

The problem is basically that, the way it's currently configured, when the
.11 is active and has the .13 address, health-checks from haproxy on the
.12 host also originate from the .13 address (guessing due to the source
line), and so never return and are (rightfully) marked by haproxy as L4CON
network timeouts.

I'm going to try the addr config and report back; fingers crossed!

Cheers,

Nathan W

On Tue, Jul 14, 2015 at 5:21 AM Baptiste bed...@gmail.com wrote:

 On Mon, Jul 13, 2015 at 6:03 PM, Nathan Williams nath.e.w...@gmail.com
 wrote:
  Hi all,
 
  I'm hoping I can get some advice on how we can improve our failover
 setup.
 
  At present, we have an active-standby setup. Failover works really well,
 but
  on the standby, none of the backend servers are marked as up since
 haproxy
  is bound to the VIP that is currently on the active member (managed with
  keepalived). as a result, there's an initial period of a second or two
 after
  the failover triggers and the standby claims the VIP where the backend
  servers have not yet passed a health-check on the new active member.
 
  It seems like the easiest way to sort it out would be if the
 health-checks
  weren't also bound to the VIP so that the standby could complete them
  successfully. I do still want the proxied requests bound to the VIP
 though,
  for the benefit of our backends' real-ip configuration.
 
  is that doable? if not, is there some way to have the standby follow
 the
  active-member's view on the backends, or another way i haven't seen yet?
 
  Thanks!
 
  Nathan W

 Hi Nathan,

 Maybe you could share your configuration.
 Please also let us know the real and virtual IPs configured on your
 master and slave HAProxy servers.

 Baptiste



Re: IP binding and standby health-checks

2015-07-14 Thread Nathan Williams
Hi Baptiste,

That's a fair question :) I understand it's a rather particular request,
it's just the first time we've really hit something that we weren't easily
able to address with haproxy (really marvelous software, thanks y'all), so
I figured we'd ask before accepting an inferior solution...

As for details, it's advantageous for us for a couple of reasons... the
realip module in nginx requires that you list trusted hosts which are
permitted to set the X-Forwarded-For header before it will set the source
address in the logs to the x-forwarded-for address. as a result, using
anything other than the VIP means:

- not using the vip means we have to trust 3 addresses instead of 1 to set
x-forwarded-for
- we have to update the list of allowed hosts on all of our backends any
time we replace a load-balancer node. We're using config management, so
it's automated, but that's still more changes than should ideally be
necessary to replace a no-data node that we ideally can trash and replace
at will.
- there's a lag between the time of a change (e.g. node replacement) and
the next converge cycle of the config mgmt on the backends, so for some
period the backend config will be out of sync, incorrectly trusting IP(s)
that may now be associated with another host, or wrongly refusing to set
the source IP to the x-forwarded-for address. This is problematic for us,
since we have a highly restricted internal environment, due to our business
model (an online learn-to-code school) being essentially running untrusted
code as a service.

Happily, your suggested solution seems to achieve what we're aiming for
(thanks!). The health-checks are coming from the local IP, and proxied
requests from clients are coming from the VIP. The standby is seeing
backends as UP since they're able to pass the health-checks. Progress!
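For anyone following along, the working arrangement in the backend ends up
looking roughly like this (the backend and server names are hypothetical;
10.240.36.13 is the VIP from this thread):

```
backend be-app                            # hypothetical name
    # proxied traffic leaves from the VIP, so the backends' real-ip
    # trust list only needs the one address
    source 0.0.0.0 usesrc 10.240.36.13
    server app01 10.240.36.21:80 check    # hypothetical server
```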

Unfortunately, this seems to cause another problem with our config...
though haproxy passes the config validation (haproxy -c -f
/etc/haproxy.cfg), it fails to start up, logging an error like "Jul 14
20:22:48 lb01.stage.iad01.treehouse haproxy-systemd-wrapper[25225]: [ALERT]
194/202248 (25226) : [/usr/sbin/haproxy.main()] Some configuration options
require full privileges, so global.uid cannot be changed.". We can get it
to work by removing the user and group directives from the global section
and letting haproxy run as root, but having to escalate privileges is also
less than ideal... I almost hate to ask for further assistance, but do you
have any suggestions related to the above? FWIW, we're using haproxy 1.5.4
and kernel 4.0.4 on CentOS 7.

Regards,

Nathan W

On Tue, Jul 14, 2015 at 12:31 PM Baptiste bed...@gmail.com wrote:

 Nathan,

 The question is: why do you want to use the VIP to get connected on
 your backend server?

 Please give a try to the following source line, instead of your current
 one:
   source 0.0.0.0 usesrc 10.240.36.13

 Baptiste


 On Tue, Jul 14, 2015 at 9:06 PM, Nathan Williams nath.e.w...@gmail.com
 wrote:
  OK, that did not seem to work, so I think the correct interpretation of
 that
  addr option must be as an override for what address/port to perform the
  health-check *against* instead of from (which makes more sense in
 context of
  it being a server option).
 
  i was hoping for an option like health-check-source or similar, if that
  makes sense; I also tried removing the source directive and binding the
  frontend to the VIP explicitly, hoping that would cause the proxied
 requests
  to originate from the bound IP, but that didn't seem to do it either.
 While
  the standby was then able to see the backends as up, the proxied
 requests
  to the backends came from the local IP instead of the VIP.
 
  Regards,
 
  Nathan W
 
  On Tue, Jul 14, 2015 at 8:58 AM Nathan Williams nath.e.w...@gmail.com
  wrote:
 
  Hi Baptiste/Jarno,
 
  Thanks so much for responding.
 
  addr does indeed look like a promising option (though a strangely
  lacking explanation in the docs, which explains what it makes possible
 while
  leaving the reader to deduce what it actually does), thanks for pointing
  that out.
 
  Here's our config:
 https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f
  (believe it or not this is the trimmed down version from what we used to
  have :), but backends, how they propagate in this microservice-oriented
  world of ours... ).
 
  As for addresses, the VIP is 10.240.36.13, and the active/standby local
  addresses are .11 and .12.
 
  the problem is basically that the way it's currently configured, when
 the
  .11 is active and has the .13 address, health-checks from haproxy on
 the .12
  host also originate from the .13 address (guessing due to the source
  line), and so never return and are (rightfully) marked by haproxy as
 L4CON
  network timeouts.
 
  i'm going to try the addr config and report back; fingers crossed!
 
  cheers,
 
  Nathan W
 
  On Tue, Jul 14, 2015 at 5:21 AM Baptiste bed...@gmail.com wrote:
 
  On Mon, Jul 13, 2015 at 6:03 PM, Nathan Williams

Re: IP binding and standby health-checks

2015-07-14 Thread Nathan Williams
OK, that did not seem to work, so I think the correct interpretation of
that addr option must be as an override for what address/port to perform
the health-check *against* instead of from (which makes more sense in
context of it being a server option).

I was hoping for an option like health-check-source or similar, if that
makes sense; I also tried removing the source directive and binding the
frontend to the VIP explicitly, hoping that would cause the proxied
requests to originate from the bound IP, but that didn't seem to do it
either. While the standby was then able to see the backends as up, the
proxied requests to the backends came from the local IP instead of the VIP.

Regards,

Nathan W

On Tue, Jul 14, 2015 at 8:58 AM Nathan Williams nath.e.w...@gmail.com
wrote:

 Hi Baptiste/Jarno,

 Thanks so much for responding.

 addr does indeed look like a promising option (though its explanation in
 the docs is strangely lacking: it describes what the option makes possible
 while leaving the reader to deduce what it actually does), thanks for
 pointing that out.

 Here's our config: https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f
 (believe it or not this is the trimmed down version from what we used to
 have :), but backends, how they propagate in this microservice-oriented
 world of ours... ).

 As for addresses, the VIP is 10.240.36.13, and the active/standby local
 addresses are .11 and .12.

 The problem is basically that, the way it's currently configured, when the
 .11 is active and has the .13 address, health-checks from haproxy on the
 .12 host also originate from the .13 address (guessing due to the source
 line), and so never return and are (rightfully) marked by haproxy as L4CON
 network timeouts.

 I'm going to try the addr config and report back; fingers crossed!

 Cheers,

 Nathan W

 On Tue, Jul 14, 2015 at 5:21 AM Baptiste bed...@gmail.com wrote:

 On Mon, Jul 13, 2015 at 6:03 PM, Nathan Williams nath.e.w...@gmail.com
 wrote:
  Hi all,
 
  I'm hoping I can get some advice on how we can improve our failover
 setup.
 
  At present, we have an active-standby setup. Failover works really
 well, but
  on the standby, none of the backend servers are marked as up since
 haproxy
  is bound to the VIP that is currently on the active member (managed with
  keepalived). as a result, there's an initial period of a second or two
 after
  the failover triggers and the standby claims the VIP where the backend
  servers have not yet passed a health-check on the new active member.
 
  It seems like the easiest way to sort it out would be if the
 health-checks
  weren't also bound to the VIP so that the standby could complete them
  successfully. I do still want the proxied requests bound to the VIP
 though,
  for the benefit of our backends' real-ip configuration.
 
  is that doable? if not, is there some way to have the standby follow
 the
  active-member's view on the backends, or another way i haven't seen yet?
 
  Thanks!
 
  Nathan W

 Hi Nathan,

 Maybe you could share your configuration.
 Please also let us know the real and virtual IPs configured on your
 master and slave HAProxy servers.

 Baptiste




IP binding and standby health-checks

2015-07-13 Thread Nathan Williams
Hi all,

I'm hoping I can get some advice on how we can improve our failover setup.

At present, we have an active-standby setup. Failover works really well,
but on the standby, none of the backend servers are marked as up since
haproxy is bound to the VIP that is currently on the active member (managed
with keepalived). as a result, there's an initial period of a second or two
after the failover triggers and the standby claims the VIP where the
backend servers have not yet passed a health-check on the new active member.

It seems like the easiest way to sort it out would be if the health-checks
weren't also bound to the VIP so that the standby could complete them
successfully. I do still want the proxied requests bound to the VIP though,
for the benefit of our backends' real-ip configuration.

is that doable? if not, is there some way to have the standby follow the
active-member's view on the backends, or another way i haven't seen yet?

Thanks!

Nathan W


Re: Haproxy 1.5 ssl redirect

2015-05-27 Thread Nathan Williams
We use `redirect scheme https code 301 if !{ ssl_fc }` on our SSL-only
backends, many of which are accessed by multiple hostnames. If I understand
correctly what you're trying to accomplish, I think that should do the
trick?
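In context, that directive sits in a frontend along roughly these lines
(the names and certificate path are hypothetical):

```
frontend fe-main
    bind :80
    bind :443 ssl crt /etc/haproxy/certs/site.pem   # hypothetical cert path
    # send plain-HTTP requests to https with a permanent redirect
    redirect scheme https code 301 if !{ ssl_fc }
    default_backend be-app
```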

On Wed, May 27, 2015 at 8:38 AM Sean Patronis spatro...@add123.com wrote:

 I have another question to add to the mix. While attempting to
 mirror the proxypass and proxypassreverse capabilities of Apache's
 mod_proxy and force https connections across everything through haproxy,
 I have run into a small snag and want to try and work around it.

 We have multiple front ends that use the same backends but since I
 am forcing the URLs to be absolute to rewrite them to https, we need to
 have a variable host name.  What is the most efficient way to accomplish
 that?

 example: in a backend we have :
# ProxyPassReverse /mirror/foo/ http://bk.dom.com/bar
# Note: we turn the urls into absolute in the mean time
   acl hdr_location res.hdr(Location) -m found
   rspirep ^Location:\ (https?://localtest.test123.com(:[0-9]+)?)?(/.*)
 Location:\ \3 if hdr_location

 which works only for the frontend localtest.test123.com. I have
 another domain, dev.test123.com, that needs to use the same backend. What
 is the best way to turn the host from the request into a variable? How
 can we do something like this so that any frontend can use this backend?

   acl hdr_location res.hdr(Location) -m found
   rspirep ^Location:\ (https?://%[host](:[0-9]+)?)?(/.*) Location:\ \3
 if hdr_location


 This is all in haproxy 1.5

 Thanks.


 --Sean Patronis
 Auto Data Direct Inc.
 850.877.8804

 On 03/18/2015 02:06 PM, Sean Patronis wrote:
  Baptiste,
 
  Thanks for the links, I had run across them earlier this morning in my
  google searching, but your post made me pay more attention to them...
  I have it working now, and the trick that seemed to do it for me was
  making all the paths absolute (since I am forcing https anyhow, and
  each since frontend/backend combo is unique) with this line in my
  backend config:
 
  # ProxyPassReverse /mirror/foo/ http://bk.dom.com/bar
   # Note: we turn the urls into absolute in the mean time
   acl hdr_location res.hdr(Location) -m found
   rspirep ^Location:\ (https?://localtest.test123.com(:[0-9]+)?)?(/.*)
  Location:\ \3 if hdr_location
 
 
  Thanks for all the help from everyone is this thread!
 
  --Sean Patronis
  Auto Data Direct Inc.
  850.877.8804
 
  On 03/18/2015 12:06 PM, Baptiste wrote:
  Hi Sean,
 
  You may find some useful information here:
 
 http://blog.haproxy.com/2014/04/28/howto-write-apache-proxypass-rules-in-haproxy/
  and here:
 
 http://blog.haproxy.com/2013/02/26/ssl-offloading-impact-on-web-applications/
 
  Baptiste
 
 
  On Wed, Mar 18, 2015 at 3:39 PM, Sean Patronis spatro...@add123.com
  wrote:
  Thanks for the link.  That looks promising, but testing did not change
  anything and I am waiting on the developers to give me some
  indication of
  what headers they may expect.  Maybe we can tackle this a different way
  since we know it works in apache.  I am attempting to replace the
  following
  VirtualHost in apache and put it into haproxy:
 
  ## [test.test123.com]
  <VirtualHost 10.0.60.5:443>
  ServerName test.test123.com
   SSLEngine on
   SSLProtocol all -SSLv3
   SSLHonorCipherOrder On
   SSLCipherSuite
 
 ECDHE-RSA-AES256-SHA384:AES256-SHA256:!RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM:!SSLV2:!eNULL
 
   ProxyPassReverse / http://10.0.60.5/
   ProxyPass   /  http://10.0.60.5/
  </VirtualHost>
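  A hedged haproxy 1.5 equivalent of that vhost might look like the sketch
  below (the certificate path and the frontend/backend names are assumptions;
  the reqadd line is there to tell the app it sits behind an SSL terminator):

```
frontend test_https
    bind 10.0.60.5:443 ssl crt /etc/haproxy/certs/test.test123.com.pem no-sslv3 ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:!RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM:!SSLV2:!eNULL
    reqadd X-Forwarded-Proto:\ https
    default_backend test_be

backend test_be
    server app 10.0.60.5:80
```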
 
  What haproxy frontend settings do I need to make this match whatever
  apache and mod_proxy are doing?
 
  10.0.60.5:80 is already in haproxy. I think the problem may be that
  there are some headers getting set by ProxyPass and ProxyPassReverse
  that I
  am not setting in haproxy.  More specifically, I think that the apache
  ProxyPassReverse is rewriting the problem URI to https, and haproxy
  is not.
 
  --Sean Patronis
  Auto Data Direct Inc.
  850.877.8804
 
  On 03/17/2015 06:24 PM, Cyril Bonté wrote:
  Hi,
 
  Le 17/03/2015 20:42, Sean Patronis a écrit :
  Unfortunately that did not fix it. I mirrored your config and the
  problem still exists.  I am not quite sure how the URL is getting
  built
  on the backend (the developers say it is all relative URL/URI), but
  whatever haproxy is doing, it is doing it differently than apache
  (with
  mod_proxy).  Just for fun, I swapped back the ssl termination to
  apache
   to prove that it does not have an issue (once it passes through
  apache
  for ssl, it still goes through Haproxy and all of the backends/acl
  etc).
 
  My goal in all of this was to ditch apache and go all haproxy on the
  front end.
 
  Any other ideas?
 
  Have a look at this answer :
  http://permalink.gmane.org/gmane.comp.web.haproxy/10361
 
  I assume that your application is not aware of an SSL termination,
  so you
  have to notify it with the right 

Re: socket bind error

2015-05-20 Thread Nathan Williams
arg. ok, it was SELinux... we recently re-worked how we prepare our base
image and the new method seems to leave SELinux enabled... turned that off
and everything's working peachy.

Thanks!
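For future readers hitting the same "cannot bind socket" error on a
keepalived-managed VIP, the two usual suspects can be sketched as follows
(file path illustrative):

```
# SELinux in enforcing mode can deny the bind outright (check `getenforce`).
# Separately, binding an address the host does not yet own requires either
# the nonlocal-bind sysctl:
#   /etc/sysctl.d/90-haproxy-vip.conf
net.ipv4.ip_nonlocal_bind = 1
#   ... or the "transparent" keyword on haproxy's bind line, e.g.:
#   bind 10.240.36.71:6379 transparent
```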

On Wed, May 20, 2015 at 4:16 PM Lukas Tribus luky...@hotmail.com wrote:

  hi all,
 
  I'm working on standing up a new haproxy instance to manage redis
  directly on our redis hosts since our main load-balancer does periodic
  reloads and restarts for things like OCSP stapling that good ol'
  amnesiac HTTP handles just fine, but longer-lived TCP connections like
  our redis clients don't care too much for.
 
  I managed to put together a configuration that works fine in local
  testing (vagrant configured by test-kitchen), but for some reason when
  I try to push this to staging, haproxy is refusing to start,
  complaining that it can't bind to the keepalived-managed VIP. For the
  life of me I can't figure out what the problem is, but hopefully
  someone here will be able to give me some pointers?

 Not sure, can you run haproxy directly (without systemd) through strace,
 to see what exactly the kernel returns?

 What's the kernel release, anyway?

 What happens if you add the transparent keyword on the bind
 configuration line (so that the sysctl setting is not needed)?



 Regards,

 Lukas




socket bind error

2015-05-20 Thread Nathan Williams
hi all,

I'm working on standing up a new haproxy instance to manage redis directly
on our redis hosts since our main load-balancer does periodic reloads and
restarts for things like OCSP stapling that good ol' amnesiac HTTP handles
just fine, but longer-lived TCP connections like our redis clients don't
care too much for.

I managed to put together a configuration that works fine in local testing
(vagrant configured by test-kitchen), but for some reason when I try to
push this to staging, haproxy is refusing to start, complaining that it
can't bind to the keepalived-managed VIP. For the life of me I can't figure
out what the problem is, but hopefully someone here will be able to give me
some pointers? Thanks in advance for your help :)

The error message:

```bash
[root@redis02.stage ~]# journalctl -ln5 -u haproxy.service --no-pager
-- Logs begin at Wed 2015-05-20 22:35:37 UTC, end at Wed 2015-05-20
22:45:55 UTC. --
May 20 22:35:47 redis02.stage.iad01.treehouse systemd[1]: Starting HAProxy
Load Balancer...
May 20 22:35:47 redis02.stage.iad01.treehouse systemd[1]: Started HAProxy
Load Balancer.
May 20 22:35:47 redis02.stage.iad01.treehouse haproxy-systemd-wrapper[794]:
haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
May 20 22:35:47 redis02.stage.iad01.treehouse haproxy-systemd-wrapper[794]:
[ALERT] 139/223547 (801) : Starting proxy redis: cannot bind socket [10.240.36.71:6379]
May 20 22:35:47 redis02.stage.iad01.treehouse haproxy-systemd-wrapper[794]:
haproxy-systemd-wrapper: exit, haproxy RC=256
```

version info:

```bash
[root@redis02.stage ~]# haproxy -vvv
HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
```

the configuration:

```bash
[root@redis02.stage ~]# cat /etc/haproxy/haproxy.cfg
# Generated by Chef
# Changes will be overwritten!
global
  user haproxy
  group haproxy
  stats socket /var/lib/haproxy/stats.sock
  log /dev/log local0 info
  maxconn 5

defaults TCP
  mode tcp
  log global
  option tcplog
  option tcpka
  source 10.240.36.71

listen redis
  bind 10.240.36.71:6379
  default-server on-marked-down shutdown-sessions
  option tcp-check
  tcp-check send PING\r\n
  tcp-check expect string +PONG
  tcp-check send info\ replication\r\n
  tcp-check expect string role:master
  tcp-check send QUIT\r\n
  tcp-check expect string +OK
  server redis01.stage 10.240.36.27:6379 backup check inter 1000 rise 2 fall 5
  server redis02.stage 10.240.36.63:6379 backup check inter 1000 rise 2 fall 5
```

listening services:

```bash
[root@redis02.stage ~]# netstat -lptn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:26379           0.0.0.0:*               LISTEN      2449/redis-sentinel
tcp        0      0 10.240.36.63:6379       0.0.0.0:*               LISTEN      2388/redis-server 1
tcp        0      0 127.0.0.1:3030          0.0.0.0:*               LISTEN      930/ruby
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      782/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      919/master
tcp        0      0 127.0.0.1:2812          0.0.0.0:*               LISTEN      784/monit
tcp6       0      0 :::26379                :::*                    LISTEN      2449/redis-sentinel
tcp6       0      0 :::22                   :::*                    LISTEN      782/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      919/master
```

local addresses:

```bash
[root@redis02.stage ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: 

Re: timeout values for redis?

2015-03-24 Thread Nathan Williams
I should probably add... setting all members to backups means, of course,
that only the first server to pass the check will actually receive traffic
(unless you're using option allbackups). This works perfectly for us, but
may not work for you depending on your use-case.

On Tue, Mar 24, 2015 at 3:55 PM, Nathan Williams nath.e.w...@gmail.com
wrote:

 Hey Jim,

 Here's the configuration we're using for our redis pool:

 defaults TCP
   mode tcp
   log global
   option tcplog
   option clitcpka
   option srvtcpka
   timeout connect 5s
   timeout client 300s
   timeout server 300s
   source 12.34.56.78

 listen redis
   bind 0.0.0.0:6379
   option tcp-check
   tcp-check send PING\r\n
   tcp-check expect string +PONG
   tcp-check send info\ replication\r\n
   tcp-check expect string role:master
   tcp-check send QUIT\r\n
   tcp-check expect string +OK
   server redis01.prod 12.34.56.79:6379 backup check inter 1000 rise 2
 fall 5
   server redis02.prod 12.34.56.80:6379 backup check inter 1000 rise 2
 fall 5

 The key items for silencing client errors were the tcpka (keepalive)
 configurations, along with setting the servers all to backups, which helped
 us avoid clients briefly getting connected to the read-only slave
 immediately following an haproxy reload/restart.

 hope that helps!

 regards,

 Nathan W


 On Tue, Mar 24, 2015 at 3:48 PM, Ha Quan Le nlp...@shaw.ca wrote:

  Thanks, I sent a request to you previously, but I have done it myself.
 Ha.

 --
 *From: *Jim Gronowski jgronow...@ditronics.com
 *To: *haproxy@formilux.org haproxy@formilux.org
 *Sent: *Tuesday, March 24, 2015 1:25:33 PM
 *Subject: *timeout values for redis?


  Does anyone have any feedback on sane timeout values for load balancing
 redis?



 The testing config I was using had ‘timeout client 5’ and I was
 getting consistent client disconnects in the logs.  I increased it to two
 minutes and things have improved significantly, though I do see client
 disconnects every few hours (but the application is behaving normally).
 Client is StackExchange.Redis if that helps.



 Google wasn’t much use.  HA-Proxy version 1.5.10.   Full config:



 global
     log /dev/log local0
     log /dev/log local1 notice
     chroot /var/lib/haproxy
     stats socket /run/haproxy/admin.sock mode 660 level admin
     stats timeout 30s
     user haproxy
     group haproxy
     daemon

 defaults
     log     global
     mode    tcp
     option  tcplog
     option  dontlognull
     timeout connect 5000
     timeout client  2m
     timeout server  12
     errorfile 400 /etc/haproxy/errors/400.http
     errorfile 403 /etc/haproxy/errors/403.http
     errorfile 408 /etc/haproxy/errors/408.http
     errorfile 500 /etc/haproxy/errors/500.http
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http

 frontend redisFE
     bind *:6379
     mode tcp
     maxconn 10240
     default_backend redisBE

 backend redisBE
     mode tcp
     option tcplog
     balance source
     option tcp-check
     tcp-check send PING\r\n
     tcp-check expect string +PONG
     tcp-check send info\ replication\r\n
     tcp-check expect string role:master
     tcp-check send QUIT\r\n
     tcp-check expect string +OK
     server A-redis-01 X:6379 maxconn 1024 check inter 1s
     server A-redis-02 X:6379 maxconn 1024 check inter 1s
     server B-redis-01 X:6379 maxconn 1024 check inter 1s
     server B-redis-02 X:6379 maxconn 1024 check inter 1s
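 For comparison, a hedged variant of those defaults that leans on TCP
 keepalives rather than long idle timeouts (values illustrative, in the
 spirit of the config posted earlier in this thread):

```
 defaults
     mode    tcp
     option  tcplog
     option  clitcpka        # TCP keepalives toward redis clients
     option  srvtcpka        # TCP keepalives toward the redis servers
     timeout connect 5s
     timeout client  300s
     timeout server  300s
```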







 *Jim Gronowski*

 Network Administrator

 *DiTronics, LLC.*




