Re: HAProxy clustering

2016-12-19 Thread ge...@riseup.net
On 16-12-19 16:01:08, Stephan Müller wrote:
> Different services on the same host, so it also has different health checks,
> balance policies and so on..

Alright -- please show this in your code, next time.

TIA and all the best,
Georg


signature.asc
Description: Digital signature


Re: HAProxy clustering

2016-12-19 Thread Stephan Müller
Different services on the same host, so it also has different health
checks, balance policies and so on..


On 19.12.2016 15:46, ge...@riseup.net wrote:

On 16-12-19 08:39:17, Stephan Müller wrote:

Another point I encounter frequently, I use the same server (IPs) in
multiple backends, this duplicates configuration.

SRV1_IP=192.168.0.1
CHECK_INTER=1

backend foo
  server service1 $SRV1_IP check inter $CHECK_INTER

backend bar
  server service2 $SRV1_IP check inter $CHECK_INTER


Why not use the same backend then?

backend foo_bar
  server service1_2 $SRV1_IP check inter $CHECK_INTER





Re: HAProxy clustering

2016-12-19 Thread ge...@riseup.net
On 16-12-19 08:39:17, Stephan Müller wrote:
> Another point I encounter frequently, I use the same server (IPs) in
> multiple backends, this duplicates configuration.
> 
> SRV1_IP=192.168.0.1
> CHECK_INTER=1
> 
> backend foo
>   server service1 $SRV1_IP check inter $CHECK_INTER
> 
> backend bar
>   server service2 $SRV1_IP check inter $CHECK_INTER

Why not use the same backend then?

backend foo_bar
  server service1_2 $SRV1_IP check inter $CHECK_INTER


signature.asc
Description: Digital signature


Re: HAProxy clustering

2016-12-18 Thread Marco Corte

Il 16/12/2016 20:54, Guillaume Bourque ha scritto:

Hello Marco,

I would be very interested in how you build your haproxy config; you must
have per-server settings and then a global config?



On the Ansible Control Machine the configuration is split into several
files named either "*.common" or "*.<hostname>" (pgli01 or pmli01 in
this example).


000-common.common
010-traveler.pgli01
010-traveler.pmli01
020-tesi.common
050-vip.common
070-vdi.common
080-AD.common
150-crm.common
990-stats.pgli01
990-stats.pmli01

The differences between the nodes are scattered throughout the
configuration: it was very difficult to maintain with a Jinja2
template (although that was much more elegant).


The Ansible task assembles the configuration on the node, taking the
files in order (by number prefix) and choosing between the "common" and
the node-specific one.


- name: Configure haproxy
  assemble: src=etc/haproxy remote_src=no
regexp=(common|{{ ansible_hostname }})
dest=/etc/haproxy/haproxy.cfg mode=0644 owner=root group=root
  tags:
  - haproxyconfig
  notify:
  - Reload haproxy

Ciao

.marcoc



Re: HAProxy clustering

2016-12-18 Thread Stephan Müller
Well, I have some health checks which are not very lightweight (don't
ask, another story), so it's better to reduce their number to the bare
minimum.


Actually, it would be quite cool if you could use variables in your
configuration. And moreover, if you could change these vars at runtime,
it would be awesome. I know there are environment variables, but I
haven't checked yet whether they are what I am looking for.


You could control check intervals (it's probably better to already have
some state when one of your backup haproxys takes over), so you could
make checks less frequent instead of turning them off altogether.
Another point I encounter frequently: I use the same server (IPs) in
multiple backends, which duplicates configuration.


SRV1_IP=192.168.0.1
CHECK_INTER=1

backend foo
  server service1 $SRV1_IP check inter $CHECK_INTER

backend bar
  server service2 $SRV1_IP check inter $CHECK_INTER
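If the environment variables turn out to be what I am looking for, the duplication above could collapse into something like this (untested sketch; as far as I can tell, HAProxy expands variables only inside double-quoted strings, and SRV1_IP / CHECK_INTER would have to be exported in the environment that starts haproxy):

```
# Assumes SRV1_IP and CHECK_INTER are exported by the init script /
# systemd unit that launches haproxy.

backend foo
  server service1 "$SRV1_IP" check inter "$CHECK_INTER"

backend bar
  server service2 "$SRV1_IP" check inter "$CHECK_INTER"
```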


On 16.12.2016 20:50, Neil - HAProxy List wrote:

Stephan,

I'm curious...

Why would you want the inactive loadbal not to check the services?

If you really, really did want that, you could do something horrid like
telling keepalived to block access to the backends with iptables when it
does not own the service IP.

But why? Your health checks should be fairly lightweight.

Neil


On 16 Dec 2016 15:44, "Marco Corte" > wrote:

Hi!

I use keepalived for IP management.

I use Ansible on another host to deploy the configuration on the
haproxy nodes.
This setup gives me better control on the configuration: it is split
in several files on the Ansible host, but assembled to a single
config file on the nodes.
This gives also the opportunity to deploy the configuration on one
node only.
On the Ansible host, the configuration changes are tracked with git.

I also considered an automatic replication of the config, between
the nodes but... I did not like the idea.


.marcoc





Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
I didn't say that if one can hit it, they all can.

However, if you want to use that logic, then I'd counter with: if
it's not currently the active instance, it doesn't matter whether it
can or not. Thus, why do the health check?

The only time it'd matter whether the inactive/standby server can hit
the backend is if it became the active/hot server.


Now mind you, I'm not saying this functionality needs to be added.
I'm merely saying that if someone else has figured out a decent
workaround, I'd love to hear about it (and apparently so would others
on the list).






On Fri, Dec 16, 2016 at 4:53 PM, Neil - HAProxy List
 wrote:
> So because one loadbal can reach the service the others can?
>
> Log spam needs getting rid of anyway. Filter it out whether its the in
> service or one of the out of service loadbal.
>
> If you have a complex health check that creates load make it a little
> smarter and cache its result for a while
>
> On Fri, 16 Dec 2016 at 19:56, Jeff Palmer  wrote:
>>
>> backend health should be in the stick tables that are shared between
>> all instances, right?
>>
>> With that in mind, the inactive servers would know the backend states
>> if a failover were to occur. No sense in having the log spam, network
>> traffic, and load from health checks that are essentially useless
>> (IMO, of course).
>>
>> On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List
>>  wrote:
>> > Stephan,
>> >
>> > I'm curious...
>> >
>> > Why would you want the inactive loadbal not to check the services?
>> >
>> > If you really really did want that you do something horrid like tell
>> > keepalive to block with iptables access to the backends when it does not
>> > own the service ip
>> >
>> > but why? you healthchecks should be fairly lightweight?
>> >
>> > Neil
>> >
>> > On 16 Dec 2016 15:44, "Marco Corte"  wrote:
>> >>
>> >> Hi!
>> >>
>> >> I use keepalived for IP management.
>> >>
>> >> I use Ansible on another host to deploy the configuration on the haproxy
>> >> nodes.
>> >> This setup gives me better control on the configuration: it is split in
>> >> several files on the Ansible host, but assembled to a single config file on
>> >> the nodes.
>> >> This gives also the opportunity to deploy the configuration on one node
>> >> only.
>> >> On the Ansible host, the configuration changes are tracked with git.
>> >>
>> >> I also considered an automatic replication of the config, between the
>> >> nodes but... I did not like the idea.
>> >>
>> >> .marcoc
>> >
>>
>> --
>> Jeff Palmer
>> https://PalmerIT.net
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread Neil - HAProxy List
So because one loadbal can reach the service, the others can?

Log spam needs getting rid of anyway; filter it out whether it comes
from the in-service loadbal or one of the out-of-service ones.

If you have a complex health check that creates load, make it a little
smarter and cache its result for a while.
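For example, the caching could be as simple as the following untested sketch: wrap the expensive check in a small script (usable via HAProxy's "external-check command") that reuses the exit status for a few seconds. The cache path and TTL are illustrative.

```shell
#!/bin/sh
# Untested sketch: run the expensive check at most once per TTL seconds
# and reuse the cached exit status in between.

cached_check() {
  cache="$1"; ttl="$2"; shift 2
  now=$(date +%s)
  if [ -f "$cache" ]; then
    read -r stamp status < "$cache"
    # Still fresh: reuse the cached result instead of re-checking.
    if [ $((now - stamp)) -lt "$ttl" ]; then
      return "$status"
    fi
  fi
  "$@"                       # the real (expensive) health check
  status=$?
  echo "$now $status" > "$cache"
  return "$status"
}

# Example: cache a trivially successful check for 10 seconds.
cached_check /tmp/hc_demo_cache 10 true && echo "backend up"
```

Note that within the TTL even a now-failing backend still reports its cached state, which is exactly the staleness trade-off being discussed.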

On Fri, 16 Dec 2016 at 19:56, Jeff Palmer  wrote:

> backend health should be in the stick tables that are shared between
> all instances, right?
>
> With that in mind, the inactive servers would know the backend states
> if a failover were to occur. No sense in having the log spam, network
> traffic, and load from health checks that are essentially useless
> (IMO, of course).
>
> On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List
>  wrote:
> > Stephan,
> >
> > I'm curious...
> >
> > Why would you want the inactive loadbal not to check the services?
> >
> > If you really really did want that you do something horrid like tell
> > keepalive to block with iptables access to the backends when it does not
> > own the service ip
> >
> > but why? you healthchecks should be fairly lightweight?
> >
> > Neil
> >
> > On 16 Dec 2016 15:44, "Marco Corte"  wrote:
> >>
> >> Hi!
> >>
> >> I use keepalived for IP management.
> >>
> >> I use Ansible on another host to deploy the configuration on the haproxy
> >> nodes.
> >> This setup gives me better control on the configuration: it is split in
> >> several files on the Ansible host, but assembled to a single config
> >> file on the nodes.
> >> This gives also the opportunity to deploy the configuration on one node
> >> only.
> >> On the Ansible host, the configuration changes are tracked with git.
> >>
> >> I also considered an automatic replication of the config, between the
> >> nodes but... I did not like the idea.
> >>
> >> .marcoc
> >
>
> --
> Jeff Palmer
> https://PalmerIT.net
>


Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
Backend health should be in the stick tables that are shared between
all instances, right?

With that in mind, the inactive servers would know the backend states
if a failover were to occur. No sense in having the log spam, network
traffic, and load from health checks that are essentially useless
(IMO, of course).




On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List
 wrote:
> Stephan,
>
> I'm curious...
>
> Why would you want the inactive loadbal not to check the services?
>
> If you really really did want that you do something horrid like tell
> keepalive to block with iptables access to the backends when it does not own
> the service ip
>
> but why? you healthchecks should be fairly lightweight?
>
> Neil
>
>
> On 16 Dec 2016 15:44, "Marco Corte"  wrote:
>>
>> Hi!
>>
>> I use keepalived for IP management.
>>
>> I use Ansible on another host to deploy the configuration on the haproxy
>> nodes.
>> This setup gives me better control on the configuration: it is split in
>> several files on the Ansible host, but assembled to a single config file on
>> the nodes.
>> This gives also the opportunity to deploy the configuration on one node
>> only.
>> On the Ansible host, the configuration changes are tracked with git.
>>
>> I also considered an automatic replication of the config, between the
>> nodes but... I did not like the idea.
>>
>>
>> .marcoc
>>
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread Guillaume Bourque
Hello Marco,

I would be very interested in how you build your haproxy config; you must have
per-server settings and then a global config?

If time permits, and if you can share some sanitized config, I would be very
happy to look into this..

Thanks 
---
Guillaume Bourque, B.Sc.,
Architecte infrastructures technologiques robustes

> Le 2016-12-16 à 10:42, Marco Corte  a écrit :
> 
> Hi!
> 
> I use keepalived for IP management.
> 
> I use Ansible on another host to deploy the configuration on the haproxy 
> nodes.
> This setup gives me better control on the configuration: it is split in 
> several files on the Ansible host, but assembled to a single config file on 
> the nodes.
> This gives also the opportunity to deploy the configuration on one node only.
> On the Ansible host, the configuration changes are tracked with git.
> 
> I also considered an automatic replication of the config, between the nodes 
> but... I did not like the idea.
> 
> 
> .marcoc
> 



Re: HAProxy clustering

2016-12-16 Thread Marco Corte

Hi!

I use keepalived for IP management.

I use Ansible on another host to deploy the configuration on the haproxy 
nodes.
This setup gives me better control on the configuration: it is split in 
several files on the Ansible host, but assembled to a single config file 
on the nodes.
This gives also the opportunity to deploy the configuration on one node 
only.

On the Ansible host, the configuration changes are tracked with git.

I also considered an automatic replication of the config, between the 
nodes but... I did not like the idea.



.marcoc



Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
I would be interested in seeing the Ansible playbook, if it's sanitized?




On Fri, Dec 16, 2016 at 10:19 AM, Michel blanc
 wrote:
> Le 16/12/2016 à 16:08, Jeff Palmer a écrit :
>
>>> Hi
>>> I would like to know what is the best way to have multiple instances of
>>> haproxy and have or share the same configuration file between these
>>> instances.
>
>
>> If you find a solution to the health checks from unused instances,  let us 
>> know!
>
> Hi,
>
> Here I use pacemaker+corosync and 2 VIPs (+ round robin DNS) so all
> haproxy instances are active. In case of failure, failed VIP is "moved"
> to the remaining instance (which then holds the 2 VIPs).
>
> The configuration is deployed using ansible.
>
>
> M
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread ge...@riseup.net
On 16-12-16 16:19:09, Michel blanc wrote:
> Here I use pacemaker+corosync and 2 VIPs (+ round robin DNS) so all
> haproxy instances are active. In case of failure, failed VIP is
> "moved" to the remaining instance (which then holds the 2 VIPs).

Doing this as well. Also, pacemaker/corosync enables the use of STONITH
/ fencing, which is critical if doing HA.

Cheers,
Georg


signature.asc
Description: Digital signature


Re: HAProxy clustering

2016-12-16 Thread Michel blanc
Le 16/12/2016 à 16:08, Jeff Palmer a écrit :

>> Hi
>> I would like to know what is the best way to have multiple instances of
>> haproxy and have or share the same configuration file between these
>> instances.


> If you find a solution to the health checks from unused instances,  let us 
> know!

Hi,

Here I use pacemaker+corosync and 2 VIPs (+ round robin DNS) so all
haproxy instances are active. In case of failure, failed VIP is "moved"
to the remaining instance (which then holds the 2 VIPs).

The configuration is deployed using ansible.


M



Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
If you find a solution to the health checks from unused instances,  let us know!



On Fri, Dec 16, 2016 at 10:05 AM, Stephan Müller
 wrote:
>
>
> On 16.12.2016 14:58, shouldbeq931 wrote:
>>
>>
>>
>>> On 16 Dec 2016, at 13:22, Allan Moraes  wrote:
>>>
>>> Hi
>>> I would like to know what is the best way to have multiple instances of
>>> haproxy and have or share the same configuration file between these
>>> instances.
>>
>>
>> I use keepalived to present clustered addresses, and incrond with unison
>> to keep configs in sync.
>>
>> I'm quite sure there are better methods :-)
>
>
> I use also keepalived to float IPs around. Google tells you this setup is
> quite common. For me it works very well.
>
> Currently I am looking for a method to prevent the unused haproxys from
> doing health checks. I'll check "incrond with unison", thanks for the
> pointer.
>
>  ~stephan
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread Stephan Müller



On 16.12.2016 14:58, shouldbeq931 wrote:




On 16 Dec 2016, at 13:22, Allan Moraes  wrote:

Hi
I would like to know what is the best way to have multiple instances of haproxy 
and have or share the same configuration file between these instances.


I use keepalived to present clustered addresses, and incrond with unison to 
keep configs in sync.

I'm quite sure there are better methods :-)


I also use keepalived to float IPs around. Google tells you this setup
is quite common, and for me it works very well.


Currently I am looking for a method to prevent the unused haproxys from
doing health checks. I'll check out "incrond with unison", thanks for
the pointer.


 ~stephan



Re: HAProxy clustering

2016-12-16 Thread shouldbeq931


> On 16 Dec 2016, at 13:22, Allan Moraes  wrote:
> 
> Hi
> I would like to know what is the best way to have multiple instances of 
> haproxy and have or share the same configuration file between these instances.

I use keepalived to present clustered addresses, and incrond with unison to 
keep configs in sync.
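For anyone unfamiliar with keepalived, the clustered-address part is roughly the following untested sketch (interface name, VRID, priorities and addresses are all illustrative):

```
vrrp_script chk_haproxy {
    script "pidof haproxy"    # node is only eligible while haproxy runs
    interval 2
}

vrrp_instance VI_HAPROXY {
    interface eth0            # illustrative interface name
    state MASTER              # BACKUP on the standby node
    virtual_router_id 51
    priority 101              # use a lower value (e.g. 100) on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.0.100/24      # the floating service address
    }
    track_script {
        chk_haproxy
    }
}
```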

I'm quite sure there are better methods :-)

Cheers


HAProxy clustering

2016-12-16 Thread Allan Moraes
Hi
I would like to know the best way to have multiple instances of
haproxy and share the same configuration file between these
instances.