Re: Backend: Multiple A records

2016-11-27 Thread Tim Düsterhus
Hi

On 25.11.2016 08:08, Willy Tarreau wrote:
> No, unfortunately none of us had the time to complete this. It's sad
> but true. And I definitely refuse to reproduce the 1.5 model where
> we wait for a certain feature to release and where it takes 4.5
years to produce the expected 6-month release. So this will have to
> wait for 1.8 or later, until someone has time to complete this feature.

Fair enough, thanks for the heads up.

Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-11-25 Thread Baptiste
On Fri, Nov 25, 2016 at 8:08 AM, Willy Tarreau  wrote:

> Hi Tim,
>
> On Fri, Nov 25, 2016 at 02:34:49AM +0100, Tim Düsterhus wrote:
> > Hi
> >
> > On 28.08.2016 19:57, Baptiste wrote:
> > > This should happen soon, for 1.7.
> >
> > I noticed Willy's email “1.7 => almost there” and wanted to test out
> > the feature. Has this feature been implemented for 1.7?
>
> No, unfortunately none of us had the time to complete this. It's sad
> but true. And I definitely refuse to reproduce the 1.5 model where
> we wait for a certain feature to release and where it takes 4.5
> years to produce the expected 6-month release. So this will have to
> wait for 1.8 or later, until someone has time to complete this feature.
>
> At least right now you can update the IP addresses from the CLI, so
> you could very well run a script iterating over the output of a
> "host" command to feed it. It's not as magical but will work.
>
> Regards,
> Willy
>


Hi

This will be my next point of focus, but I currently have very little free
time.
(This, and supporting SRV records.)

Baptiste


Re: Backend: Multiple A records

2016-11-24 Thread Willy Tarreau
Hi Tim,

On Fri, Nov 25, 2016 at 02:34:49AM +0100, Tim Düsterhus wrote:
> Hi
> 
> On 28.08.2016 19:57, Baptiste wrote:
> > This should happen soon, for 1.7.
> 
> I noticed Willy's email “1.7 => almost there” and wanted to test out the
> feature. Has this feature been implemented for 1.7?

No, unfortunately none of us had the time to complete this. It's sad
but true. And I definitely refuse to reproduce the 1.5 model where
we wait for a certain feature to release and where it takes 4.5
years to produce the expected 6-month release. So this will have to
wait for 1.8 or later, until someone has time to complete this feature.

At least right now you can update the IP addresses from the CLI, so
you could very well run a script iterating over the output of a
"host" command to feed it. It's not as magical but will work.

Regards,
Willy



Re: Backend: Multiple A records

2016-11-24 Thread Tim Düsterhus
Hi

On 28.08.2016 19:57, Baptiste wrote:
> This should happen soon, for 1.7.

I noticed Willy's email “1.7 => almost there” and wanted to test out the
feature. Has this feature been implemented for 1.7? I noticed the
init-addr feature mentioned in the dev6 release announcement and
modified the configuration in my initial mail to the following:

server nginx1 nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4 init-addr last,libc,none
server nginx2 nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4 init-addr last,libc,none
server nginx3 nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4 init-addr last,libc,none
server nginx4 nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4 init-addr last,libc,none

HAProxy started up even without any A records at
nginx.containers.example.com, great! It also brought the servers down
once the record disappeared, also great!

Unfortunately it still sent the requests to a single nginx backend only,
instead of using all the available IP addresses.

Did I configure something wrong?

Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-09-01 Thread ge...@riseup.net
On 16-09-01 14:14:54, Tim Düsterhus wrote:
> On 31.08.2016 23:05, Baptiste wrote:
> > If I want to set up a lab on my computer, what would be the fastest way to
> > build it?
> > I mean, running Docker on my laptop does not seem to be sufficient and I
> > don't really understand what the bare minimum setup would be.
> > If you could help me on this point, I'd appreciate it a lot!
> 
> You mean for testing the DNS responses? Personally I'm running Knot
> inside a Docker container; it is very easy to manage its zones
> programmatically and it's probably the simplest solution for you.

I'm not using / running Docker, but good old virtual (KVM) machines;
still, +1 for Knot. It's great, and I would recommend it as well.
It also supports dynamic DNS updates: [1].
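
For example, adding a record via a dynamic update could look roughly like
this (a sketch: the command syntax is assumed to mirror the classic nsupdate
tool, the server address and names are placeholders, and the zone has to be
configured to accept dynamic updates):

$ knsupdate <<EOF
server 172.17.0.2
zone docker.example.com.
add test.docker.example.com. 60 A 127.0.0.2
send
EOF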

All the best,
Georg


[1] https://www.knot-dns.cz/docs/2.x/html/man_knsupdate.html




Re: Backend: Multiple A records

2016-09-01 Thread Tim Düsterhus
Hello

On 31.08.2016 23:05, Baptiste wrote:
> I have one question for you guys.
> If I want to set up a lab on my computer, what would be the fastest way to
> build it?
> I mean, running Docker on my laptop does not seem to be sufficient and I
> don't really understand what the bare minimum setup would be.
> If you could help me on this point, I'd appreciate it a lot!

You mean for testing the DNS responses? Personally I'm running Knot
inside a Docker container; it is very easy to manage its zones
programmatically and it's probably the simplest solution for you. I just
open-sourced the Dockerfile I built on GitHub:

https://github.com/TimWolla/dockerdns-knot

You would build the image like this:

$ make
*snip*

You would run the image like this:

$ docker run -it --rm --name knot -e KNOT_ZONE=docker.example.com -v /tmp/knot/:/var/lib/knot/ timwolla/knot
*snip*

Afterwards you can retrieve the IP address of the nameserver like this:

$ docker inspect -f "{{range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}" knot
172.17.0.2

And then change records like this, using knotc
(https://www.knot-dns.cz/docs/2.x/html/operation.html#reading-and-editing-zones):

$ docker exec knot knotc zone-begin docker.example.com
OK
$ docker exec knot knotc zone-set docker.example.com test 60 A 127.0.0.1
OK
$ docker exec knot knotc zone-commit docker.example.com
OK
$ dig +short @172.17.0.2 A test.docker.example.com
127.0.0.1
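
To simulate a container disappearing from DNS, removing the record again
should work the same way (a sketch, assuming knotc's zone-unset takes the
owner, type and rdata without a TTL):

$ docker exec knot knotc zone-begin docker.example.com
$ docker exec knot knotc zone-unset docker.example.com test A 127.0.0.1
$ docker exec knot knotc zone-commit docker.example.com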

You can get a root shell inside the container like this:

$ docker exec -it knot bash

Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-08-31 Thread Baptiste
On Wed, Aug 31, 2016 at 3:37 PM, Tim Düsterhus  wrote:

> Hi
>
> On 30.08.2016 22:10, Baptiste wrote:
> > Worst case, set X to 10 and you're good ;)
> >
>
> That would not help if slots are not freed and IP addresses change
> randomly. But you already clarified in your reply that this is not the
> case.
>
> So: No more questions from my side for the moment. Thanks!
>
> Best regards
> Tim Düsterhus
>


I have one question for you guys.
If I want to set up a lab on my computer, what would be the fastest way to
build it?
I mean, running Docker on my laptop does not seem to be sufficient and I
don't really understand what the bare minimum setup would be.
If you could help me on this point, I'd appreciate it a lot!

Baptiste


Re: Backend: Multiple A records

2016-08-31 Thread Tim Düsterhus
Hi

On 30.08.2016 22:10, Baptiste wrote:
> Worst case, set X to 10 and you're good ;)
> 

That would not help if slots are not freed and IP addresses change
randomly. But you already clarified in your reply that this is not the case.

So: No more questions from my side for the moment. Thanks!

Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-08-30 Thread Baptiste
> What would happen if I'd configure X = 2 and the following happens:
>
> 1. Initially only 127.0.0.1 is returned.
>

1 UP server available in the backend


> 2. 127.0.0.2 is added and healthy.
>

2 servers UP available in the backend


> 3. 127.0.0.1 is removed from DNS and thus marked DOWN.
>

then only 1 server in the backend. I think we'll have a specific flag to
report a failure due to DNS


> 4. 127.0.0.3 is added (with 127.0.0.2 still being healthy and 127.0.0.1
> still being DOWN / missing from the DNS response)
>
>
then 2 servers in the backend.

Worst case, set X to 10 and you're good ;)



> You said that once an IP address disappears the backend will be marked as
> DOWN and that there is an upper limit.


Not the backend; the corresponding server will be DOWN because of DNS (a
specific flag should be added).


> Are new IP addresses able to push removed IP addresses from the list or
> will removed IP addresses be DOWN and taking up a slot until they reappear?
>
>
Yes, this is the purpose.
The algorithm will consider each DNS response atomically when updating the
backend server list.



> If new IPs are able to push away old IPs it sounds like it will meet my
> requirements perfectly. I won't have control over the IP addresses assigned
> in the DNS.
>
>
We may be good then, which is nice :)

Baptiste


Re: Backend: Multiple A records

2016-08-30 Thread Tim Düsterhus

Hi

On 30.08.2016 01:49, Maciej Katafiasz wrote:

> Right, I missed the "independent healthchecks" in the original
> description, in which case it'd work well enough (albeit a low enough
> TTL value is still a concern).



Thanks for the heads up. In my case this is not a concern: The DNS 
server is completely under my control, returns a low TTL and is able to 
update the list of nodes almost instantly after a node goes up or down.


Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-08-30 Thread Tim Düsterhus

Hi

On 30.08.2016 09:11, Baptiste wrote:

> The way we designed the feature is more like a "server template" line which
> may be used to pre-configure X servers in memory sharing the same DNS
> resolution.
> In your case X=2. If you intend to have up to 10 servers for this service,
> simply set X to 10.
> HAProxy will use A records to create the servers and the health checks will
> ensure that the servers are available before sending them traffic.
> If an A record disappears from the response, the corresponding server will
> go down. If a new server is added and fewer than X are provisioned, a new
> server is provisioned.
> This X "upper" limit is to ensure compatibility with all HAProxy features
> (such as hash-based LB algorithms).
>
> Could you let me know if that meets your requirements?
> (we can still change this description).



What would happen if I'd configure X = 2 and the following happens:

1. Initially only 127.0.0.1 is returned.
2. 127.0.0.2 is added and healthy.
3. 127.0.0.1 is removed from DNS and thus marked DOWN.
4. 127.0.0.3 is added (with 127.0.0.2 still being healthy and 127.0.0.1 
still being DOWN / missing from the DNS response)


You said that once an IP address disappears the backend will be marked 
as DOWN and that there is an upper limit. Are new IP addresses able to 
push removed IP addresses from the list or will removed IP addresses be 
DOWN and taking up a slot until they reappear?


If new IPs are able to push away old IPs it sounds like it will meet my 
requirements perfectly. I won't have control over the IP addresses 
assigned in the DNS.


Thanks for your replies so far! Looking forward to it.

Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-08-30 Thread Baptiste
On Tue, Aug 30, 2016 at 1:49 AM, Maciej Katafiasz <
mkatafi...@purestorage.com> wrote:

> On 29 August 2016 at 16:39, Igor Cicimov 
> wrote:
> > On Tue, Aug 30, 2016 at 6:18 AM, Maciej Katafiasz
> >  wrote:
> >> Be aware though that DNS round-robin reduces the availability of the
> >> entire setup, since there are no provisions in the protocol for the
> >> eviction of dead nodes. So unless you're very sure there will never be
> >> any in your DNS and also have the TTL set to some very low value,
> >> multiple DNS records will defeat some of the care HAProxy takes to
> >> ensure it only sends requests to backends that can service them.
> >
> > Hmmm, one would think though the backend health check and fail over
> should
> > take care of this ... or maybe not???
> >
> > Anyway, in case you use something like Consul which I mentioned before to
> > provide the DNS records, then Consul itself will remove the failed node
> from
> > the DNS record.
>
> Right, I missed the "independent healthchecks" in the original
> description, in which case it'd work well enough (albeit a low enough
> TTL value is still a concern).
>
> Cheers,
> Maciej
>
>
The way we designed the feature is more like a "server template" line which
may be used to pre-configure X servers in memory sharing the same DNS
resolution.
In your case X=2. If you intend to have up to 10 servers for this service,
simply set X to 10.
HAProxy will use A records to create the servers and the health checks will
ensure that the servers are available before sending them traffic.
If an A record disappears from the response, the corresponding server will
go down. If a new server is added and fewer than X are provisioned, a new
server is provisioned.
This X "upper" limit is to ensure compatibility with all HAProxy features
(such as hash-based LB algorithms).
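
For illustration, such a template line might look roughly like the sketch
below (the directive name and syntax are assumptions based on the description
above, not something shipped in 1.7; the idea later materialized as
server-template in 1.8):

backend nginx
# Pre-provision X=10 server slots sharing one FQDN; slots without a
# matching A record stay down until one appears.
server-template nginx 10 nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4 init-addr none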

Could you let me know if that meets your requirements?
(we can still change this description).

Baptiste


Re: Backend: Multiple A records

2016-08-29 Thread Maciej Katafiasz
On 29 August 2016 at 16:39, Igor Cicimov  wrote:
> On Tue, Aug 30, 2016 at 6:18 AM, Maciej Katafiasz
>  wrote:
>> Be aware though that DNS round-robin reduces the availability of the
>> entire setup, since there are no provisions in the protocol for the
>> eviction of dead nodes. So unless you're very sure there will never be
>> any in your DNS and also have the TTL set to some very low value,
>> multiple DNS records will defeat some of the care HAProxy takes to
>> ensure it only sends requests to backends that can service them.
>
> Hmmm, one would think though the backend health check and fail over should
> take care of this ... or maybe not???
>
> Anyway, in case you use something like Consul which I mentioned before to
> provide the DNS records, then Consul itself will remove the failed node from
> the DNS record.

Right, I missed the "independent healthchecks" in the original
description, in which case it'd work well enough (albeit a low enough
TTL value is still a concern).

Cheers,
Maciej



Re: Backend: Multiple A records

2016-08-29 Thread Igor Cicimov
On Tue, Aug 30, 2016 at 6:18 AM, Maciej Katafiasz <
mkatafi...@purestorage.com> wrote:

> On 27 August 2016 at 14:32, Tim Düsterhus  wrote:
> > Hello
> >
> > I want to run HAProxy 1.6.8 with a backend server that may have multiple
> > A records corresponding to different containers.
> >
> > During testing I noticed that HAProxy only tries to connect to the first
> > A record returned, instead of cycling through the different IP addresses
> > returned (effectively treating every IP as a different backend server,
> > with independent health checks). In case of a timeout the whole backend
> > is treated as DOWN as well, instead of trying the next IP address.
> >
> > The reason for this setup is that it would be easier for me to add and
> > remove backend containers in DNS than generating a new HAProxy
> > configuration and reloading HAProxy whenever something changes.
>
> Be aware though that DNS round-robin reduces the availability of the
> entire setup, since there are no provisions in the protocol for the
> eviction of dead nodes. So unless you're very sure there will never be
> any in your DNS and also have the TTL set to some very low value,
> multiple DNS records will defeat some of the care HAProxy takes to
> ensure it only sends requests to backends that can service them.
>
> Cheers,
> Maciej
>
>
Hmmm, one would think though the backend health check and fail over should
take care of this ... or maybe not???

Anyway, in case you use something like Consul which I mentioned before to
provide the DNS records, then Consul itself will remove the failed node
from the DNS record.


Re: Backend: Multiple A records

2016-08-29 Thread Maciej Katafiasz
On 27 August 2016 at 14:32, Tim Düsterhus  wrote:
> Hello
>
> I want to run HAProxy 1.6.8 with a backend server that may have multiple
> A records corresponding to different containers.
>
> During testing I noticed that HAProxy only tries to connect to the first
> A record returned, instead of cycling through the different IP addresses
> returned (effectively treating every IP as a different backend server,
> with independent health checks). In case of a timeout the whole backend
> is treated as DOWN as well, instead of trying the next IP address.
>
> The reason for this setup is that it would be easier for me to add and
> remove backend containers in DNS than generating a new HAProxy
> configuration and reloading HAProxy whenever something changes.

Be aware though that DNS round-robin reduces the availability of the
entire setup, since there are no provisions in the protocol for the
eviction of dead nodes. So unless you're very sure there will never be
any in your DNS and also have the TTL set to some very low value,
multiple DNS records will defeat some of the care HAProxy takes to
ensure it only sends requests to backends that can service them.

Cheers,
Maciej



Re: Backend: Multiple A records

2016-08-28 Thread Igor Cicimov
On Mon, Aug 29, 2016 at 3:57 AM, Baptiste  wrote:

> Hi,
>
> This should happen soon, for 1.7.
>
> Baptiste
>
Fantastic news, exactly what I've been waiting for; it will make HAProxy and
Consul a perfect couple :-)


Re: Backend: Multiple A records

2016-08-28 Thread Baptiste
Hi,

This should happen soon, for 1.7.

Baptiste

On 27 August 2016 at 23:33, "Tim Düsterhus"  wrote:

> Hello
>
> I want to run HAProxy 1.6.8 with a backend server that may have multiple
> A records corresponding to different containers.
>
> During testing I noticed that HAProxy only tries to connect to the first
> A record returned, instead of cycling through the different IP addresses
> returned (effectively treating every IP as a different backend server,
> with independent health checks). In case of a timeout the whole backend
> is treated as DOWN as well, instead of trying the next IP address.
>
> The reason for this setup is that it would be easier for me to add and
> remove backend containers in DNS than generating a new HAProxy
> configuration and reloading HAProxy whenever something changes.
>
> This is an example configuration I used during testing:
>
> global
> stats timeout 30s
>
> resolvers containers
> nameserver knot ns-containers.example.com:53
>
> frontend nginx
> bind :80
>
> default_backend nginx
>
> backend nginx
> timeout connect 1s
> timeout server 1s
> server nginx nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4
>
> With the following DNS response by the configured nameserver HAProxy
> only connects to 172.17.0.5:
>
> $ dig +short @ns-containers.example.com nginx.containers.example.com
> 172.17.0.5
> 172.17.0.6
>
> Is there a configuration setting / workaround for this? If not: Is this
> something that could be introduced in a future version or does it
> conflict with a design decision?
>
> Best regards
> Tim Düsterhus
>
>


Backend: Multiple A records

2016-08-27 Thread Tim Düsterhus
Hello

I want to run HAProxy 1.6.8 with a backend server that may have multiple
A records corresponding to different containers.

During testing I noticed that HAProxy only tries to connect to the first
A record returned, instead of cycling through the different IP addresses
returned (effectively treating every IP as a different backend server,
with independent health checks). In case of a timeout the whole backend
is treated as DOWN as well, instead of trying the next IP address.

The reason for this setup is that it would be easier for me to add and
remove backend containers in DNS than generating a new HAProxy
configuration and reloading HAProxy whenever something changes.

This is an example configuration I used during testing:

global
stats timeout 30s

resolvers containers
nameserver knot ns-containers.example.com:53

frontend nginx
bind :80

default_backend nginx

backend nginx
timeout connect 1s
timeout server 1s
server nginx nginx.containers.example.com:80 check resolvers containers resolve-prefer ipv4

With the following DNS response by the configured nameserver HAProxy
only connects to 172.17.0.5:

$ dig +short @ns-containers.example.com nginx.containers.example.com
172.17.0.5
172.17.0.6
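
A quick way to check which address HAProxy actually resolved for the server
is the runtime API, sketched here assuming a stats socket such as
"stats socket /var/run/haproxy.sock" is added to the global section and
socat is installed (the srv_addr column of the output shows the currently
resolved address):

$ echo "show servers state nginx" | socat stdio /var/run/haproxy.sock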

Is there a configuration setting / workaround for this? If not: Is this
something that could be introduced in a future version or does it
conflict with a design decision?

Best regards
Tim Düsterhus