Re: haproxy architecture

2019-05-20 Thread Shawn Heisey

On 5/20/2019 6:58 AM, Jeff Abrahamson wrote:

We set up an haproxy instance to front several rails servers.  It's
working well, so we're quickly wanting to use it for other services.

Since the load on the haproxy host is low (even minuscule), we're
tempted to push everything through a single haproxy instance and let
haproxy decide, based on the requested hostname, which backend to
dispatch requests to.

Is there any good wisdom here on how much to pile onto a single haproxy
instance or when to stop?


I'm just a user, not connected with the project in any way.  This 
message is not intended to contradict any of the other replies you've 
gotten.


The haproxy software is amazingly efficient.  Some people are setting up 
proxies on Raspberry Pi hardware with excellent results.


There's a page on haproxy.org about 10 gig performance:

http://www.haproxy.org/10g.html

In 2009, this test was done with a Core 2 Duo processor on the haproxy 
machine, and it easily saturated the 10gig connection.  The E8200 
processor was introduced in Q1-2008, so it's over 11 years old.


Any modern laptop would probably have more CPU power than the machine 
used in that test.  A desktop or server-class system would do even better.


As for when to stop adding load ... I don't know for sure.  The program 
is typically more CPU-bound than memory-bound, so if you have CPU 
capacity left, it can probably handle more traffic.  Memory is required 
for each active session, though, so don't neglect it; a few gigabytes 
should handle a lot of traffic.
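
To put a very rough number on the memory side, the global settings that 
matter most are maxconn and tune.bufsize; a sketch with an example value 
(the maxconn figure here is made up, not a recommendation):

    global
        maxconn 100000   # cap on concurrent connections
        # tune.bufsize defaults to 16384 bytes, and haproxy can hold up
        # to two such buffers per stream, so 100,000 connections works
        # out to very roughly 3 GB of buffer memory in the worst case,
        # plus per-session overhead.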


If your haproxy is handling TLS/SSL, you can handle more traffic by 
using a CPU that supports encryption offload and making sure your 
encryption library (usually OpenSSL) is compiled to take advantage of 
that capability.
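
A quick way to check whether OpenSSL is actually using the CPU's AES 
instructions (AES-NI on x86) is to compare the accelerated EVP code path 
against the plain software path; a rough sketch for a Linux box:

    # does the CPU advertise AES instructions?
    grep -m1 -o aes /proc/cpuinfo

    # EVP path (uses AES-NI when available) vs. the legacy software path
    openssl speed -evp aes-128-cbc
    openssl speed aes-128-cbc

If the -evp numbers come out several times higher, the accelerated path 
is in use, and haproxy linked against that library gets the same benefit.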


Thanks,
Shawn



Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
I (personally) think this is a matter of preference and load, and it may
be unique to each situation.  In my case I have two sets of pods:
internal and external.


The internal pods are for CockroachDB, MariaDB, and Redis connections.

The external pods are for Let's Encrypt SSL termination and the
front-facing Docker containers exposed to the internet.


So any container can use the internal pods for database connections, and
the external pods for end users.



Re: haproxy architecture

2019-05-20 Thread Jeff Abrahamson
Ah, cool, thanks very much; that goes a long way toward filling the
holes in my knowledge.  (And thanks, Илья, too.)

This leaves only the second piece of my question: is it reasonable to
run multiple services through one (pod of) haproxies and let the
haproxies (all with the same config) tease them apart based on hostname
and maybe part of the path?
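
Concretely, what I have in mind is something like this; the hostnames,
backend names, addresses, and ports are made up:

    frontend https-in
        bind :443 ssl crt /etc/haproxy/certs/
        # route on the Host header, optionally refined by a path prefix
        acl host_app1  hdr(host) -i app1.example.com
        acl host_app2  hdr(host) -i app2.example.com
        acl is_api     path_beg  /api
        use_backend app1_api if host_app1 is_api
        use_backend app1     if host_app1
        use_backend app2     if host_app2
        default_backend app1

    backend app1
        server web1 10.0.0.11:3000 check

    backend app1_api
        server api1 10.0.0.21:8080 check

    backend app2
        server web2 10.0.0.12:3000 check

Each additional service would then just be another acl plus a
use_backend rule in the same frontend.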

Jeff


Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
example:

pod1:

primary: 1.1.1.2
secondary: 1.1.1.3
virtual: 1.1.1.1


pod2:

primary: 1.1.1.5
secondary: 1.1.1.6
virtual: 1.1.1.4


The mechanism that moves the virtual IP between the two nodes is VRRP
(implemented by tools like keepalived).


Then on the DNS server, you can use A records for 1.1.1.1 and 1.1.1.4.
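
For pod1 above, a minimal keepalived sketch on the primary (1.1.1.2)
might look roughly like this; the interface name, router id, priority,
and health check are assumptions, and the secondary (1.1.1.3) would run
the same block with state BACKUP and a lower priority:

    vrrp_script chk_haproxy {
        script "killall -0 haproxy"   # succeeds while an haproxy process exists
        interval 2
    }

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 101
        advert_int 1
        virtual_ipaddress {
            1.1.1.1/24                # the pod's VIP
        }
        track_script {
            chk_haproxy
        }
    }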


Re: haproxy architecture

2019-05-20 Thread Jeff Abrahamson
Thanks, Alex.

I'd understood that, but not the mechanism.  Each host has an A record. 
Did I miss a DNS mapping type for virtual addresses?  Or do the two
hosts run a protocol between them and some other party?  (But if one of
my haproxies dies, what is the mechanism of notification?)

Said differently, I'm a client and I want to send a packet to
service.example.com (a CNAME).  I do a DNS lookup, I get an IP address,
1.2.3.4.  (Did the CNAME map only to 1.2.3.4?)  I establish an https
connection to 1.2.3.4.  Who/what on the network decides that that
connection terminates at service2.example.com and not at
service1.example.com?

Does this mean that Let's Encrypt is incapable of issuing SSL certs
because my IP is served by different hosts at different moments?

Sorry if my questions are overly basic.  I'm just trying to get a grip
on what this means and how to do it.

Jeff


Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
Jeff-

VIP = virtual IP: a shared IP between nodes.  One node is primary and
the other is a hot standby.  If the heartbeat between the two fails, the
secondary becomes primary.

The end application/user only needs to know about the virtual IP.  In
DNS you can then publish any number of these pods to distribute the load
among them.


We run this setup on Apache Mesos with about 100 Docker containers and 4
haproxy pods.




Re: haproxy architecture

2019-05-20 Thread Илья Шипицин
ExaBGP?


Re: haproxy architecture

2019-05-20 Thread Jeff Abrahamson
Thanks.  Have you tried that, bringing down an haproxy during some high
load period and watching traffic to see how long it takes for all
traffic to migrate to the remaining haproxy?  My fear (see below) is
that that time is quite long and still exposes you to quite a lot of
failed clients.  (It's better than losing one's sole haproxy, to be sure.)

In any case, and more concretely, that raises a few additional questions
for me, mostly because networking isn't my specialty.

*1.  VIP addresses.*  I've not managed to fully understand how VIP
addresses work.  Everything I've read either (1) seems to be using the
term incorrectly, with a sort of short-TTL DNS resolution and a manual
fail-over, or (2) requires that the relevant servers act as routers
(OSPF, etc.) if not outright playing link-level tricks.  On (1), we try
to engineer our infra so that our troubles are handled automatically or
by machines before being handled by us.  I worry that (2) is a long
rabbit hole, but I'd still like to understand what that rabbit hole is,
either in case I'm wrong or so that I understand when the right time
comes.

*2.  RR DNS.*  People talk about RR DNS for availability, but I've seen
no evidence that it's applicable beyond load balancing.  Indeed, RFC
1794 (1995) only talks about load balancing.  As long as the haproxy
hosts are all up, clients pick an address at random (I think; I haven't
found written evidence of that as a client requirement).  But if an
haproxy goes down, every client has to time out and try again
independently, which doesn't make me happy.  It might still be the best
I can do.

I'm very open to pointers or insights.  And I'm quite aware that the
relationship between availability and cost is super-linear.  My goal is
to engineer the best solutions we can with the constraints we have and
to understand why we do what we do.

Anecdotally, I noticed a while back that Google and others, which used
to resolve one name to multiple IPs, now resolve to a single IP.

Jeff Abrahamson
http://p27.eu/jeff/
http://transport-nantes.com/


-- 

Jeff Abrahamson
+33 6 24 40 01 57
+44 7920 594 255

http://p27.eu/jeff/
http://transport-nantes.com/



Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
You could make it a bit more agile and scale it:

you can run haproxy in "pods": two haproxy instances running keepalived
between them, with the VIP as the DNS record, so that if one haproxy
instance dies, the other can take over.  Set up more pods and use DNS
round robin.
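
On the DNS side that's just several A records for one name, one per pod
VIP; a hypothetical zone fragment (the name and addresses are made up):

    ; one service name, one A record per pod VIP -> clients round-robin
    service.example.com.   300   IN   A   1.1.1.1
    service.example.com.   300   IN   A   1.1.1.4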



On Mon, May 20, 2019 at 8:59 AM Jeff Abrahamson  wrote:

> We set up an haproxy instance to front several rails servers.  It's
> working well, so we're quickly wanting to use it for other services.
>
> Since the load on the haproxy host is low (even minuscule), we're
> tempted to push everything through a single haproxy instance and let
> haproxy decide, based on the requested hostname, which backend to
> dispatch requests to.
>
> Is there any good wisdom here on how much to pile onto a single haproxy
> instance or when to stop?
>
> --
>
> Jeff Abrahamson
> http://p27.eu/jeff/
> http://transport-nantes.com/
>