Jeff-

VIP - virtual IP.  It's a single IP address shared between two nodes: one
node is primary and the other is a hot standby.  If the heartbeat between
them fails, the standby takes over the VIP and becomes primary.
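
In our setup keepalived provides that heartbeat (VRRP).  A minimal sketch
of what the primary node's config might look like (the interface name,
router id, and addresses below are placeholders, not our real values):

    # /etc/keepalived/keepalived.conf on the primary node
    vrrp_instance HAPROXY_VIP {
        state MASTER            # the standby node uses "state BACKUP"
        interface eth0          # NIC that carries the VIP
        virtual_router_id 51    # must match on both nodes
        priority 150            # standby gets a lower value, e.g. 100
        advert_int 1            # heartbeat (VRRP advertisement) interval, seconds
        virtual_ipaddress {
            192.0.2.10/24       # the shared VIP that clients/DNS point at
        }
    }

The node with the higher priority holds the VIP; if its advertisements
stop, the other node claims the address.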

The end application/user only needs to know about the virtual IP.  So you
can create X of these pods and list each pod's VIP in DNS to distribute the
load among them.
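
For example (placeholder hostname, TTL, and addresses), publishing two
pods' VIPs under one name gives you round-robin across the pods:

    ; DNS zone records - the addresses are the pods' VIPs (placeholders here)
    app.example.com.   300  IN  A  192.0.2.10   ; VIP of pod 1
    app.example.com.   300  IN  A  192.0.2.20   ; VIP of pod 2

Each pod's VIP stays reachable through a node failure, so the round-robin
entries themselves don't go stale when one haproxy dies.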


We run this setup on Apache Mesos with about 100 Docker containers and 4
HAProxy pods.




On Mon, May 20, 2019 at 10:49 AM Jeff Abrahamson <j...@p27.eu> wrote:

> Thanks.  Have you tried that, bringing down an haproxy during some high
> load period and watching traffic to see how long it takes for all traffic
> to migrate to the remaining haproxy?  My fear (see below) is that that time
> is quite long and still exposes you to quite a lot of failed clients.  (It's
> better than losing one's sole haproxy, to be sure.)
>
> In any case, and more concretely, that raises a few additional questions
> for me, mostly due to my specialty not being networking.
>
> *1.  VIP addresses.*  I've not managed to fully understand how VIP
> addresses work.  Everything I've read either (1) seems to be using the term
> incorrectly, with a sort of short TTL DNS resolution and a manual
> fail-over, or (2) requires that the relevant servers act as routers (OSPF
> <https://en.wikipedia.org/wiki/Open_Shortest_Path_First>, etc.) if not
> outright playing link-level tricks.  On (1), we try to engineer our infra
> so that our troubles will be handled automatically or by machines before
> being handled by us.  I worry that (2) is a long rabbit hole, but I'd still
> like to understand what that rabbit hole is, either in case I'm wrong or so
> that I understand when it's the right time.
>
> *2.  RR DNS.  *People talk about RR DNS for availability, but I've seen
> no evidence that it's applicable beyond load balancing.  Indeed, RFC 1794
> <https://tools.ietf.org/html/rfc1794> (1995) only talks about load
> balancing.  As long as the haproxy hosts are all up, clients pick an
> address at random (I think; I haven't found written evidence of that as a
> client requirement).  But if an haproxy goes down, every client has to time
> out and try again independently, which doesn't make me happy.  It might
> still be the best I can do.
>
> I'm very open to pointers or insights.  And I'm quite aware that the
> relationship between availability and cost is super-linear.  My goal is to
> engineer the best solutions we can with the constraints we have and to
> understand why we do what we do.
>
> Anecdotally, I noticed a while back that Google and others, which used to
> resolve one name to multiple IPs, now resolve to a single IP.
>
> Jeff Abrahamson
> http://p27.eu/jeff/
> http://transport-nantes.com/
>
>
> On 20/05/2019 15:04, Alex Evonosky wrote:
>
> You could make it a bit more agile and scale it:
>
> You can run them in "pods": two haproxy instances running keepalived
> between them, with the VIP as the DNS record, so that if one HAProxy
> instance were to die, the alternate HAProxy instance takes over.
> Set up more pods and use DNS round robin.
>
>
>
> On Mon, May 20, 2019 at 8:59 AM Jeff Abrahamson <j...@p27.eu> wrote:
>
>> We set up an haproxy instance to front several Rails servers.  It's
>> working well, so we quickly want to use it for other services.
>>
>> Since the load on the haproxy host is low (even minuscule), we're
>> tempted to push everything through a single haproxy instance and to let
>> haproxy decide, based on the requested hostname, which backend to
>> dispatch requests to.
>>
>> Is there any good wisdom here on how much to pile onto a single haproxy
>> instance or when to stop?
>>
>> --
>>
>> Jeff Abrahamson
>> http://p27.eu/jeff/
>> http://transport-nantes.com/
>>
>>
>>
>>
>> --
>
> Jeff Abrahamson
> +33 6 24 40 01 57
> +44 7920 594 255
> http://p27.eu/jeff/
> http://transport-nantes.com/
>
>
