On 5/20/2019 6:58 AM, Jeff Abrahamson wrote:
We set up an haproxy instance to front several rails servers. It's
working well, so we're quickly wanting to use it for other services.
Since the load on the haproxy host is low (even minuscule), we're
tempted to push everything through a single haproxy instance and to let
haproxy notice based on …
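For reference, a minimal haproxy.cfg sketch of that kind of setup; the bind port and Rails backend addresses here are made up for illustration:

```
# Hypothetical minimal haproxy.cfg fronting two Rails app servers.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend rails

backend rails
    balance roundrobin
    # 'check' enables active health checks, so haproxy notices dead servers
    server rails1 10.0.0.11:3000 check
    server rails2 10.0.0.12:3000 check
```

Adding another service is then just another frontend/backend pair in the same config.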
I (personally) think this is a matter of preference and load, and may be
unique to each situation. In my instance I have two sets of pods:
internal and external.
Internal is for any CockroachDB, MariaDB, and Redis connections.
External is for LetsEncrypt SSL terminations and front-end …
Ah, cool, thanks very much, that seems to go a long way toward filling the
holes in my knowledge. (And thanks, Ilya, too.)
This leaves only the second piece of my question: am I being reasonable
running multiple services through one (pod of) haproxies and letting the
haproxies (all with the same config) …
example:
pod1:
primary: 1.1.1.2
secondary: 1.1.1.3
virtual: 1.1.1.1
pod2:
primary: 1.1.1.5
secondary: 1.1.1.6
virtual: 1.1.1.4
The mechanism to utilize the virtual IP is VRRP (apps like keepalived).
Then on the DNS server, you can use A records for 1.1.1.1 and 1.1.1.4
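A minimal keepalived sketch of pod1 above, for the primary node (1.1.1.2); the interface name and the virtual_router_id are assumptions:

```
# Hypothetical keepalived.conf for pod1's primary.
# The secondary (1.1.1.3) would use state BACKUP and a lower priority.
vrrp_instance pod1 {
    state MASTER
    interface eth0            # assumption: the ViP lives on eth0
    virtual_router_id 51      # must match on both nodes
    priority 100              # secondary might use 90
    advert_int 1              # VRRP heartbeat interval, in seconds
    virtual_ipaddress {
        1.1.1.1               # the shared ViP
    }
}
```

When the backup stops hearing VRRP advertisements from the master, it promotes itself and claims 1.1.1.1.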
On Mon, May 20, 2019, …
Thanks, Alex.
I'd understood that, but not the mechanism. Each host has an A record.
Did I miss a DNS mapping type for virtual addresses? Or do the two
hosts run a protocol between them and some other party? (But if one of
my haproxies dies, what is the mechanism of notification?)
Said differently, …
Jeff-
ViP = Virtual IP: this is a shared IP between nodes. One node is primary
and the other is hot standby. If the heartbeat fails between the two, then
the secondary becomes primary.
The end application/user only needs to know about the virtual IP. So in
DNS, you can create X amount of these …
ExaBGP?
On Mon, May 20, 2019 at 20:01, Jeff Abrahamson wrote:
> Thanks. Have you tried that, bringing down an haproxy during some high
> load period and watching traffic to see how long it takes for traffic all
> to migrate to the remaining haproxy? My fear (see below) is that that time
> is quite long and still exposes you to quite a lot of failed clients.
Thanks. Have you tried that, bringing down an haproxy during some high
load period and watching traffic to see how long it takes for traffic
all to migrate to the remaining haproxy? My fear (see below) is that
that time is quite long and still exposes you to quite a lot of failed
clients. (It's b…
You could make it a bit more agile and scale it:
you can run them in "pods", such as two haproxy instances running
keepalived between them, and use the ViP as the DNS record, so if an
HAproxy instance were to die, the alternate HAproxy instance can take over.
Set more pods up and use DNS round robin.
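Concretely, with the two example ViPs from the pod layout earlier in the thread, the round-robin record set might look like this hypothetical zone fragment (the name and TTL are made up):

```
; Hypothetical zone fragment: one A record per pod's ViP,
; so resolvers rotate clients across pods (round robin).
app.example.com.  60  IN  A  1.1.1.1
app.example.com.  60  IN  A  1.1.1.4
```

A short TTL keeps clients from caching a dead pod's ViP for long, though within a pod the VRRP failover itself needs no DNS change at all.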