On 15/08/19 10:59 +0100, solarmon wrote:
> I have a two-node cluster setup where each node is multi-homed over two
> separate external interfaces - net4 and net5 - that can have traffic
> load-balanced between them.
> 
> I have created multiple virtual IP resources (grouped together) that should
> be active on only one of the two nodes.
> 
> I have created ethmonitor resources for net4 and net5 and have created a
> constraint for the virtual IP resource group.
> 
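For reference, a minimal sketch of the kind of configuration described above
(assuming pcs, ocf:heartbeat:IPaddr2 and ocf:heartbeat:ethmonitor; the
resource names and addresses below are made up) might look roughly like:

  # hypothetical VIPs grouped together (addresses are placeholders)
  pcs resource create vip1 ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24 --group g_vips
  pcs resource create vip2 ocf:heartbeat:IPaddr2 ip=192.0.2.11 cidr_netmask=24 --group g_vips

  # ethmonitor clones watching each external interface; each one maintains a
  # node attribute (ethmonitor-net4 / ethmonitor-net5 by default, if I recall
  # the RA's defaults correctly) that is 1 while the link is considered up
  pcs resource create p_ethmon_net4 ocf:heartbeat:ethmonitor interface=net4 clone
  pcs resource create p_ethmon_net5 ocf:heartbeat:ethmonitor interface=net5 clone

  # one constraint per interface: ban the VIP group from any node where
  # that interface is not reported as up
  pcs constraint location g_vips rule score=-INFINITY ethmonitor-net4 ne 1
  pcs constraint location g_vips rule score=-INFINITY ethmonitor-net5 ne 1
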
> When one of the net4/net5 interfaces is taken

clarification request: taken _down_?

(and if so, note that there is a persistent misconception that
ifdown is equivalent to pulling the cable out or physically cutting it,
and I am not sure whether ethmonitor happens to be as sensitive to
this difference as corosync was for years while people kept
getting it wrong)
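
As a rough illustration of that difference (output abbreviated and written
from memory, so take it as a sketch only):

  # administratively downing the interface clears the kernel's UP flag:
  ip link set dev net4 down
  ip -br link show net4
  #   net4  DOWN  52:54:00:xx:xx:xx  <BROADCAST,MULTICAST>

  # pulling the cable instead leaves the interface administratively UP,
  # just without carrier - a different state from the kernel's point of view:
  ip -br link show net4
  #   net4  DOWN  52:54:00:xx:xx:xx  <NO-CARRIER,BROADCAST,MULTICAST,UP>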

> on the active node (where the virtual IPs are), the virtual IP
> resource group switches to the other node. This is working as
> expected.
> 
> However, when either of the net4/net5 interfaces is down on BOTH nodes -
> for example, if net4 is down on BOTH nodes - the cluster seems to get
> itself into a flapping state where the virtual IP resources keep
> becoming available and then unavailable, or the virtual IP resource
> group isn't running on any node.
> 
> Since the net4 and net5 interfaces can have traffic load-balanced across
> them, it is acceptable for the virtual IP resources to be running on
> either node, even if the same interface (for example, net4) is down on
> both nodes, since the other interface (for example, net5) is still
> available on both nodes.
> 
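(if the intent is that a node only becomes ineligible when both of its
external interfaces are down, one way that is sometimes expressed - a hedged
sketch only, not something I have tested here - is a single location rule
combining both ethmonitor attributes, e.g. with pcs:

  # hypothetical: exclude a node only when BOTH interfaces are reported down there
  pcs constraint location g_vips rule score=-INFINITY ethmonitor-net4 ne 1 and ethmonitor-net5 ne 1

as opposed to one -INFINITY rule per interface, which bans every node as
soon as the same interface is down cluster-wide)
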
> What is the recommended way to configure the ethmonitor and constraint
> resources for this type of multi-homed setup?

-- 
Jan (Poki)
