Hi,

I built a Pacemaker cluster to manage virtual machines (VMs).  Storage
is provided by cLVM volume groups; network access by software bridges.
I wanted to avoid maintaining precise VG and bridge dependencies for
each VM, so I created two resource groups:

group storage dlm clvmd vg-vm vg-data
group network br150 br151

I cloned these groups, and thus every VM resource uniformly got these
two dependencies only, which makes it easy to add new VM resources:

colocation cl-elm-network inf: vm-elm network-clone
colocation cl-elm-storage inf: vm-elm storage-clone
order o-elm-network inf: network-clone vm-elm
order o-elm-storage inf: storage-clone vm-elm

Of course the network and storage groups do not even model their
internal dependencies correctly: the different VGs and bridges are
actually independent and unordered, while a group implies ordering.
But this is not a serious limitation in my case.

The problem is that if I want to extend the network group with a new
bridge, for example, the cluster wants to restart all running VM
resources while starting the new bridge.  I found this out by changing
a shadow copy of the CIB and running crm_simulate --run --live-check on
it.  This is perfectly understandable given the strict ordering and
colocation constraints above, but undesirable in these cases.

The actual restarts can be avoided by putting the cluster in
maintenance mode beforehand, starting the bridge on each node manually,
changing the configuration, and then taking the cluster out of
maintenance mode.  But this is quite a chore, and I did not find a way
to make sure everything would be fine, like seeing the planned cluster
actions after the probes for the new bridge resource have run (when
there should not be anything left to do).  Is there a way to regain my
peace of mind during such operations?  Or is there at least a way to
order the cluster to start the new bridge clones, so that I don't have
to invoke the resource agent by hand on each node, thus reducing the
chance of human mistakes?
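For the record, the workaround amounts to something like this sketch
(assuming crmsh; the exact resource agent and its parameters are
site-specific placeholders here):

```shell
# Extend the network group without restarting the VMs.
crm configure property maintenance-mode=true   # cluster stops managing
                                               # all resources
# On every node, start the new bridge by hand, e.g. by invoking its
# resource agent directly (agent path and parameters are placeholders):
#   OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/<provider>/<agent> start
crm configure edit                             # add the new bridge to the
                                               # network group
crm_simulate --run --live-check                # verify: nothing should be
                                               # left to do
crm configure property maintenance-mode=false  # hand control back to the
                                               # cluster
```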

The bridge configuration was moved into the cluster to avoid having to
maintain it in each node's OS separately.  The network and storage
resource groups provide a great concise status output with only the VM
resources expanded.  These are bonuses, but not requirements; if
sensible maintenance is not achievable with this setup, everything is
subject to change.  Actually, I'm starting to feel that simplifying the
VM dependencies may not be viable in the end, but I wanted to ask for
outside ideas before overhauling the whole configuration.
-- 
Thanks in advance,
Feri.
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems