Arnold Krille <[email protected]> writes:

> If I understand you correctly, the problem only arises when adding new
> bridges while the cluster is running. And your vms will (rightfully)
> get restarted when you add a non-running bridge-resource to the
> cloned dependency-group.

Exactly.

> You might be able to circumvent this problem: Define the bridge as a
> single cloned resource and start it. When it runs on all nodes, remove
> the clones for the single resource and add the resource to your
> dependency-group in a single edit. On commit, the cluster should see
> that the new resource in the group is already running and thus not
> restart the VMs.

Thanks, this sounds like a plan, I'll test it!
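For the record (mostly as a note to myself), in crm shell the suggested sequence might look roughly like this. Resource and bridge names (p-br1, cl-br1, br1) are placeholders, and I'm assuming the ocf:heartbeat:iface-bridge agent from a reasonably recent resource-agents package:

```
# Step 1: define the bridge as a standalone clone and start it everywhere.
crm configure primitive p-br1 ocf:heartbeat:iface-bridge \
    params bridge_name=br1
crm configure clone cl-br1 p-br1

# Step 2: once cl-br1 reports Started on all nodes, delete the clone and
# add p-br1 to the existing cloned dependency-group in one edit/commit
# (e.g. via "crm configure edit"), so the running VMs are not restarted.
```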

> On a side-note: I had the (sad) experience that it's easier to
> configure such stuff outside of pacemaker/corosync and use the cluster
> only for the reliable ha things.

What do you mean by "reliable"?  What did you experience (if you can put
it in a few sentences)?

> Configuring several systems into a sane state is more a job for
> configuration-management such as chef, puppet or at least csync2 (to
> sync the configs).

I'm not a big fan of configuration management systems, but they probably
have their place.  None is present in the current setup, though, so
setting one up for bridge configuration seemed more complicated than
extending the cluster.  We'll see...
-- 
Regards,
Feri.
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
