Hi,

I've got a heartbeat 3.0.beta1+hg20091102-2~bpo50+1 + pacemaker 
1.0.6+hg20091102-4~bpo50+1 cluster with drbd (8.3.6), on which I run a 
few Xen guests. I use the Xen OCF resource agent for this and have live 
migration working well: allow-migrate is set, the Xen guests use 
block-drbd disks, and dopd is configured for drbd (I stopped using 
corosync because it has no drbd dopd support).

Everything works as expected so far: live migration is fine, and 
rebooting a node with running guests live-migrates them to the other 
node, as long as I do not put any kind of constraint on my Xen resources.

I've been trying many different combinations of constraints: colocation 
constraints, colocation resource sets, location constraints, and 
orderings that only impose a start order (so as not to start all the 
guests at once). But every single time, this breaks live migration when 
(for example) I reboot a node with active Xen resources: the guests get 
stopped and then started on the other node instead of migrating.
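For reference, the kinds of constraints I tried look roughly like this 
in crm shell syntax (resource and node names such as xen-vm1 and node1 
are placeholders, not my actual configuration):

```
# colocation: keep two guests on the same node (inf makes it mandatory)
colocation xen-together inf: xen-vm1 xen-vm2

# location: prefer a particular node for a guest
location xen-vm1-pref xen-vm1 100: node1

# ordering: start xen-vm1 before xen-vm2
order xen-start-order inf: xen-vm1 xen-vm2
```

Each of these variants, on its own, was enough to turn a live migration 
into a stop/start on failover.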

This has led me to wonder if I'm really missing something here. What can 
I do to achieve one (or both) of these goals:

a) "Try" (not "must") to keep all the xen resource on the same node 
(optionally by binding them to specific xen resource), and *live migrate 
them there if they are elsewhere*

b) "try" to keep an individual xen resource on a specific node, and if 
it can't be there anymore (due to reboot or standby), live migrate it 
elsewhere (instead of stopping and starting)

We can manage without all this by putting a node into standby and having 
all the guests live-migrate away (which happens in parallel; any hints 
on how to serialize those migrations are welcome too). But it's not very 
autonomous. Perhaps that's deliberate?

I'd appreciate any pointers :)

Regards,

Gerben
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
