Hi all,
I'm testing the following configuration with two nodes:
Clone: storage-clone
  Meta Attrs: interleave=true target-role=Started
  Group: storage
    Resource: dlm (class=ocf provider=pacemaker type=controld)
    Resource: lockd (class=ocf provider=heartbeat type=lvmlockd)
Clone: gfs2-clone
  Group: gfs2
    Resource: gfs2-lvm (class=ocf provider=heartbeat type=LVM-activate)
      Attributes: activation_mode=shared vg_access_mode=lvmlockd vgname=vgshared lvname=gfs2
    Resource: gfs2-fs (class=ocf provider=heartbeat type=Filesystem)
      Attributes: directory=/srv/gfs2 fstype=gfs2 device=/dev/vgshared/gfs2
Clone: ocfs2-clone
  Group: ocfs2
    Resource: ocfs2-lvm (class=ocf provider=heartbeat type=LVM-activate)
      Attributes: activation_mode=shared vg_access_mode=lvmlockd vgname=vgshared lvname=ocfs2
    Resource: ocfs2-fs (class=ocf provider=heartbeat type=Filesystem)
      Attributes: directory=/srv/ocfs2 fstype=ocfs2 device=/dev/vgshared/ocfs2

Ordering Constraints:
  storage-clone then gfs2-clone (kind:Mandatory) (id:gfs2_after_storage)
  storage-clone then ocfs2-clone (kind:Mandatory) (id:ocfs2_after_storage)
Colocation Constraints:
  gfs2-clone with storage-clone (score:INFINITY) (id:gfs2_with_storage)
  ocfs2-clone with storage-clone (score:INFINITY) (id:ocfs2_with_storage)
When node2 is put into standby, the resources stop running there. However, when
node2 is brought back online, the resources on node1 also stop and then start
again, which is a bit unexpected.
Could the dependency between the common storage group and the gfs2/ocfs2 groups
layered on top of it be written in some other way to prevent this restart?
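
For example, would it help to set interleave=true on the dependent clones as
well (right now it is set only on storage-clone)? Just a rough, untested sketch
of what I have in mind:

  # let each gfs2/ocfs2 clone instance depend only on the storage-clone
  # instance on its own node, rather than on the clone as a whole
  pcs resource meta gfs2-clone interleave=true
  pcs resource meta ocfs2-clone interleave=true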
--
Valentin
_______________________________________________
Users mailing list: [email protected]
https://lists.clusterlabs.org/mailman/listinfo/users
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org