Hello, Pacemaker gurus,

My HA setup (two active/slave nodes and one standby node) contains one DRBD master/slave resource group and one simple lsb resource. I have configured location constraints to prefer the active node, and I also want resource-stickiness so that resources stay on the node they are currently running on, to avoid unnecessary switchovers. The expected topology is as follows:
ms_drbd_ssn
     |
   fs_ssn
     |
   ip_ssn
     |
    ssn     ip_sst
      \      /
       \    /
        sst

ip_ssn and ip_sst are two independent VIPs, and the sst service depends on both ip_sst and ssn. In addition, I prefer ssn and sst to start initially on node1, so node1 should hold both VIPs at resource startup.

This setup almost works, except for the following two issues:

1. When the failed node comes back online, all resources are restarted, which is completely unnecessary.
2. On failover from node1 to node2, all resources are started on node2; however, when node1 comes back online, the sst service fails back to node1, which is not what I want.

My Pacemaker version is 1.1.7-6.el6, running on CentOS 6.3. Is something misconfigured in my setup? Could you please share your thoughts? Any advice is greatly appreciated. Thanks.

Below is my configuration:

------------------CONFIG START--------------------------------------
node node3 \
        attributes standby="on"
node node1
node node2
primitive drbd_ssn ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="15s"
primitive fs_ssn ocf:heartbeat:Filesystem \
        op monitor interval="15s" \
        params device="/dev/drbd0" directory="/drbd" fstype="ext3" \
        meta target-role="Started"
primitive ip_ssn ocf:heartbeat:IPaddr2 \
        params ip="192.168.241.1" cidr_netmask="32" \
        op monitor interval="15s" \
        meta target-role="Started"
primitive ip_sst ocf:heartbeat:IPaddr2 \
        params ip="192.168.241.2" cidr_netmask="32" \
        op monitor interval="15s" \
        meta target-role="Started"
primitive sst lsb:sst \
        op monitor interval="15s" \
        meta target-role="stopped"
primitive ssn lsb:ssn \
        op monitor interval="15s" \
        meta target-role="stopped"
ms ms_drbd_ssn drbd_ssn \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
location sst_ip_prefer ip_sst 50: node1
location drbd_ssn_prefer ms_drbd_ssn 50: node1
colocation fs_ssn_coloc inf: ip_ssn fs_ssn
colocation fs_on_drbd_coloc inf: fs_ssn ms_drbd_ssn:Master
colocation sst_ip_coloc inf: sst ip_sst
colocation ssn_ip_coloc inf: ssn ip_ssn
order ssn_after_drbd inf: ms_drbd_ssn:promote fs_ssn:start
order ip_after_fs inf: fs_ssn:start ip_ssn:start
order sst_after_ip inf: ip_sst:start sst:start
order sst_after_ssn inf: ssn:start sst:start
order ssn_after_ip inf: ip_ssn:start ssn:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.8-7.el6-394e906" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="3" \
        stonith-enabled="false"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
-------------------CONFIG END----------------------------------------

Best Regards,
Xiaomin
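P.S. If the allocation scores would help in diagnosing the failback behaviour, I believe they can be dumped read-only from the live CIB with crm_simulate; a minimal sketch, assuming it is run on a cluster node with access to the live CIB:

    # print the current allocation scores without changing the cluster
    crm_simulate -sL

I can post that output as well if it is useful.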