Try setting the interleave meta option (interleave="true") on the ms resources. With interleaving enabled, your order constraints are evaluated per node, so ms_pimd only waits for the ms_Tmgr instance on the same node rather than for every ms_Tmgr instance cluster-wide.
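For example, in crmsh syntax that would look roughly like the following (only the meta lines change from your configuration; note also that "master-max-node" is not a standard ms meta attribute, so I assume "master-node-max" was intended):

    ms ms_Tmgr Tmgr \
        meta master-max="1" master-node-max="1" clone-max="2" \
            clone-node-max="1" notify="true" interleave="true"
    ms ms_pimd pimd \
        meta master-max="1" master-node-max="1" clone-max="2" \
            clone-node-max="1" notify="true" interleave="true"

With interleave="true" on both, the constraint "order pimd-after-TM inf: ms_Tmgr:promote ms_pimd:start" should let the standby's pimd instance wait only on the Tmgr instance running on that same node.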
On Thu, Dec 29, 2011 at 2:43 PM, neha chatrath <nehachatr...@gmail.com> wrote:
> Hello,
>
> I have a cluster with 2 nodes and multiple Master/Slave resources.
> The ordering of resources on the master node is achieved using the order
> option of crm. When the standby node is started, the processes are started
> one after another.
> Following is the configuration info:
>
> primitive ClusterIP ocf:mcg:MCG_VIPaddr_RA \
>     params ip="192.168.113.67" cidr_netmask="255.255.255.0" nic="eth0:1" \
>     op monitor interval="40" timeout="20"
> primitive Rmgr ocf:mcg:RM_RA \
>     op monitor interval="60" role="Master" timeout="30" on-fail="restart" \
>     op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
> primitive Tmgr ocf:mcg:TM_RA \
>     op monitor interval="60" role="Master" timeout="30" on-fail="restart" \
>     op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
> primitive pimd ocf:mcg:PIMD_RA \
>     op monitor interval="60" role="Master" timeout="30" on-fail="restart" \
>     op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
> ms ms_Rmgr Rmgr \
>     meta master-max="1" master-max-node="1" clone-max="2" \
>     clone-node-max="1" notify="true"
> ms ms_Tmgr Tmgr \
>     meta master-max="1" master-max-node="1" clone-max="2" \
>     clone-node-max="1" notify="true"
> ms ms_pimd pimd \
>     meta master-max="1" master-max-node="1" clone-max="2" \
>     clone-node-max="1" notify="true"
> colocation ip_with_Rmgr inf: ClusterIP ms_Rmgr:Master
> colocation ip_with_Tmgr inf: ClusterIP ms_Tmgr:Master
> colocation ip_with_pimd inf: ClusterIP ms_pimd:Master
> order TM-after-RM inf: ms_Rmgr:promote ms_Tmgr:start
> order ip-after-pimd inf: ms_pimd:promote ClusterIP:start
> order pimd-after-TM inf: ms_Tmgr:promote ms_pimd:start
> property $id="cib-bootstrap-options" \
>     dc-version="1.0.11-db98485d06ed3fe0fe236509f023e1bd4a5566f1" \
>     cluster-infrastructure="Heartbeat" \
>     no-quorum-policy="ignore" \
>     stonith-enabled="false"
> rsc_defaults $id="rsc-options" \
>     migration-threshold="3" \
>     resource-stickiness="100"
>
> I have a system requirement in which the start of one resource (e.g. pimd)
> depends on the successful start of another resource (e.g. Tmgr).
> Everything runs smoothly on the master node, thanks to the ordering
> constraints and the few seconds' delay until a resource is promoted to
> Master. But on the standby node the resources are started one after another
> without any delay, so the standby node behaves erratically.
>
> Is there a way to serialize/control resource start-up on the standby node?
>
> Thanks and regards
> Neha Chatrath
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org