You can use 'kind' and 'symmetrical' to control order constraints. The default value for symmetrical is 'true', which means the constraint also applies in reverse when stopping: in order to stop dummy1 (the 'first' resource), the cluster has to stop dummy2 (the 'then' resource) as well. (A syntax sketch follows below the quoted message.)

Best Regards,
Strahil Nikolov

On Fri, Apr 8, 2022 at 15:29, ChittaNagaraj, Raghav <[email protected]> wrote:

Hello Team,

Hope you are doing well.

I have a 4-node Pacemaker cluster where I created the clone dummy resources test-1, test-2, and test-3 as below:

$ sudo pcs resource create test-1 ocf:heartbeat:Dummy op monitor timeout="20" interval="10" clone
$ sudo pcs resource create test-2 ocf:heartbeat:Dummy op monitor timeout="20" interval="10" clone
$ sudo pcs resource create test-3 ocf:heartbeat:Dummy op monitor timeout="20" interval="10" clone

Then I ordered them so test-2-clone starts after test-1-clone and test-3-clone starts after test-2-clone:

$ sudo pcs constraint order test-1-clone then test-2-clone
Adding test-1-clone test-2-clone (kind: Mandatory) (Options: first-action=start then-action=start)
$ sudo pcs constraint order test-2-clone then test-3-clone
Adding test-2-clone test-3-clone (kind: Mandatory) (Options: first-action=start then-action=start)

Here are my clone sets (snippet of "pcs status" output pasted below):

  * Clone Set: test-1-clone [test-1]:
    * Started: [ node2_a node2_b node1_a node1_b ]
  * Clone Set: test-2-clone [test-2]:
    * Started: [ node2_a node2_b node1_a node1_b ]
  * Clone Set: test-3-clone [test-3]:
    * Started: [ node2_a node2_b node1_a node1_b ]

Then I restart test-1 on just node1_a:

$ sudo pcs resource restart test-1 node1_a
Warning: using test-1-clone... (if a resource is a clone, master/slave or bundle you must use the clone, master/slave or bundle name)
test-1-clone successfully restarted

This causes the test-2 and test-3 clones to restart on all Pacemaker nodes, when my intention is for them to restart on just node1_a. Below is the log tracing seen on the Designated Controller, NODE1-B:

Apr 07 20:25:01 NODE1-B pacemaker-schedulerd[95746]: notice: * Stop test-1:1 ( node1_a ) due to node availability
Apr 07 20:25:03 NODE1-B pacemaker-schedulerd[95746]: notice: * Restart test-2:0 ( node1_b ) due to required test-1-clone running
Apr 07 20:25:03 NODE1-B pacemaker-schedulerd[95746]: notice: * Restart test-2:1 ( node1_a ) due to required test-1-clone running
Apr 07 20:25:03 NODE1-B pacemaker-schedulerd[95746]: notice: * Restart test-2:2 ( node2_b ) due to required test-1-clone running
Apr 07 20:25:03 NODE1-B pacemaker-schedulerd[95746]: notice: * Restart test-2:3 ( node2_a ) due to required test-1-clone running

The above is a representation of the observed behavior using dummy resources. Is this the expected behavior of cloned resources? My goal is to be able to restart test-2-clone and test-3-clone on just the node that experienced the test-1 restart, rather than on all the other nodes in the cluster.

Please let us know if any additional information would help you provide feedback.

Thanks for your help!

- Raghav
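For reference, a minimal sketch of how an order constraint like the one above could be recreated with explicit kind and symmetrical settings. The kind=Optional / symmetrical=false values and the <order-constraint-id> placeholder are illustrative only, not a tested recommendation; check `pcs constraint order --help` on your pcs version before relying on them:

$ sudo pcs constraint --full
# note the id of the existing test-1-clone -> test-2-clone order constraint, then remove it
$ sudo pcs constraint remove <order-constraint-id>
$ sudo pcs constraint order start test-1-clone then start test-2-clone kind=Optional symmetrical=false

Note that kind=Optional makes the start ordering advisory rather than mandatory, and symmetrical=false means the reverse (stop) ordering is no longer enforced, so stopping test-1-clone would not force test-2-clone to stop first.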
_______________________________________________ Manage your subscription: https://lists.clusterlabs.org/mailman/listinfo/users ClusterLabs home: https://www.clusterlabs.org/
