On 4/29/2011 at 03:36 AM, "Stallmann, Andreas" <astallm...@conet.de> wrote:
> Hi!
>
> I configured my nodes *not* to auto failback after a defective node comes
> back online. This worked nicely for a while, but now it doesn't (and,
> honestly, I do not know what was changed in the meantime).
>
> What we do: We disconnect the two (virtual) interfaces of our node mgmt01
> (running on vmware esxi) by means of the vsphere client. Node mgmt02 takes
> over the services as it should. When node mgmt01's interfaces are switched
> on again, everything looks alright for a minute or two, but then mgmt01
> takes over the resources again. Which it should not. Here's the relevant
> snippet of the configuration (full config below):
>
> location nag_loc nag_grp 100: ipfuie-mgmt01
> property default-resource-stickiness="100"
>
> I thought that because the resource-stickiness has the same value as the
> location constraint, the resources would stick to the node they are
> started on. Am I wrong?
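For reference, here's what each of those two lines contributes, score-wise
(copied from your snippet; the comments are just my reading of them):

  # location preference: +100 for nag_grp running on ipfuie-mgmt01
  location nag_loc nag_grp 100: ipfuie-mgmt01

  # stickiness: +100 for nag_grp staying on whichever node it is
  # currently running on
  property default-resource-stickiness="100"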
If the resource ends up on the non-preferred node, those settings will
cause it to have an equal score on both nodes, so it should stay put. If
you want to verify, try "ptest -Ls" to see what scores each resource has.

Anyway, the problem is this constraint:

  location cli-prefer-nag_grp nag_grp \
          rule $id="cli-prefer-rule-nag_grp" inf: #uname eq ipfuie-mgmt01 and #uname eq ipfuie-mgmt01

Because that constraint has a score of "inf", it'll take precedence.
Probably "crm resource move nag_grp ipfuie-mgmt01" was run at some point,
to forcibly move the resource to ipfuie-mgmt01. That constraint will
persist until you run "crm resource unmove nag_grp".

Kind of weird that the hostname is listed twice in that rule, though...

Regards,

Tim

--
Tim Serong <tser...@novell.com>
Senior Clustering Engineer, OPS Engineering, Novell Inc.
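P.S. If it helps, the check-and-fix sequence boils down to something like
this (just a sketch, assuming the usual crm shell; the resource and node
names are the ones from your config):

  # see the scores the policy engine has computed for each resource/node
  ptest -Ls

  # look for leftover cli-prefer-* constraints created by "crm resource move"
  crm configure show

  # drop the leftover constraint so stickiness applies again
  crm resource unmove nag_grp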