On Jul 10, 2008, at 5:44 PM, Raghuram Bondalapati wrote:

I tried that, but it did not work.

Oh, right, sorry.
crm_failcount -D is what you need
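For the archives, a sketch of the full invocation (node and resource names taken from Raghu's config further down in this thread; check crm_failcount --help on your build to confirm the options):

```shell
# Delete the fail-count attribute for resource_ip1 on node vcs9472.
# -D deletes the attribute, -U names the node, -r names the resource,
# mirroring the -G (get) call quoted later in this thread.
crm_failcount -D -U vcs9472 -r resource_ip1

# Verify it is gone / back to 0:
crm_failcount -G -U vcs9472 -r resource_ip1
```

Once the fail count is cleared, the -INFINITY contribution from the failure stickiness should disappear on the next transition.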



--Raghu


On 7/9/08, Andrew Beekhof <[EMAIL PROTECTED]> wrote:

On Wed, Jul 9, 2008 at 19:47, Raghuram Bondalapati
<[EMAIL PROTECTED]> wrote:
Andrew, the score for node vcs9472 for resource resource_ip1 is always
set to -INFINITY.

This is preventing resource_ip1 from sticking to vcs9472 when node
vcs9473 reboots and comes back online. [resource_ip1 always migrates
back to vcs9473.]

Is there a way to reset the score for resource_ip1 on vcs9472?

try crm_resource -C
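Spelling that out (a sketch; the resource and host names are the ones from this thread, and on 0.6.x cleanup also clears the LRM history for the resource on that node):

```shell
# Clean up resource_ip1's status on vcs9472.
# -C requests cleanup, -r names the resource, -H names the host.
crm_resource -C -r resource_ip1 -H vcs9472
```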


I am running heartbeat 2.1.3 with pacemaker 0.6.5 extensions.

Resource          Score      Node     Stickiness  #Fail  Fail-Stickiness
resource_ip1      -INFINITY  vcs9472  100000      0      -INFINITY
resource_ip1      100000     vcs9473  100000      0      -INFINITY
resource_xinetd   -INFINITY  vcs9472  100000      0      -INFINITY
resource_xinetd   -INFINITY  vcs9473  100000      0      -INFINITY
resource_xinetd1  0          vcs9472  100000      0      -INFINITY
resource_xinetd1  200000     vcs9473  100000      0      -INFINITY



On 7/9/08, Andrew Beekhof <[EMAIL PROTECTED]> wrote:

Known bug in 2.1.3

Please grab the latest Pacemaker release (0.6.5) for your distro from
 http://download.opensuse.org/repositories/server:/ha-clustering/

On Tue, Jul 8, 2008 at 22:51, Raghuram Bondalapati
<[EMAIL PROTECTED]> wrote:
Hello list,

I have a two-node cluster configured with a resource named
"resource_ip1". It is of type "IPaddr" and of class "ocf".

When node1, which hosts "resource_ip1", is rebooted, the resource fails
over to node2. However, the fail count for "resource_ip1" on node1 does
not get incremented and still shows up as 0.

       crm_failcount -G -U node1 -r resource_ip1
       name=fail-count-resource_ip1 value=0

Furthermore, even though I have "Default Resource Stickiness" set to
100000 and "Default Failure Resource Stickiness" set to -INFINITY [with
no override stickiness on resources], the resource "resource_ip1" fails
back onto node1 after it comes back online.

Any ideas on why this is happening would be very much appreciated.

The same config works fine for resource_xinetd. Please see below for
the
current cib.xml

Regards
--Raghu

CIB.XML

<cib generated="true" admin_epoch="0" have_quorum="true"
ignore_dtd="false"
num_peers="2" cib_feature_revision="2.0" crm_feature_set="2.0"
ccm_transition="112" dc_uuid="1f0ffb39-b275-4e92-b6ca-8a3d00f2fb44"
epoch="187" num_updates="3" cib-last-written="Tue Jul  8 13:24:57
2008">
 <configuration>
   <crm_config>
     <cluster_property_set id="cib-bootstrap-options">
       <attributes>
         <nvpair id="cib-bootstrap-options-dc-version"
name="dc-version"
value="2.1.3-node: a3184d5240c6e7032aef9cce6e5b7752ded544b3"/>
         <nvpair id="cib-bootstrap-options-stonith-enabled"
name="stonith-enabled" value="false"/>
         <nvpair name="default-resource-stickiness"
id="cib-bootstrap-options-default-resource-stickiness"
value="100000"/>
         <nvpair name="default-resource-failure-stickiness"
id="cib-bootstrap-options-default-resource-failure-stickiness"
value="-INFINITY"/>
         <nvpair id="cib-bootstrap-options-last-lrm-refresh"
name="last-lrm-refresh" value="1215547282"/>
         <nvpair id="cib-bootstrap-options-no-quorum-policy"
name="no-quorum-policy" value="stop"/>
       </attributes>
     </cluster_property_set>
   </crm_config>
   <nodes>
<node id="1f0ffb39-b275-4e92-b6ca-8a3d00f2fb44" uname="vcs9473"
type="normal"/>
<node id="03fd6ad6-e7b1-4722-96d8-54e3be84a59c" uname="vcs9472"
type="normal"/>
   </nodes>
   <resources>
     <primitive id="resource_ip1" class="ocf" type="IPaddr"
provider="heartbeat">
       <instance_attributes
id="58501e02-21f5-49d3-aebf-fca5e378ae70">
         <attributes>
           <nvpair name="ip" value="172.25.52.245"
id="430bc62c-d2d2-4054-87a3-a9fe041f0ecc"/>
         </attributes>
       </instance_attributes>
     </primitive>
     <primitive id="resource_xinetd" class="lsb" type="xinetd"
provider="heartbeat">
       <meta_attributes id="resource_xinetd_meta_attrs">
         <attributes/>
       </meta_attributes>
       <operations>
<op id="b33c816d-d85a-47f2-bb35-0edc300da907" name="monitor"
interval="15" timeout="15" start_delay="15" disabled="false"
role="Started"
on_fail="restart"/>
       </operations>
     </primitive>
   </resources>
   <constraints>
     <rsc_colocation id="colocation_ftp" from="resource_ip1"
to="resource_xinetd" score="INFINITY"/>
   </constraints>
 </configuration>
</cib>
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
