Thanks, I'll upgrade.

Any input on issues (2) and (3)?
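For (3), one thing worth checking first: the primitive in your posted config has id="controller_vm_resource", while the crm_failcount call used -r controller_vm, so the tool may simply not have found the resource. A sketch of the commands, assuming the heartbeat 2.1.x-era CLI tools and the node names from your rules:

```shell
# Delete the failcount attribute for the resource on one node
# (note the resource id from the posted config, not "controller_vm"):
crm_failcount -D -U server1 -r controller_vm_resource

# "Cleanup" makes the cluster forget the failed operation itself, which
# also removes the -INFINITY score left behind by a failed start:
crm_resource -C -H server1 -r controller_vm_resource
```

After a cleanup the policy engine recomputes placement from scratch, so the node should become eligible again without editing scores by hand.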

On Wed, Feb 18, 2009 at 9:15 PM, Serge Dubrouski <[email protected]> wrote:

> 2.1.3 has a well-known bug that was fixed in later releases: the fail
> count doesn't get increased when a resource fails. You have to upgrade.
>
> On Wed, Feb 18, 2009 at 11:30 AM, Pavel Georgiev <[email protected]> wrote:
> > I'm using heartbeat 2.1.3, the default CentOS 5 rpm. I'm running 3 nodes
> > with a single resource (LSB RA) which has an equal score on each server.
> > I'm having a few "issues" (which might actually be features):
> >
> > 1) If I stop the resource between two heartbeat "monitor" intervals, it
> > detects it is down and restarts it (which is OK) and the failure count
> > becomes one. If I stop the resource again, it is restarted on the same
> > node and the failure count is still 1 - I can't get it to increase. Am I
> > missing some configuration trick?
> >
> > 2) If the "start" operation of the RA fails, the score is set to
> > -INFINITY. Is it possible to control this so that the score is just
> > decreased (by resource_failure_stickiness) and the node is still eligible
> > for running the resource?
> >
> > 3) I can't clear the failure count for a resource with `crm_failcount -D
> > -U server1 -r controller_vm`. Also, how do I change the score of a node
> > if it gets a "-INFINITY" score (as described in the previous issue)?
> >
> >
> > My config:
> >
> > <cib epoch="0" admin_epoch="0" num_updates="0">
> >   <configuration>
> >      <crm_config/>
> >      <nodes/>
> >      <resources>
> >         <primitive id="controller_vm_resource" class="lsb" type="controller_vm" provider="applogic">
> >            <operations>
> >               <op id="controller_vm_resource_status" interval="60s" name="monitor" timeout="5s" start_delay="10s" on_fail="restart"/>
> >               <op id="controller_vm_resource_start" name="start" timeout="10s" on_fail="restart"/>
> >               <op id="controller_vm_resource_stop" name="stop" timeout="10s"/>
> >            </operations>
> >            <meta_attributes id="controller_vm_resource_attr">
> >               <attributes>
> >                  <nvpair id="controller_vm_resource_attr_1" name="resource_stickiness" value="100"/>
> >                  <nvpair id="controller_vm_resource_attr_2" name="resource_failure_stickiness" value="-100"/>
> >               </attributes>
> >            </meta_attributes>
> >         </primitive>
> >      </resources>
> >      <constraints>
> >         <rsc_location id="run_controller_vm_resource" rsc="controller_vm_resource">
> >            <rule id="pref_run_controller_service_resource_1" score="1000">
> >               <expression id="rule_controller_vm_1" attribute="#uname" operation="eq" value="server1"/>
> >            </rule>
> >            <rule id="pref_run_controller_service_resource_2" score="1000">
> >               <expression id="rule_controller_vm_2" attribute="#uname" operation="eq" value="server2"/>
> >            </rule>
> >            <rule id="pref_run_controller_service_resource_3" score="1000">
> >               <expression id="rule_controller_vm_3" attribute="#uname" operation="eq" value="server3"/>
> >            </rule>
> >         </rsc_location>
> >      </constraints>
> >   </configuration>
> > </cib>
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems
> >
>
>
>
> --
> Serge Dubrouski.
>
