This is my new cib.xml:

 <cib admin_epoch="0" epoch="0" num_updates="0" generated="false"
have_quorum="false" ignore_dtd="false" num_peers="0" cib-last-written="Thu
Dec  6 10:15:17 2007">
   <configuration>
     <crm_config/>
     <nodes/>
     <resources>
       <clone id="pingd_clone">
         <meta_attributes id="pingd_clone_ma">
           <attributes>
             <nvpair id="pingd_clone_1" name="globally_unique"
value="false"/>
           </attributes>
         </meta_attributes>
         <primitive class="ocf" id="pingd_child" provider="heartbeat"
type="pingd">
           <operations>
             <op id="pingd_child_mon" interval="20s" name="monitor"
timeout="60s"/>
           </operations>
           <instance_attributes id="pingd_inst_attr">
             <attributes>
               <nvpair id="pingd_1" name="dampen" value="5s"/>
               <nvpair id="pingd_2" name="multiplier" value="200"/>
               <nvpair id="pingd_3" name="user" value="root"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </clone>
       <group id="group_1" resource_stickiness="150">
         <primitive class="ocf" id="IPaddr_192_168_122_203"
provider="heartbeat" type="IPaddr">
           <operations>
             <op id="IPaddr_192_168_122_203_mon" interval="20s"
name="monitor" timeout="60s"/>
           </operations>
           <instance_attributes id="IPaddr_192_168_122_203_inst_attr">
             <attributes>
               <nvpair id="IPaddr_192_168_122_203_attr_0" name="ip"
                 value="192.168.122.203"/>
               <nvpair id="IPaddr_192_168_122_203_attr_1" name="netmask"
value="24"/>
             </attributes>
           </instance_attributes>
         </primitive>
         <primitive class="lsb" id="httpd_2" provider="heartbeat"
type="httpd">
           <operations>
             <op id="httpd_2_mon" interval="120s" name="monitor"
timeout="60s"/>
           </operations>
         </primitive>
         <primitive class="lsb" id="squid_3" provider="heartbeat"
type="squid">
           <operations>
             <op id="squid_3_mon" interval="120s" name="monitor"
timeout="60s"/>
           </operations>
         </primitive>
       </group>
     </resources>
     <constraints>
       <rsc_location id="rsc_location_group_1" rsc="group_1">
         <rule id="prefered_location_group_1" score="100">
           <expression attribute="#uname"
id="prefered_location_group_1_expr" operation="eq" value="test-ppc"/>
         </rule>
         <rule id="best_location_group_1" score_attribute="pingd">
           <expression attribute="pingd" id="best_location_group_1_expr"
operation="defined"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
 </cib>

With this configuration the resources don't fail over to test, but remain
on test-ppc. Why?

On Dec 6, 2007 12:16 PM, Dominik Klein <[EMAIL PROTECTED]> wrote:

> China wrote:
> > Ok, I've set resource_stickiness to 150, a score of 100 for the default
> > node PC_A, and a score_attribute for pingd. Now when the resource fails,
> > it doesn't start on PC_B. Why?
>
> The way I understand you, and please correct me or post your current
> cib.xml, is:
>
> pingd multiplier: 200 (as suggested by Andrew)
> one ping node
> a constraint with score 100 for node test-ppc
> resource-stickiness = 150
>
> So this would make a startup score for your group_1:
> node test-ppc: 100 (constraint) + 200 (pingd with one ping node) = 300
> node test3: 200 (pingd with one ping node) = 200
>
> So now the resource will run on node test-ppc and its score increases to
> 450 due to the 150 resource-stickiness.
>
> Then you fail test-ppc (what exactly do you do again?).
>
> So you have only a score of 200 for node test3 (no other nodes
> available). Now the resource will run there and have 150 added to its
> score. So test3 should end up with a score of 350.
>
> If test-ppc comes back, it will have 300 again (probably), so the
> resource should stay in place.
>
> Oh, wait a minute. I just re-read your initial mail. Are you only using
> one connection between your nodes and unplugging that connection to force
> a failure? That is bound to fail, and you do not have STONITH.
>
> I think that's the first thing you should fix.
>
> Regards
> Dominik
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>
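
To sanity-check the arithmetic above, here is a minimal Python sketch of how
the placement scores combine. The values (constraint score 100, pingd
multiplier 200 with one ping node, resource-stickiness 150) are the ones from
this thread; the helper itself is only an illustration of the additive scoring,
not Pacemaker's actual policy engine:

```python
# Assumed values from the thread; purely illustrative.
PINGD_MULTIPLIER = 200
CONSTRAINT_SCORE = {"test-ppc": 100, "test3": 0}
STICKINESS = 150

def startup_score(node, ping_nodes_reachable, running_here=False):
    """Sum the location-constraint score, the pingd contribution
    (multiplier * reachable ping nodes), and stickiness if the
    resource is already active on this node."""
    score = CONSTRAINT_SCORE.get(node, 0)
    score += PINGD_MULTIPLIER * ping_nodes_reachable
    if running_here:
        score += STICKINESS
    return score

# While group_1 runs on test-ppc:
print(startup_score("test-ppc", 1, running_here=True))  # 450
print(startup_score("test3", 1))                        # 200

# After test-ppc fails and group_1 moves to test3:
print(startup_score("test3", 1, running_here=True))     # 350
# A returning test-ppc scores only 300, so the resource stays put:
print(startup_score("test-ppc", 1))                     # 300
```

This matches the numbers in the reply: 300 vs. 200 at startup, then 350 vs.
300 after a failover, so stickiness keeps the group on test3.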



-- 

Davide Belloni