you can't use score_attribute and score in the same rule; in such
cases, score_attribute is ignored.
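
for example (rule ids here are just illustrative), use either a fixed score:

  <rule id="my_rule" score="100">

or take the score from a node attribute:

  <rule id="my_rule" score_attribute="pingd">

but not both on the same rule element.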

when calling ptest, can you include the "-I filename" option, which
saves the input being used to a file, and then attach that file here please.
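
i.e. something like this (the filename is just an example):

  /usr/lib/heartbeat/ptest -L -VVVVVVVVVVVVVVVVVVVV -I /tmp/ptest-input.xml

that way the policy engine can be re-run here against exactly the same input.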


On 5/3/07, chiu chun chir <[EMAIL PROTECTED]> wrote:
Hi Andrew,

Thanks very much for your advice, but I'm now using a different resource
constraint, because I couldn't get it to work with the one you suggested.
Please refer to the score trace from the following command:
'/usr/lib/heartbeat/ptest -L -VVVVVVVVVVVVVVVVVVVV 2>&1 | egrep assign'

If I use the constraint you provided with the new cib.xml (please
refer to the attachment):
<rsc_location id="my_resource:connected" rsc="TACO_SERVICES">
  <rule id="my_resource:connected:rule" score_attribute="pingd">
    <expression id="my_resource:connected:expr:gateway"
attribute="pingd" operation="gt" value="0"/>
  </rule>
</rsc_location>

The scores before unplugging the tacomcs1 network:
ptest[19325]: 2007/05/03_11:28:50 debug: debug5: do_calculations:
assign nodes to colors
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
GWMON:0, Node[0] tacomcs1: 500
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
GWMON:0, Node[1] tacomcs2: 0
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Assigning
tacomcs1 to GWMON:0
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
GWMON:1, Node[0] tacomcs2: 500
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
GWMON:1, Node[1] tacomcs1: -1000000
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Assigning
tacomcs2 to GWMON:1
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
VIP, Node[0] tacomcs1: 3000
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
VIP, Node[1] tacomcs2: 1500
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Assigning
tacomcs1 to VIP
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
TacoRMI, Node[0] tacomcs1: 1000000
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
TacoRMI, Node[1] tacomcs2: -1000000
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Assigning
tacomcs1 to TacoRMI
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
Tomcat, Node[0] tacomcs1: 1000000
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Color
Tomcat, Node[1] tacomcs2: -1000000
ptest[19325]: 2007/05/03_11:28:51 debug: native_assign_node: Assigning
tacomcs1 to Tomcat

tacomcs1 (the active node) scores higher for all of the TACO_SERVICES resources.

The scores after unplugging the tacomcs1 network:
ptest[19446]: 2007/05/03_11:29:35 debug: debug5: do_calculations:
assign nodes to colors
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
GWMON:0, Node[0] tacomcs1: 500
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
GWMON:0, Node[1] tacomcs2: 0
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Assigning
tacomcs1 to GWMON:0
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
GWMON:1, Node[0] tacomcs2: 500
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
GWMON:1, Node[1] tacomcs1: -1000000
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Assigning
tacomcs2 to GWMON:1
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
VIP, Node[0] tacomcs1: 1500
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
VIP, Node[1] tacomcs2: 1500
ptest[19446]: 2007/05/03_11:29:36 info: native_assign_node: 2 nodes
with equal score (1500) for running the listed resources (chose
tacomcs1):
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Assigning
tacomcs1 to VIP
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
TacoRMI, Node[0] tacomcs1: 1000000
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
TacoRMI, Node[1] tacomcs2: -1000000
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Assigning
tacomcs1 to TacoRMI
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
Tomcat, Node[0] tacomcs1: 1000000
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Color
Tomcat, Node[1] tacomcs2: -1000000
ptest[19446]: 2007/05/03_11:29:36 debug: native_assign_node: Assigning
tacomcs1 to Tomcat

tacomcs1 (the failed node) scores the same as tacomcs2 ...
2 nodes with equal score (1500) for running the listed resources
(chose tacomcs1)
Incredible!! Nothing fails over to tacomcs2, even though the tacomcs1 network is down...


After using another constraint, I was able to make my Active/Passive
failover scenario work, and I would like to share my experience with
anyone who wants to set up an Active/Passive scenario like mine.

+++++++++++++++++++++++++++
+++++ Successful Experience +++++
+++++++++++++++++++++++++++

OS : SUSE Linux Enterprise Server 10.

First of all, I upgraded my heartbeat from version 2.0.7-12 to the
2.0.8-0.15 snapshot version provided by Lars.
You can find the link in this thread:
http://www.gossamer-threads.com/lists/linuxha/users/38705?search_string=suse%20heartbeat%202.0.8;#38705

===============================
Again ha.cf
===============================
autojoin any
crm true
bcast eth1
node tacomcs2
node tacomcs1
debug 0
apiauth evms,pingd uid=hacluster,root
#Active/Passive configuration
auto_failback off

#LAN_FAIL_MONITOR
ping 10.0.0.1
keepalive 2

#logging
use_logd on
===============================

Resource Location Constraint :
( Refer to http://www.linux-ha.org/pingd )
( Quickstart - Only Run my_resource on Nodes with Access to at Least
One Ping Node )

<rsc_location id="TACO_SERVICES:preferred" rsc="TACO_SERVICES">
  <rule id="TACO_SERVICES:connected:rule" score_attribute="pingd"
score="-INFINITY" boolean_op="and">
    <expression id="TACO_SERVICES:connected:expr:positive"
attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>

Combined with this crm_config:
<nvpair id="cibbootstrap-03" name="default_resource_stickiness" value="500"/>
<nvpair id="cibbootstrap-04"
name="default_resource_failure_stickiness" value="-250"/>
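
For context, these nvpairs sit inside the crm_config section of cib.xml; as I
understand the CIB layout, the surrounding structure is roughly:

<crm_config>
  <cluster_property_set id="cibbootstrap">
    <attributes>
      <nvpair id="cibbootstrap-03" name="default_resource_stickiness" value="500"/>
      <nvpair id="cibbootstrap-04"
name="default_resource_failure_stickiness" value="-250"/>
    </attributes>
  </cluster_property_set>
</crm_config>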
Let's verify it with the command:
'/usr/lib/heartbeat/ptest -L -VVVVVVVVVVVVVVVVVVVV 2>&1 | egrep assign'

The scores before unplugging the tacomcs1 network:
ptest[11187]: 2007/05/03_10:08:18 debug: debug5: do_calculations:
assign nodes to colors
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
GWMON:0, Node[0] tacomcs1: 500
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
GWMON:0, Node[1] tacomcs2: 0
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Assigning
tacomcs1 to GWMON:0
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
GWMON:1, Node[0] tacomcs2: 500
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
GWMON:1, Node[1] tacomcs1: -1000000
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Assigning
tacomcs2 to GWMON:1
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
VIP, Node[0] tacomcs1: 1500
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
VIP, Node[1] tacomcs2: 0
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Assigning
tacomcs1 to VIP
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
TacoRMI, Node[0] tacomcs1: 1000000
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
TacoRMI, Node[1] tacomcs2: -1000000
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Assigning
tacomcs1 to TacoRMI
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
Tomcat, Node[0] tacomcs1: 1000000
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Color
Tomcat, Node[1] tacomcs2: -1000000
ptest[11187]: 2007/05/03_10:08:18 debug: native_assign_node: Assigning
tacomcs1 to Tomcat

Currently, the resource group TACO_SERVICES stays on the active node tacomcs1.
You can see that tacomcs1 gets the higher score for all of the
resources (VIP, TacoRMI, Tomcat).

The scores after unplugging the tacomcs1 network:
ptest[15958]: 2007/05/03_10:38:24 debug: debug5: do_calculations:
assign nodes to colors
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
GWMON:0, Node[0] tacomcs1: 500
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
GWMON:0, Node[1] tacomcs2: 0
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Assigning
tacomcs1 to GWMON:0
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
GWMON:1, Node[0] tacomcs2: 500
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
GWMON:1, Node[1] tacomcs1: -1000000
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Assigning
tacomcs2 to GWMON:1
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
VIP, Node[0] tacomcs2: 500
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
VIP, Node[1] tacomcs1: -1000000
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Assigning
tacomcs2 to VIP
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
TacoRMI, Node[0] tacomcs2: 1000000
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
TacoRMI, Node[1] tacomcs1: -1000000
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Assigning
tacomcs2 to TacoRMI
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
Tomcat, Node[0] tacomcs2: 1000000
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Color
Tomcat, Node[1] tacomcs1: -1000000
ptest[15958]: 2007/05/03_10:38:24 debug: native_assign_node: Assigning
tacomcs2 to Tomcat

As you can see, tacomcs1 scores -INFINITY, so tacomcs2's score is
higher than tacomcs1's.
As a result, the resources were moved from tacomcs1 to the preferred
node, tacomcs2.

The scores after plugging the tacomcs1 network back in:
ptest[16613]: 2007/05/03_10:51:18 debug: debug5: do_calculations:
assign nodes to colors
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
GWMON:0, Node[0] tacomcs1: 500
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
GWMON:0, Node[1] tacomcs2: 0
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Assigning
tacomcs1 to GWMON:0
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
GWMON:1, Node[0] tacomcs2: 500
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
GWMON:1, Node[1] tacomcs1: -1000000
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Assigning
tacomcs2 to GWMON:1
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
VIP, Node[0] tacomcs2: 1500
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
VIP, Node[1] tacomcs1: 0
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Assigning
tacomcs2 to VIP
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
TacoRMI, Node[0] tacomcs2: 1000000
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
TacoRMI, Node[1] tacomcs1: -1000000
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Assigning
tacomcs2 to TacoRMI
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
Tomcat, Node[0] tacomcs2: 1000000
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Color
Tomcat, Node[1] tacomcs1: -1000000
ptest[16613]: 2007/05/03_10:51:19 debug: native_assign_node: Assigning
tacomcs2 to Tomcat

Focusing on the resource group TACO_SERVICES (VIP, TacoRMI, Tomcat):
for all three, tacomcs2 scores higher than tacomcs1.
As a result, TACO_SERVICES will stay on tacomcs2 until tacomcs2 is
shut down or its network is unplugged.

Several things can be figured out from this trace:
1. default_resource_stickiness adds its score to the 1st resource in
the group after the resources have started.

For example, in "The scores after unplugging the tacomcs1 network":
VIP started, adding a score of 500.
Color VIP, Node[0] tacomcs2: 500

(TacoRMI needs 2 minutes to start, and I plugged tacomcs1's network
cable back in in the meantime.)

In "The scores after plugging the tacomcs1 network back in":
after TacoRMI and Tomcat started, another 1000 was added.
Color VIP, Node[0] tacomcs2: 1500
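
My reading of the arithmetic for VIP on tacomcs2:

  after VIP started:                1 started resource  x 500 = 500
  after TacoRMI and Tomcat started: 3 started resources x 500 = 1500

i.e. the default_resource_stickiness of every started resource in the
group accumulates on the group's 1st resource.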

2. Unplugging a node's network makes the resource location constraint
give (add) a -INFINITY score (shown as -1000000 in the trace):
<rsc_location id="TACO_SERVICES:preferred" rsc="TACO_SERVICES">
  <rule id="TACO_SERVICES:connected:rule" score_attribute="pingd"
score="-INFINITY" boolean_op="and">
    <expression id="TACO_SERVICES:connected:expr:positive"
attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>

"The score after unplug the tacomcs1 network"
Color VIP, Node[1] tacomcs1: -1000000
Color TacoRMI, Node[1] tacomcs1: -1000000
Color Tomcat, Node[1] tacomcs1: -1000000

But I don't understand the following expression at all:
<expression id="TACO_SERVICES:connected:expr:positive"
attribute="pingd" operation="lte" value="0"/>
How does it affect the scores on the 2 nodes?
And what does it mean?



2007/5/2, Andrew Beekhof <[EMAIL PROTECTED]>:
> try:
>
>       <rsc_location id="my_resource:connected" rsc="TACO_SERVICES">
>         <rule id="my_resource:connected:rule" score_attribute="pingd">
>           <expression id="my_resource:connected:expr:gateway"
> attribute="pingd" operation="gt" value="0"/>
>         </rule>
>       </rsc_location>
>
>
> On 4/30/07, chiu chun chir <[EMAIL PROTECTED]> wrote:
> > Dear Masters,
> >
> > Sorry, I forgot the attachment in my last letter...
> >
> >
> > I've set up a cluster with 2 nodes (tacomcs1 - active, and tacomcs2 - standby).
> >
> > The OS is SUSE Enterprise Server 10 with heartbeat upgraded to 2.0.7-1.2.
> >
> > I want all services to fail over to tacomcs2 if tacomcs1 cannot reach the
> > outside network.
> >
> > And all services should stay on tacomcs2 until it fails or is rebooted.
> >
> >
> >
> > I've followed the resource constraint illustrated at
> > http://www.linux-ha.org/pingd.
> >
> > Topic -> Quickstart - Run my resource on the node with the best
> > connectivity.
> >
> >
> >
> > And made a similar setting according to the guide:
> >
> > <rsc_location id="my_resource:connected" rsc="my_resource">
> >
> >   <rule id="my_resource:connected:rule" score_attribute="pingd" >
> >
> >     <expression id="my_resource:connected:expr:defined"
> >
> >       attribute="pingd" operation="defined"/>
> >
> >   </rule>
> >
> > </rsc_location>
> >
> >
> >
> > After I used `yast` to modify the tacomcs1 (active node) IP
> >
> > (an IP address which cannot reach the gateway address 10.31.70.1,
> > which is configured in ha.cf as a PingNode).
> >
> >
> >
> > The first time, the group named 'TACO_SERVICES' failed over to tacomcs2
> > (the standby node).
> >
> > After it failed over, I modified tacomcs1 back to the correct IP address -
> > 10.31.70.8 - which can reach the gateway.
> >
> > But in the meantime, tacomcs1 became OFFLINE (shown by $ crm_mon -1).
> >
> >
> >
> > But then I restarted the HEARTBEAT service on both tacomcs1 and tacomcs2.
> >
> > And used `yast` again to change the tacomcs1 (active node) IP address to a
> > wrong IP (one that can't reach the gateway).
> >
> > This time it did not fail over to tacomcs2 anymore.
> >
> > On the contrary, tacomcs1 believed tacomcs2 was OFFLINE, per $ crm_mon -1.
> >
> >
> >
> > I'm confused and cannot figure out what's going wrong.
> >
> > I've attached my settings; would you please help to verify them?
> >
> > Is there something wrong with ha.cf or cib.xml ?
> >
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems
> >
> >



