To see how node scores are calculated, try:

    ptest -L -s
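
That replays the current CIB through the policy engine and prints the
allocation scores it computed for every resource/node pair. The output
looks roughly like this (the values below are made up for illustration):

    # ptest -L -s
    Allocation scores:
    native_color: ip4 allocation score on wor1.test.tst: INFINITY
    native_color: ip4 allocation score on wor2.test.tst: INFINITY
    native_color: ip4 allocation score on wor3.test.tst: -INFINITY

More comments inline below.
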
J.
On Wed, 2009-10-14 at 19:07 +0300, Mihai wrote:
> Hello,
> I have a config like this:
> Online: [ int3.test.tst wor1.test.tst wor2.test.tst wor3.test.tst
> int1.test.tst int2.test.tst ]
>
> ip1 (ocf::heartbeat:IPaddr): Started int1.test.tst
> ip2 (ocf::heartbeat:IPaddr): Started int2.test.tst
> ip3 (ocf::heartbeat:IPaddr): Started int3.test.tst
> ip4 (ocf::heartbeat:IPaddr): Started wor2.test.tst
> ip5 (ocf::heartbeat:IPaddr): Started wor1.test.tst
> ip6 (ocf::heartbeat:IPaddr): Started wor3.test.tst
> Clone Set: clone_sql
> Started: [ int2.test.tst int1.test.tst ]
> Clone Set: clone_sip
> Started: [ int3.test.tst int1.test.tst int2.test.tst ]
> Clone Set: clone_http
> Started: [ int1.test.tst int2.test.tst ]
> Clone Set: clone_pbx
> Started: [ wor2.test.tst wor3.test.tst wor1.test.tst ]
> Clone Set: clone_cache
> Started: [ wor2.test.tst wor1.test.tst wor3.test.tst ]
>
>
> I want to keep ip1 always tied to int1, ip2 to int2, and so on. My
> problem is that even if I put a location constraint with INFINITY on an
> ip resource for some node, there are cases like the above where the ip
> resources are reversed. For example, in this situation ip5 should be on
> wor2 and ip4 on wor1.
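
For reference, I'm assuming your pinning constraints look something like
this (a reconstructed guess; the ids are placeholders):

    <rsc_location id="loc-ip4-wor1" rsc="ip4" node="wor1.test.tst" score="INFINITY"/>
    <rsc_location id="loc-ip5-wor2" rsc="ip5" node="wor2.test.tst" score="INFINITY"/>

Note that an INFINITY location score is only a preference that gets added
to whatever the other constraints contribute; it is not an exclusive pin.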
>
> The reason for this is that in the log two nodes have INFINITY, and it
> probably chooses the one with the lower id in openais, or something.
>
> Oct 14 17:50:33 int1 pengine: [8648]: WARN: native_choose_node: 2 nodes with
> equal score (INFINITY) for running ip4 resources. Chose wor2.test.tst.
> Oct 14 17:50:33 int1 pengine: [8648]: WARN: native_choose_node: 2 nodes with
> equal score (INFINITY) for running ip5 resources. Chose wor1.test.tst.
> Oct 14 17:50:33 int1 pengine: [8648]: WARN: native_choose_node: 2 nodes with
> equal score (INFINITY) for running ip6 resources. Chose wor3.test.tst.
>
> My question is: how is this score calculated, given that in the rules
> only one node has INFINITY for location? And second, where can I view
> the current score for each resource/node?
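
The short version of the arithmetic: for each node, the scores from every
matching constraint (location, colocation, stickiness, failure penalties)
are added up, and INFINITY is saturating:

    INFINITY            = 1,000,000
    x + INFINITY        = INFINITY
    x - INFINITY        = -INFINITY
    INFINITY - INFINITY = -INFINITY

So even though only one location rule says INFINITY, another constraint
(your colocation with the clones, see below) can push a second node up to
INFINITY as well, and after that nothing can separate the two.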
>
> Apart from the location rules I also set a colocation constraint so
> that ip1, for example, can start only on nodes with clone_sql,
> clone_sip and clone_http; the same for ip2. And two order constraints,
> so that clone_sql is started before sip and before pbx.
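
Roughly speaking, an INFINITY colocation with a clone contributes INFINITY
on every node where an instance of that clone is active. Added to your
location scores, that is how two nodes end up tied at INFINITY. With a
finite colocation score the location preference can still decide the node;
a sketch (untested, ids are placeholders):

    <rsc_colocation id="col-ip1-sql" rsc="ip1" with-rsc="clone_sql" score="100"/>
    <rsc_order id="ord-sql-sip" first="clone_sql" then="clone_sip"/>

The trade-off: a finite score makes the colocation advisory rather than
mandatory, so ip1 could in principle run on a node without clone_sql if
its location score there is high enough.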
>
> The behavior I want to obtain is that ip1 can move to int2 if some
> resource fails on int1, but not to have ip1 on int2 while ip2 is on
> int1.
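
For that behavior you can drop INFINITY entirely and grade the
preferences, something like (untested sketch):

    <rsc_location id="loc-ip1-int1" rsc="ip1" node="int1.test.tst" score="200"/>
    <rsc_location id="loc-ip1-int2" rsc="ip1" node="int2.test.tst" score="100"/>
    <rsc_location id="loc-ip2-int2" rsc="ip2" node="int2.test.tst" score="200"/>
    <rsc_location id="loc-ip2-int1" rsc="ip2" node="int1.test.tst" score="100"/>

Each ip then has a unique highest-scoring node, so the INFINITY ties (and
the arbitrary tie-break you are seeing) go away, while failover to the
second node still works.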
>
>
> Also, what role does <nvpair name="default-resource-stickiness"
> id="cib-bootstrap-options-default-resource-stickiness" value="0"/>
> play in this? Which node is the one it forces the resource to stick
> to? I've tried it set to 2 and to 0, and I get the same result.
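
Stickiness is added to the score of whichever node is *currently* running
the resource; it does not name a node in advance. With your INFINITY
location scores a stickiness of 2 is invisible, since INFINITY + 2 is
still INFINITY, which is why 0 and 2 behave identically. With finite
scores it matters; for example, with the graded scores above and
default-resource-stickiness set to 200:

    ip1 on int2 after a failover: location 100 + stickiness 200 = 300
    ip1 score on int1 (home):     location 200                  = 200
    -> ip1 stays on int2 even after int1 recovers
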
> Also, symmetric-cluster is set to false, since I have -INFINITY
> location rules to constrain where the resources may start.
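
One more note: with symmetric-cluster="false" the cluster is already
opt-in; every resource scores -INFINITY on every node unless a constraint
says otherwise, so the explicit -INFINITY rules should be redundant. The
property itself is just:

    <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="false"/>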
>
>
> If someone could explain this to me I'd really appreciate it, since I
> can't find anything useful in the docs about this.
>
> With respect,
> Vintila Mihai Alexandru
>
_______________________________________________
Pacemaker mailing list
[email protected]
http://oss.clusterlabs.org/mailman/listinfo/pacemaker