Andrew Beekhof wrote:
On 7/19/07, Adrian Chapela <[EMAIL PROTECTED]> wrote:
Andrew Beekhof wrote:
> then you want:
>
> default_resource_failure_stickiness=-INFINITY (if it fails, move
> immediately), and
>
> default_resource_stickiness=INFINITY (dont move it unless it fails or
> the node is shutting down or similar)
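For reference, these two defaults live as cluster properties under crm_config. A minimal sketch of what that fragment might look like (the nvpair ids here are just illustrative examples, not required names):

```xml
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <attributes>
      <!-- move the resource immediately on failure -->
      <nvpair id="opt-failure-stickiness"
              name="default_resource_failure_stickiness" value="-INFINITY"/>
      <!-- otherwise, never move it away from where it is running -->
      <nvpair id="opt-stickiness"
              name="default_resource_stickiness" value="INFINITY"/>
    </attributes>
  </cluster_property_set>
</crm_config>
```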
But if I assign default_resource_stickiness=INFINITY, won't the resource
group just start on the first available heartbeat node?
It has no relation to the initial placement (since it's not yet running).
OK, but if for some reason I start the secondary node before the primary
node, will the resource group still go to the primary node?
How can I have a rule/attribute/something which assigns the initial
placement to a particular node?
I want to run a resource on one node, and if its network fails or the
resource fails, the resource group must be moved to the secondary node.
Right
OK.
Now, I assign a score of 200 to a node via a rule; could this rule be the
problem? The rule is:
not really a problem, but it will of course have an effect
In my case, can that effect change the final score when a node fails?
<rsc_location id="my_resource:connected" rsc="MySQL_GROUP">
  <rule id="my_resource:prefer:portatil" score="200">
    <expression id="my_resource:prefer:portatil:expr"
                attribute="#uname" operation="eq" value="portatil"/>
  </rule>
  <rule id="my_resource:connected:rule" score="-INFINITY"
        boolean_op="or">
    <expression id="my_resource:connected:expr:undefined"
                attribute="pingd" operation="not_defined"/>
    <expression id="my_resource:connected:expr:zero"
                attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>
(I will test your theory now...)
>
>
>> To do that, I configured the following in the CIB:
>>
>> default_resource_failure_stickiness=-INFINITY -> with this, the
>> resource doesn't run on any node.
>
> has it failed once on every node?
No, I think not..
Well, the only time default_resource_failure_stickiness has any effect
on a node's score is if the resource has failed on that node.
OK, that's right; that's what I thought.
Unless it's the pingd rule that is preventing some of the nodes from
taking over the resource.
I don't understand. If pingd has the same value on both nodes (neither
node has network failures...), the pingd rule has no effect at all, right?
The rule only has an effect on a node with network problems, I think.
> you'll need to clear out the failcount before it will be allowed to
> run there again
This is to allow the resource to run again on a node which has a
failcount > 0, right? How can I find out the failcount value for a node?
crm_failcount --help
I was searching the doc but you are faster!! Thank you!!
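For anyone following along, a sketch of how crm_failcount might be used here, assuming the flag names shown by crm_failcount --help (-G to get, -D to delete, -U for the node uname, -r for the resource) and a running heartbeat CRM; the node and resource names are taken from the constraint above:

```
# Query the failcount for MySQL_GROUP on node "portatil"
crm_failcount -G -U portatil -r MySQL_GROUP

# Reset the failcount so the resource is allowed to run there again
crm_failcount -D -U portatil -r MySQL_GROUP
```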
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems