Hello,
in our high-availability configuration we have two nodes (named cuzzonia and
cuzzonib) and a resource (named MONITOR) running on the first node,
cuzzonia. We have also defined constraints (location constraints and
colocation constraints) to ensure that resource MONITOR always runs on the
first node, cuzzonia. Here are some extracts from our cluster information base:
<configuration>
  <crm_config>
    <cluster_property_set id="cib-bootstrap-options">
      <nvpair id="CiB-resource-stickiness" name="default-resource-stickiness"
              value="INFINITY"/>
      ......
    </cluster_property_set>
  </crm_config>
  <nodes>
    <node id="cuzzonia" uname="cuzzonia" type="normal">
      <instance_attributes id="nodes-cuzzonia">
        <nvpair id="nodes-cuzzonia-gs_monitored" name="gs_monitored"
                value="true"/>
      </instance_attributes>
    </node>
    <node id="cuzzonib" uname="cuzzonib" type="normal"/>
  </nodes>
  <resources>
    <primitive id="MONITOR" class="ocf" type="Xen" provider="FSC">
      <operations>
        <op name="start" interval="0s" timeout="300s" id="MONITOR-op-01"/>
        <op name="monitor" interval="10s" timeout="60s" requires="nothing"
            id="MONITOR-op-02"/>
        <op name="stop" interval="0s" timeout="300s" id="MONITOR-op-03"/>
      </operations>
      ......
    </primitive>
  </resources>
  <constraints>
    ......
    <rsc_location id="MONITOR_location" rsc="MONITOR">
      <rule id="pref_MONITOR_location" score="INFINITY">
        <expression id="MONITOR_loc_exp" attribute="#uname" operation="eq"
                    value="cuzzonia"/>
      </rule>
    </rsc_location>
    <rsc_order id="MONITOR_orderconstr-01" first="GSstart" score="INFINITY"
               then="MONITOR"/>
    <rsc_colocation id="MONITOR_coloconstr-01" rsc="MONITOR" score="INFINITY"
                    with-rsc="GSstart"/>
  </constraints>
</configuration>
So, our intention is to keep resource MONITOR on the first node, cuzzonia.
It should only run on the second node, cuzzonib, if the first node crashes,
if openais is stopped on the first node, or if the resource is explicitly
migrated from the first node to the second node.
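The explicit-migration case mentioned above can be sketched with the crm
shell (assuming crmsh is available; resource and node names are the ones
from this thread):

```shell
# Move MONITOR off its preferred node explicitly; this creates a
# temporary location constraint pinning it to cuzzonib:
crm resource migrate MONITOR cuzzonib

# Later, drop that temporary constraint so the INFINITY location
# preference for cuzzonia takes effect again:
crm resource unmigrate MONITOR
```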
Stopping resource MONITOR on the first node leads to the following behavior:
it stops on the first node, then starts on the second node, then stops on the
second node and starts on the first node again. So the final state is correct.
Is there a way to start resource MONITOR on the first node again after it has
been stopped, without it being started and stopped on the second node in
between?
Thank you very much for your help.
-----Original Message-----
From: Andrew Beekhof [mailto:[email protected]]
Sent: Monday, March 08, 2010 11:26 AM
To: Haussecker, Armin
Cc: [email protected]
Subject: Re: [Openais] resource restart
On Wed, Mar 3, 2010 at 8:21 AM, Haussecker, Armin
<[email protected]> wrote:
> Hi,
>
> we have an openais cluster consisting of two nodes. A resource is started on
> the first node, and this resource should remain on the first node due to a
> suitable location constraint; it should also be started on the same node as
> another resource via a colocation constraint.
>
> If the resource is stopped and afterwards started again, we can see that it
> is first started on the second node, then stopped on the second node and
> restarted on the first node. So, finally, everything seems to work
> correctly.
>
> But how can we avoid the resource being started on the second node, then
> stopped on the second node and started on the first node? If the resource is
> stopped on the first node and afterwards started again, it should be
> restarted immediately on the first node, not started and stopped on the
> second node in the meantime.
It depends on a number of things, the most important of which is the
monitor function of your resource.
If it returns 7 (i.e. OCF_NOT_RUNNING, safely stopped), then the cluster has
no reason to put it back on the first node.
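The point about the monitor return code can be sketched as a minimal
OCF-style monitor action (a hypothetical stand-in, not the actual FSC/Xen
agent from this thread; the pidfile path is invented for illustration):

```shell
#!/bin/sh
# Minimal sketch of an OCF-style monitor action. The key behavior from
# the reply above: when the resource is cleanly stopped, monitor must
# exit with OCF_NOT_RUNNING (7) so the cluster knows it is safely down.

OCF_SUCCESS=0
OCF_NOT_RUNNING=7
PIDFILE="${PIDFILE:-/var/run/monitor-demo.pid}"   # hypothetical pidfile

monitor() {
    # Running and healthy -> 0 (OCF_SUCCESS); cleanly stopped -> 7.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        return $OCF_SUCCESS
    fi
    return $OCF_NOT_RUNNING
}

monitor
echo "monitor exit code: $?"
```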
resource-stickiness is also relevant here.
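For reference, the extract above sets default-resource-stickiness as a
cluster property, which newer Pacemaker versions replace with rsc_defaults.
A sketch with the crm shell (assuming crmsh; the value 100 is illustrative,
not taken from the thread's configuration):

```shell
# Cluster-wide default: a started resource resists being moved.
crm configure rsc_defaults resource-stickiness=100

# Or set the stickiness on the one resource only:
crm resource meta MONITOR set resource-stickiness 100
```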
_______________________________________________
Openais mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/openais