Hi Dan,
Thanks for your reply. I didn't set up any complex rules; I just set up a
two-node cluster and configured a dummy resource to observe the behavior.
I am wondering whether this behavior is the default when nothing is
specified (no resource constraints, no location constraints). Why does
Heartbeat work this way? When M1 stops and starts again, the resource gets
migrated twice, which seems unnecessary. If I had set "I prefer the
resource to run on M1" I could understand the behavior, but since I didn't
set anything, shouldn't M1 and M2 be treated as equal machines?
Looking forward to your reply.
Thanks.
Bin
The CIB:
<?xml version="1.0" ?>
<cib admin_epoch="0" crm_feature_set="3.0.1"
     dc-uuid="d111371b-51bd-41f0-a764-4e2f7616e47a" epoch="10"
     have-quorum="1" num_updates="313" validate-with="pacemaker-1.0">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version"
                value="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure"
                name="cluster-infrastructure" value="Heartbeat"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled"
                name="stonith-enabled" value="false"/>
      </cluster_property_set>
    </crm_config>
    <rsc_defaults/>
    <op_defaults/>
    <nodes>
      <node id="d111371b-51bd-41f0-a764-4e2f7616e47a" type="normal"
            uname="xcp-3"/>
      <node id="51fbafc2-2ca9-4123-b1e0-43927f6eccb6" type="normal"
            uname="xcp-1"/>
    </nodes>
    <resources>
      <primitive class="ocf" id="test-binch" provider="heartbeat"
                 type="binch">
        <operations>
          <op id="test-binch-monitor-3s" interval="3s" name="monitor"/>
        </operations>
      </primitive>
    </resources>
    <constraints/>
  </configuration>
</cib>
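
(For reference: to keep the resource on M2 after M1 comes back, one option
is a non-zero default resource stickiness, as Dan suggests below. A minimal
sketch; the score 100 is just an illustrative value, any positive score
works when there are no competing location constraints. With the crm shell:

    crm configure rsc_defaults resource-stickiness=100

which would populate the currently empty <rsc_defaults/> element roughly
like this:

```xml
<rsc_defaults>
  <meta_attributes id="rsc-options">
    <!-- a sticky resource prefers to stay where it is currently running -->
    <nvpair id="rsc-options-resource-stickiness"
            name="resource-stickiness" value="100"/>
  </meta_attributes>
</rsc_defaults>
```

With stickiness at its default of 0, the cluster sees M1 and M2 as equally
good and is free to move the resource back when M1 rejoins.)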
On Thu, Dec 2, 2010 at 8:03 PM, Dan Frincu <[email protected]> wrote:
> Hi,
>
> Bin Chen(sunwen_ling) wrote:
> > Hi guys,
> >
> > I have configured 2 machines, M1 and M2. The case is:
> >
> > 1) M1 starts, M2 starts, resource running on M1
> > 2) M1 poweroff, resource running on M2
> > 3) M1 poweron, resource migrated to M1 from M2
> >
> > In step 3, for me I want to leave the resource running at M2, just make
> the
> > M1 to be passive node. How to achieve that?
> >
> Set the default resource-stickiness or the individual resource's
> stickiness to a value higher than the location constraint's score. If the
> resource is part of a group, set the resource-stickiness to a value
> higher than the cumulative score of the group.
>
> Regards,
> Dan
> > Thanks.
> > Bin
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems
> >
>
> --
> Dan FRINCU
> Systems Engineer
> CCNA, RHCE
> Streamwide Romania
>