peteridah wrote:
> Hello,
> 
> I have set up a 2-node Heartbeat cluster on SUSE Linux 10.2. I am using IBM
> RSA SlimLine II adapter cards in IBM x3655 servers connected to a SAN
> device. So far I have set up an ext3 filesystem, IP address &
> external/ibmrsa-telnet STONITH resources, which look OK to me judging by
> the crm_mon output below -
> 
> Refresh in 1s...
> 
> ============
> Last updated: Thu Jan 29 12:12:07 2009
> Current DC: cll-jcaps-002 (8dc7825a-c518-4550-b467-f5e162b3be27)
> 2 Nodes configured.
> 3 Resources configured.
> ============
> 
> Node: cll-jcaps-001 (7bb81766-a280-4946-8316-169ce4c1dfd5): online
> Node: cll-jcaps-002 (8dc7825a-c518-4550-b467-f5e162b3be27): online
> 
> kill01  (stonith:external/ibmrsa-telnet):       Started cll-jcaps-002
> kill02  (stonith:external/ibmrsa-telnet):       Started cll-jcaps-001
> Resource Group: ISGROUP
>     Fs  (ocf::heartbeat:Filesystem):    Started cll-jcaps-001
>     JVIP        (ocf::heartbeat:IPaddr2):       Started cll-jcaps-001
> 
> 
> 
> But for some reason the resources do not fail over when I switch off a node.
> I used crm_verify -L -V to check the config and I have no errors.
> However, an attempt to simulate a failure produces a warning in the logs
> -
> 
> WARN: native_color: Resource Fs cannot run anywhere
> pengine[5868]: 2009/01/27_12:17:06 WARN: native_color: Resource JVIP cannot
> run anywhere

When you shut down 001, the resources have nowhere left to run because of
your configuration:

<rsc_location id="cli-standby-ISGROUP" rsc="ISGROUP">
<rule id="prefered_cli-standby-ISGROUP" score="-INFINITY">
<expression attribute="#uname" id="6200ffd0-126c-4fa5-993f-a24627ce15a8"
operation="eq" value="cll-jcaps-002"/>
</rule>
</rsc_location>

This forbids ISGROUP from running on 002, and that's the only node left.
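If you want to check for other constraints like this one yourself, you can dump just the constraints section of the live CIB (assuming the Heartbeat-era cibadmin syntax on your version):

```shell
# Query only the constraints section of the running CIB
cibadmin -Q -o constraints
```

Any rsc_location rule with score="-INFINITY" in that output pins a resource away from the node its expression matches.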

Judging by the id "cli-standby-ISGROUP", you probably used crm_resource
-M or the GUI earlier to migrate ISGROUP. Both will have told you that
they were going to create this constraint and that - if you ever wanted
the resource on that particular node again - you would have to revert
the migration with crm_resource -U, or the corresponding function in the
GUI (I'm not familiar with that).

So do that and ISGROUP should start on 002.
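Spelled out, the un-migrate looks roughly like this (assuming your group is named ISGROUP, as in your config):

```shell
# Remove the cli-standby constraint so ISGROUP is allowed on cll-jcaps-002 again
crm_resource -U -r ISGROUP
```

Afterwards, watch crm_mon: the group should be able to start on 002 when 001 goes down.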

Regards
Dominik
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
