We didn't start them anywhere.
When the cluster starts, it goes looking for any resources that were
already active:

pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op:
IPaddr_monitor_0 found active IPaddr on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op:
tomcat21-node1_monitor_0 found active tomcat21-node1 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op:
tomcat21-node2_monitor_0 found active tomcat21-node2 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op:
apache2_monitor_0 found active apache2 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op:
tomcat1-node1_monitor_0 found active tomcat1-node1 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op:
tomcat1-node2_monitor_0 found active tomcat1-node2 on www2test

It found lots... were they started at boot time by the OS?
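
If they were, one quick way to check is to ask the init system whether those
services are enabled at boot (a sketch only — the service names below are
guessed from the resource names, and the right tool depends on the distro):

```shell
# Illustrative check: are apache2/tomcat enabled at OS boot?
# (service names are assumptions based on the resource names)
for svc in apache2 tomcat; do
    if command -v chkconfig >/dev/null 2>&1; then
        # Red Hat / SUSE style init management
        chkconfig --list "$svc" 2>/dev/null || echo "$svc: no init script"
    elif command -v update-rc.d >/dev/null 2>&1; then
        # Debian / Ubuntu: look for start links in the runlevel dirs
        ls /etc/rc*.d/S*"$svc"* 2>/dev/null || echo "$svc: not enabled at boot"
    fi
done
# To disable (Red Hat/SUSE):  chkconfig apache2 off
# To disable (Debian/Ubuntu): update-rc.d -f apache2 remove
```

Cluster-managed resources should generally be started only by the cluster,
never by the OS init scripts as well.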

On Thu, Nov 6, 2008 at 16:46, Ehlers, Kolja <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> since the upgrade to the new version my cluster is not working as expected.
> I have set symmetric-cluster to false and have these location rules:
>
> <rsc_location id="loc-2" rsc="tomcat1-node1" node="www1test" score="INFINITY"/>
> <rsc_location id="loc-3" rsc="tomcat1-node1" node="www2test" score="-INFINITY"/>
> <rsc_location id="loc-4" rsc="tomcat1-node2" node="www2test" score="INFINITY"/>
> <rsc_location id="loc-5" rsc="tomcat1-node2" node="www1test" score="-INFINITY"/>
> <rsc_location id="loc-6" rsc="tomcat21-node1" node="www1test" score="INFINITY"/>
> <rsc_location id="loc-7" rsc="tomcat21-node1" node="www2test" score="-INFINITY"/>
> <rsc_location id="loc-8" rsc="tomcat21-node2" node="www2test" score="INFINITY"/>
> <rsc_location id="loc-9" rsc="tomcat21-node2" node="www1test" score="-INFINITY"/>
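
[For context, the opt-in behaviour above comes from the cluster property
mentioned earlier; a minimal sketch of how it would appear in the CIB
(the nvpair id is illustrative). Note that with symmetric-cluster="false"
every node already scores -INFINITY by default, so the explicit -INFINITY
constraints above should be redundant:]

```xml
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <!-- opt-in cluster: resources run only where explicitly allowed -->
    <nvpair id="option-symmetric-cluster"
            name="symmetric-cluster" value="false"/>
  </cluster_property_set>
</crm_config>
```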
>
> So I have:
>
> tomcat1-node1   (ocf::cr:tomcat1):      Started www1test
> tomcat21-node1  (ocf::cr:tomcat):       Started www1test
> tomcat1-node2   (ocf::cr:tomcat1):      Started www2test
> tomcat21-node2  (ocf::cr:tomcat):       Started www2test
>
> 1. If I stop one node everything is fine and I have:
>
> tomcat1-node2   (ocf::cr:tomcat1):      Started www2test
> tomcat21-node2  (ocf::cr:tomcat):       Started www2test
>
> 2. But if I bring node 1 back up, weird things happen: all resources are
> now started on node2.
>
> tomcat1-node1   (ocf::cr:tomcat1):      Started www2test
> tomcat21-node1  (ocf::cr:tomcat):       Started www2test
> tomcat1-node2   (ocf::cr:tomcat1):      Started www2test
> tomcat21-node2  (ocf::cr:tomcat):       Started www2test
>
> 3. And then the monitors fail:
>
> tomcat1-node1   (ocf::cr:tomcat1):      Started www1test
> tomcat1-node2   (ocf::cr:tomcat1):      Started www2test FAILED
> tomcat21-node2  (ocf::cr:tomcat):       Started www2test FAILED
>
> Failed actions:
>    tomcat21-node2_monitor_5000 (node=www2test, call=192, rc=7): complete
>    tomcat1-node2_monitor_5000 (node=www2test, call=191, rc=7): complete
>
> After that it all comes back to normal, but it's unacceptable for the
> resources to be restarted on the untouched node. I have attached the log of
> what happens after step 2.
>
> Thanks
>
>
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>