It seems I have solved the issue. I am posting the solution in case someone else runs into a similar problem.
I added the Filesystem, the VIP, and one of the Tomcats to a group. I then
created a colocation constraint between the Filesystem and the second Tomcat,
and another between the VIP and the second Tomcat. If I instead create a
colocation between the group and the second Tomcat, the issue remains. As far
as I noticed, the order in which you add resources to groups and colocation
constraints may influence the result. The configuration below worked for me:
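In essence, the working pattern distills to the following fragment (taken from the full configuration below): the second Tomcat stays out of the group and is tied to the Filesystem and the VIP by its own colocation and order constraints:

```
# Group contains the filesystem, the VIP, and only the first Tomcat:
group LSC_TEST_GROUP LSC_TEST_FILESYSTEM LSC_TEST_VIP LSC_TEST_TOMCAT_CORE
# The second Tomcat depends on the filesystem and the VIP individually,
# so a failure of one Tomcat does not stop the other:
colocation LSC_UI_FS inf: LSC_TEST_FILESYSTEM LSC_TEST_TOMCAT_UI:Slave
colocation LSC_UI_VIP inf: LSC_TEST_VIP LSC_TEST_TOMCAT_UI:Slave
order LSC2 inf: LSC_TEST_FILESYSTEM LSC_TEST_TOMCAT_UI
order LSC3 inf: LSC_TEST_VIP LSC_TEST_TOMCAT_UI
```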
node $id="147009f5-c4c6-40a7-b1d2-1f9c72f1bf48" lsc-node01.velti.net
node $id="a7e25657-fb85-4cf1-9d9b-5a21484e1583" lsc-node02.velti.net
primitive CHEROKEE_APACHE ocf:heartbeat:LSC-Apache \
params env_user="cherokee" port="80" type="apache2" vip="10.130.32.50" \
meta migration-threshold="3" multiple-active="stop_start" is-managed="true" failure-timeout="10min" \
op monitor interval="10s" timeout="20s" start-delay="10s"
primitive CHEROKEE_FS ocf:heartbeat:LSC-Filesystem \
params env_user="cherokee" mount_path="/opt/data2" partition="/dev/mapper/mpath1p1" \
meta migration-threshold="9" multiple-active="stop_start" \
op monitor interval="10s"
primitive CHEROKEE_VIP ocf:heartbeat:LSC-VIPaddr \
params ip="10.130.32.50" netmask="255.255.255.0" \
meta migration-threshold="9" multiple-active="stop_start" failure-timeout="10min" \
op monitor interval="10s"
primitive LSC_TEST_FILESYSTEM ocf:heartbeat:LSC-Filesystem \
params partition="/dev/mapper/mpath0p1" mount_path="/opt/data1" env_user="LSC_TEST" \
meta migration-threshold="9" multiple-active="stop_start" \
op monitor interval="10s"
primitive LSC_TEST_TOMCAT_CORE ocf:heartbeat:LSC-Tomcat \
params env_user="LSC_TEST" port="8080" type="CORE" vip="10.130.32.49" \
meta migration-threshold="9" multiple-active="stop_start" target-role="Started" \
op monitor interval="30s" timeout="120s" start-delay="120s"
primitive LSC_TEST_TOMCAT_UI ocf:heartbeat:LSC-Tomcat \
params env_user="LSC_TEST" port="9080" type="UI" vip="10.130.32.49" \
meta migration-threshold="9" multiple-active="stop_start" is-managed="true" target-role="Started" \
op monitor interval="30s" timeout="120s" start-delay="120s"
primitive LSC_TEST_VIP ocf:heartbeat:LSC-VIPaddr \
params ip="10.130.32.49" netmask="255.255.255.0" \
meta migration-threshold="9" multiple-active="stop_start" \
op monitor interval="10s"
group CHEROKEE_GROUP CHEROKEE_FS CHEROKEE_VIP CHEROKEE_APACHE \
meta is-managed="true" target-role="Started"
group LSC_TEST_GROUP LSC_TEST_FILESYSTEM LSC_TEST_VIP LSC_TEST_TOMCAT_CORE \
meta is-managed="true" target-role="Started"
location cli-prefer-LSC_TEST_GROUP LSC_TEST_GROUP \
rule $id="cli-prefer-rule-LSC_TEST_GROUP" inf: #uname eq lsc-node02.velti.net
colocation LSC_UI_FS inf: LSC_TEST_FILESYSTEM LSC_TEST_TOMCAT_UI:Slave
colocation LSC_UI_VIP inf: LSC_TEST_VIP LSC_TEST_TOMCAT_UI:Slave
order CHEROKEE_ORDER inf: CHEROKEE_FS CHEROKEE_VIP CHEROKEE_APACHE
order LSC2 inf: LSC_TEST_FILESYSTEM LSC_TEST_TOMCAT_UI
order LSC3 inf: LSC_TEST_VIP LSC_TEST_TOMCAT_UI
property $id="cib-bootstrap-options" \
dc-version="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
cluster-infrastructure="Heartbeat" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
is-managed-default="true" \
cluster-recheck-interval="10min"
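For reference, constraints like these can also be added or inspected live from the crm shell; the commands below are a sketch of the usual crmsh invocations (not copied from an actual session):

```
# Add a colocation and an ordering constraint for the second Tomcat:
crm configure colocation LSC_UI_FS inf: LSC_TEST_FILESYSTEM LSC_TEST_TOMCAT_UI
crm configure order LSC2 inf: LSC_TEST_FILESYSTEM LSC_TEST_TOMCAT_UI
# Review the resulting configuration and the current cluster state:
crm configure show
crm_mon -1
```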
Kind Regards :)
Pavlos Polianidis | Application Support Engineer
Velti
44, Kifissias Avenue
GR-15125, Maroussi, Greece
T +30 210 6378 800
F +30 210 6378 888
M +30 695 5060 133
E [email protected]<mailto:[email protected]>
www.velti.com<http://www.velti.com/>
Velti is a global leader in mobile marketing and advertising solutions for
mobile operators, ad agencies, brands and media groups.
London | San Francisco | New York | Boston | Athens | Madrid | Sofia | Nicosia
| Moscow | Dubai | New Delhi | Mumbai | Beijing | Shanghai
From: Pavlos Polianidis
Sent: Tuesday, January 25, 2011 6:52 PM
To: General Linux-HA mailing list
Subject: Configuration Problem - Need 2 tomcats to run independently but both depend on other resources
Dear All,
I am trying to configure some resources using Pacemaker, as below:
- a filesystem
- a VIP address
- two Tomcats
As both Tomcats are on the same SAN partition and listen on a specific VIP, I
have put these resources in a group so that they always run on the same node.
When the filesystem or the VIP goes down, the Tomcats go down automatically.
Everything is fine up to this point. My problem is that I need the Tomcats to
run independently of each other, but as long as they are in a group, when the
one placed higher in the group goes down, the other goes down as well. Is
there any solution that lets them work independently while both still depend
on the Filesystem/VIP being up?
My current configuration is as follows:
node $id="147009f5-c4c6-40a7-b1d2-1f9c72f1bf48" lsc-node01.velti.net
node $id="a7e25657-fb85-4cf1-9d9b-5a21484e1583" lsc-node02.velti.net
primitive CHEROKEE_APACHE ocf:heartbeat:LSC-Apache \
params env_user="cherokee" port="80" type="apache2" vip="10.130.32.50" \
meta migration-threshold="3" multiple-active="stop_start" is-managed="true" failure-timeout="10min" \
op monitor interval="10s" timeout="20s" start-delay="10s"
primitive CHEROKEE_FS ocf:heartbeat:LSC-Filesystem \
params env_user="cherokee" mount_path="/opt/data2" partition="/dev/mapper/mpath1p1" \
meta migration-threshold="9" multiple-active="stop_start" \
op monitor interval="10s"
primitive CHEROKEE_VIP ocf:heartbeat:LSC-VIPaddr \
params ip="10.130.32.50" netmask="255.255.255.0" \
meta migration-threshold="9" multiple-active="stop_start" failure-timeout="10min" \
op monitor interval="10s"
primitive LSC_TEST_FILESYSTEM ocf:heartbeat:LSC-Filesystem \
params partition="/dev/mapper/mpath0p1" mount_path="/opt/data1" env_user="LSC_TEST" \
meta migration-threshold="9" multiple-active="stop_start" \
op monitor interval="10s"
primitive LSC_TEST_TOMCAT_CORE ocf:heartbeat:LSC-Tomcat \
params env_user="LSC_TEST" port="8080" type="CORE" vip="10.130.32.49" \
meta migration-threshold="9" multiple-active="stop_start" is-managed="true" target-role="Stopped" \
op monitor interval="30s" timeout="120s" start-delay="120s"
primitive LSC_TEST_TOMCAT_UI ocf:heartbeat:LSC-Tomcat \
params env_user="LSC_TEST" port="9080" type="UI" vip="10.130.32.49" \
meta migration-threshold="9" multiple-active="stop_start" is-managed="true" target-role="Started" \
op monitor interval="30s" timeout="120s" start-delay="120s"
primitive LSC_TEST_VIP ocf:heartbeat:LSC-VIPaddr \
params ip="10.130.32.49" netmask="255.255.255.0" \
meta migration-threshold="9" multiple-active="stop_start" \
op monitor interval="10s"
group CHEROKEE_GROUP CHEROKEE_FS CHEROKEE_VIP CHEROKEE_APACHE \
meta is-managed="true" target-role="Started"
group LSC_TEST_GROUP LSC_TEST_FILESYSTEM LSC_TEST_VIP LSC_TEST_TOMCAT_CORE LSC_TEST_TOMCAT_UI
location LSC_TEST_LOC LSC_TEST_GROUP inf: lsc-node01.velti.net
location cli-prefer-LSC_TEST_VIP LSC_TEST_VIP \
rule $id="cli-prefer-rule-LSC_TEST_VIP" inf: #uname eq lsc-node01.velti.net
colocation LSC_TEST_COLOS inf: LSC_TEST_FILESYSTEM:Master LSC_TEST_VIP:Master LSC_TEST_TOMCAT_CORE LSC_TEST_TOMCAT_UI
order CHEROKEE_ORDER inf: CHEROKEE_FS CHEROKEE_VIP CHEROKEE_APACHE
property $id="cib-bootstrap-options" \
dc-version="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
cluster-infrastructure="Heartbeat" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
is-managed-default="true" \
cluster-recheck-interval="10min"
I have tried several different configurations, with and without "colocation",
using "order", etc., and none of them worked for me. Can anyone please assist? :)
Kind Regards
Pavlos Polianidis
Pavlos Polianidis | Technical Support Specialist
Velti
44 Kifisias Ave.
15125 Marousi, Athens, Greece
T +30.210.637.8000
F +30.210.637.8888
M +30.695.506.0133
E [email protected]
www.velti.com
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
