On Tuesday 16 November 2010 17:47:44 Robinson, Eric wrote:
> >> I'm not sure if this list or the DRBD list is the right one to ask
> >> this. Is it possible to deploy a 3-node CRM-based cluster where:
> >>    -- nodes A and C share resource R1 on /dev/drbd0
> >>    -- nodes B and C share resource R2 on /dev/drbd1
> >>    -- resource constraints prevent R1 from running on node B and
> >> prevent resource R2 from running on node A?
> >
> > should work.
> 
> Good to hear. Do you think the following config would do the job?
> 
> -----
> 
> node $id="6080642c-bad3-4bb8-80ba-db6b1f7a0735" ha07b.mydomain.com \
>         attributes standby="off"
> node $id="740538ba-ded5-43b1-95c0-ef898dc72581" ha07a.mydomain.com \
>         attributes standby="off"
> node $id="b3b4bec2-19e2-4096-8914-febddc5ae42a" ha07c.mydomain.com \
>         attributes standby="off"
> primitive p_clust04_ip ocf:heartbeat:IPaddr2 \
>         params ip="192.168.10.206" cidr_netmask="32" \
>         op monitor interval="15s"
> primitive p_clust05_ip ocf:heartbeat:IPaddr2 \
>         params ip="192.168.10.207" cidr_netmask="32" \
>         op monitor interval="16s"
> primitive p_drbd0 ocf:linbit:drbd \
>         params drbd_resource="ha01_mysql" \
>         op monitor interval="17s" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="100"
> primitive p_drbd1 ocf:linbit:drbd \
>         params drbd_resource="ha02_mysql" \
>         op monitor interval="18s" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="100"
> primitive p_fs_ha01 ocf:heartbeat:Filesystem \
>         params device="/dev/drbd0" directory="/ha01_mysql" fstype="ext3"
> primitive p_fs_ha02 ocf:heartbeat:Filesystem \
>         params device="/dev/drbd1" directory="/ha02_mysql" fstype="ext3"
> group g_clust04 p_fs_ha01 p_clust04_ip
> group g_clust05 p_fs_ha02 p_clust05_ip
> ms ms_drbd0 p_drbd0 \
>         meta master-max="1" master-node-max="1" clone-max="2" \
>         clone-node-max="1" notify="true" target-role="Master"
> ms ms_drbd1 p_drbd1 \
>         meta master-max="1" master-node-max="1" clone-max="2" \
>         clone-node-max="1" notify="true" target-role="Master"
> location l_prefer_1 g_clust04 50: ha07a.mydomain.com
> location l_prefer_2 g_clust05 50: ha07b.mydomain.com
> location l_prevent_1 g_clust05 -inf: ha07a.mydomain.com
> location l_prevent_2 ms_drbd1 -inf: ha07a.mydomain.com
> location l_prevent_3 g_clust04 -inf: ha07b.mydomain.com
> location l_prevent_4 ms_drbd0 -inf: ha07b.mydomain.com
> order o_drbd0_then_g_clust04 inf: ms_drbd0:promote g_clust04:start
> order o_drbd1_then_g_clust05 inf: ms_drbd1:promote g_clust05:start
> property $id="cib-bootstrap-options" \
>         dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
>         cluster-infrastructure="Heartbeat" \
>         stonith-enabled="false" \
>         symmetric-cluster="true" \
>         last-lrm-refresh="1289916090" \
>         no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> 
> 
> --
> Eric Robinson

You are missing a colocation constraint between the Filesystem and the DRBD master.

I'd do something like:

location prevent_drbd0_on_b ms_drbd0 -inf: ha07b
location prevent_drbd1_on_a ms_drbd1 -inf: ha07a
order group0_after_drbd0 inf: ms_drbd0:promote group0:start
colocation group0_with_drbd0 inf: group0:Started ms_drbd0:Master
(...)

Perhaps also give a small negative colocation score between the two masters to
prevent them from running on the same node.
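
A minimal sketch of that anti-colocation, reusing the ms names from the config
above (the -100 score is only an example; any finite negative score expresses a
preference rather than a hard ban):

colocation masters_apart -100: ms_drbd0:Master ms_drbd1:Master

With a score of -inf instead, the two masters could never share a node, even if
only one node were left standing.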

Greetings,
-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
