Andrew Beekhof wrote:
> On Tue, Dec 1, 2009 at 2:51 PM, Miriam Wiesner <[email protected]> wrote:
>   
>> Hello there,
>>
>> I'm setting up a highly available web server with DRBD.
>> I'm using heartbeat-2 and DRBD v8.
>>
>> I have two nodes: www-ha1 (DC) & www-ha2
>>
>> My services:
>> - IPaddr
>> - pingd
>> - stonith resource (my own script)
>> - apache2
>> - drbd (resource: /dev/drbd0 )
>> - filesystem
>>
>> After a bit of configuration every resource worked fine, except the
>> DRBD resource: it always starts its master instance (drbd:0) on
>> www-ha2 instead of on the DC.
>>     
>
> The location of the DC has no relevance to where resources are placed.
> The DC is an internal detail, it affects only who makes the decision,
> not which node should be promoted.
>   
OK, I thought that would influence my apache resource.
Apache currently runs on the DC, so I assumed its web content (drbd manages 
/www/) should be on the same node...? That's why I was confused.
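
(For what it's worth, I guess the proper way to tie apache to the DRBD
master would be a colocation/order pair rather than anything involving the
DC -- a sketch in the same CRM syntax as my CIB below, using my resource ids
www and drbd_ms; untested:)

```xml
<!-- Sketch, untested: colocate apache (www) with the DRBD master (drbd_ms),
     and only start apache after the promotion has happened. -->
<rsc_colocation id="colocation_www_drbdmaster" from="www"
        to="drbd_ms" to_role="master" score="INFINITY"/>
<rsc_order id="order_drbdmaster_www" from="www" action="start"
        to="drbd_ms" to_action="promote"/>
```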

> Btw. You probably want to start using the agent from Linbit.
> AFAIK the one in heartbeat is now deprecated.
>   
Thank you! I didn't know that before. I have downloaded the sources now.
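
(If I understand correctly, once the Linbit agent is installed, switching
over would just mean pointing the primitive at the other provider -- a
sketch, assuming the agent registers itself as ocf:linbit:drbd:)

```xml
<!-- Sketch: same primitive as in my CIB below, but using the
     Linbit-supplied agent (assumes it installs as ocf:linbit:drbd). -->
<primitive id="drbd" class="ocf" type="drbd" provider="linbit">
  <instance_attributes id="drbd_instance_attrs">
    <attributes>
      <nvpair id="drbd_resource_nv" name="drbd_resource" value="drbd0"/>
    </attributes>
  </instance_attributes>
</primitive>
```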
>   
>> Even if www-ha1 is the DC, the DRBD master will always start on www-ha2.
>>
>> I tried to force it to start on www-ha1 with following commands:
>>
>> www-ha1:/# drbdadm primary all
>> www-ha1:/# drbdsetup /dev/drbd0 primary --overwrite-data-of-peer
>>
>> # As far as I know, "--overwrite-data-of-peer" has replaced
>> "--do-what-I-say" in newer releases.
>>
>> When DRBD is started manually (not by heartbeat), it works
>> fine and `cat /proc/drbd` shows that www-ha1 is primary.
>>
>> When I start DRBD via heartbeat, the drbd:1 resource runs successfully
>> on www-ha1, while drbd:0 (master) keeps failing on www-ha2 every time
>> it tries to start.
>>
>> If I stop the resource and start it again, drbd:1 runs on www-ha1 and
>> drbd:0 (master) on www-ha2.
>> It seems my configuration is being ignored.
>>
>>
>> I also tried to set constraints in the cib with the lines:
>>
>> <rsc_location id="location_masterdrbd" rsc="drbd0">
>>        <rule id="rule:masterdrbd" role="master" score="100">
>>                <expression attribute="#uname" operation="eq"
>> value="www-ha1" />
>>        </rule>
>> </rsc_location>
>>
>> ...which should declare that the DRBD master runs on www-ha1... but this
>> doesn't work either...
>>
>> Have I forgotten anything? Or am I doing something wrong?
>> It would be nice if you could help me.
>>
>> Regards,
>> Miriam Wiesner
>>
>> PS: Excuse my English; I'm not a native English speaker.
>>
>> ########################################
>> My configs:
>> ########################################
>>
>>
>> _/etc/heartbeat/ha.cf:_
>>
>> debugfile /var/log/ha-debug
>> logfile    /var/log/ha-log
>> logfacility    daemon
>> keepalive 1
>> deadtime 10
>> crm on
>> bcast    eth0 eth1
>> auto_failback off
>> node www-ha1 www-ha2
>> ping 10.5.254.254
>> respawn hacluster /usr/lib/heartbeat/ipfail
>> respawn root /usr/lib/heartbeat/pingd -m 100 -d 5s
>> respawn root /etc/init.d/apache2
>> respawn root /usr/lib/heartbeat/hbagent -d
>> respawn root /etc/init.d/drbd
>>
>> ########################################
>>
>> _/etc/heartbeat/haresources:_        (should not be needed if crm is on)
>>
>> www-ha1 10.5.2.10 drbd::drbd0 Filesystem::/dev/drbd0::/mnt::ext3 apache2ctl
>>
>>
>>
>> _/var/lib/heartbeat/crm/cib.xml:_
>>
>>  <cib admin_epoch="0" have_quorum="true" ignore_dtd="false"
>> num_peers="2" cib_feature_revision="2.0" epoch="534" ccm_transition="4"
>> generated="true" dc_uuid="56980102-239c-444f-9e1b-2a2eb79cde3b"
>> num_updates="1" cib-last-written="Tue Dec  1 11:29:30 2009">
>>   <configuration>
>>     <crm_config>
>>       <cluster_property_set id="cib-bootstrap-options">
>>         <attributes>
>>           <nvpair id="cib-bootstrap-options-dc-version"
>> name="dc-version" value="2.1.3-node:
>> 552305612591183b1628baa5bc6e903e0f1e26a3"/>
>>           <nvpair id="cib-bootstrap-options-stonith-enabled"
>> name="stonith-enabled" value="true"/>
>>           <nvpair name="last-lrm-refresh"
>> id="cib-bootstrap-options-last-lrm-refresh" value="1259335996"/>
>>         </attributes>
>>       </cluster_property_set>
>>     </crm_config>
>>     <nodes>
>>       <node id="081dbee7-b837-4bcf-8890-7ca45436d2f7" uname="www-ha2"
>> type="normal">
>>         <instance_attributes
>> id="nodes-081dbee7-b837-4bcf-8890-7ca45436d2f7">
>>           <attributes>
>>             <nvpair id="standby-081dbee7-b837-4bcf-8890-7ca45436d2f7"
>> name="standby" value="off"/>
>>           </attributes>
>>         </instance_attributes>
>>       </node>
>>       <node id="56980102-239c-444f-9e1b-2a2eb79cde3b" uname="www-ha1"
>> type="normal">
>>         <instance_attributes id="nodes-56980102-239c-444f-9e1b-2a2eb79cde3b">
>>           <attributes>
>>             <nvpair id="standby-56980102-239c-444f-9e1b-2a2eb79cde3b"
>> name="standby" value="off"/>
>>           </attributes>
>>         </instance_attributes>
>>       </node>
>>     </nodes>
>>     <resources>
>>       <clone id="fence">
>>         <meta_attributes id="fence_meta_attrs">
>>           <attributes>
>>             <nvpair id="fence_metaattr_target_role" name="target_role"
>> value="started"/>
>>             <nvpair id="fence_metaattr_clone_max" name="clone_max"
>> value="2"/>
>>             <nvpair id="fence_metaattr_clone_node_max"
>> name="clone_node_max" value="1"/>
>>           </attributes>
>>         </meta_attributes>
>>         <primitive id="fence" class="stonith" type="external/epcnet"
>> provider="heartbeat">
>>           <instance_attributes id="fence_instance_attrs">
>>             <attributes>
>>               <nvpair id="565cda7a-4035-4405-bd14-8e5ce772e068"
>> name="host" value="192.168.1.97"/>
>>               <nvpair id="4fed7802-83e6-469a-915e-71449e33e938"
>> name="community" value="XXXXX"/>
>>             </attributes>
>>           </instance_attributes>
>>           <operations>
>>             <op id="3a508356-1404-4e42-ab64-ca7b01b55be5" name="start"
>> timeout="20" prereq="nothing"/>
>>             <op id="69aabb92-1156-4ba4-9990-5ddbc1d5be7a"
>> name="monitor" interval="5" timeout="25" start_delay="20" prereq="nothing"/>
>>           </operations>
>>         </primitive>
>>       </clone>
>>       <clone id="ping">
>>         <meta_attributes id="ping_meta_attrs">
>>           <attributes>
>>             <nvpair id="ping_metaattr_target_role" name="target_role"
>> value="started"/>
>>             <nvpair id="ping_metaattr_clone_max" name="clone_max"
>> value="2"/>
>>             <nvpair id="ping_metaattr_clone_node_max"
>> name="clone_node_max" value="1"/>
>>           </attributes>
>>         </meta_attributes>
>>         <primitive id="ping" class="ocf" type="pingd" provider="heartbeat">
>>           <meta_attributes id="ping:1_meta_attrs">
>>             <attributes>
>>               <nvpair id="ping:1_metaattr_target_role"
>> name="target_role" value="started"/>
>>             </attributes>
>>           </meta_attributes>
>>         </primitive>
>>       </clone>
>>       <primitive id="IP" class="ocf" type="IPaddr" provider="heartbeat">
>>         <meta_attributes id="IP_meta_attrs">
>>           <attributes>
>>             <nvpair id="IP_metaattr_target_role" name="target_role"
>> value="started"/>
>>           </attributes>
>>         </meta_attributes>
>>         <instance_attributes id="IP_instance_attrs">
>>           <attributes>
>>             <nvpair id="a32f8cfa-929a-42ff-a375-5b762399ba93" name="ip"
>> value="10.5.2.10"/>
>>           </attributes>
>>         </instance_attributes>
>>       </primitive>
>>       <primitive id="www" class="ocf" type="apache2.new"
>> provider="heartbeat">
>>         <meta_attributes id="www_meta_attrs">
>>           <attributes>
>>             <nvpair id="www_metaattr_target_role" name="target_role"
>> value="stopped"/>
>>           </attributes>
>>         </meta_attributes>
>>         <instance_attributes id="www_instance_attrs">
>>           <attributes>
>>             <nvpair id="bd1fd7cf-f0f0-4d7d-b446-7eca8839919d"
>> name="apache2ctl" value="/usr/sbin/apache2ctl"/>
>>           </attributes>
>>         </instance_attributes>
>>       </primitive>
>>       <master_slave id="drbd_ms">
>>         <meta_attributes id="drbd_ms_meta_attrs">
>>           <attributes>
>>             <nvpair name="target_role"
>> id="drbd_ms_metaattr_target_role" value="started"/>
>>             <nvpair id="drbd_ms_metaattr_clone_max" name="clone_max"
>> value="2"/>
>>             <nvpair id="drbd_ms_metaattr_clone_node_max"
>> name="clone_node_max" value="1"/>
>>             <nvpair id="drbd_ms_metaattr_master_max" name="master_max"
>> value="1"/>
>>             <nvpair id="drbd_ms_metaattr_master_node_max"
>> name="master_node_max" value="1"/>
>>             <nvpair id="drbd_ms_metaattr_notify" name="notify"
>> value="true"/>
>>             <nvpair id="drbd_ms_metaattr_globally_unique"
>> name="globally_unique" value="false"/>
>>           </attributes>
>>         </meta_attributes>
>>         <primitive id="drbd" class="ocf" type="drbd" provider="heartbeat">
>>           <instance_attributes id="drbd_instance_attrs">
>>             <attributes>
>>               <nvpair id="ec5c46a5-5fbd-4cbc-a76b-ca6e41bf87e7"
>> name="drbd_resource" value="drbd0"/>
>>             </attributes>
>>           </instance_attributes>
>>         </primitive>
>>       </master_slave>
>>       <primitive class="ocf" type="Filesystem" provider="heartbeat"
>> id="filesystem">
>>         <meta_attributes id="filesystem_meta_attrs">
>>           <attributes>
>>             <nvpair name="target_role"
>> id="filesystem_metaattr_target_role" value="stopped"/>
>>           </attributes>
>>         </meta_attributes>
>>         <instance_attributes id="filesystem_instance_attrs">
>>           <attributes>
>>             <nvpair id="91206e6c-b704-44e5-813a-10d0d2aae654"
>> name="device" value="/dev/drbd0"/>
>>             <nvpair id="8fecb180-cc84-4d62-8603-aff8b72ea0da"
>> name="directory" value="/mnt"/>
>>             <nvpair id="1edb36e2-f9c7-48e8-9f2e-390f5d8fa8bd"
>> name="fstype" value="ext3"/>
>>           </attributes>
>>         </instance_attributes>
>>       </primitive>
>>     </resources>
>>     <constraints>
>>       <rsc_location id="location_IP" rsc="IP">
>>         <rule id="prefered_location_IP" score="INFINITY">
>>           <expression attribute="#is_dc"
>> id="8126c73b-c535-4be4-977d-2aea6c1b5aa6" operation="eq" value="true"/>
>>         </rule>
>>       </rsc_location>
>>       <rsc_location id="location_www" rsc="www">
>>         <rule id="prefered_location_www" score="INFINITY">
>>           <expression attribute="#is_dc"
>> id="4a9b89b4-9a4c-4578-9f52-5b864aee4fb0" operation="eq" value="true"/>
>>         </rule>
>>       </rsc_location>
>>       <rsc_order id="order_drbd_filesystem" from="filesystem"
>> action="start" to="drbd_ms" to_action="promote"/>
>>       <rsc_colocation id="colocation_drbd_filesystem" to="drbd_ms"
>> to_role="master" from="filesystem" score="INFINITY"/>
>>       <rsc_location id="location_masterdrbd" rsc="drbd0">
>>         <rule id="rule:masterdrbd" role="master" score="100">
>>           <expression attribute="#uname" operation="eq" value="www-ha1"
>> id="fcb42674-dd28-473c-96d6-4924a2501776"/>
>>         </rule>
>>       </rsc_location>
>>     </constraints>
>>   </configuration>
>>  </cib>
>>
>> ########################################
>>
>> _/etc/drbd.conf:_
>>
>> global {
>>        usage-count no;
>> }
>>
>> resource drbd0 {
>>        protocol C;
>>
>>
>>        disk {
>>                on-io-error     detach;
>>        }
>>
>>        syncer {
>>                rate 10M;
>>                al-extents 257;
>>        }
>>
>>        on www-ha1 {
>>                device /dev/drbd0;
>>                disk /dev/mapper/volgrp1-lv_www;
>>                address 10.5.2.11:7788;
>>                flexible-meta-disk      /dev/md2;
>>        }
>>
>>        on www-ha2 {
>>                device  /dev/drbd0;
>>                disk    /dev/mapper/volgrp1-lv_www;
>>                address 10.5.2.12:7788;
>>                flexible-meta-disk      /dev/md2;
>>        }
>> }
>>
>> ########################################
>>
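
PS: While re-reading my CIB I noticed that the location constraint uses
rsc="drbd0", which is only the DRBD resource name, while the master/slave
resource in the CIB has id="drbd_ms". If the rule has to reference the CIB
resource id, it would presumably need to look like this (untested, with
new ids for the rule and expression):

```xml
<!-- Sketch, untested: same rule as in my CIB above, but referencing the
     master/slave resource id (drbd_ms) instead of the DRBD resource name. -->
<rsc_location id="location_masterdrbd" rsc="drbd_ms">
  <rule id="rule_masterdrbd" role="master" score="100">
    <expression attribute="#uname" operation="eq" value="www-ha1"
        id="expr_masterdrbd_uname"/>
  </rule>
</rsc_location>
```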


-- 
Miriam Wiesner    ***     [email protected]     ***      EDV-Abteilung
Max-Planck-Institut fuer ausländisches  und  internationales Strafrecht
Guenterstalstr. 73    ***    79100 Freiburg i.Br.    ***    Deutschland
Tel. +49 761 7081-331  *  http://www.mpicc.de  *  Fax. +49 761 7081-410
*** Fingerprint: 45BC 04C3 9542 2D44 7F1D  ED1C 84C7 907C 4604 440E ***

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
