I am testing again with a simpler config file, containing only one master/slave resource.

In this case the resource is promoted to Master, but on the slave server DRBD is not actually running: the kernel module is not loaded, yet crm_mon reports drbd as running in the Slave role:

Master/Slave Set: ms-drbd0
   drbd0:0     (ocf::heartbeat:drbd):  Master debianquagga2
   drbd0:1     (ocf::heartbeat:drbd):  Started debianquagga1

Resource configuration:
<resources>
<master_slave id="ms-drbd0">
                       <meta_attributes id="ma-ms-drbd0">
                               <attributes>
                                        <nvpair id="ma-ms-drbd0-1" name="clone_max" value="2"/>
                                        <nvpair id="ma-ms-drbd0-2" name="clone_node_max" value="1"/>
                                        <nvpair id="ma-ms-drbd0-3" name="master_max" value="1"/>
                                        <nvpair id="ma-ms-drbd0-4" name="master_node_max" value="1"/>
                                        <nvpair id="ma-ms-drbd0-5" name="notify" value="yes"/>
                                        <nvpair id="ma-ms-drbd0-6" name="globally_unique" value="false"/>
                               </attributes>
                       </meta_attributes>
<primitive id="drbd0" class="ocf" provider="heartbeat" type="drbd">
                               <instance_attributes id="ia-drbd0">
                                       <attributes>
<nvpair id="ia-drbd0-1" name="drbd_resource" value="mail_disk"/>
                                       </attributes>
                               </instance_attributes>
                               <operations>
                                        <op id="op-ms-drbd2-1" name="monitor" interval="59s" timeout="60s" start_delay="30s" role="Master"/>
                                        <op id="op-ms-drbd2-2" name="monitor" interval="60s" timeout="60s" start_delay="30s" role="Slave"/>
                               </operations>

                       </primitive>
</master_slave>
</resources>

Why is Heartbeat not monitoring the service on the slave node?

Adrian Chapela wrote:
Hello,

I am running new tests, both to improve an old config and to better understand multi-state resources.

I can't make it work; something in my configuration must be wrong, but I can't see what. I am using the 2.1.4 release.

The cluster only performs a notify action and then a stop action. If I start drbd myself, the cluster brings drbd into an Unconfigured resource state. I don't know the reason. Should I specify a start action?
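If an explicit start operation is wanted (for example, to give DRBD a longer startup timeout), it would be declared alongside the existing monitor ops in the primitive's <operations> block. A minimal sketch in the same CIB style (the id and timeout value here are illustrative, not from the original config):

```xml
<operations>
  <!-- illustrative: explicit start op with a generous timeout for DRBD sync-up -->
  <op id="op-mail-disk-start" name="start" timeout="240s"/>
</operations>
```

Note that the cluster will issue start actions without such a declaration; an explicit op entry mainly overrides the default timeout.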

My config:
<cib generated="true" admin_epoch="0" epoch="32" have_quorum="true" ignore_dtd="false" num_peers="2" cib_feature_revision="2.0" crm_feature_set="2.0" num_updates="17" cib-last-written="Tue Oct 21 10:39:04 2008" ccm_transition="2" dc_uuid="369975ae-9c9a-497d-9aca-2a47cda0e4ce">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <attributes>
<nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.4-node: aa909246edb386137b986c5773344b98c6969999"/>
        </attributes>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="90375d05-1004-43f9-992e-4b516d75d50b" uname="debianquagga1" type="normal"/>
      <node id="369975ae-9c9a-497d-9aca-2a47cda0e4ce" uname="debianquagga2" type="normal"/>
    </nodes>
    <resources>
      <master_slave id="ms-drbd-mail-disk">
        <meta_attributes id="ma-ms-drbd-mail-disk">
          <attributes>
            <nvpair id="ma-ms-drbd-mail-disk-1" name="clone_max" value="2"/>
            <nvpair id="ma-ms-drbd-mail-disk-2" name="clone_node_max" value="1"/>
            <nvpair id="ma-ms-drbd-mail-disk-3" name="master_max" value="1"/>
            <nvpair id="ma-ms-drbd-mail-disk-4" name="master_node_max" value="1"/>
            <nvpair id="ma-ms-drbd-mail-disk-5" name="notify" value="yes"/>
            <nvpair id="ma-ms-drbd-mail-disk-6" name="globally_unique" value="false"/>
            <nvpair id="ma-ms-drbd-mail-disk-7" name="target_role" value="stopped"/>
          </attributes>
        </meta_attributes>
<primitive id="id-mail-disk" class="ocf" provider="heartbeat" type="drbd">
          <instance_attributes id="ia-mail-disk">
            <attributes>
<nvpair id="ia-mail-disk-1" name="drbd_resource" value="mail_disk"/>
            </attributes>
          </instance_attributes>
          <operations>
            <op id="op-mail-disk-1" name="monitor" interval="59s" timeout="10s" role="Master"/>
            <op id="op-mail-disk-2" name="monitor" interval="60s" timeout="10s" role="Slave"/>
          </operations>
        </primitive>
      </master_slave>
      <master_slave id="ms-drbd-samba-disk">
        <meta_attributes id="ma-ms-drbd-samba-disk">
          <attributes>
            <nvpair id="ma-ms-drbd-samba-disk-1" name="clone_max" value="2"/>
            <nvpair id="ma-ms-drbd-samba-disk-2" name="clone_node_max" value="1"/>
            <nvpair id="ma-ms-drbd-samba-disk-3" name="master_max" value="1"/>
            <nvpair id="ma-ms-drbd-samba-disk-4" name="master_node_max" value="1"/>
            <nvpair id="ma-ms-drbd-samba-disk-5" name="notify" value="yes"/>
            <nvpair id="ma-ms-drbd-samba-disk-6" name="globally_unique" value="false"/>
            <nvpair id="ma-ms-drbd-samba-disk-7" name="target_role" value="stopped"/>
          </attributes>
        </meta_attributes>
<primitive id="id-samba-disk" class="ocf" provider="heartbeat" type="drbd">
          <instance_attributes id="ia-samba-disk">
            <attributes>
<nvpair id="ia-samba-disk-1" name="drbd_resource" value="samba_disk"/>
            </attributes>
          </instance_attributes>
          <operations>
            <op id="op-samba-disk-1" name="monitor" interval="59s" timeout="10s" role="Master"/>
            <op id="op-samba-disk-2" name="monitor" interval="60s" timeout="10s" role="Slave"/>
          </operations>
        </primitive>
      </master_slave>
    </resources>
    <constraints>
      <rsc_location id="mail-disk-master-1" rsc="ms-drbd-mail-disk">
        <rule id="mail-disk-master-on-debianQuagga2" role="master" score="INFINITY">
          <expression id="mail-disk-exp-1" attribute="#uname" operation="eq" value="debianQuagga2"/>
        </rule>
      </rsc_location>
      <rsc_location id="samba-disk-master-1" rsc="ms-drbd-samba-disk">
        <rule id="samba-disk-master-on-debianQuagga2" role="master" score="INFINITY">
          <expression id="samba-disk-exp-1" attribute="#uname" operation="eq" value="debianQuagga2"/>
        </rule>
      </rsc_location>
    </constraints>
  </configuration>
</cib>
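For what it's worth, both master_slave resources in the config above carry the meta attribute target_role with the value "stopped"; that attribute tells the cluster what state it should keep the resource in, so a stopped target_role would be consistent with the observed notify-then-stop behaviour. A sketch of the same nvpair with the resource allowed to run (same id and attribute names as in the existing config):

```xml
<!-- sketch: target_role="started" lets the cluster start and promote the resource -->
<nvpair id="ma-ms-drbd-mail-disk-7" name="target_role" value="started"/>
```

Removing the nvpair entirely has a similar effect, since resources default to being runnable.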

Thank you!!


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
