The crm_mon from the latest pacemaker build doesn't include a -f option:

$ crm_mon -r -i 2 -f
crm_mon: invalid option -- f
Is there another way to find this information?

-Ben

On Mon, Aug 25, 2008 at 4:23 AM, Björn Boschman <[EMAIL PROTECTED]> wrote:
> Hi,
>
> check the failed actions and failure-counters
>
> crm_mon -rfi 2
>
> Ben Beuchler schrieb:
>>
>> I think I must be missing something simple. My test cluster is 3
>> nodes (test01, test02, hyperaxe). For the moment test01 is doing
>> nothing. I'm trying to get drbd/nfs working on the other two. The
>> DRBD bit was working great until I rolled in the services that depend
>> on it (filesystem, nfs, ip address).
>>
>> Since I added the group containing the additional services, along with
>> the constraints that tie them to the DRBD resource, all of the
>> resources fail to run. Note the output of crm_verify -L -V:
>>
>> crm_verify[1392]: 2008/08/22_18:11:13 ERROR: unpack_rsc_op: Hard error: drbd-www:0_monitor_0 failed with rc=5.
>> crm_verify[1392]: 2008/08/22_18:11:13 ERROR: unpack_rsc_op: Preventing ms-drbd-www from re-starting on test01
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: unpack_rsc_op: nfs_server_monitor_0 found active nfs_server on test01
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: unpack_rsc_op: drbd-www:0_monitor_0 found active drbd-www:0 on test02
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: unpack_rsc_op: drbd-www:0_monitor_0 found active drbd-www:0 on hyperaxe
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: native_color: Resource drbd-www:0 cannot run anywhere
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: native_color: Resource drbd-www:1 cannot run anywhere
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: native_color: Resource fs_www cannot run anywhere
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: native_color: Resource nfs_ip cannot run anywhere
>> crm_verify[1392]: 2008/08/22_18:11:13 WARN: native_color: Resource nfs_server cannot run anywhere
>>
>> The first error makes sense, as drbd is neither installed nor
>> configured on test01. I don't know why it says it found an active
>> "nfs_server" on test01, as it's definitely not running. I've since
>> removed the init script as well. I also have no idea why drbd-www:0
>> is flagged as active on test02 and hyperaxe. It appears the module is
>> loaded, apparently by the ocf drbd script as I've removed the system
>> drbd init script, but drbd is "Unconfigured". No resources visible in
>> /proc/drbd.
>>
>> And I'm assuming fs_www, nfs_ip, and nfs_server cannot run anywhere
>> because they have order/colocation constraints tying them to drbd-www.
>>
>> The <resources> and <constraints> configs are pasted below. The full
>> CIB can be viewed here: http://pastebin.com/m50fc6123
>>
>> I have symmetric-cluster = false.
>>
>> What am I doing wrong?
>>
>> Thanks!
>>
>> -Ben
>>
>> <resources>
>>   <master_slave id="ms-drbd-www">
>>     <meta_attributes id="71bf85c6-176f-4d08-bfc3-22f62ead87eb">
>>       <attributes>
>>         <nvpair name="clone_max" value="2" id="519fb092-7c05-4f19-abba-0a55773d6348"/>
>>         <nvpair name="clone_node_max" value="1" id="4d480e7c-361f-4e85-8ff7-b36fa7228925"/>
>>         <nvpair name="master_max" value="1" id="4a9b6670-42d7-4133-8fc5-4b8804ea49dd"/>
>>         <nvpair name="master_node_max" value="1" id="73262db6-8c33-4e5a-adf6-e34fc8fd08ba"/>
>>         <nvpair name="notify" value="yes" id="956bbe97-fc48-4f23-8b7e-7c048369e8e9"/>
>>         <nvpair name="globally_unique" value="false" id="4e7daeda-4cf0-4c33-83c1-2de4a032476c"/>
>>         <nvpair name="target_role" value="stopped" id="ee6d447a-c844-4bbb-91fe-9ce03536a2db"/>
>>       </attributes>
>>     </meta_attributes>
>>     <primitive id="drbd-www" class="ocf" provider="heartbeat" type="drbd">
>>       <instance_attributes id="34e97979-3d63-4033-9079-cc5b07ded44c">
>>         <attributes>
>>           <nvpair name="drbd_resource" value="www" id="9e161e49-17a3-438a-979b-14aeb71a7416"/>
>>         </attributes>
>>       </instance_attributes>
>>       <operations>
>>         <op name="monitor" interval="59s" timeout="10s" role="Master" id="1b2f0be4-b348-45e2-b2fe-dee233689880"/>
>>         <op name="monitor" interval="60s" timeout="10s" role="Slave" id="5f1f5d3e-ec93-4358-b809-8b1a0c9a4436"/>
>>       </operations>
>>     </primitive>
>>   </master_slave>
>>   <group id="nfs">
>>     <meta_attributes id="32882692-c39a-4a23-bf44-9b08537383b0">
>>       <attributes>
>>         <nvpair name="target_role" value="#default" id="33ae083e-0665-4f94-b6a2-0e92e4e10be1"/>
>>       </attributes>
>>     </meta_attributes>
>>     <primitive id="fs_www" class="ocf" type="Filesystem" provider="heartbeat">
>>       <instance_attributes id="404a647f-ca3b-46bd-9fe6-4f95a7099d77">
>>         <attributes>
>>           <nvpair name="device" value="/dev/drbd0" id="111eb283-ebc8-4702-8852-ef17bc57a4f7"/>
>>           <nvpair name="directory" value="/www" id="56bf20a3-37e8-400b-a92a-ef98e86dafa7"/>
>>           <nvpair name="fstype" value="xfs" id="4220aaed-3300-425c-9c2b-9eee35376338"/>
>>         </attributes>
>>       </instance_attributes>
>>     </primitive>
>>     <primitive id="nfs_ip" class="ocf" type="IPaddr" provider="heartbeat">
>>       <instance_attributes id="nfs_ip">
>>         <attributes>
>>           <nvpair id="nfs_ip-ip" name="ip" value="192.168.22.213"/>
>>         </attributes>
>>       </instance_attributes>
>>     </primitive>
>>     <primitive id="nfs_server" class="lsb" type="nfs-kernel-server"/>
>>   </group>
>> </resources>
>>
>> <constraints>
>>   <rsc_location id="drbd-www-loc-1" rsc="ms-drbd-www" node="hyperaxe" score="100"/>
>>   <rsc_location id="drbd-www-loc-2" rsc="ms-drbd-www" node="test02" score="100"/>
>>   <rsc_location id="drbd-www-loc-3" rsc="nfs" node="hyperaxe" score="100"/>
>>   <rsc_location id="drbd-www-loc-4" rsc="nfs" node="test02" score="100"/>
>>   <rsc_location id="drbd-www-master-loc-1" rsc="ms-drbd-www">
>>     <rule role="master" score="200" id="9c815988-d347-46a3-8556-382e6b07275f">
>>       <expression attribute="#uname" operation="eq" value="test02" id="7bb8206f-d5f5-419d-99a1-f0bf6ef93c3b"/>
>>     </rule>
>>   </rsc_location>
>>   <rsc_order from="nfs" action="start" to="ms-drbd-www" to_action="promote" id="3332c7f7-41ba-426a-9ebb-56793d990549"/>
>>   <rsc_colocation to="ms-drbd-www" to_role="master" from="nfs" score="infinity" id="3f453dc6-0db2-43db-9701-73a52d99145a"/>
>> </constraints>
>>
>> _______________________________________________
>> Linux-HA mailing list
>> [email protected]
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> See also: http://linux-ha.org/ReportingProblems
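[Editor's note on the original question — reading failure counts when crm_mon lacks -f: Pacemaker records them as transient node attributes named fail-count-<resource> in the CIB status section, and these can be queried with crm_failcount or crm_attribute. A minimal sketch follows; the exact option letters vary between builds and are assumptions here, so confirm them with --help on your version before relying on them.]

    # Query the fail count for resource drbd-www:0 on node test02.
    # Assumed options: -G (get), -r (resource id), -U (node name).
    crm_failcount -G -r drbd-www:0 -U test02

    # Roughly equivalent via crm_attribute, reading the status-section
    # attribute directly (attribute name fail-count-<resource>):
    crm_attribute -t status -U test02 -n fail-count-drbd-www:0 -G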
