Where did all the pengine logs go?

Anyway, there are a bunch of resources that have something like this:

  <op id="smb_9_mon" interval="120s" name="start" timeout="60s"/>
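For comparison, the corrected form of that op would be the following (a sketch: the id, 120s interval and 60s timeout are simply carried over unchanged; only the action name changes):

```xml
<op id="smb_9_mon" interval="120s" name="monitor" timeout="60s"/>
```

The same one-word change applies to every op below that has name="start" together with an interval.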
This is wrong: start actions can't have an interval. I'm pretty sure you
want name="monitor" for all of these. Fix that and see how you go.

On Mon, Aug 31, 2009 at 1:15 PM, prakash hallalli <[email protected]> wrote:
> Hi All,
>
> Sorry for the late reply. I was trying some other possibilities to get it
> to run, but they did not work; it still gives the same problem.
> I have attached two log files to this mail.
> The first log file contains the messages I got after a reboot.
> The second file contains the messages I got when I tried a failover.
>
> Sometimes it gets to the end of the resources and then stops. I do not
> know why this happens; I have tried all the possibilities I know of, with
> no luck.
>
> # crm_mon
>
> ============
> Last updated: Mon Aug 31 16:14:45 2009
> Current DC: gtt5.linux.com (7d892d6c-d277-45c2-beb6-331fca5b3920)
> 2 Nodes configured.
> 1 Resources configured.
> ============
>
> Node: gtt5.linux.com (7d892d6c-d277-45c2-beb6-331fca5b3920): online
> Node: gtt4.linux.com (87dc2dcc-791b-4bfb-a971-b30fbd909255): online
>
> Resource Group: group_1
>     MailTo_1            (heartbeat::ocf:MailTo):      Started gtt5.linux.com
>     IPaddr_192_168_2_20 (heartbeat::ocf:IPaddr):      Started gtt5.linux.com
>     drbddisk_3          (heartbeat:drbddisk):         Started gtt5.linux.com
>     LVM_4               (heartbeat::ocf:LVM):         Started gtt5.linux.com
>     Filesystem_5        (heartbeat::ocf:Filesystem):  Started gtt5.linux.com
>     MakeMounts_6        (heartbeat:MakeMounts):       Started gtt5.linux.com
>     Filesystem_7        (heartbeat::ocf:Filesystem):  Started gtt5.linux.com
>     nfs_8               (lsb:nfs):                    Started gtt5.linux.com
>     smb_9               (lsb:smb):                    Started gtt5.linux.com
>     iscsi-target_10     (lsb:iscsi-target):           Started gtt5.linux.com
>     openfiler_11        (lsb:openfiler):              Stopped
>
> Failed actions:
>     openfiler_11_start_0 (node=gtt5.linux.com, call=36, rc=1): Error
>     IPaddr_192_168_2_20_start_0 (node=gtt4.linux.com, call=49, rc=1): Error
>
> This is my configuration file with the currently running resources:
>
> <cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="2" cib_feature_revision="2.0" ccm_transition="2" generated="true" dc_uuid="7d892d6c-d277-45c2-beb6-331fca5b3920" epoch="2" num_updates="1" cib-last-written="Mon Aug 31 15:53:33 2009">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <attributes>
>           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node: 4a3eac571f442c7cfcefc18fcaad35314460c1f6"/>
>           <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
>           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="stop"/>
>           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="0"/>
>           <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="0"/>
>           <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
>           <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
>           <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="true"/>
>           <nvpair id="cib-bootstrap-options-stop-orphan-resources" name="stop-orphan-resources" value="true"/>
>           <nvpair id="cib-bootstrap-options-stop-orphan-actions" name="stop-orphan-actions" value="true"/>
>           <nvpair id="cib-bootstrap-options-remove-after-stop" name="remove-after-stop" value="false"/>
>           <nvpair id="cib-bootstrap-options-short-resource-names" name="short-resource-names" value="true"/>
>           <nvpair id="cib-bootstrap-options-transition-idle-timeout" name="transition-idle-timeout" value="5min"/>
>           <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="20s"/>
>           <nvpair id="cib-bootstrap-options-is-managed-default" name="is-managed-default" value="true"/>
>           <nvpair id="cib-bootstrap-options-cluster-delay" name="cluster-delay" value="60s"/>
>           <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="-1"/>
>           <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="-1"/>
>           <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="-1"/>
>         </attributes>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node id="7d892d6c-d277-45c2-beb6-331fca5b3920" uname="gtt5.linux.com" type="normal"/>
>       <node id="87dc2dcc-791b-4bfb-a971-b30fbd909255" uname="gtt4.linux.com" type="normal"/>
>     </nodes>
>     <resources>
>       <group id="group_1">
>         <primitive class="ocf" id="MailTo_1" provider="heartbeat" type="MailTo">
>           <operations>
>             <op id="MailTo_1_mon" interval="120s" name="monitor" timeout="60s"/>
>           </operations>
>           <instance_attributes id="MailTo_1_inst_attr">
>             <attributes>
>               <nvpair id="MailTo_1_attr_0" name="email" value="r...@localhost"/>
>               <nvpair id="MailTo_1_attr_1" name="subject" value="ClusterFailover"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <primitive class="ocf" id="IPaddr_192_168_2_20" provider="heartbeat" type="IPaddr">
>           <operations>
>             <op id="IPaddr_192_168_2_20_mon" interval="5s" name="monitor" timeout="5s"/>
>           </operations>
>           <instance_attributes id="IPaddr_192_168_2_20_inst_attr">
>             <attributes>
>               <nvpair id="IPaddr_192_168_2_20_attr_0" name="ip" value="192.168.2.20"/>
>               <nvpair id="IPaddr_192_168_2_20_attr_3" name="broadcast" value="255.255.255.0"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <primitive class="heartbeat" id="drbddisk_3" provider="heartbeat" type="drbddisk">
>           <operations>
>             <op id="drbddisk_3_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>           <instance_attributes id="drbddisk_3_inst_attr">
>             <attributes>
>               <nvpair id="drbddisk_3_attr_1" name="1"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <primitive class="ocf" id="LVM_4" provider="heartbeat" type="LVM">
>           <operations>
>             <op id="LVM_4_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>           <instance_attributes id="LVM_4_inst_attr">
>             <attributes>
>               <nvpair id="LVM_4_attr_0" name="volgrpname" value="vg0_drbd"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <primitive class="ocf" id="Filesystem_5" provider="heartbeat" type="Filesystem">
>           <operations>
>             <op id="Filesystem_5_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>           <instance_attributes id="Filesystem_5_inst_attr">
>             <attributes>
>               <nvpair id="Filesystem_5_attr_0" name="device" value="/dev/drbd0"/>
>               <nvpair id="Filesystem_5_attr_1" name="directory" value="/cluster_metadata"/>
>               <nvpair id="Filesystem_5_attr_2" name="fstype" value="ext3"/>
>               <nvpair id="Filesystem_5_attr_3" name="options" value="defaults,noatime"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <primitive class="heartbeat" id="MakeMounts_6" provider="heartbeat" type="MakeMounts">
>           <operations>
>             <op id="MakeMounts_6_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>         </primitive>
>         <primitive class="ocf" id="Filesystem_7" provider="heartbeat" type="Filesystem">
>           <operations>
>             <op id="Filesystem_7_mon" interval="120s" name="monitor" timeout="60s"/>
>           </operations>
>           <instance_attributes id="Filesystem_7_inst_attr">
>             <attributes>
>               <nvpair id="Filesystem_7_attr_0" name="device" value="/dev/vg0_drbd/lvm0"/>
>               <nvpair id="Filesystem_7_attr_1" name="directory" value="/mnt/vg0_drbd/lvm0"/>
>               <nvpair id="Filesystem_7_attr_2" name="fstype" value="ext3"/>
>               <nvpair id="Filesystem_7_attr_3" name="options" value="defaults,usrquota,grpquota,acl,user_xattr"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <primitive class="lsb" id="nfs_8" provider="heartbeat" type="nfs">
>           <operations>
>             <op id="nfs_8_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>         </primitive>
>         <primitive class="lsb" id="smb_9" provider="heartbeat" type="smb">
>           <operations>
>             <op id="smb_9_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>         </primitive>
>         <primitive class="lsb" id="iscsi-target_10" provider="heartbeat" type="iscsi-target">
>           <operations>
>             <op id="iscsi-target_10_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>         </primitive>
>         <primitive class="lsb" id="openfiler_11" provider="heartbeat" type="openfiler">
>           <operations>
>             <op id="openfiler_11_mon" interval="120s" name="start" timeout="60s"/>
>           </operations>
>         </primitive>
>       </group>
>     </resources>
>     <constraints>
>       <rsc_location id="rsc_location_group_1" rsc="group_1">
>         <rule id="prefered_location_group_1" score="100">
>           <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="gtt4.linux.com"/>
>         </rule>
>       </rsc_location>
>     </constraints>
>   </configuration>
> </cib>
>
> Please help me: what should I do?
>
> Thanks,
>
> Prakash, KH
>
> On Mon, Aug 24, 2009 at 1:52 PM, Andrew Beekhof <[email protected]> wrote:
>
>> On Fri, Aug 21, 2009 at 2:40 PM, prakash hallalli <[email protected]> wrote:
>> > Hello all,
>> >
>> > I have configured a heartbeat server on an openfiler system with the
>> > crm option enabled. I am able to run my primary server successfully
>> > without any crm resource problems, but when I try a failover it does
>> > not work: during the failover the crm resources start to stop at the
>> > lvm filesystem mount. I tried checking the resource files but could
>> > not understand why it stops. I have given the system information
>> > below.
>>
>> Please send your configuration and logs as attachments.
>> Logs that have been pasted into the email body are unreadable.
>> _______________________________________________
>> Linux-HA mailing list
>> [email protected]
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> See also: http://linux-ha.org/ReportingProblems
