Hi there,

I am trying to set up a small two-node cluster with DRBD and
Pacemaker/Heartbeat, which will be using the ocfs2 filesystem. I want to
use ocfs2 as a non-clustered filesystem to keep things simple; I don't
really see the advantage of using ocfs2 as a clustered filesystem on top
of a cluster block device, but please enlighten me if I have
misunderstood something.
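
To illustrate what I mean by non-clustered: the idea is to format and
mount the DRBD device as a local (single-node) ocfs2 filesystem, roughly
along these lines (example only, not my exact commands; this assumes an
ocfs2-tools version whose mkfs.ocfs2 supports the local mount type):

         mkfs.ocfs2 -M local /dev/drbd0
         mount -t ocfs2 /dev/drbd0 /data

Well, here are my configs: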


node $id="609b0bcb-8904-4666-bc2c-c66b8b2b7f1e" sitasl00001
node $id="610f86ee-6bd4-4789-8da9-9555cfa90b7b" sitasl00002
primitive drbd ocf:linbit:drbd \
         params drbd_resource="r0" \
         op monitor interval="15s"
primitive fs ocf:heartbeat:Filesystem \
         params device="/dev/drbd0" directory="/data" fstype="ocfs2" \
         meta target-role="Started"
primitive ip ocf:heartbeat:IPaddr2 \
         params ip="10.65.68.239" nic="br0"
group cluster ip
ms ms_drbd drbd \
         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation cluster-with-drbdmaster inf: cluster ms_drbd:Master
property $id="cib-bootstrap-options" \
         stonith-enabled="false" \
         no-quorum-policy="ignore" \
         dc-version="1.0.5-3840e6b5a305ccb803d29b468556739e75532d56" \
         cluster-infrastructure="Heartbeat" \
         last-lrm-refresh="1269427049"
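
(For what it's worth, fs is not tied to the DRBD master yet; once it
mounts at all, I would add constraints along these lines, shown here
only as a sketch with placeholder IDs, not something I have applied:

         colocation fs-with-drbdmaster inf: fs ms_drbd:Master
         order fs-after-drbd inf: ms_drbd:promote fs:start
)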


When I try to start my fs resource, crm_mon shows me this:

============
Last updated: Wed Mar 24 11:44:34 2010
Stack: Heartbeat
Current DC: sitasl00002 (610f86ee-6bd4-4789-8da9-9555cfa90b7b) - partition with quorum
Version: 1.0.5-3840e6b5a305ccb803d29b468556739e75532d56
2 Nodes configured, unknown expected votes
3 Resources configured.
============

Online: [ sitasl00001 sitasl00002 ]

Master/Slave Set: ms_drbd
         Masters: [ sitasl00001 ]
         Slaves: [ sitasl00002 ]
Resource Group: cluster
     ip  (ocf::heartbeat:IPaddr2):       Started sitasl00001
fs      (ocf::heartbeat:Filesystem):    Started sitasl00001 (unmanaged) FAILED

Failed actions:
     fs_start_0 (node=sitasl00001, call=26, rc=6, status=complete): not configured
     fs_stop_0 (node=sitasl00001, call=27, rc=6, status=complete): not configured



And I get this in my ha-log:

Mar 24 11:44:29 sitasl00001 attrd: [18882]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-fs (1269427481)
Mar 24 11:44:29 sitasl00001 attrd: [18882]: info: attrd_perform_update: Sent update 59: last-failure-fs=1269427481
Filesystem[24021]:      2010/03/24_11:44:29 INFO: Running stop for /dev/drbd0 on /data
Filesystem[24021]:      2010/03/24_11:44:29 ERROR: /dev/drbd0: ocfs2 is not compatible with your environment.
Mar 24 11:44:29 sitasl00001 crmd: [18883]: info: process_lrm_event: LRM operation fs_stop_0 (call=27, rc=6, cib-update=48, confirmed=true) complete not configured
Mar 24 11:44:30 sitasl00001 attrd: [18882]: info: attrd_ha_callback: Update relayed from sitasl00002
Mar 24 11:44:30 sitasl00001 attrd: [18882]: info: attrd_ha_callback: Update relayed from sitasl00002
Mar 24 11:44:30 sitasl00001 attrd: [18882]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-fs (1269427482)
Mar 24 11:44:30 sitasl00001 attrd: [18882]: info: attrd_perform_update: Sent update 61: last-failure-fs=1269427482
Mar 24 11:44:31 sitasl00001 cib: [18879]: info: cib_stats: Processed 98 operations (714.00us average, 0% utilization) in the last 10min



Thx for any help :)
