Hello,

I am running a two-node Heartbeat cluster (version 2.1.3) on SLES 10 SP2.
My configuration includes several resources that are mounted via the
OCF Filesystem resource agent. All of those mounts work fine, so I assumed
adding another mount would be no problem... but I was wrong.
While the cluster was running, I added another Filesystem resource to an
existing resource group. The device and directory are spelled correctly,
and the mount point exists. However, the device does not get mounted by
Heartbeat.
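
In case it helps, the primitive I added looks roughly like this in the CIB (abbreviated, nvpair ids shortened; device, directory, and fstype are the ones from my test commands below):

```xml
<primitive id="mount_ora37_data2" class="ocf" provider="heartbeat" type="Filesystem">
  <instance_attributes id="mount_ora37_data2_instance_attrs">
    <attributes>
      <nvpair id="mount_ora37_data2_device" name="device" value="/dev/vx/dsk/dg_ora37/data2_ora37"/>
      <nvpair id="mount_ora37_data2_directory" name="directory" value="/data2/ora37"/>
      <nvpair id="mount_ora37_data2_fstype" name="fstype" value="vxfs"/>
    </attributes>
  </instance_attributes>
</primitive>
```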
The only error I get in my log file (debug is set to 0 in ha.cf) is
the following:
-- snip --
Oct 26 17:15:26 db10 mgmtd: [8515]: ERROR: unpack_rsc_op: Hard error: 
mount_ora37_data2_monitor_0 failed with rc=2.
Oct 26 17:15:26 db10 mgmtd: [8515]: ERROR: unpack_rsc_op:   Preventing 
mount_ora37_data2 from re-starting anywhere in the cluster
-- snip --
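
If I read the OCF resource agent API correctly, rc=2 should be OCF_ERR_ARGS, i.e. invalid arguments, which Heartbeat treats as a hard error. A small lookup helper of my own (the code names are from the OCF RA API as I understand it, not part of Heartbeat):

```shell
#!/bin/sh
# Decode common OCF resource agent exit codes.
# Helper of my own; code names per the OCF RA API spec as I read it.
ocf_rc_name() {
    case "$1" in
        0) echo "OCF_SUCCESS" ;;
        1) echo "OCF_ERR_GENERIC" ;;
        2) echo "OCF_ERR_ARGS" ;;           # invalid arguments -> hard error
        3) echo "OCF_ERR_UNIMPLEMENTED" ;;
        4) echo "OCF_ERR_PERM" ;;
        5) echo "OCF_ERR_INSTALLED" ;;
        6) echo "OCF_ERR_CONFIGURED" ;;
        7) echo "OCF_NOT_RUNNING" ;;
        *) echo "UNKNOWN" ;;
    esac
}

ocf_rc_name 2   # the rc from my log line
```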

I must admit I am a little confused, because I have no "monitor"
operation configured. Where does "mount_ora37_data2_monitor_0" come
from?
I tried mounting the device manually via "mount" and via the agent directly
(/etc/ha.d/resource.d/Filesystem /dev/vx/dsk/dg_ora37/data2_ora37
/data2/ora37/ vxfs start), and both work fine.
It is a Veritas file system, but so are the other mounts, which work
without problems.

Can anyone please help, or suggest how to debug this further? Unfortunately
it is a production system, so I assume I cannot increase the debug level
without restarting the cluster. Is that correct?
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems