Thanks, Karl. It worked, and it made me familiar with the details of Heartbeat configuration.
And thank you, David, for that nice point. I did that with "tune2fs -i 0 /dev/sda3".

Karl wrote:
> I use just such a set-up on a three-node cluster of web servers; the disk
> system gets mounted automatically on the active node. (The SAN serves it up
> to all three, and any of the three can mount it, but the resource agent sees
> to it that it is only mounted by one node at a time.)
>
> Mine looks something like this:
>
> <primitive class="ocf" id="My_Disk" provider="heartbeat" type="Filesystem">
>   <operations>
>     <op id="My_Disk_mon" interval="25s" name="monitor" timeout="50s"/>
>   </operations>
>   <instance_attributes id="My_Disk_inst_attr">
>     <nvpair id="My_Disk_dev" name="device" value="/dev/sdb1"/>
>     <nvpair id="My_Disk_mountpoint" name="directory" value="/path_to_mount"/>
>     <nvpair id="My_Disk_fstype" name="fstype" value="ext3"/>
>     <nvpair id="My_Disk_status" name="target_role" value="started"/>
>   </instance_attributes>
> </primitive>
>
> David wrote:
>
> >>> On 5/29/2010 at 04:54 AM, Mozafar Roshany <[email protected]> wrote:
> > I have a mail system with two active/passive nodes using Heartbeat; these
> > two servers use an ext3 partition on SAN storage for mailboxes. I want that
> > partition always to be mounted on the active node. That is, when a node
> > switch occurs for any reason, the partition should be mounted/unmounted on
> > the new-active/previously-active server automatically. I saw the Filesystem
> > resource;
>
> Filesystem is the right one. If you're using ext3, you'll also want to look
> at tune2fs to ensure that you don't have an extended unplanned outage while
> ext3 spends time in fsck if it hasn't been run lately.

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
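P.S. For anyone finding this thread in the archives: the tune2fs tweak above can be checked and applied as sketched below. The device name /dev/sda3 is just the one from this thread; substitute your own SAN partition, and note that -c 0 (disabling the mount-count trigger) is an addition beyond the -i 0 mentioned above.

```shell
# Show the current forced-fsck triggers for the filesystem:
# "Maximum mount count" (count-based) and "Check interval" (time-based).
tune2fs -l /dev/sda3 | grep -Ei 'mount count|check interval'

# Disable both triggers so a failover never stalls on a surprise fsck.
# -i 0 disables the time-based check, -c 0 the mount-count-based one.
tune2fs -i 0 -c 0 /dev/sda3
```

If you disable both, remember to run fsck yourself during planned maintenance windows, since nothing will force a check anymore.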
