Hello group! I'm pretty new to ocfs2 and clustered file systems in general. I was able to set up a 2 node cluster (CentOS 5.4) with ocfs2 1.4.4 on DRBD on top of an LVM volume.
Everything works like a charm and is rock solid, even under heavy load, so I'm really happy with it. However, one little problem remains: I'd like to do backups with snapshots. Creating the snapshot volume, mounting it, copying and unmounting all work as expected, but I can't delete the snapshot volume once it has been mounted. What I do is:

    lvcreate -L5G -s -n lv00snap /dev/vg00/lv00
    tunefs.ocfs2 -y --cloned-volume /dev/vg00/lv00snap
    mount -t ocfs2 /dev/vg00/lv00snap /mnt/backup
    (copy stuff)
    umount /mnt/backup
    lvremove -f /dev/vg00/lv00snap

lvremove fails, saying that the volume is open, and checking with lvdisplay shows "# open" as 1. And that's the funny thing: right after creating the snapshot volume, "# open" is 0, which is no surprise. After mounting the volume it is 2 - the same as for the other ocfs2 volume, which makes sense to me, as there are 2 nodes. But after unmounting the snapshot volume the count only drops to 1, not to 0, so LVM considers the volume still open. I also tried mounting read-only and/or adding "--fs-features=local" to tunefs.ocfs2, without success. At the moment I have to reboot the node to be able to remove the snapshot.

So what am I doing wrong? Thanks a lot for any hints!

Armin
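P.S. For completeness, here is the whole cycle as the little script I run. The copy destination /srv/backup is just a placeholder for whatever the "copy stuff" step is on your side, and the lvdisplay line only prints the open count - that is where I still see 1 instead of 0:

    #!/bin/sh
    # Snapshot backup cycle as described above.
    set -e

    SNAP=/dev/vg00/lv00snap

    lvcreate -L5G -s -n lv00snap /dev/vg00/lv00
    # Re-stamp UUID/label so the clone doesn't clash with the origin volume
    tunefs.ocfs2 -y --cloned-volume "$SNAP"

    mount -t ocfs2 "$SNAP" /mnt/backup
    cp -a /mnt/backup/. /srv/backup/        # placeholder copy step
    umount /mnt/backup

    # Open count check - after umount this still reports "# open 1" here
    lvdisplay "$SNAP" | grep "# open"

    lvremove -f "$SNAP"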