Bastian, how are you bringing up clvmd? Manually, or via a modified init script?
I don't see it as a resource in your crm output.
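If clvmd isn't in the crm output, one option is to manage it (and its dlm dependency) as cloned Pacemaker resources. A minimal sketch, assuming the ocf:pacemaker:controld and ocf:lvm2:clvmd resource agents are available in your build — verify with `crm ra list ocf` first:

```
crm configure primitive dlm ocf:pacemaker:controld op monitor interval="60s"
crm configure primitive clvm ocf:lvm2:clvmd op monitor interval="60s"
crm configure group dlm-clvm dlm clvm
crm configure clone dlm-clvm-clone dlm-clvm meta interleave="true"
```

Grouping dlm before clvm ensures the lock manager is up before the LVM cluster daemon starts on each node.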
** Changed in: lvm2 (Ubuntu)
Status: Confirmed => Incomplete
--
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is a direct subscriber.
https://bugs.launchpad.net/bugs/719333
Title:
CLVM not locking LV's properly even when set to "exclusive"
Status in lvm2 - Logical Volume Manager:
Unknown
Status in “lvm2” package in Ubuntu:
Incomplete
Bug description:
Binary package hint: lvm2
Hello,
I have Ubuntu Lucid Server + openais + lvm + clvm running on central
storage.
Metadata updates are distributed properly:
root@xen1:~# vgs
  VG     #PV #LV #SN Attr   VSize VFree
  vgsas1   1   5   0 wz--nc 1.36t 1.33t
root@xen1:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi-a- 4.00g
root@xen2:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi-a- 4.00g
root@xen1:~# lvchange -an vgsas1/play1.xxx.net-disk
root@xen1:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi--- 4.00g
NOW: the metadata was obviously distributed correctly to xen2:
root@xen2:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi--- 4.00g
root@xen1:~# lvchange -aey vgsas1/play1.xxx.net-disk
root@xen1:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi-a- 4.00g
NOW: the LV is still shown as inactive on the second node:
root@xen2:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi--- 4.00g
NOW: even better, I can activate it "exclusively" on the second node as well:
root@xen2:~# lvchange -aey vgsas1/play1.xxx.net-disk
root@xen2:~# lvs
  LV                 VG     Attr   LSize Origin Snap% Move Log Copy% Convert
  play1.xxx.net-disk vgsas1 -wi-a- 4.00g
I can even mount it from both nodes:
root@xen2:~# mount /dev/vgsas1/play1.xxx.net-disk /mnt/
root@xen2:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/cciss/c0d0p1 57672700 9810272 44932816 18% /
none 525972 268 525704 1% /dev
none 525972 9512 516460 2% /dev/shm
none 525972 72 525900 1% /var/run
none 525972 0 525972 0% /var/lock
none 525972 0 525972 0% /lib/init/rw
/dev/mapper/vgsas1-play1.xxx.net--disk
4128448 754656 3164080 20% /mnt
AND on the first node:
root@xen1:~# !mou
mount /dev/vgsas1/play1.xxx.net-disk /mnt/
root@xen1:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/cciss/c0d0p1 24030076 14832148 7977272 66% /
none 525972 280 525692 1% /dev
none 525972 9392 516580 2% /dev/shm
none 525972 84 525888 1% /var/run
none 525972 0 525972 0% /var/lock
none 525972 0 525972 0% /lib/init/rw
/dev/mapper/vgsas1-play1.xxx.net--disk
4128448 754656 3164080 20% /mnt
If this were not a test setup but two VMs accessing one ext3 filesystem
simultaneously, I would be in serious trouble now!
By the way, clvm was recompiled against openais to get rid of cman.
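A symptom like this often means lvm.conf on the nodes is still using local file-based locking (locking_type = 1), in which case `-aey` is not enforced cluster-wide at all. A minimal sketch of the two settings to check — the sample file below is hypothetical; on a real node inspect /etc/lvm/lvm.conf itself:

```shell
# Hypothetical sample file for illustration; on a real node, grep
# /etc/lvm/lvm.conf instead of generating this.
CONF=./lvm.conf.sample
cat > "$CONF" <<'EOF'
global {
    # 3 = external locking library (clvmd). Without it, lvchange -aey
    # takes only a local lock and other nodes can still activate the LV.
    locking_type = 3
    # Never fall back to local locking when clvmd is unreachable.
    fallback_to_local_locking = 0
}
EOF
grep -q 'locking_type = 3' "$CONF" && echo "cluster locking enabled"
```

If locking_type is 1 (or fallback_to_local_locking is 1 and clvmd is down), both nodes will happily hand out "exclusive" activations independently.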
root@xen1:~# dpkg --list|grep lvm
ii  clvm  2.02.54-1ubuntu4.2  Cluster LVM Daemon for lvm2
ii  lvm2  2.02.54-1ubuntu4.1  The Linux Logical Volume Manager
root@xen1:~# crm status
============
Last updated: Tue Feb 15 14:16:39 2011
Stack: openais
Current DC: xen2 - partition with quorum
Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ xen1 xen2 ]
What's wrong here?
so long,
Bastian
_______________________________________________
Mailing list: https://launchpad.net/~ubuntu-ha
Post to : [email protected]
Unsubscribe : https://launchpad.net/~ubuntu-ha
More help : https://help.launchpad.net/ListHelp