Hello,

Try running: pvscan && vgscan && vgmknodes

This should re-create the VG directory under /dev, as you're forcing it to re-create the device nodes (i.e. /dev/myvg/lvol0, etc.) without needing to stop clvmd. I haven't tested this, but it should work, and it should only be required the first time you make changes to the LVM configuration.
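Gonçalo's suggestion can be sketched as a small script. As he says, it is untested; the command-availability guards and messages are my own additions, and it would be run as root on the node that is missing its /dev/<vg> entries:

```shell
#!/bin/sh
# Sketch of the suggested node-refresh sequence. The guards and echo
# messages are additions for illustration; the three LVM commands are
# the ones suggested above. Run as root.
for cmd in pvscan vgscan vgmknodes; do
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd"    # rescan PVs, rescan VGs, then recreate /dev/<vg>/<lv> nodes
    else
        echo "skipping $cmd (LVM2 tools not found)"
    fi
done
echo "device-node refresh attempted"
```

Note that, if I read the man pages correctly, vgmknodes only recreates the special files for LVs that are already active on that node; it does not change any cluster locking state.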
Cheers,

Gonçalo Almeida Gomes
IT Systems Unix
Edifício Mar Mediterrâneo
Av. D. João II - Lote 1.06.2.4
1990-095 Lisboa-Portugal
+ 351 93 100 3631
[EMAIL PROTECTED]
http://www.sonaecom.pt

________________________________
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Collins, Kevin [Beeline]
Sent: Monday, 28 April 2008 16:43
To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
Subject: RE: [rhelv5-list] Clustered LVM

I'll probably be opening a support ticket with Red Hat on this today. I can't restart clvmd every time I need a new LV, as it requires unmounting all the active LVs in each clustered VG.

Thanks for confirming!

Kevin

________________________________
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Zavodsky, Daniel (GE Money)
Sent: Sunday, April 27, 2008 11:58 PM
To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
Subject: RE: [rhelv5-list] Clustered LVM

Hello,

I have been experiencing exactly the same issues... and I found the trick with restarting clvmd myself, too. :-) It looks like the problem happens when you are using both non-clustered and clustered LVM. I don't think you are doing anything wrong.

Regards,
Daniel

________________________________
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Collins, Kevin [Beeline]
Sent: Friday, April 25, 2008 9:08 PM
To: [email protected]
Subject: [rhelv5-list] Clustered LVM

Hi,

I am working through the process of building my first GFS installation and cluster. So far, things are going well: I have shared SAN disk that is visible to both nodes, the basic cluster is up, and I am now starting to create my LVM structure.
I spent a lot of time trying to determine why I was seeing errors similar to the following when trying to create LVs in a new VG:

    root# lvcreate -L 50M /dev/vgtest
      Rounding up size to full physical extent 52.00 MB
      Error locking on node cpafisxb: Volume group for uuid not found: IC070PzNGG68uEesi33dH902E4GeGEvwcAccZY4AdnJRkPHUL7EYJzL0Xxkg3eqV
      Error locking on node cpafisxa: Volume group for uuid not found: IC070PzNGG68uEesi33dH902E4GeGEvwcAccZY4AdnJRkPHUL7EYJzL0Xxkg3eqV
      Failed to activate new LV.

I noticed that I did not have a /dev/vgtest on the node where I created the VG. By googling the error message above, I discovered that another person had seen a similar problem and resolved it by restarting the clvmd service. I did the same thing and the problem went away: I no longer see that error when creating LVs in that VG, and I now have /dev/vgtest and its associated files (though only on the node I created it on).

Then I created a new VG, tried to create an LV within it, and hit the same error! I see in the clvmd man page that there is a "-R" option, which sounds like what should be done instead of restarting clvmd. I tried it, but no luck: same error, and no /dev directory created. Restarting clvmd is an instant fix.

So, what am I doing wrong here? I can't imagine I am supposed to restart clvmd every time I create a new VG, am I? Additionally, I can see the clustered VGs and LVs on my other node, but there are no files in /dev for the clustered VGs there; is this normal? I have restarted clvmd on that node too, with no effect.

Thanks,

Kevin
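The two workarounds Kevin describes (clvmd -R first, a full restart only as a last resort) can be sketched like this. The availability guard, the echo messages, and the status variable are my additions; "service" is the usual RHEL 5 way to restart the daemon, and root privileges are assumed:

```shell
#!/bin/sh
# Sketch of the workarounds from the thread. "clvmd -R" asks every
# clvmd in the cluster to reread its LVM metadata; a full restart is
# the heavier fallback, which (per the thread) requires unmounting
# all active clustered LVs first. Guards and messages are additions.
if command -v clvmd >/dev/null 2>&1; then
    if clvmd -R; then
        echo "clvmd caches refreshed cluster-wide"
        status=refreshed
    else
        echo "refresh failed; restarting clvmd (unmount clustered LVs first!)"
        service clvmd restart
        status=restarted
    fi
else
    echo "clvmd not installed on this machine; nothing to refresh"
    status=skipped
fi
```

As the thread shows, -R did not help in Kevin's case, so treat this as the documented intent of the option rather than a guaranteed fix.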
_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list
