Terry Davis wrote:
Awesome. I rebooted and applied all available updates, and now it works. The only thing worth noting in the updates was a kernel update to 2.6.18-92.1.13.el5. I think the reboot did it (for some reason).

On Wed, Oct 1, 2008 at 12:06 PM, Terry Davis <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:

    On Wed, Oct 1, 2008 at 11:42 AM, Alasdair G Kergon <[EMAIL PROTECTED]
    <mailto:[EMAIL PROTECTED]>> wrote:

        I hope that problem was fixed in newer packages.

        Meanwhile try running 'clvmd -R' between some of the commands.

        If all else fails, you may have to kill the clvmd daemons in
        the cluster
        and restart them, or even add a 'vgscan' on each node before
        the restart.

        Alasdair
        --
        [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>

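For reference, the fallback Alasdair outlines above (kill the clvmd daemons across the cluster, run vgscan on each node, then restart clvmd) could be scripted roughly as below. The node names and passwordless ssh between hosts are assumptions, and the RUNNER=echo default makes this a dry run that only prints the commands it would issue:

```shell
#!/bin/sh
# Dry-run sketch of the clvmd restart fallback Alasdair describes.
# Node names are hypothetical; set RUNNER="" to actually execute over ssh.
RUNNER="${RUNNER:-echo}"

restart_clvmd() {
    for node in "$@"; do
        # Kill clvmd, rescan volume groups, then bring clvmd back up.
        $RUNNER ssh "$node" "killall clvmd; vgscan; clvmd"
    done
}

restart_clvmd nodeA nodeB
```

With RUNNER left at its echo default, this just prints one ssh command line per node so the sequence can be reviewed before running it for real.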


    Just a sanity check.  I killed all the clvmd daemons and started
    clvmd back up.  I created the PV on node A:

    [EMAIL PROTECTED] ~]# pvcreate /dev/sdh1
      Physical volume "/dev/sdh1" successfully created

    Node B knows nothing of /dev/sdh1, even though the underlying device does exist:
    [EMAIL PROTECTED] ~]# ls /dev/sdh*
    /dev/sdh


This is the problem. If you partition the device on one node, you must run 'partprobe' on all nodes so that they re-read their partition tables. Without doing this, LVM has no idea what /dev/sdh1 is and therefore cannot lock on it. After running partprobe, run 'clvmd -R' so that clvmd reloads its device cache and knows which devices are available. After that you can proceed with pvcreate, vgcreate, lvcreate, etc.
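Concretely, the sequence John describes could look like the sketch below. The node names and passwordless ssh between nodes are assumptions, and the RUNNER=echo default makes it a dry run that prints the commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch: after partitioning on one node, rescan on every node.
# Node names are hypothetical; set RUNNER="" to execute for real.
RUNNER="${RUNNER:-echo}"
DEV=/dev/sdh        # the shared device that was just partitioned

rescan_nodes() {
    dev="$1"; shift
    for node in "$@"; do
        # Re-read the partition table, then refresh clvmd's device cache.
        $RUNNER ssh "$node" "partprobe $dev && clvmd -R"
    done
}

rescan_nodes "$DEV" nodeA nodeB

# Once every node sees /dev/sdh1, LVM commands can proceed from any node:
$RUNNER pvcreate "${DEV}1"
```

The key ordering point is that partprobe must finish on every node before 'clvmd -R', and both before any pvcreate/vgcreate, since clvmd can only take cluster locks on devices it knows about.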
John

--
Linux-cluster mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-cluster