Quoting Digimer <[email protected]>:

On 07/19/2011 11:13 AM, [email protected] wrote:
Quoting Digimer <[email protected]>:

On 07/19/2011 10:55 AM, [email protected] wrote:
I can create a Volume Group and the data gets replicated from one
machine to the other.  This command fails though:

[root@thing1 lvm]# /usr/sbin/lvcreate -n vmdata -l 69972 thing0
    Logging initialised at Tue Jul 19 10:53:21 2011
    Set umask to 0077
    Setting logging type to disk
    Finding volume group "thing0"
    Archiving volume group "thing0" metadata (seqno 15).
    Creating logical volume vmdata
    Creating volume group backup "/etc/lvm/backup/thing0" (seqno 16).
  Error locking on node thing1.eyemg.com: device-mapper: create ioctl
failed: Device or resource busy
  Error locking on node thing2.eyemg.com: device-mapper: create ioctl
failed: Device or resource busy
  Failed to activate new LV.
    Creating volume group backup "/etc/lvm/backup/thing0" (seqno 17).
    Wiping internal VG cache

I don't know if this is a drbd issue or an lvm one.  I can't seem to find
anything on it.

Thanks
Ken Lowther
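A "device-mapper: create ioctl failed: Device or resource busy" error usually
means device-mapper already holds a mapping with the name lvcreate is trying
to create, or something else has the backing device open. A quick way to
check, sketched with standard device-mapper tools (the mapping name
thing0-vmdata is assumed from LVM's usual <vg>-<lv> naming, not taken from
the log):

```shell
# List all current device-mapper mappings; look for leftovers from
# earlier failed lvcreate attempts (dm names are <vg>-<lv>).
dmsetup ls

# Inspect the suspected stale mapping in detail (name assumed from
# the "thing0" VG and "vmdata" LV in the failing command).
dmsetup info thing0-vmdata

# Show which processes, if any, hold the DRBD device open.
fuser -v /dev/drbd0
```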

Obvious question first; Is the DRBD in Primary?

It was primary/primary, but this gave me an idea.  I changed the 'other'
node to secondary, and that did change the error message somewhat: the
second node now gives a 'Volume group for uuid not found' error instead
of the lock error.

[root@thing1 lvm]# /usr/sbin/lvcreate -n vmdata -l 69972 thing0
    Logging initialised at Tue Jul 19 11:09:02 2011
    Set umask to 0077
    Setting logging type to disk
    Finding volume group "thing0"
    Archiving volume group "thing0" metadata (seqno 19).
    Creating logical volume vmdata
    Creating volume group backup "/etc/lvm/backup/thing0" (seqno 20).
  Error locking on node thing1.eyemg.com: device-mapper: create ioctl
failed: Device or resource busy
  Error locking on node thing2.eyemg.com: Volume group for uuid not
found: hqcys8c9fDoBtX4UGLV0lmAbTZ7FMW8516YBHLfh64TzKNxRBqDH1wYg7IQHMRul
  Failed to activate new LV.
    Creating volume group backup "/etc/lvm/backup/thing0" (seqno 21).
    Wiping internal VG cache
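While experimenting with Primary/Secondary, it helps to confirm what each
node actually thinks the DRBD state is. A sketch using the standard drbd
tools, assuming the resource name drbd0 from the config:

```shell
# Connection state, roles, and disk states for all resources.
cat /proc/drbd

# Role of this node for the resource (Primary/Secondary).
drbdadm role drbd0

# Disk state (UpToDate/Inconsistent/...) for the resource.
drbdadm dstate drbd0
```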



Please share more details about your config so that folks can better
help you, rather than making wild stabs in the dark. :)

Using locking 3 in lvm with clvmd running.
resource drbd0 {
        on thing1.eyemg.com {
                disk /dev/cciss/c0d1;
                device /dev/drbd0;
                meta-disk internal;
                address 192.168.244.1:7788;
        }
        on thing2.eyemg.com {
                disk /dev/cciss/c0d1;
                device /dev/drbd0;
                meta-disk internal;
                address 192.168.244.2:7788;
        }
}
Ken
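One common cause of both the "Device or resource busy" and "Volume group for
uuid not found" symptoms in a DRBD-backed clustered VG is LVM scanning the
raw backing device as well as the DRBD device, so the same PV uuid shows up
on two block devices. A hedged lvm.conf sketch (the device paths are taken
from the resource config above; adjust as needed):

```
# /etc/lvm/lvm.conf (fragment)
devices {
    # Accept DRBD devices, reject the raw backing disks they sit on,
    # and reject everything not explicitly accepted.
    filter = [ "a|^/dev/drbd.*|", "r|^/dev/cciss/.*|", "r|.*|" ]
}
global {
    # Cluster-wide locking via clvmd, as already in use here.
    locking_type = 3
}
```

After changing the filter, re-run vgscan on both nodes (and restart or
refresh clvmd) so the device cache is rebuilt.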

Can I assume then that the cluster itself is running and quorate?

[root@thing1 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M    392   2011-07-18 15:35:22  thing1.eyemg.com
   2   M    592   2011-07-18 15:37:02  thing2.eyemg.com
[root@thing1 ~]#

Also,
is anything else (trying to) use the backing devices?

Not that I know of. I'm setting up a new cluster. What types of things might I look for that I don't know about?

Does this error
occur on both nodes?

Yes.
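If a half-created mapping from an earlier failed attempt is what is holding
the device busy, removing it by hand on each node and retrying is a
reasonable next step. A sketch, assuming the thing0-vmdata name follows
LVM's usual <vg>-<lv> device-mapper naming:

```shell
# On each node: remove the stale mapping left by the failed activation.
dmsetup remove thing0-vmdata

# Ask clvmd to refresh its view of the devices.
clvmd -R

# Then retry the create.
lvcreate -n vmdata -l 69972 thing0
```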

Thanks
Ken
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user