You could try doing a vgexport then a vgimport. You probably need to blow away your LVM cache too.

There are ways to make it work without clvmd, but you can't have the VG imported on both hosts at the same time.
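
Something along these lines on the node that should take over (a sketch only; the VG name "replicated" is taken from your output, and the cache path varies by distro and LVM version — older releases keep it at /etc/lvm/.cache):

  # on the host that currently holds the VG:
  vgchange -an replicated      # deactivate all LVs in the VG
  vgexport replicated          # mark the VG as exported in its metadata

  # on the host that should take over:
  rm -f /etc/lvm/cache/.cache  # blow away the stale LVM cache
  vgscan                       # rescan; the VG should now show up as exported
  vgimport replicated          # import it on this host
  vgchange -ay replicated      # activate the LVs here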

On 10/3/11 12:18 PM, [email protected] wrote:
Hi,

thanks for the answer. No, I don't use clustered LVM. I tried deactivating and reactivating the VG from the second node (client01), but nothing changed:

client01:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "pve" using metadata type lvm2
  Found volume group "replicated" using metadata type lvm2
client01:~# vgchange -an
  Can't deactivate volume group "pve" with 4 open logical volume(s)
client01:~# vgchange -an replicated
  0 logical volume(s) in volume group "replicated" now active
client01:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "pve" using metadata type lvm2
  Found volume group "replicated" using metadata type lvm2
client01:~# vgchange -ay replicated
  2 logical volume(s) in volume group "replicated" now active
client01:~# lvdisplay
output shows only the local LVs

---------------------------------------

If I do lvdisplay on the first node (client00), I see the local LVs and the DRBD-backed LVs:
  --- Logical volume ---
  LV Name                /dev/replicated/vm-101-disk-1
  VG Name                replicated
  LV UUID                i7NM1a-heRf-ZufV-5pVJ-NsU8-0xHh-d2vALH
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Logical volume ---
  LV Name                /dev/replicated/vm-104-disk-1
  VG Name                replicated
  LV UUID                eEdV1h-42uJ-GH56-o0f8-Nk2i-DaMm-G1er2X
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

I think it should work without clustered LVM, shouldn't it?

Best regards,
Markus

Quoting David Coulson <[email protected]>:

 Are you using clustered LVM? If not, you probably should be.
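
With clvmd the LVM metadata changes get propagated between the nodes automatically. Roughly, the change it needs is this (a sketch of the stock /etc/lvm/lvm.conf; it has to be done on both nodes, and the clvmd service has to be running on both):

  # /etc/lvm/lvm.conf
  global {
      locking_type = 3    # clustered locking through clvmd instead of the default local file locking
  }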

You could try deactivating the VG then reactivating it and seeing if the new primary sees the LV.
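
Roughly (VG name assumed from your output), on the node that should take over:

  vgchange -an replicated    # deactivate the VG
  vgchange -ay replicated    # reactivate it
  lvscan                     # check whether the DRBD-backed LVs show up now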

On 10/3/11 6:02 AM, [email protected] wrote:
Dear all,

I want to build an HA setup with virtual machines on two hardware nodes. I have already set up a nested LVM-on-DRBD configuration, but from the second node I can't see the LVs on the DRBD device, even though pvscan shows the PV.

Does anybody have an idea what's wrong with my setup? I think I need a primary/primary DRBD device, because if one hardware node fails I want to start the KVM containers on the other one, and for that I need the containers' latest data there.
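
As far as I understand, the primary/primary part needs roughly this in the DRBD 8.3 resource definition (resource name r0 assumed, other sections left out), and /proc/drbd below already shows Connected Primary/Primary:

  resource r0 {
      startup {
          become-primary-on both;    # promote both nodes automatically on startup
      }
      net {
          allow-two-primaries;       # required for Primary/Primary operation
      }
      # disk, syncer and "on <host>" sections as in the existing configuration
  }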

Please find some logs below...

Thanks a lot for your help in advance!

Best regards,
Markus

Client00:
---------
client00:~# lvscan
 ACTIVE            '/dev/pve/swap' [15.00 GB] inherit
 ACTIVE            '/dev/pve/root' [96.00 GB] inherit
 ACTIVE            '/dev/pve/data' [1.32 TB] inherit
 ACTIVE            '/dev/pve/r0' [60.00 GB] inherit
 ACTIVE            '/dev/replicated/vm-101-disk-1' [10.00 GB] inherit
client00:~# pvscan
 PV /dev/sda2    VG pve          lvm2 [1.82 TB / 343.99 GB free]
 PV /dev/drbd0   VG replicated   lvm2 [60.00 GB / 50.00 GB free]
 Total: 2 [1.88 TB] / in use: 2 [1.88 TB] / in no VG: 0 [0   ]
client00:~# cat /proc/drbd
version: 8.3.10 (api:88/proto:86-96)
GIT-hash: 5c0b0469666682443d4785d90a2c603378f9017b build by phil@fat-tyre, 2011-01-28 12:17:35
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:81968 nr:28 dw:28 dr:84752 al:0 bm:15 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Client01:
---------
client01:~# lvscan
 ACTIVE            '/dev/pve/swap' [4.00 GB] inherit
 ACTIVE            '/dev/pve/root' [96.00 GB] inherit
 ACTIVE            '/dev/pve/data' [194.14 GB] inherit
 ACTIVE            '/dev/pve/r0' [60.00 GB] inherit
client01:~# pvscan
 PV /dev/sda2    VG pve          lvm2 [698.13 GB / 343.99 GB free]
 PV /dev/drbd0   VG replicated   lvm2 [60.00 GB / 50.00 GB free]
 Total: 2 [758.13 GB] / in use: 2 [758.13 GB] / in no VG: 0 [0   ]
client01:~# cat /proc/drbd
version: 8.3.10 (api:88/proto:86-96)
GIT-hash: 5c0b0469666682443d4785d90a2c603378f9017b build by phil@fat-tyre, 2011-01-28 12:17:35
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:28 nr:81968 dw:81948 dr:2096 al:2 bm:14 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user


