Hi,

I have 3 Proxmox nodes and just two of them have a DRBD storage backend:
LINSTOR ==> sp list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node     ┊ Driver   ┊ PoolName             ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊ State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ drbdpool    ┊ vm-box-2 ┊ LVM_THIN ┊ vg_vm-box-2/drbdpool ┊   297.99 GiB ┊       300 GiB ┊ true              ┊ Ok    ┊
┊ drbdpool    ┊ vm-box-4 ┊ LVM_THIN ┊ vg_vm-box-4/drbdpool ┊   297.99 GiB ┊       300 GiB ┊ true              ┊ Ok    ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Moving VMs' virtual drives back and forth between local storage and drbdpool
used to work well.

After adding a diskless node to the pool:
LINSTOR ==> sp list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node     ┊ Driver   ┊ PoolName             ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊ State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ drbdpool    ┊ vm-box-2 ┊ LVM_THIN ┊ vg_vm-box-2/drbdpool ┊   297.99 GiB ┊       300 GiB ┊ true              ┊ Ok    ┊
┊ drbdpool    ┊ vm-box-3 ┊ DISKLESS ┊                      ┊              ┊               ┊ false             ┊ Ok    ┊
┊ drbdpool    ┊ vm-box-4 ┊ LVM_THIN ┊ vg_vm-box-4/drbdpool ┊   297.99 GiB ┊       300 GiB ┊ true              ┊ Ok    ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
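
(For reference, vm-box-3 was added as a diskless member of the pool with
something along these lines; the exact syntax may differ depending on the
linstor-client version:)

# register vm-box-3 as a diskless member of drbdpool
linstor storage-pool create diskless vm-box-3 drbdpool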

moving a drive to drbdpool increases the nodes' IO enormously while nothing
seems to be going on (well, the disk does seem to be moving, but VERY slowly).
The log displays just this without any progress, so I had to stop the disk move:
create full clone of drive scsi0 (LVM-Storage:126/vm-126-disk-0.qcow2)
trying to acquire cfs lock 'storage-drbdpool' ...
transferred: 0 bytes remaining: 10739277824 bytes total: 10739277824 bytes progression: 0.00 %
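
If it helps with diagnosing this, I can gather more details on the nodes while
a move is stuck, e.g.:

linstor resource list   # where LINSTOR placed the new resource
linstor volume list     # allocated size / state per node
drbdadm status          # whether DRBD is syncing or stuck connecting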

Did I miss something in the linstor configuration?

PS Am I correct that redundancy should be kept as 2 (in /etc/pve/storage.cfg)
when only two nodes have drbd storage backends?
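
For reference, the drbdpool entry in /etc/pve/storage.cfg currently looks
roughly like this (the controller address below is a placeholder, not my real
one):

drbd: drbdpool
        content images,rootdir
        controller 192.0.2.10
        redundancy 2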

-- 
Best regards,
Alex Kolesnik
