On Tue, Apr 7, 2020 at 12:22 PM Strahil Nikolov <hunter86...@yahoo.com>
wrote:

>
>
> The simplest way would be to say that 'blacklisting everything in
> multipath.conf' will solve your problems.
> In reality it is a little bit more complicated.
>
>
Your arguments are interesting, Strahil; I will have to dig into them more on my side.
In the meantime, the approach below seems to have solved all the problems.

Preamble: I was able to put the new disk into a new PCI slot so that the
/dev/nvmeXX names remained consistent with the previous setup, but LVM still
complained and I was again unable to create the VG on the PV.
This confirmed my suspicion that the NVMe disks were not being filtered out
by LVM, which was causing the confusion.

- under /etc/lvm
[root@ovirt lvm]# diff lvm.conf lvm.conf.orig
142d141
< filter = [ "r|nvme|", "a|.*/|" ]
153d151
< global_filter = [ "r|nvme|", "a|.*/|" ]
[root@ovirt lvm]#
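
For what it's worth, a quick sanity check of the filters can be done more or
less like this (only a sketch; the lvmconfig node names are the ones I would
expect, I did not double check them on oVirt Node):

lvmconfig devices/filter devices/global_filter   # show the filters the running LVM actually sees
pvs 2>&1 | grep -i duplicate                     # should print nothing once /dev/nvme* is rejected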

NOTE: the "filter" directive alone was not sufficient, even though, from what
I read, global_filter should in theory only come into play when lvmetad is
active, which it is not on oVirt Node. Still to be understood better...
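
If someone wants to check the lvmetad side, something like the following
should tell (again only a sketch, assuming the standard service/socket names):

lvmconfig global/use_lvmetad                                   # 0 -> in theory the plain "filter" alone should be enough
systemctl is-active lvm2-lvmetad.service lvm2-lvmetad.socket   # whether lvmetad is running at all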

- under /etc
I noticed that the OS disk had also been pulled into multipath, so I
blacklisted it and, in the end, marked the file as private...

[root@ovirt etc]# diff -u3 multipath.conf.orig multipath.conf
--- multipath.conf.orig 2020-04-07 16:25:12.148044435 +0200
+++ multipath.conf 2020-04-07 10:55:44.728734050 +0200
@@ -1,4 +1,5 @@
 # VDSM REVISION 1.8
+# VDSM PRIVATE

 # This file is managed by vdsm.
 #
@@ -164,6 +165,7 @@

 blacklist {
         protocol "(scsi:adt|scsi:sbp)"
+        wwid INTEL_SSDSCKKI256G8_PHLA835602TE256J
 }

 # Remove devices entries when overrides section is available.
[root@ovirt etc]#
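
By the way, before rebuilding the initramfs the blacklist can already be
verified at runtime, more or less like this (only a sketch; the wwid is
obviously the one of my OS disk):

systemctl reload multipathd.service                  # make multipathd re-read multipath.conf
multipathd show config | grep INTEL_SSDSCKKI256G8    # the wwid should now appear in the blacklist section
multipath -f INTEL_SSDSCKKI256G8_PHLA835602TE256J    # flush the stale map of the OS disk, if still present
multipath -l                                         # the OS disk should not be listed any more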

- rebuild initramfs

cp /boot/$(imgbase layer --current)/initramfs-$(uname -r).img /root/
dracut -f /boot/$(imgbase layer --current)/initramfs-$(uname -r).img
cp -p  /boot/$(imgbase layer --current)/initramfs-$(uname -r).img /boot/
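
And to double check that the new multipath.conf really ended up inside the
rebuilt image, something like this should work (lsinitrd ships with dracut):

lsinitrd /boot/$(imgbase layer --current)/initramfs-$(uname -r).img -f etc/multipath.conf | grep -i wwid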

After reboot I see the disks to be used for Gluster configured as multipath
devices:

[root@ovirt etc]# multipath -l
nvme.8086-50484b53373530353031325233373541474e-494e54454c205353 dm-2 NVME,INTEL SSDPED1K375GA
size=349G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:0:1:0 nvme0n1 259:0 active undef running
eui.01000000010000005cd2e4b5e7db4d51 dm-0 NVME,INTEL SSDPEDKX040T7
size=3.6T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 2:0:1:0 nvme2n1 259:1 active undef running
eui.01000000010000005cd2e4e359284f51 dm-1 NVME,INTEL SSDPE2KX010T7
size=932G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 1:0:1:0 nvme1n1 259:2 active undef running
[root@ovirt etc]#

And LVM commands no longer complain about duplicate PVs:

[root@ovirt etc]# pvs
  PV                                                VG                 Fmt  Attr PSize    PFree
  /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51  gluster_vg_4t      lvm2 a--    <3.64t       0
  /dev/mapper/eui.01000000010000005cd2e4e359284f51  gluster_vg_nvme1n1 lvm2 a--   931.51g       0
  /dev/mapper/nvme.8086-50484b53373530353031325233373541474e-494e54454c20535344504544314b3337354741-00000001
                                                    gluster_vg_nvme0n1 lvm2 a--   349.32g       0
  /dev/sda2                                         onn                lvm2 a--  <228.40g  <43.87g
[root@ovirt etc]#

And, as you can see, I was now able to create the VG on top of the new PV on
the 4 TB disk:

[root@ovirt etc]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree
  gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt etc]#

[root@ovirt etc]# lvs gluster_vg_4t
  LV             VG            Attr       LSize  Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_big gluster_vg_4t Vwi-aot--- <4.35t my_pool        0.05
  my_pool        gluster_vg_4t twi-aot--- <3.61t                0.05   0.14
[root@ovirt etc]#
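
Just for completeness, the layout above corresponds, more or less, to
something like this (only a rough sketch with rounded sizes; thin pool
metadata and chunk options are left at their defaults, and lvs does not show
them anyway):

pvcreate /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51
vgcreate gluster_vg_4t /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51
lvcreate -L 3.6t --thinpool my_pool gluster_vg_4t                  # thin pool taking almost the whole VG
lvcreate -V 4.35t --thin -n gluster_lv_big gluster_vg_4t/my_pool   # thin LV, over-provisioned with respect to the pool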

Let's go hunting for the next problem... ;-)

Gianluca
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AYF2AAB3YYCYRZSP75MU6IMQNGUFFHJS/

Reply via email to