On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise <jeremey.w...@gmail.com> wrote:
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
> On the other server:
>
> vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
>
> On all systems, when I go to create a VDO volume on blank drives, I get this
> filter error. All disks outside of the HCI wizard setup are now blocked from
> being used to create a new Gluster volume group.
>
> Here is what I see in /etc/lvm/lvm.conf:
> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]

This filter is correct for a normal oVirt host, but gluster wants to use
more local disks, so you should:

1. Remove the LVM filter.
2. Configure gluster.
3. Recreate the LVM filter.

The last step will create a filter that includes all the mounted logical
volumes created by gluster.
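
For example, a minimal sketch of that sequence on one host (assuming a
host with vdsm installed; vdsm-tool's config-lvm-filter command builds
the filter from the logical volumes that are mounted when it runs, and
the sed line is just one way to do step 1 - editing lvm.conf by hand
works too):

    # 1. Remove (comment out) the current filter in /etc/lvm/lvm.conf
    sed -i 's/^filter = /# filter = /' /etc/lvm/lvm.conf

    # 2. Configure gluster bricks as usual (HCI wizard / ansible)

    # 3. Rebuild the filter from the now-mounted gluster logical volumes
    vdsm-tool config-lvm-filter -y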

Can you explain how you reproduce this?

The LVM filter is created when you add a host to engine. Did you add the
host to engine before configuring gluster? Or are you trying to add a host
that was previously used by oVirt?

In the latter case, removing the filter before installing gluster will
fix the issue.
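
If you want to see what would happen before committing, running the tool
without -y is (at least on the vdsm versions I have used) effectively a
dry run: it analyzes the host, prints the filter it recommends, and asks
for confirmation before touching /etc/lvm/lvm.conf:

    vdsm-tool config-lvm-filter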

Nir

> [root@odin ~]# ls -al /dev/disk/by-id/
> total 0
> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc
> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc
> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb
> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
>
> So the filter whitelists two devices:
> lvm-pv-uuid-e1fvwo.... -> dm-5 -> vdo_sdb (used by HCI for all three
> gluster base volumes)
> lvm-pv-uuid-mr9awW... -> sda2 -> boot volume
>
> [root@odin ~]# lsblk
> NAME                                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sda                                                          8:0    0  74.5G  0 disk
> ├─sda1                                                       8:1    0     1G  0 part  /boot
> └─sda2                                                       8:2    0  73.5G  0 part
>   ├─cl-root                                                253:0    0  44.4G  0 lvm   /
>   ├─cl-swap                                                253:1    0   7.5G  0 lvm   [SWAP]
>   └─cl-home                                                253:2    0  21.7G  0 lvm   /home
> sdb                                                          8:16   0   477G  0 disk
> └─vdo_sdb                                                  253:5    0   2.1T  0 vdo
>   ├─gluster_vg_sdb-gluster_lv_engine                       253:6    0   100G  0 lvm   /gluster_bricks/engine
>   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta   253:7    0     1G  0 lvm
>   │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9    0     2T  0 lvm
>   │   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb     253:10   0     2T  1 lvm
>   │   ├─gluster_vg_sdb-gluster_lv_data                     253:11   0  1000G  0 lvm   /gluster_bricks/data
>   │   └─gluster_vg_sdb-gluster_lv_vmstore                  253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
>   └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata   253:8    0     2T  0 lvm
>     └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9    0     2T  0 lvm
>       ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb     253:10   0     2T  1 lvm
>       ├─gluster_vg_sdb-gluster_lv_data                     253:11   0  1000G  0 lvm   /gluster_bricks/data
>       └─gluster_vg_sdb-gluster_lv_vmstore                  253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
> sdc                                                          8:32   0 931.5G  0 disk
> nvme0n1                                                    259:0    0 953.9G  0 disk
> ├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 253:3 0 953.9G 0 mpath
> │ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 253:4 0 953.9G 0 part
> └─nvme0n1p1
>
> So I don't think this is LVM filtering things out.
>
> Multipath is showing weird treatment of the NVMe drive, but that is outside
> this conversation.
> [root@odin ~]# multipath -l
> nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
> size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
> `-+- policy='service-time 0' prio=0 status=active
>   `- 0:1:1:1 nvme0n1 259:0 active undef running
> [root@odin ~]#
>
> Where is it getting this filter?
> I have run gdisk on /dev/sdc (a new 1TB drive) and it shows no partitions. I
> even did a full dd if=/dev/zero, and no change.
>
> I reloaded the OS on the system to get through the wizard setup. Now that all
> three nodes are in the HCI cluster, all six drives (2 x 1TB in each server)
> are locked from any use due to this filter error.
>
> Ideas?
>
> --
> jeremey.w...@gmail.com
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UIMVHK3HCP5DJYRRQKTHCQ4K36JAEV5H/
