Hey!

Are you running on CentOS?
Either uncheck the "Blacklist gluster devices" option on the bricks page
and try again, or add an accept entry for the disk to the LVM filter in
/etc/lvm/lvm.conf. The existing entries look something like this:

    "a|^/dev/sda2$|",
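
For example, with your layout (sda2 holds the onn volume group, sdb is the
disk Gluster should use), the whole filter line would be roughly the sketch
below -- the device paths are taken from your lsblk output, so adjust them
to match your hosts:

    filter = [ "a|^/dev/sda2$|", "a|^/dev/sdb$|", "r|.*|" ]

You can then check that LVM accepts the disk with "pvcreate --test /dev/sdb"
before re-running the deployment.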

On Mon, Nov 23, 2020 at 6:17 PM <[email protected]> wrote:

> Trying to deploy a 3-node hyperconverged oVirt cluster with Gluster as the
> backend storage.  I have tried this against the three nodes that I have, as
> well as with just a single node, to get a working baseline.  The failure
> that I keep getting stuck on is:
>
> TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
> task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:59
> failed: [ovirt01-storage.poling.local] (item={'key': 'gluster_vg_sdb', 'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": false, "err": "  Device /dev/sdb excluded by a filter.\n", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
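
The "excluded by a filter" message above comes from LVM itself, not from
Ansible, so the quickest check is to look at the active filter and dry-run
the pvcreate by hand -- a sketch, assuming /dev/sdb is the intended disk:

    grep -E '^[[:space:]]*filter' /etc/lvm/lvm.conf
    pvcreate --test /dev/sdb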
>
> I have verified my DNS records and have reverse DNS set up.  The front-end
> and storage networks are physically separated and are 10Gb connections.  In
> the reading I have done this seems to point to possibly being a multipath
> issue, but I do see multipath configs being set in the Gluster wizard, and
> when I check after the wizard fails out it does look like the mpath is set
> correctly.
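
If multipath had claimed the disk you would see an mpath device stacked on
top of sdb in the lsblk output below, which is not the case here. To rule it
out anyway, the wizard's blacklist option amounts to a multipath blacklist
entry, roughly like this sketch (the file name is arbitrary and the WWID
placeholder must be replaced with the real one from "multipath -ll" or
"/usr/lib/udev/scsi_id -g -u -d /dev/sdb"):

    # /etc/multipath/conf.d/99-gluster-blacklist.conf  (hypothetical name)
    blacklist {
        wwid "REPLACE_WITH_WWID_OF_SDB"
    }

followed by "systemctl reload multipathd".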
>
> [root@ovirt01 ~]# lsblk
> NAME                                               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda                                                  8:0    0 446.1G  0 disk
> ├─sda1                                               8:1    0     1G  0 part /boot
> └─sda2                                               8:2    0 445.1G  0 part
>   ├─onn-pool00_tmeta                               253:0    0     1G  0 lvm
>   │ └─onn-pool00-tpool                             253:2    0 351.7G  0 lvm
>   │   ├─onn-ovirt--node--ng--4.4.3--0.20201110.0+1 253:3    0 314.7G  0 lvm  /
>   │   ├─onn-pool00                                 253:5    0 351.7G  1 lvm
>   │   ├─onn-var_log_audit                          253:6    0     2G  0 lvm  /var/log/audit
>   │   ├─onn-var_log                                253:7    0     8G  0 lvm  /var/log
>   │   ├─onn-var_crash                              253:8    0    10G  0 lvm  /var/crash
>   │   ├─onn-var                                    253:9    0    15G  0 lvm  /var
>   │   ├─onn-tmp                                    253:10   0     1G  0 lvm  /tmp
>   │   ├─onn-home                                   253:11   0     1G  0 lvm  /home
>   │   └─onn-ovirt--node--ng--4.4.2--0.20200918.0+1 253:12   0 314.7G  0 lvm
>   ├─onn-pool00_tdata                               253:1    0 351.7G  0 lvm
>   │ └─onn-pool00-tpool                             253:2    0 351.7G  0 lvm
>   │   ├─onn-ovirt--node--ng--4.4.3--0.20201110.0+1 253:3    0 314.7G  0 lvm  /
>   │   ├─onn-pool00                                 253:5    0 351.7G  1 lvm
>   │   ├─onn-var_log_audit                          253:6    0     2G  0 lvm  /var/log/audit
>   │   ├─onn-var_log                                253:7    0     8G  0 lvm  /var/log
>   │   ├─onn-var_crash                              253:8    0    10G  0 lvm  /var/crash
>   │   ├─onn-var                                    253:9    0    15G  0 lvm  /var
>   │   ├─onn-tmp                                    253:10   0     1G  0 lvm  /tmp
>   │   ├─onn-home                                   253:11   0     1G  0 lvm  /home
>   │   └─onn-ovirt--node--ng--4.4.2--0.20200918.0+1 253:12   0 314.7G  0 lvm
>   └─onn-swap                                       253:4    0     4G  0 lvm  [SWAP]
> sdb                                                  8:16   0   5.5T  0 disk
> └─sdb1                                               8:17   0   5.5T  0 part /sdb
>
>
> Looking for any pointers on what else I should be looking at to get
> Gluster to deploy successfully.  Thanks ~ R
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/JSP5VHOSCGNBYLANRG4KMKPMAITD27SR/
