[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-21 Thread bgriffis
The fix listed in the bugzilla link above worked for me: add 'glusterfs-selinux' to the includepkgs section of [ovirt-4.4-centos-gluster8] in /etc/yum.repos.d/ovirt-4.4-dependencies.repo.

(Copied from the bugzilla page:)

[ovirt-4.4-centos-gluster8]
name = CentOS-$releasever - Gluster 8
mirrorlist = http://mirrorlist.centos.org?arch=$basearch&release=$releasever&repo=storage-gluster-8
gpgcheck = 1
enabled = 1
gpgkey = https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
includepkgs = ovirt-node-ng-image-update ovirt-node-ng-image ovirt-engine-appliance vdsm-hook-fcoe vdsm-hook-vhostmd vdsm-hook-openstacknet vdsm-hook-ethtool-options glusterfs-selinux
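
With that in place, installing the package should then just work; a minimal
sketch (assumes the repo is enabled as shown above):

dnf install glusterfs-selinux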


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-20 Thread Strahil Nikolov via Users
It was later discovered that the selinux policy was removed from the selinux
packages. You will need glusterfs-selinux, which should be available in the
latest version of oVirt.
Best Regards,
Strahil Nikolov
 
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-19 Thread admin
I have the same issue.  I am trying to figure out where you want me to make the
change.  You said in the UI, but I can't see where. Can you help, please?

Thanks
Brad


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Strahil Nikolov via Users
Actually it seems that glusterfs-selinux should fix the problem.
Best Regards,
Strahil Nikolov
 
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Strahil Nikolov via Users
Hey Sandro,

do we know why this has been done?
Best Regards,
Strahil Nikolov
 
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Klaas Demter
https://bugzilla.redhat.com/show_bug.cgi?id=2002178
Seems this is going to be fixed in 4.4.9 :)



[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Klaas Demter

Hi,

the bug states that this has been done because glusterfs includes its own
selinux policy module. So maybe ovirt-host needs a dependency on
glusterfs-selinux?
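
A quick way to check this (a sketch; assumes an rpm-based host and the package
names used elsewhere in this thread):

# Does ovirt-host already require a gluster SELinux policy?
rpm -q --requires ovirt-host | grep -i gluster
# Is a glusterd SELinux module currently loaded?
semodule -l | grep -i gluster
# If not, pull in the split-out policy package
dnf install glusterfs-selinux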




Greetings

Klaas




[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread ovirt2021
I used this as a workaround on the CentOS 8 Stream system doing the install:

cp /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml.orig

head -63 /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml.orig > /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml

It removes the SELinux lines from the mount.yml file:
63a64,75
> - name: Set Gluster specific SeLinux context on the bricks
>   sefcontext:
>  target: "{{ (item.path | realpath | regex_escape()) + '(/.*)?' }}"
>  setype: glusterd_brick_t
>  state: present
>   with_items: "{{ gluster_infra_mount_devices }}"
>   when: gluster_set_selinux_labels| default(false)| bool == true
> 
> - name: restore file(s) default SELinux security contexts
>   command: restorecon -Rv "{{ item.path }}"
>   with_items: "{{ gluster_infra_mount_devices }}"
>   when: gluster_set_selinux_labels| default(false)| bool == true

After that, it was successful.
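
If you go this route, the bricks keep whatever labels they inherit by default;
a quick way to inspect what actually ended up on them (a sketch, brick path as
used in this thread):

ls -Zd /gluster_bricks /gluster_bricks/*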


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Ax Olmos
The problem is that the ‘glusterd_brick_t’ file context is missing from 
selinux-policy-targeted 3.14.3-80 on CentOS 8 Stream.

It exists in the CentOS 8.4 version:
rpm -qpl selinux-policy-targeted-3.14.3-67.el8_4.2.noarch.rpm | grep gluster
/usr/share/selinux/targeted/default/active/modules/100/glusterd
/usr/share/selinux/targeted/default/active/modules/100/glusterd/cil
/usr/share/selinux/targeted/default/active/modules/100/glusterd/lang_ext
/var/lib/selinux/targeted/active/modules/100/glusterd
/var/lib/selinux/targeted/active/modules/100/glusterd/cil
/var/lib/selinux/targeted/active/modules/100/glusterd/hll
/var/lib/selinux/targeted/active/modules/100/glusterd/lang_ext

Not on CentOS 8 Stream:
rpm -qpl selinux-policy-targeted-3.14.3-80.el8.noarch.rpm | grep gluster

You can remove the selinux checks from:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml
but I’m not sure of the implications of that.

This is a show stopper for oVirt and someone from oVirt needs to contact the 
CentOS 8 Stream maintainers and have them put the selinux context back, or come 
up with some other workaround.


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread jinyx007
I've been having the same issue and I barely have any hair left. I did find 
something though; per selinux-policy changelog:

https://centos.pkgs.org/8-stream/centos-baseos-aarch64/selinux-policy-3.14.3-78.el8.noarch.rpm.html

2021-08-12 - Zdenek Pytela  - 3.14.3-77
...
- Remove glusterd SELinux module from distribution policy
Resolves: rhbz#1816718

selinux-policy no longer contains the gluster module; that's why the policy
can't be found. The oVirt Node ISO is based on Stream, which is up to date.
Installing on CentOS 8 or an AlmaLinux/Rocky distro still uses the older *-67,
so applying the glusterfs selinux policy works fine. This issue affects the
Node ISO.


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-05 Thread Eddie Garcia
I have been having this issue as well with no solution working, but I came
across something interesting. In selinux-policy 3.14.3-77 they removed the
glusterd selinux module.

https://centos.pkgs.org/8-stream/centos-baseos-x86_64/selinux-policy-3.14.3-78.el8.noarch.rpm.html

so that policy doesn't exist anymore for ansible to apply. I plan on testing by
installing an older version on an oVirt Node box, which as of 4.4.8 defaults
to selinux-policy version 3.14.3-79.el8.

A clean install of CentOS 8 using the latest iso has version 3.14.3-67.el8_4.2,
and I can verify that manually setting the policy with the command 'semanage
fcontext -a -t glusterd_brick_t "/gluster_bricks(/.*)?"' completes
successfully.
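
For completeness: after adding an fcontext rule, existing brick directories
still need to be relabeled; the usual follow-up (a sketch, path assumed) is:

restorecon -Rv /gluster_bricks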


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
I just checked the module source and it should be working with
'glusterd_brick_t'.
Do you have glusterfs-server installed on all nodes?

Best Regards,
Strahil Nikolov
 
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
Don't you have a task just like
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/mount.yml#L64-L70
?
Best Regards,
Strahil Nikolov
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Woo Hsutung
Strahil,

Thanks for your response!

Below is the ansible script I can edit at the last step:

-
hc_nodes:
  hosts:
node00:
  gluster_infra_volume_groups:
- vgname: gluster_vg_sdd
  pvname: /dev/sdd
  gluster_infra_mount_devices:
- path: /gluster_bricks/engine
  lvname: gluster_lv_engine
  vgname: gluster_vg_sdd
- path: /gluster_bricks/data
  lvname: gluster_lv_data
  vgname: gluster_vg_sdd
- path: /gluster_bricks/vmstore
  lvname: gluster_lv_vmstore
  vgname: gluster_vg_sdd
  blacklist_mpath_devices:
- sdd
  gluster_infra_thick_lvs:
- vgname: gluster_vg_sdd
  lvname: gluster_lv_engine
  size: 100G
  gluster_infra_thinpools:
- vgname: gluster_vg_sdd
  thinpoolname: gluster_thinpool_gluster_vg_sdd
  poolmetadatasize: 2G
  gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdd
  thinpool: gluster_thinpool_gluster_vg_sdd
  lvname: gluster_lv_data
  lvsize: 400G
- vgname: gluster_vg_sdd
  thinpool: gluster_thinpool_gluster_vg_sdd
  lvname: gluster_lv_vmstore
  lvsize: 400G
  vars:
gluster_infra_disktype: JBOD
gluster_set_selinux_labels: true
gluster_infra_fw_ports:
  - 2049/tcp
  - 54321/tcp
  - 5900/tcp
  - 5900-6923/tcp
  - 5666/tcp
  - 16514/tcp
gluster_infra_fw_permanent: true
gluster_infra_fw_state: enabled
gluster_infra_fw_zone: public
gluster_infra_fw_services:
  - glusterfs
gluster_features_force_varlogsizecheck: false
cluster_nodes:
  - node00
gluster_features_hci_cluster: '{{ cluster_nodes }}'
gluster_features_hci_volumes:
  - volname: engine
brick: /gluster_bricks/engine/engine
arbiter: 0
  - volname: data
brick: /gluster_bricks/data/data
arbiter: 0
  - volname: vmstore
brick: /gluster_bricks/vmstore/vmstore
arbiter: 0
gluster_features_hci_volume_options:
  storage.owner-uid: '36'
  storage.owner-gid: '36'
  features.shard: 'on'
  performance.low-prio-threads: '32'
  performance.strict-o-direct: 'on'
  network.remote-dio: 'off'
  network.ping-timeout: '30'
  user.cifs: 'off'
  nfs.disable: 'on'
  performance.quick-read: 'off'
  performance.read-ahead: 'off'
  performance.io-cache: 'off'
  cluster.eager-lock: enable
-

The words “glusterd_brick_t” do not appear anywhere :(

In the end, I changed the value of “gluster_set_selinux_labels” from “true” to
“false”, and the deployment completed successfully.

But I don’t know whether this change will impact the system… Could you give
some suggestions?
 
BR,
Hsutung





[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
Also, you can edit the /etc/fstab entries by adding in the mount options:
context="system_u:object_r:glusterd_brick_t:s0"
Then remount the bricks (umount ; mount ). This tells the kernel to skip
selinux lookups and assume everything on the mount has the gluster brick
context, which will reduce the I/O.
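
A minimal sketch of such an fstab entry, reusing the VG/LV and brick names from
the YAML earlier in this thread (device path assumed):

/dev/mapper/gluster_vg_sdd-gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,context="system_u:object_r:glusterd_brick_t:s0" 0 0

followed by something like: umount /gluster_bricks/engine && mount /gluster_bricks/engine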
Best Regards,
Strahil Nikolov
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
Most probably it's in a variable.
Just run the following:

semanage fcontext -a -t "system_u:object_r:glusterd_brick_t:s0" "/gluster_bricks(/.*)?"
restorecon -RFvv /gluster_bricks/

Best Regards,
Strahil Nikolov
 
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-01 Thread Strahil Nikolov via Users
In the cockpit installer, the last step allows you to edit the ansible before
running it. Just search for glusterd_brick_t and replace it.
Best Regards,
Strahil Nikolov
 
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-01 Thread Woo Hsutung
Same issue happens when I deploy on a single node.

And I can’t find where I can edit the text to replace glusterd_brick_t with
system_u:object_r:glusterd_brick_t:s0

Any suggestion?

Best Regards
Hsutung


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread Strahil Nikolov via Users
It should be:
system_u:object_r:glusterd_brick_t:s0

Best Regards,
Strahil Nikolov
 


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread Strahil Nikolov via Users
I think that in the UI there is an option to edit. Find and replace
glusterd_brick_t with system_u:object_r:glusterd_brick_t:s0 and run again.
Best Regards,
Strahil Nikolov
 

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread bgriffis
I'm having this same issue on 4.4.8 with a fresh 3-node install as well.

Same errors as the OP.  

Potentially relevant test command: 

[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device type

Seems like the glusterd selinux fcontexts are missing.  Are they provided by
glusterfs-selinux?

[root@ovirt-node0 ~]# dnf install selinux-policy
Last metadata expiration check: 0:03:51 ago on Wed 15 Sep 2021 11:31:59 AM UTC.
Package selinux-policy-3.14.3-79.el8.noarch is already installed.
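
One way to check where that fcontext would normally come from (a sketch;
assumes an rpm-based host):

# List loaded SELinux modules that mention gluster
semodule -l | grep -i gluster
# Ask dnf what provides the split-out policy package
dnf provides glusterfs-selinux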


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread bgriffis
I had the same issue with a new 3-node deploy on 4.4.8.

[root@ovirt-node0 ~]# dnf list ovirt*
Last metadata expiration check: 0:20:41 ago on Wed 15 Sep 2021 10:54:31 AM UTC.
Installed Packages
ovirt-ansible-collection.noarch    1.6.2-1.el8    @System
ovirt-host.x86_64                  4.4.8-1.el8    @System


TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux context 
on the bricks] ***
failed: [ovirt-node1] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node1] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node1] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
ovirt-node0 : ok=52 changed=13 unreachable=0 failed=1 skipped=117 rescued=0 ignored=1
ovirt-node1 : ok=51 changed=12 unreachable=0 failed=1 skipped=117 rescued=0 ignored=1
ovirt-node2 : ok=51 changed=12 unreachable=0 failed=1 skipped=117 rescued=0 ignored=1

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more 
informations.




Just to confirm: 

[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device type
[root@ovirt-node0 ~]#

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-09 Thread Strahil Nikolov via Users
When setting up over the UI, the last step shows the ansible tasks.
Can you find your version of 'Set Gluster specific SeLinux context on the
bricks' and print it here?
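
To locate it on a deployed host, something like this should work (a sketch; the
role path is taken from other messages in this thread):

grep -n -A 6 'Set Gluster specific SeLinux context' /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml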
 

Best Regards,
Strahil Nikolov

 
On Wed, Sep 8, 2021 at 12:43, dhanaraj.ramesh--- via Users wrote:
Hi Team


I'm trying to set up a 3-node Gluster + oVirt cluster with the latest stable
4.4.8 version, but while deploying gluster from cockpit I get the error below.
What could be the reason?


TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux context 
on the bricks] ***
failed: [beclovkvma03.bec. lab ] (item={'path': '/gluster_bricks/engine', 
'lvname': 'gluster_lv_engine', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": 
"gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must 
be a file or device type\n"}
failed: [beclovkvma01.bec. lab ] (item={'path': '/gluster_bricks/engine', 
'lvname': 'gluster_lv_engine', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": 
"gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must 
be a file or device type\n"}
failed: [beclovkvma02.bec. lab ] (item={'path': '/gluster_bricks/engine', 
'lvname': 'gluster_lv_engine', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": 
"gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must 
be a file or device type\n"}
failed: [beclovkvma03.bec. lab ] (item={'path': '/gluster_bricks/data', 
'lvname': 'gluster_lv_data', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_data", "path": "/gluster_bricks/data", "vgname": "gluster_vg_sde"}, 
"msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device 
type\n"}
failed: [beclovkvma01.bec. lab ] (item={'path': '/gluster_bricks/data', 
'lvname': 'gluster_lv_data', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_data", "path": "/gluster_bricks/data", "vgname": "gluster_vg_sde"}, 
"msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device 
type\n"}
failed: [beclovkvma02.bec. lab ] (item={'path': '/gluster_bricks/data', 
'lvname': 'gluster_lv_data', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_data", "path": "/gluster_bricks/data", "vgname": "gluster_vg_sde"}, 
"msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device 
type\n"}
failed: [beclovkvma03.bec. lab ] (item={'path': '/gluster_bricks/vmstore', 
'lvname': 'gluster_lv_vmstore', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_vmstore", "path": "/gluster_bricks/vmstore", "vgname": 
"gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must 
be a file or device type\n"}
failed: [beclovkvma01.bec. lab ] (item={'path': '/gluster_bricks/vmstore', 
'lvname': 'gluster_lv_vmstore', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_vmstore", "path": "/gluster_bricks/vmstore", "vgname": 
"gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must 
be a file or device type\n"}
failed: [beclovkvma02.bec. lab ] (item={'path': '/gluster_bricks/vmstore', 
'lvname': 'gluster_lv_vmstore', 'vgname': 'gluster_vg_sde'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_vmstore", "path": "/gluster_bricks/vmstore", "vgname": 
"gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must 
be a file or device type\n"}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
beclovkvma01.bec. lab : ok=53 changed=14 unreachable=0 failed=1 skipped=116 rescued=0 ignored=1
beclovkvma02.bec. lab : ok=52 changed=13 unreachable=0 failed=1 skipped=116 rescued=0 ignored=1
beclovkvma03.bec. lab : ok=52 changed=13 unreachable=0 failed=1 skipped=116 rescued=0 ignored=1