The fix listed in the bugzilla link above worked for me:
add 'glusterfs-selinux' to the includepkgs section of
[ovirt-4.4-centos-gluster8] in /etc/yum.repos.d/ovirt-4.4-dependencies.repo.
(Copying from the bugzilla page:)
[ovirt-4.4-centos-gluster8]
name = CentOS-$releasever - Gluster 8
mirrorlist =
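(The snippet above is cut off; a sketch of what the edited section might look like. The mirrorlist URL, the enabled flag, and any pre-existing includepkgs entries are placeholders here — keep whatever your file already contains:)

```ini
[ovirt-4.4-centos-gluster8]
name=CentOS-$releasever - Gluster 8
mirrorlist=<keep the existing URL from your file>
enabled=1
includepkgs=<existing entries from your file> glusterfs-selinux
```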
It was later discovered that the gluster SELinux policy was removed from the
selinux-policy packages. You will need glusterfs-selinux, which should be
available in the latest version of oVirt.
Best Regards, Strahil Nikolov
On Wed, Oct 20, 2021 at 3:17,
ad...@foundryserver.com wrote:
I have the same issue. I am trying to figure out where you want me to make the
change. You said in the UI, but I can't see where. Can you help please?
Thanks
Brad
Actually it seems that glusterfs-selinux should fix the problem.
Best Regards, Strahil Nikolov
On Mon, Oct 11, 2021 at 0:28, Strahil Nikolov via Users
wrote:
Hey Sandro,
do we know why this has been done?
Best Regards, Strahil Nikolov
On Sun, Oct 10, 2021 at 16:48, Ax Olmos wrote:
The problem is that the ‘glusterd_brick_t’ file context is missing from
selinux-policy-targeted 3.14.3-80 on CentOS 8 Stream.
It exists in the CentOS 8.4 version:
https://bugzilla.redhat.com/show_bug.cgi?id=2002178 seems this is going
to be fixed in 4.4.9 :)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
Hi,
the bug states that this has been done because glusterfs includes its own
SELinux policy module. So maybe ovirt-host needs a dependency on
glusterfs-selinux?
Greetings
Klaas
On 10/7/21 01:04, jinyx...@gmail.com wrote:
I've been having the same issue and I barely have any hair left.
I used this as a workaround on the CentOS 8 Stream system doing the install:
cp /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml.orig
head -63
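The head command above is cut off; presumably its output was redirected back over mount.yml so the file keeps only its first 63 lines, dropping the 'Set Gluster specific SeLinux context' task that upstream has at lines 64-70. A scratch demonstration of that idea (the real files live under /etc/ansible/roles/gluster.infra/roles/backend_setup/; the redirect target is my assumption):

```shell
# Build a 70-line stand-in for mount.yml in a scratch directory.
mkdir -p /tmp/backend_setup
seq 1 70 | sed 's/^/task line /' > /tmp/backend_setup/mount.yml
# Keep a backup, as in the workaround above.
cp /tmp/backend_setup/mount.yml /tmp/backend_setup/mount.yml.orig
# Truncate the working copy to the first 63 lines (dropping 64-70).
head -63 /tmp/backend_setup/mount.yml.orig > /tmp/backend_setup/mount.yml
wc -l < /tmp/backend_setup/mount.yml   # prints 63
```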
The problem is that the ‘glusterd_brick_t’ file context is missing from
selinux-policy-targeted 3.14.3-80 on CentOS 8 Stream.
It exists in the CentOS 8.4 version:
rpm -qpl selinux-policy-targeted-3.14.3-67.el8_4.2.noarch.rpm | grep gluster
I've been having the same issue and I barely have any hair left. I did find
something though; per selinux-policy changelog:
https://centos.pkgs.org/8-stream/centos-baseos-aarch64/selinux-policy-3.14.3-78.el8.noarch.rpm.html
2021-08-12 - Zdenek Pytela - 3.14.3-77
...
- Remove glusterd SELinux
I have been having this issue as well, with no solution working, but I came
across something interesting. In selinux-policy 3.14.3-77 they removed the
glusterd SELinux module.
https://centos.pkgs.org/8-stream/centos-baseos-x86_64/selinux-policy-3.14.3-78.el8.noarch.rpm.html
so that policy doesn
I just checked the module source and it should be working with
'glusterd_brick_t'.
Do you have glusterfs-server installed on all nodes?
Best Regards, Strahil Nikolov
On Sat, Oct 2, 2021 at 23:13, Strahil Nikolov via Users
wrote:
Don't you have a task just like
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/mount.yml#L64-L70
?
Best Regards, Strahil Nikolov
On Sat, Oct 2, 2021 at 23:00, Woo Hsutung wrote:
Strahil,
Thanks for your response!
Below is the ansible script I can edit at the last step:

hc_nodes:
  hosts:
    node00:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
Also, you can edit the /etc/fstab entries by adding the mount option:
context="system_u:object_r:glusterd_brick_t:s0"
Then remount the bricks (umount ; mount ). This tells the kernel to skip
SELinux lookups and assume everything on the mount has the gluster brick
context, which will reduce the I/O.
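A brick's fstab line with that option added might look like the following; the device, mountpoint, filesystem, and other mount options are illustrative placeholders — keep whatever your entry already has:

```
/dev/mapper/gluster_vg_sdd-gluster_lv_data  /gluster_bricks/data  xfs  inode64,noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0
```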
Best
Most probably it's in a variable.
Just run the following (note that semanage's -t takes just the type name):
semanage fcontext -a -t glusterd_brick_t "/gluster_bricks(/.*)?"
restorecon -RFvv /gluster_bricks/
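For what it's worth, the path spec handed to semanage/restorecon is an anchored regular expression, so it covers the brick root and everything beneath it, but not similarly named paths. grep -Ex (whole-line match, extended regex) can illustrate that on a few example paths:

```shell
# Which of these example paths does the fcontext spec cover?
printf '%s\n' /gluster_bricks /gluster_bricks/data/brick /gluster_bricks_old \
  | grep -Ex '/gluster_bricks(/.*)?'
# prints /gluster_bricks and /gluster_bricks/data/brick only
```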
Best Regards, Strahil Nikolov
On Sat, Oct 2, 2021 at 3:08, Woo Hsutung wrote:
In the cockpit installer, the last step allows you to edit the ansible before
running it. Just search for glusterd_brick_t and replace it.
Best Regards, Strahil Nikolov
On Fri, Oct 1, 2021 at 17:48, Woo Hsutung wrote:
Same issue happens when I deploy on single node.
And I can’t find where I can edit text to replace glusterd_brick_t with
system_u:object_r:glusterd_brick_t:s0
Any suggestion?
Best Regards
Hsutung
It should be
system_u:object_r:glusterd_brick_t:s0
Best Regards, Strahil Nikolov
I think that in the UI there is an option to edit. Find and replace
glusterd_brick_t with system_u:object_r:glusterd_brick_t:s0 and run again.
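If the UI search-and-replace is awkward, the same substitution could be done with sed on a saved copy of the generated vars. This is a scratch demonstration only — the real file is whatever your cockpit session gives you, and the file name and key here are made up:

```shell
# Stand-in for the generated deployment vars containing the bare type name.
printf 'selabel: glusterd_brick_t\n' > /tmp/gluster_deploy_vars.yml
# Replace the bare type with the full SELinux context string.
sed -i 's/glusterd_brick_t/system_u:object_r:glusterd_brick_t:s0/' /tmp/gluster_deploy_vars.yml
cat /tmp/gluster_deploy_vars.yml
# prints: selabel: system_u:object_r:glusterd_brick_t:s0
```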
Best Regards, Strahil Nikolov
On Thu, Sep 16, 2021 at 12:33,
bgrif...@affinityplus.org wrote:
I'm having this same issue on 4.4.8 with a fresh 3-node install as well.
Same errors as the OP.
Potentially relevant test command:
[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device
I had the same issue with a new 3-node deploy on 4.4.8
[root@ovirt-node0 ~]# dnf list ovirt*
Last metadata expiration check: 0:20:41 ago on Wed 15 Sep 2021 10:54:31 AM UTC.
Installed Packages
ovirt-ansible-collection.noarch    1.6.2-1.el8    @System
When setting up over the UI, the last step shows the ansible tasks.
Can you find your version of 'Set Gluster specific SeLinux context on the
bricks' and print it here?
Best Regards, Strahil Nikolov
On Wed, Sep 8, 2021 at 12:43, dhanaraj.ramesh--- via Users
wrote: Hi Team
I'm trying to