Apologies for the delay
Yes sir, all folders and the uid/gid of the gluster vol are 36
Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
yes sir all hosts, volumes, and bricks have this setting
Sir,
I can see the data domain written to
"/rhev/data-center/mnt/glusterSD/vstore00:_engine" on the host I am trying to
deploy from. When I attempt to write to the directory with:
sudo -u vdsm dd if=/dev/zero
of=/rhev/data-center/mnt/glusterSD/vstore00:_engine/test.txt oflag=direct
bs=512 count=10
Can you write on the storage domain like this:
sudo -u vdsm dd if=/dev/zero of=/rhev//full/path/ oflag=direct
bs=512 count=10

Best Regards,
Strahil Nikolov
On Fri, Jan 7, 2022 at 0:19, Andy via Users wrote:
The latest on this: I downgraded qemu-kvm to the lowest version in the
CentOS 8 Stream / oVirt repo:
qemu-kvm-common-6.0.0-26.el8s.x86_64
qemu-kvm-block-ssh-6.0.0-26.el8s.x86_64
qemu-kvm-block-gluster-6.0.0-26.el8s.x86_64
qemu-kvm-6.0.0-26.el8s.x86_64
qemu-kvm-ui-opengl-6.0.0-26.el8s.x86_64
qemu-kvm
Here are the configured options for the gluster volume:
Options Reconfigured:
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
cluster.c
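A minimal sketch for auditing those options against a saved `gluster volume info <vol>` dump. The sample dump is fabricated here so the snippet is self-contained; point INFO at your real output, and note only a subset of the options above is checked:

```shell
#!/bin/sh
# Hedged sketch: check a dump of `gluster volume info` for the
# reconfigured options listed above. The sample file written here is
# a stand-in; replace it with your actual volume info output.
INFO=$(mktemp)
printf '%s\n' \
  'cluster.lookup-optimize: off' \
  'network.ping-timeout: 30' \
  'server.event-threads: 4' > "$INFO"

status=ok
for want in \
  'cluster.lookup-optimize: off' \
  'network.ping-timeout: 30' \
  'server.event-threads: 4' \
  'client.event-threads: 4'
do
  grep -qF "$want" "$INFO" || { echo "MISSING: $want"; status=bad; }
done
echo "check: $status"
rm -f "$INFO"
```

With the sample dump, this reports `client.event-threads: 4` as missing; against a correctly configured volume it should print `check: ok`.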
Yes sir, and I do see the storage domain being created. I also validated that the UID, the GID, and the folder for the brick are all 36. Thanks

On Jan 3, 2022 4:06 PM, Darrell Budic wrote:
Did you confirm that vdsm:kvm (36:36) has full permissions to the selected storage?

On Jan 2, 2022, at 10:43 AM
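A minimal sketch of that ownership check, assuming GNU find; the brick path below is a placeholder, not a path from this deployment:

```shell
#!/bin/sh
# Hedged sketch: list anything under a gluster brick/mount NOT owned
# by vdsm:kvm (36:36). BRICK is a placeholder; point it at your engine
# volume brick or its /rhev/... mount.
BRICK="${BRICK:-/gluster_bricks/engine}"
find "$BRICK" \( ! -uid 36 -o ! -gid 36 \) -printf '%u:%g %p\n' 2>/dev/null
# An empty result means everything is owned 36:36, which is what
# hosted-engine deployment expects.
```

If this prints anything, a `chown -R 36:36` on the offending paths (on every host) is the usual fix.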
Did you confirm that vdsm:kvm (36:36) has full permissions to the selected
storage?
> On Jan 2, 2022, at 10:43 AM, Andy via Users wrote:
>
> Attached are the setup logs
>
> On Sunday, January 2, 2022, 11:20:34 AM EST, AK via Users wrote:
Also, I didn't know if this was an SELinux problem, but it is set to permissive,
which produces the same error. Thanks
The specific error on install is: libguestfs: error: appliance closed the
connection unexpectedly. This usually means the libguestfs appliance crashed,
and the ansible setup fails at injecting the network config with guestfish.
The specific error from the install:
fatal: [localhost]: FAILED! => {
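For an appliance crash like this, the standard libguestfs debugging aids are the debug/trace environment variables and libguestfs-test-tool (shipped with libguestfs); a minimal sketch, to be run on the deploying host:

```shell
#!/bin/sh
# Hedged sketch: turn on verbose libguestfs appliance and API tracing,
# then run its self-test to reproduce the appliance crash with detail.
export LIBGUESTFS_DEBUG=1
export LIBGUESTFS_TRACE=1
if command -v libguestfs-test-tool >/dev/null 2>&1; then
  libguestfs-test-tool
else
  echo "libguestfs-test-tool not installed on this host"
fi
```

The tail of its output usually shows why the appliance died (missing kvm, SELinux denial, qemu incompatibility), which is more actionable than the generic "connection closed unexpectedly" message.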
Thank you for the reply. I did downgrade QEMU to version 6 based on the
other ticket/problem and still got the same result. When sending email I
accidentally used a different address, so there are two threads open on the
same problem. Attaching the logs to this thread as well. Thanks
On Sun, Jan 2, 2022 at 9:19 AM Andy Kress wrote:
>
> Support,
>
> Happy new year. I am trying a fresh install of oVirt (4.4.9) and cannot
> get past deploying the host to the glusterfs volume. I am able to
> mount the volume from each host and have checked the configuration of