I have gone in and changed the libvirt configuration files on the cluster
nodes, which has resolved the issue for the time being.
I can revert one of them and post the logs to help with the issue,
hopefully tomorrow.
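The change itself is small; a minimal sketch, assuming a stock CentOS 7 libvirt layout (the exact file path and the need for a libvirtd restart are assumptions and may differ per setup):

```ini
# /etc/libvirt/qemu.conf
# 0 = libvirt stops chowning disk images to the qemu user on VM
# start/migration, so it no longer resets them to root:root
dynamic_ownership = 0
```

followed by restarting libvirtd on each node.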
On 2019-06-14 17:56, Nir Soffer wrote:
On Fri, Jun 14, 2019 at 7:05 PM Milan Zamazal wrote:
Alex McWhirter writes:
> In this case, I should be able to edit /etc/libvirt/qemu.conf on all
> the nodes to disable dynamic ownership as a temporary measure until
> this is patched for libgfapi?
No, other devices might have permission problems in such a case.
In this case, I should be able to edit /etc/libvirt/qemu.conf on all
the nodes to disable dynamic ownership as a temporary measure until this
is patched for libgfapi?
On 2019-06-13 10:37, Milan Zamazal wrote:
Shani Leviim writes:
Hi,
It seems that you hit this bug:
Yes, we are using GlusterFS distributed replicate with libgfapi.
VDSM 4.30.17
On 2019-06-13 10:37, Milan Zamazal wrote:
Shani Leviim writes:
> Hi,
> It seems that you hit this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>
> Adding +Milan Zamazal, can you please confirm?
There may still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.
What's your Vdsm version and Gluster storage type?
I'll do some migrations and attach logs from the same period shortly.
On 2019-06-13 07:47, Benny Zlotnik wrote:
Also, what is the storage domain type? Block or File?
On Thu, Jun 13, 2019 at 2:46 PM Benny Zlotnik wrote:
>
> Can you attach vdsm and engine logs?
> Does this happen for new VMs as well?
Can you attach vdsm and engine logs?
Does this happen for new VMs as well?
On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter wrote:
>
> after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
> images become owned by root:root. Live migration succeeds and the VM
> stays up, but
Hi,
It seems that you hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
Adding +Milan Zamazal, can you please confirm?
Regards,
Shani Leviim
On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter wrote:
> after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
>
My gluster servers are already on 5.6
On 2019-06-13 06:10, Strahil wrote:
Hi Alex,
Did you migrate from gluster v3 to v5 ?
If yes, then it could be the issue with v5.3 where permissions go
wrong.
If so, pick oVirt 4.3.4 as it uses a newer (fixed) version of Gluster
-> v5.6.
Best Regards,
engine: 4.3.4.2-1.el7
Node versions:
OS Version: RHEL - 7 - 6.1810.2.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0-957.12.2.el7.x86_64
KVM Version: 2.12.0-18.el7_6.5.1
LIBVIRT Version: libvirt-4.5.0-10.el7_6.10
VDSM Version: vdsm-4.30.17-1.el7
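For anyone wanting to compare, these can be pulled straight off a node; a minimal sketch (the exact package names, e.g. qemu-kvm-ev, are assumptions and vary by repo):

```shell
#!/bin/sh
# Print kernel and package versions comparable to the list above.
uname -r
rpm -q vdsm libvirt qemu-kvm-ev 2>/dev/null \
  || echo "rpm query skipped (not an RPM-based system)"
```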
On Thu, Jun 13, 2019 at 11:18 AM Alex McWhirter wrote:
> after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
> images become owned by root:root. Live migration succeeds and the VM
> stays up, but after shutting down the VM from this point, starting it up
> again will cause it
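For what it's worth, the manual recovery once an image has flipped to root:root is just a chown back to vdsm:kvm, which is uid/gid 36:36 on a standard oVirt host (the ids and the /rhev image path are assumptions). Sketched against a scratch file so it is safe to try anywhere:

```shell
#!/bin/sh
# Demonstrate the ownership check and fix on a throwaway file; on a real
# node the target would be the image under /rhev/data-center/ and the
# owner to restore would be 36:36 (vdsm:kvm).
img=$(mktemp)
stat -c '%u:%g' "$img"            # owner as found
chown "$(id -u):$(id -g)" "$img"  # real fix: chown 36:36 <image path>
stat -c '%u:%g' "$img"            # owner after the fix
rm -f "$img"
```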
Strahil Nikolov
On Jun 13, 2019 09:46, Alex McWhirter wrote:
>
> after