I'm running oVirt 4.3.2, and just upgraded gluster to 5.5. I see that the
gluster event daemon now works, however the events are not being processed by
the ovirt engine. On the engine side I'm seeing:
"engine.log:2019-03-25 17:14:04,707-04 ERROR
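One way to check the gluster eventing side independently of the engine is the
eventsapi CLI; a minimal sketch, run on a gluster node (the webhook URL
variable is a hypothetical placeholder for the engine's registered endpoint):

```shell
# Show the event daemon state and registered webhooks on each peer
gluster-eventsapi status

# Push a test event to a registered webhook; ENGINE_WEBHOOK_URL is a
# hypothetical placeholder for the engine's endpoint
gluster-eventsapi webhook-test "$ENGINE_WEBHOOK_URL"
```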
Just FYI, I have observed similar issues where a volume becomes unstable
for a period of time after the upgrade, but then seems to settle down after
a while. I've only witnessed this in the 4.3.x versions. I suspect it's
more of a Gluster issue than oVirt, but troubling nonetheless.
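For anyone wanting to watch gluster health after an upgrade, the standard
client commands cover brick status and pending heals; a minimal sketch
(volume name "data" is a placeholder, substitute your own):

```shell
# Check that all bricks and self-heal daemons are online
gluster volume status data

# List files pending heal, i.e. bricks that are out of sync
gluster volume heal data info

# Confirm all peers are connected
gluster peer status
```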
> On Fri, Mar 1, 2019 at 9:47 AM Ron Jerome wrote:
>> Thanks Michal,
>> I think we are onto something here. That request is getting a 401
>> unauthorized response...
>> ssl_access_log:10.10.10.41 - - [01/Mar/2019:09:26:46 -0500] "
9 at 04:50, Michal Skrivanek wrote:
> On 1 Mar 2019, at 02:34, Ron Jerome wrote:
> Here is the JS error that is being generated when I push the "Migrate" button:
> DataProvider failed to fetch data SyntaxError: JSON.parse: unexpected
> character at l
Sounds like a bug. I'll talk with Michal about reverting this dialog
to the 4.2 version.
>> On Tue, Feb 26, 2019 at 1:47 PM Ron Jerome wrote:
>>> Hi Sharon,
>>> This happens with all the VMs, regardless of uptime. I've never tried
> …" state then it sometimes takes time for the UI to be refreshed
> with state and data. It was reproducible for me too.
> Greg, does it sound reasonable?
> Can you please send a screenshot of the "Migrate VM(s)" dialog with th
I've toggled all the hosts into and out of maintenance, and VMs migrate
off of each as expected, but I still can't manually initiate a VM migration
from the UI. Do you have any hints as to where to look for error messages?
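For reference, the standard oVirt log locations to watch while retrying the
migration (these are the default paths; adjust if your installation differs):

```shell
# On the engine host: backend and UI/frontend errors
tail -f /var/log/ovirt-engine/engine.log
tail -f /var/log/ovirt-engine/ui.log

# On each hypervisor host: vdsm activity during the migration attempt
tail -f /var/log/vdsm/vdsm.log
```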
Thanks in advance,
On Mon, 25 Feb 2019 at 19:56, Ron Jerome
> It's a bad error message. It just means there are no hosts
> available to migrate to.
> Do you have other hosts up with capacity?
> On Mon, Feb 25, 2019 at 3:01 PM Ron Jerome wrote:
>> I've been running 4.3.0 for a few weeks now and just discovered t
I've been running 4.3.0 for a few weeks now and just discovered that I can't
manually migrate VMs from the UI. I get an error message saying: "Could not
fetch data needed for VM migrate operation"
Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1670701
> Can you be more specific? What things did you see, and did you report bugs?
I've got this one: https://bugzilla.redhat.com/show_bug.cgi?id=1649054
and this one: https://bugzilla.redhat.com/show_bug.cgi?id=1651246
and I've got bricks randomly going offline and getting out of sync with the
> I can confirm that this worked. I had to shut down every single VM then
> change ownership to vdsm:kvm of the image file then start VM back up.
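The fix described above can be sketched roughly as follows, assuming a
storage domain mounted under the standard /rhev/data-center/mnt path; the
path components are hypothetical placeholders, and the VM must be shut down
first:

```shell
# Hypothetical placeholder path; substitute your storage server,
# storage-domain UUID, and image UUID
IMG=/rhev/data-center/mnt/server:_export/sd-uuid/images/img-uuid

# Restore the ownership and permissions oVirt expects
chown -R vdsm:kvm "$IMG"
chmod 660 "$IMG"/*
```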
Not to rain on your parade, but you should keep a close eye on your gluster
file system after the upgrade. The stability of my gluster file system
> This may be happening because I changed cluster compatibility to 4.3 then
> immediately after changed data center compatibility to 4.3 (before
> restarting VMs after cluster compatibility change). If this is the case, I
> can't fix it by downgrading the data center compatibility to 4.2 as it won't
> On Fri, Feb 8, 2019 at 10:39 PM wrote:
> For Hetz it was due to a missing foreign key constraint in image_transfers
> Can you please check if 'fk_image_transfers_command_enitites'[*] exists by
> executing the following sql command:
> SELECT COUNT(1) FROM
> Can you please check this?
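A hedged sketch of such a check against PostgreSQL's catalog, assuming the
default engine database name and psql access on the engine host:

```shell
# Count matching constraints in the PostgreSQL catalog; 0 means the
# foreign key is missing ("engine" is the default database name)
sudo -u postgres psql engine -c \
  "SELECT COUNT(1) FROM pg_constraint WHERE conname = 'fk_image_transfers_command_enitites';"
```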
> It seems that under certain unclear circumstances, during a 4.2 -> 4.3 host
> upgrade, the file ownership and permissions of VM disks get set to
> root:root and 640 instead of vdsm:kvm and 660, and so vdsm fails to start
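A quick way to spot affected disks is to scan the storage mounts for the bad
ownership and mode; a minimal sketch (the path is the standard oVirt mount
point, adjust as needed):

```shell
# List files owned by root with mode 640 under the storage domain mounts;
# these are the disks vdsm will fail to open
find /rhev/data-center/mnt -type f -user root -perm 640 -printf '%u:%g %m %p\n'
```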