On 02/20/2018 12:17 PM, Michal Skrivanek wrote:
I wasn't really thinking in terms of logs. I was thinking of a database
field that tracks the oVirt version that created the VM.
On 19 Feb 2018, at 23:36, Jason Keltz <j...@cse.yorku.ca
On 2/15/2018 12:05 PM, Michal Skrivanek wrote:
I believe it was originally a 3.6 VM. Is there anywhere I can verify
this info? If not, it would be helpful if oVirt kept track of the
version that created the VM for cases just like this.
On 15 Feb 2018, at 16:37, Jason Keltz <j...@cse.yorku.ca> wrote:
On 02/15/2018 08:48 AM, nico...@devels.es wrote:
We upgraded one of our infrastructures to 4.2.0 recently and since then some of our
machines have the "Console" button greyed-out in the Admin UI, like they were
I changed their compatibility to 4.2 but with no luck, as they're still
Is there a way to know why that is, and how to solve it?
I'm attaching a screenshot.
I had the same problem with most of my VMs after the upgrade from 4.1 to 4.2.
See bugzilla here: https://bugzilla.redhat.com/show_bug.cgi?id=1528868
(which admittedly was a mesh of a bunch of different issues that occurred)
yeah, it's not a good idea to mix multiple issues :)
Seems https://bugzilla.redhat.com/show_bug.cgi?id=1528868#c26 is the last one
relevant to the grayed-out console problem in this email thread.
it's also possible to check the "VM Devices" subtab and list the graphical devices.
If this is the same problem as Nicolas's, then it would list cirrus, and it would
be great if you could confirm the conditions are similar (i.e. originally a 3.6 VM)
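For anyone following along, a quick way to make the same check from the host side is to look at the VM's video device in its libvirt XML. This is a hedged sketch, not from the thread: it assumes read-only libvirt access via `virsh -r` on the oVirt host, and the XML fragment below is a simulated sample of what a 3.6-era VM would typically carry.

```shell
# On the oVirt host, the live check would be (shown as a comment, since it
# needs a running VM; "rs" is the VM name from this thread):
#   virsh -r dumpxml rs | grep -A1 '<video>'
# Simulated here against a sample XML fragment with a cirrus video model:
xml='<video>
  <model type="cirrus" vram="16384" heads="1"/>
</video>'
# Extract just the model type attribute:
echo "$xml" | grep -o 'type="[a-z]*"'
```

If this prints `type="cirrus"`, it matches the condition Michal describes; a newer VM would typically show `qxl` or `vga` instead.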
well, we keep the date and who did that, but we can’t really keep all
the logs forever. Well, you can if you archive them somewhere, but I
guess that's impractical for such a long time :-D
VM Device subtab: (no Cirrus)
so this is a screenshot from the VM where the button is grayed out when
you start it?
Hm... it doesn't look wrong.
All I know is that everything was working fine, then I updated to
4.2, updated cluster version, and then most of my consoles were not
available. I can't remember if this happened before the cluster
upgrade or not. I suspect it was most and not all VMs since some of
them had been created later than 3.6, and this was an older one. I
only have this one VM left in this state because I had deleted the
other VMs and recreated them one at a time...
I will wait to see if you want me to try Vineet's solution of making
And then - if possible - describe some history of what happened: when was the
VM created, when was the cluster updated, when was the system upgraded and to what
version?
Can you get the engine.log and vdsm log from when you attempt to start that
VM? Just the relevant part is enough.
Sure... I restarted the VM (called "rs").
vdsm log: http://www.eecs.yorku.ca/~jas/ovirt-debug/02202018/vdsm.log
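Pulling out just the relevant part of the logs, as Michal asked, can be done with a quick grep. A minimal sketch, assuming the default log locations of a standard oVirt install; the sample log line below is illustrative, not from the actual logs in this thread:

```shell
# Default log locations (commands shown as comments, since the files only
# exist on the engine machine and the host respectively):
#   grep "VM 'rs'" /var/log/ovirt-engine/engine.log | tail -n 200
#   grep "'rs'" /var/log/vdsm/vdsm.log | tail -n 200
# Simulated here against a sample engine.log-style line for the VM "rs":
sample="2018-02-20 12:00:01 INFO RunVmCommand: Running command for VM 'rs'"
# Count matching lines (prints 1 for the sample):
echo "$sample" | grep -c "VM 'rs'"
```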
Users mailing list