Yes, they are still showing "paused" state.
No, bouncing libvirt didn't help.
I noticed the errors about the ISO domain. Didn't think that was related.
I have been migrating a lot of VMs to oVirt lately, and recently added
Also had some problems with /etc/exports for a while, but I think those
issues are all resolved.
Last "unresponsive" message in vdsm.log was:
vmId=`b6a13808-9552-401b-840b-4f7022e8293d`::monitor become unresponsive
(command timeout, age=310323.97)
vmId=`5bfb140a-a971-4c9c-82c6-277929eb45d4`::monitor become unresponsive
(command timeout, age=310323.97)
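For anyone chasing the same symptom, a quick way to enumerate the affected VMs is to pull the vmIds out of those log lines. A minimal sketch, assuming the message format shown above and the usual default log path `/var/log/vdsm/vdsm.log`:

```shell
#!/bin/sh
# Sketch: list unique vmIds that vdsm flagged as "monitor become unresponsive".
# Assumes the message format quoted above; log path is the usual default.
unresponsive_vms() {
    # $1: path to vdsm.log
    grep 'monitor become unresponsive' "$1" \
        | sed -n 's/.*vmId=`\([0-9a-f-]*\)`.*/\1/p' \
        | sort -u
}

if [ -r /var/log/vdsm/vdsm.log ]; then
    unresponsive_vms /var/log/vdsm/vdsm.log
fi
```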
On 4/29/16 1:40 AM, Michal Skrivanek wrote:
On 28 Apr 2016, at 19:40, Bill James <bill.ja...@j2.com> wrote:
thank you for response.
I bolded the ones that are listed as "paused".
[root@ovirt1 test vdsm]# virsh -r list --all
Id Name State
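To diff the engine's view against libvirt's, the paused set can be filtered straight out of that listing. A small sketch that parses `virsh -r list --all` output on stdin (assuming the three-column Id/Name/State layout shown above):

```shell
#!/bin/sh
# Sketch: print names of domains whose State column reads "paused",
# given `virsh -r list --all` output on stdin. Assumes the Id/Name/State
# layout shown above.
paused_domains() {
    awk '$NF == "paused" { print $2 }'
}

# Typical use on the host (read-only connection):
#   virsh -r list --all | paused_domains
```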
Looks like the problem started around 2016-04-17 20:19:34,822, based on
yes, that time looks correct. Any idea what might have been a trigger?
Anything interesting happened at that time (power outage of some host,
some maintenance action, anything)?
logs indicate a problem when vdsm talks to libvirt (all those "monitor unresponsive" errors).
It does seem that at that time you started to have some storage
connectivity issues - first one at 2016-04-17 20:06:53,929. And it
doesn’t look temporary, because such errors are still there a couple of
hours later (in your most recent file attached I can see one at 23:00:54).
When I/O gets blocked the VMs may experience issues (then the VM gets
Paused), or their qemu process gets stuck (resulting in libvirt either
reporting an error or getting stuck as well -> resulting in what vdsm
sees as “monitor unresponsive”)
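One way to tell those two failure modes apart is to look for qemu processes in uninterruptible sleep (state `D`), the classic signature of blocked NFS I/O. A hedged sketch (the `qemu` name match is an assumption; the process may be called `qemu-kvm` or `qemu-system-x86_64` depending on the distro):

```shell
#!/bin/sh
# Sketch: print PIDs of qemu processes in D (uninterruptible sleep) state,
# given `ps -eo stat,pid,comm` output on stdin. A D-state qemu usually means
# its I/O is blocked, e.g. on an unresponsive NFS server.
stuck_qemu() {
    awk '$1 ~ /^D/ && $3 ~ /qemu/ { print $2 }'
}

# Typical use:
#   ps -eo stat,pid,comm | stuck_qemu
```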
Since you now bounced libvirtd - did it help? Do you still see the wrong
status for those VMs, and still those "monitor unresponsive" errors in
the logs?
If not… then I would suspect the “vm recovery” code not working
correctly. Milan is looking at that.
There's a lot of vdsm logs!
fyi, the storage domain for these VMs is a "local" NFS share,
attached more logs.
On 04/28/2016 12:53 AM, Michal Skrivanek wrote:
On 27 Apr 2016, at 19:16, Bill James<bill.ja...@j2.com> wrote:
virsh # list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such
file or directory
you need to run virsh in read-only mode
virsh -r list --all
[root@ovirt1 test vdsm]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
Active: active (running) since Thu 2016-04-21 16:00:03 PDT; 5 days ago
tried systemctl restart libvirtd.
Attached vdsm.log and supervdsm.log.
[root@ovirt1 test vdsm]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
Active: active (running) since Wed 2016-04-27 10:09:14 PDT; 3min 46s ago
the vdsm.log attachment is good, but it’s too short an interval; it only shows
the recovery (vdsm restart) phase when the VMs are identified as paused… can
you add earlier logs? Did you restart vdsm yourself, or did it crash?
On 04/26/2016 11:35 PM, Michal Skrivanek wrote:
On 27 Apr 2016, at 02:04, Nir Soffer<nsof...@redhat.com> wrote:
On Wed, Apr 27, 2016 at 2:03 AM, Bill James <bill.ja...@j2.com> wrote:
I have a hardware node that has 26 VMs.
9 are listed as "running", 17 are listed as "paused".
In truth all VMs are up and running fine.
I tried telling the db they are up:
engine=> update vm_dynamic set status = 1 where vm_guid = (select
vm_guid from vm_static where vm_name = 'api1.test.j2noc.com');
GUI then shows it up for a short while,
then puts it back in paused state.
2016-04-26 15:16:46,095 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-16) [157cc21e] VM '242ca0af-4ab2-4dd6-b515-5d435e6452c4' (api1.test.j2noc.com) moved from 'Up'
2016-04-26 15:16:46,221 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-16) [157cc21e] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM api1.test.j2noc.com has been paused.
Why does the engine think the VMs are paused?
I can fix the problem by powering off the VM then starting it back up.
But the VM is working fine! How do I get ovirt to realize that?
If this is an issue in engine, restarting engine may fix this.
But since this problem happens only on one node, I don't think the engine is the issue.
If this is an issue in vdsm, restarting vdsm may fix this.
If this does not help, maybe this is a libvirt issue? Did you try to check the
VM status using virsh?
this looks more likely as it seems such status is being reported
logs would help, vdsm.log at the very least.
If virsh thinks that the VMs are paused, you can try to restart libvirtd.
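For that restart path, order matters: vdsm talks to libvirt, so bouncing libvirtd under a running vdsmd can leave vdsm with a stale connection. A hedged sketch of the sequence (service names are the usual ones on an EL7 oVirt host; the `DRY_RUN` guard is a hypothetical addition here so the plan can be previewed first):

```shell
#!/bin/sh
# Sketch: restart libvirtd, then vdsmd, in that order, so vdsm reconnects
# to a fresh libvirt. Set DRY_RUN=1 to print the commands instead of
# executing them.
restart_virt_stack() {
    for svc in libvirtd vdsmd; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "systemctl restart $svc"
        else
            systemctl restart "$svc"
        fi
    done
}

# Preview only:
#   DRY_RUN=1 restart_virt_stack
```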
Please file a bug about this in any case with engine and vdsm logs.
Adding Michal in case he has better idea how to proceed.