On 9/17/13 10:51 AM, Robert Mustacchi wrote:
On 9/16/13 11:40 AM, Kent Watsen wrote:
1. Anyone else notice this behavior? I'm using AMD; does it happen on Intel systems too?
Hopefully someone else on the list can chime in about that.
No one on this list has, but I did get a reply on the OpenBSD list, where someone reports that OpenBSD's ACPI handling works correctly on both his Intel-based Proxmox VE *and* his AMD-based Red Hat Enterprise Virtualization systems (both of which use KVM): http://www.mail-archive.com/[email protected]/msg123364.html



   2. What's the best list to report the issue to? QEMU? libvirt?
No; probably the best place is the illumos KVM-related bug trackers.
I can't say that I'll personally have a lot of time to dig into this for
a while.

Based on the above report, you might be right about it being an illumos-kvm issue... FWIW, this is again an AMD-based KVM host, which I compiled and installed myself. I have no Intel-based system to test with...


But here are a few things to start with when the guest appears to freeze:

First, run `kvmstat 1`. That'll let us know whether or not there is
activity going on inside the guest, and it will direct a lot of the rest
of the investigation.
OK, I just reproduced the error; the `kvmstat` output is below. The PID in question is 2831. The other PID (692) is also an OpenBSD-based guest, which has been running for over a week and continues to run fine through all this...

Before sending the system_powerdown command:

# kvmstat 1
  pid vcpu |  exits :  haltx   irqx  irqwx    iox  mmiox |   irqs   emul   eptv
  692    0 |   2006 :    100      0      0      1   1376 |      0   1656      0
 2831    0 |    375 :     98      0      0      1    171 |      0    277      0
  pid vcpu |  exits :  haltx   irqx  irqwx    iox  mmiox |   irqs   emul   eptv
  692    0 |   1813 :    100      0      0      1   1279 |      0   1562      0
 2831    0 |    389 :     97      0      0      1    180 |      0    287      0
  pid vcpu |  exits :  haltx   irqx  irqwx    iox  mmiox |   irqs   emul   eptv
  692    0 |   1895 :    100      0      0      1   1323 |      0   1608      0
 2831    0 |    374 :     97      0      0      1    171 |      0    276      0

After sending the system_powerdown command and witnessing a total VNC console freeze:

# kvmstat 1
  pid vcpu |  exits :  haltx   irqx  irqwx    iox  mmiox |   irqs   emul   eptv
  692    0 |   1784 :     99      1      0      1   1249 |      0   1503      0
 2831    0 | 177153 :      0     21      0  16078     97 |  32378 144890      0
  pid vcpu |  exits :  haltx   irqx  irqwx    iox  mmiox |   irqs   emul   eptv
  692    0 |   1810 :    100      1      0      1   1260 |      0   1512      0
 2831    0 | 176881 :      0     39      0  16051     97 |  32312 144654      0
  pid vcpu |  exits :  haltx   irqx  irqwx    iox  mmiox |   irqs   emul   eptv
  692    0 |   1863 :    100      0      0      1   1291 |      0   1546      0
 2831    0 | 177214 :      0     33      0  16081     97 |  32384 144927      0

Comparing the two sets of PID 2831 rows:
  - "exits" has spiked
  - "haltx" went to zero
  - "irqx" is now non-zero
  - "irqwx" is still zero
  - "iox" has spiked
  - "mmiox" is roughly half of what it was
  - "irqs" has spiked
  - "emul" has spiked
  - "eptv" is still zero



If that shows up as basically all zeros, then the next things to run are
pstack on the qemu process and, for that process, the following in mdb -k:

0t<pid>::walk thread | ::findstack -v
It didn't show up as all zeros, but I went ahead and tried `mdb` anyway:

# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp scsi_vhci zfs sata sd ip hook neti sockfs arp usba stmf stmf_sbd idm cpc kvm md crypto random lofs smbsrv nfs ufs logindmux ptm nsmb ]
> 0t2831::walk thread | ::findstack -v
mdb: failed to read proc at b0f: no mapping for address
mdb: failed to perform walk: failed to initialize walk
> 0t2831::walk thread
mdb: failed to read proc at b0f: no mapping for address
mdb: failed to perform walk: failed to initialize walk
> 0t2831
mdb: failed to read proc at b0f: no mapping for address
mdb: failed to perform walk: failed to initialize walk
> ::quit
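
One thought: 0xb0f is 2831 in hex, so it looks like mdb took the literal PID value as a proc_t address. Does the PID need to go through ::pid2proc first? Something like this, perhaps (untested on my side, just going by the mdb docs):

> 0t2831::pid2proc | ::walk thread | ::findstack -v

If that's not the right incantation, let me know what I should be feeding ::walk thread.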


Any ideas?


Thanks,
Kent



