Package: qemu-system-x86
Version: 1:2.8+dfsg-6+deb9u2
Problem:
======
With the latest Stretch qemu-system-x86, the `qemu-system-i386` process consumes the majority of Xen Dom0's RAM and ultimately crashes the DomU.
Symptoms:
========
When starting a Xen HVM guest with qemu-system-x86 version 1:2.8+dfsg-6+deb9u2 installed, the `qemu-system-i386` process's CPU usage is high and its RAM balloons to consume up to 75% of Dom0's RAM. This makes Dom0 extremely sluggish and forces it to page out to its swap partition.
Watching the DomU boot via its virtual serial console shows it hanging at the "loading initial image" stage of Linux's bootstrap for far longer than normal.
Dom0 CPU usage normalizes at around 4% once the DomU has finished booting, but the process's RAM usage does not decrease. If left running, the DomU ultimately dies; `xl list` then prints its state as "------" (no state flags set).
At this point, the DomU can only be destroyed. Killing it with `xl destroy` yields:

libxl: error: libxl_dm.c:2303:kill_device_model: Device Model already
What I've tried:
==========
- Commenting out all but the essential lines of the DomU's config file.
- With all storage commented out, qemu-system-i386's resource utilization remains normal when the DomU starts (the DomU boot-loops at SeaBIOS). Booting from any kind of storage triggers the issue.
- Downgrading back to qemu-system-x86 1:2.8+dfsg-6 resolves the issue.
Setup:
====
- OS: Debian Stretch, all packages up to date as of 10 August 2017
- Architecture: 64-bit Intel x86
- Hypervisor: xen-hypervisor-4.8-amd64 (4.8.1-1+deb9u1)
- QEMU package: qemu-system-x86 (1:2.8+dfsg-6+deb9u2)
- Kernel (Dom0 and DomU): linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u3)
- RAM allocated to Dom0: 512 MB (ballooning disabled)
- Physical CPU cores allocated to Dom0: 4 of 4
Steps to reproduce:
=============
1. Set up the Dom0 system with the latest packages for Debian Stretch.
2. Create a generic HVM DomU configuration file (default HVM builder, default device_model_version).
3. Start the HVM DomU with `xl create`.
4. Monitor qemu-system-i386 CPU and RAM usage with `top`.
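For reference, a generic HVM config along the lines described in step 2 might look like the sketch below. The guest name, disk path, and bridge name are illustrative placeholders, not taken from the original report; the builder and device model settings are left at their defaults, as in the report.

```
# Illustrative minimal Xen HVM DomU config (xl format, Xen 4.8).
# Names and paths are hypothetical; device_model_version is left
# at its default, as in the report.
builder = "hvm"
name    = "stretch-hvm"                       # hypothetical guest name
memory  = 512
vcpus   = 1
disk    = [ "phy:/dev/vg0/domu-disk,xvda,w" ] # any storage triggers the issue
vif     = [ "bridge=xenbr0" ]                 # hypothetical bridge name
serial  = "pty"                               # serial console used to watch the boot
```

The guest would then be started with `xl create /path/to/config` while watching `top` in Dom0.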
