er... This bug/issue should be closed.
** Changed in: qemu
Status: New => Invalid
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1888601
Title:
QEMU v5.1.0-rc0/rc1 hang with nested virtualization
This problem no longer occurs. I suspect it was an issue in my
environment or possibly with the options I compiled QEMU with. This
bug/issue should be closed.
--
Not currently, but let me try and set up just a simple qemu test
--
If even with that fix it's failing and giving a similar backtrace,
then:
```
Thread 4 (LWP 23729):
#0 0x7f9ae5fed337 in __strcmp_sse2 ()
#1 0x7f9ae5d9a8ad in g_str_equal () at pthread_create.c:679
#2 0x7f9ae5d99a9d in g_hash_table_lookup () at pthread_create.c:679
#3 0x7f9ae5ac728f
```
Is there anything more I could do to help track this down?
--
Hi Jason,
See Comment#10 for trace -- 5.1.0-rc2 includes that fix...
https://github.com/qemu/qemu/commit/a48aaf882b100b30111b5c7c75e1d9e83fe76cfd
... so the hang is still happening.
--
Thanks for the information.
It could be fixed by this commit upstream.
a48aaf882b100b30111b5c7c75e1d9e83fe76cfd ("virtio-pci: fix wrong index in
virtio_pci_queue_enabled")
Please try.
Thanks
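I haven't checked the actual diff, but the commit title suggests the usual wrong-index pattern; a minimal, hypothetical sketch of that class of bug (the struct and function names are illustrative, not QEMU's real code):

```c
#include <stdbool.h>

#define NUM_QUEUES 4

/* Hypothetical stand-in for a device proxy -- not QEMU's actual struct. */
struct proxy {
    int queue_sel;            /* queue index last selected by the guest */
    bool enabled[NUM_QUEUES]; /* per-queue "enabled" flags */
};

/* Wrong-index pattern: ignores the requested index n and reads the flag
 * of whichever queue happens to be selected. */
static bool queue_enabled_buggy(const struct proxy *p, int n)
{
    (void)n;
    return p->enabled[p->queue_sel];
}

/* Fixed pattern: index by the queue actually being queried. */
static bool queue_enabled_fixed(const struct proxy *p, int n)
{
    return p->enabled[n];
}
```

With queue_sel == 0 and only queue 2 enabled, the buggy version reports queue 2 as disabled while the fixed one reports it enabled.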
--
Here's what I get with 5.1.0-rc2
```
(gdb) thread apply all bt
Thread 5 (LWP 23730):
#0 0x7f9ae6040ebb in ioctl ()
#1 0x7f9ae57cf98b in kvm_vcpu_ioctl (cpu=cpu@entry=0x57539ea0,
type=type@entry=44672) at /root/qemu/accel/kvm/kvm-all.c:2631
#2 0x7f9ae57cfac5 in kvm_cpu_exec
```
```
(gdb) thread apply all bt
Thread 5 (LWP 211759):
#0 0x7ff56a9988d8 in g_str_hash ()
#1 0x7ff56a997a0c in g_hash_table_lookup ()
#2 0x7ff56a6c528f in type_table_lookup (name=0x7ff56ac9a9dd "virtio-bus")
at qom/object.c:84
#3 type_get_by_name (name=0x7ff56ac9a9dd "virtio-bus")
```
Thanks Jason,
Ok will do... gimme a day and I'll post what I see.
--
You can get the calltrace by:
1) compiling QEMU with --enable-debug
2) attaching gdb -p $pid_of_qemu when you see the hang
3) thread apply all bt
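For concreteness, the three steps might look like this (the source path and process name are illustrative, adjust for your setup):

```shell
# 1) rebuild QEMU with debug info so the backtrace has symbols
cd /root/qemu && ./configure --enable-debug && make -j"$(nproc)"

# 2) when the hang occurs, find the stuck QEMU process
pid=$(pgrep -f qemu-system-x86_64 | head -n 1)

# 3) attach gdb, dump a backtrace for every thread, then detach
gdb -p "$pid" -batch -ex 'thread apply all bt'
```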
Thanks
--
It hangs (still guessing here) immediately -- before anything is logged.
I'll try to get you a calltrace but have to figure out how to do that
first ;) Any pointers appreciated.
--
:) Sorry, a VSI is a virtual server instance, e.g. a VM vs. a bare
metal server.
I'm using IBM Cloud IKS which is a managed Kubernetes Service. I'm
installing QEMU 5.1.x on the worker nodes. It is this instance of QEMU
-- e.g. /opt/kata/bin/qemu-system-x86_64 -- that is hanging. The Kernel
version
Simon: Please answer Jason's questions
But my reading is: I don't know what a VSI is, but I'm guessing it's the
host, and he doesn't know the host qemu; so the 5.1 is the qemu that kata
is running?
It perhaps would make more sense if this has nothing to do with nesting,
and more to do with Kata's
Hi:
It's not clear to me:
- Does the hang happen on the host or the L1 guest?
- Is qemu 5.1-rc0 used on the host or the L1 guest?
- When did you see the hang, just after launching the guest?
- Can you use gdb to get a calltrace of qemu when you see the hang?
- What are the kernel versions in L1 and L2?
I believe the VSI itself is QEMU based but don't know the version or
details but suspect it's 4.1 based. We compile our own QEMU version for
use with Kata and that's where we're now using 5.1.0-rc1 with the above
commit reverted.
Host Kernel is ... 4.15.0-101-generic if that helps
re: cpu --
Seems an odd one to be the culprit? Still, let's ask Jason.
Can you just clarify: is this qemu 5.1.0-rc on both the host and the kata, or
are you just changing to using qemu 5.1 as the host, and you're sticking with
the existing kata qemu?
Which host CPU have you got?
--
** Description changed:
We're running Kata Containers using QEMU, and with v5.1.0-rc0 and rc1 have
noticed a problem at startup where QEMU appears to hang. We are not
seeing this problem on our bare metal nodes, only on a VSI that
supports nested virtualization.
We unfortunately see