[apologies for the delay -- I've been off for a week and am catching up
on email]
On Tue, Oct 20, 2015 at 09:58:51AM -0400, Sasha Levin wrote:
> On 10/20/2015 09:42 AM, Dmitry Vyukov wrote:
> > I now have another issue. My binary fails to mmap a file within lkvm
> > sandbox. The same binary works fine on host and in qemu.
It seems that real mode virtualisation on Nehalem has regressed in 4.2:
On Sun, 2015-10-25 at 10:08 +0100, Stefan Fritsch wrote:
[...]
> I cannot use KVM with 4.2, qemu loops with 100% CPU during seabios
> initialization. Booting with the latest linux-image-4.1.0-2-amd64 fixes
> the issue.
[...]
On 10/20/2015 09:42 AM, Dmitry Vyukov wrote:
> I now have another issue. My binary fails to mmap a file within lkvm
> sandbox. The same binary works fine on host and in qemu. I've added
> strace into sandbox script, and here is the output:
>
> [pid 837] openat(AT_FDCWD,
>>>>> ./lkvm sandbox --disk sandbox-test --mem=2048 --cpus=4 --kernel
>>>>> /arch/x86/boot/bzImage --network mode=user -- /my_prog
>>>>>
>>>>> /my_prog then connects to a program on host over a tcp socket.
I see that host receives some data, sends some data back, but then
my_prog hangs on network read.
To localize this I wrote 2 programs (attached). ping is run on host
and pong is run from lkvm sandbox. They successfully establish tcp
connection, but after some iterations both hang on read.
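The ping/pong attachments themselves are not included in the archive. As a purely hypothetical reconstruction of the test pattern described (names, message size, and iteration count are my assumptions, and loopback stands in for lkvm's user-mode network), the lockstep exchange looks like this:

```python
# Hypothetical reconstruction of the ping/pong test pair described above (the
# actual attachments are not in the archive). It only illustrates the pattern:
# both sides alternate sendall()/recv() in lockstep, so a single lost or
# stalled segment leaves each end blocked in recv() -- the hang reported here.
import socket
import threading

ITERATIONS = 1000
MSG = b"x" * 64

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def pong(listener):
    # "Guest" side: accept one connection and echo every message back.
    conn, _ = listener.accept()
    with conn:
        for _ in range(ITERATIONS):
            conn.sendall(recv_exact(conn, len(MSG)))

def ping(port):
    # "Host" side: send a message, wait for the echo, repeat.
    with socket.create_connection(("127.0.0.1", port)) as s:
        for i in range(ITERATIONS):
            s.sendall(MSG)
            assert recv_exact(s, len(MSG)) == MSG, f"iteration {i} garbled"
    return True

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # loopback stand-in for --network mode=user
listener.listen(1)
t = threading.Thread(target=pong, args=(listener,))
t.start()
ok = ping(listener.getsockname()[1])
t.join()
listener.close()
print("completed", ITERATIONS, "iterations")
```

Because each side blocks in recv() until the other's data arrives, any stall in the user-mode network path freezes both ends at the same iteration, matching the symptom reported.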
https://bugzilla.kernel.org/show_bug.cgi?id=103851
Bug ID: 103851
Summary: qemu windows guest hangs on 100% cpu usage
Product: Virtualization
Version: unspecified
Kernel Version: 3.13.6
Hardware: Intel
OS: Linux
https://bugzilla.kernel.org/show_bug.cgi?id=103851
Wanpeng Li changed:
What|Removed |Added
CC|
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #15 from Igor Mammedov imamm...@redhat.com ---
Fixed in v4.0
commit 744961341d472db6272ed9b42319a90f5a2aa7c4
kvm: avoid page allocation failure in kvm_set_memory_region()
--
You are receiving this mail because:
You are watching the assignee of the bug.
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #12 from Thomas Stein himbe...@meine-oma.de ---
Hello Igor.
Is this bug also present in 3.18? I'm asking because I am considering a downgrade.
thanks and cheers
t.
--
You are receiving this mail because:
You are watching the assignee of
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #13 from Igor Mammedov imamm...@redhat.com ---
Nope, it's only since 3.19.
Could you test the patch in comment 11?
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #14 from Thomas Stein himbe...@meine-oma.de ---
I have patch from comment 11 already running on two machines. No problems so
far.
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #11 from Igor Mammedov imamm...@redhat.com ---
(In reply to Thomas Stein from comment #10)
Hello.
I suppose this patch is not included in 3.19.3?
Nope, I've posted v2 of the patch fixing the problems Marcelo pointed out:
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #10 from Thomas Stein himbe...@meine-oma.de ---
Hello.
I suppose this patch is not included in 3.19.3?
thanks and cheers
t.
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #8 from Igor Mammedov imamm...@redhat.com ---
(In reply to Thomas Stein from comment #7)
Hello.
After reverting commit 1d4e7e3c0bca747d0fc54069a6ab8393349431c0 I had no
problem any more. But we have to keep in mind this error only happened now and
then.
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #9 from Thomas Stein himbe...@meine-oma.de ---
Hello.
I applied the patch to vanilla 3.19.2. No problems so far. Did a few snapshots
and vm restarts.
cheers
t.
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #7 from Thomas Stein himbe...@meine-oma.de ---
Hello.
After reverting commit 1d4e7e3c0bca747d0fc54069a6ab8393349431c0 I had no
problem any more. But we have to keep in mind this error only happened now and
then. Especially creating
https://bugzilla.kernel.org/show_bug.cgi?id=93251
Igor Mammedov imamm...@redhat.com changed:
What|Removed |Added
CC||imamm...@redhat.com
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #5 from Thomas Stein himbe...@meine-oma.de ---
Hi.
Reverted the commit right now. I have to boot with this kernel. More tomorrow.
Thanks and cheers
t.
https://bugzilla.kernel.org/show_bug.cgi?id=93251
Thomas Stein himbe...@meine-oma.de changed:
What|Removed |Added
CC||himbe...@meine-oma.de
https://bugzilla.kernel.org/show_bug.cgi?id=93251
Bandan Das b...@makefile.in changed:
What|Removed |Added
CC||b...@makefile.in
--- Comment #2 from Marcelo Tosatti mtosa...@redhat.com ---
(In reply to Ondřej Súkup from comment #1)
I run about 7 nodes and simulate a build of an OpenStack cloud in a qemu-kvm env.
Nodes randomly stop working after reboot, or hang a few seconds after reboot.
from journalctl:
Feb 13 18:10
https://bugzilla.kernel.org/show_bug.cgi?id=93251
Bug ID: 93251
Summary: qemu-kvm guests randomly hangs after reboot command in
guest
Product: Virtualization
Version: unspecified
Kernel Version: 3.19.0
Hardware: Intel
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #1 from Ondřej Súkup mimi...@gmail.com ---
I run about 7 nodes and simulate a build of an OpenStack cloud in a qemu-kvm env.
Nodes randomly stop working after reboot, or hang a few seconds after reboot.
from journalctl:
Feb 13 18:10:22
On 01/23/2014 07:55 PM, Dave Hansen wrote:
On 01/21/2014 08:38 AM, Toralf Förster wrote:
Jan 21 17:18:57 n22 kernel: INFO: rcu_sched self-detected stall on CPU { 2}
(t=60001 jiffies g=18494 c=18493 q=183951)
Jan 21 17:18:57 n22 kernel: sending
hangs and for those cases then sometimes not even
| sysrq buttons do work.
Can you reproduce it with the wlan driver disabled completely?
yes - with CONFIG_WLAN=n I do get a similar thing :
Jan 26 11:23:17 n22 kernel: NET: Registered protocol family 17
Jan 26 11:23:17 n22 kernel: device vnet0
hangs and for those cases then sometimes not even | sysrq buttons
do work.
Can you reproduce it with the wlan driver disabled completely?
yes - root cause is not the wlan - that's just a victim.
Paolo
On 23/01/2014 20:50, Toralf Förster wrote:
| What makes the situation really annoying - sometimes I can just
| restart my wlan device and the system works normally, but sometimes
| the whole system hangs and for those cases then sometimes not even
On 01/21/2014 08:38 AM, Toralf Förster wrote:
Jan 21 17:18:57 n22 kernel: INFO: rcu_sched self-detected stall on CPU { 2}
(t=60001 jiffies g=18494 c=18493 q=183951)
Jan 21 17:18:57 n22 kernel: sending NMI to all CPUs:
Jan 21 17:18:57 n22 kernel: NMI backtrace for cpu 2
Jan 21 17:18:57 n22
On 01/23/2014 10:55 AM, Dave Hansen wrote:
CONFIG_PREEMPT_NOTIFIERS=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
What makes the situation really annoying - sometimes I can just restart my wlan
device and the system works normally, but sometimes the whole system hangs and
for those cases
https://bugzilla.kernel.org/show_bug.cgi?id=63291
Greg Sheremeta g...@gregsheremeta.com changed:
What|Removed |Added
Status|NEW |RESOLVED
https://bugzilla.kernel.org/show_bug.cgi?id=63291
Bug ID: 63291
Summary: KVM USB passthrough to Windows 7 guest fails with
error -110, hangs
Product: Virtualization
Version: unspecified
Kernel Version: 3.11.1-200.fc19
https://bugzilla.kernel.org/show_bug.cgi?id=63291
Gleb g...@redhat.com changed:
What|Removed |Added
CC||g...@redhat.com
https://bugzilla.kernel.org/show_bug.cgi?id=60642
Bug ID: 60642
Summary: guest uses 100% and completely hangs
Product: Virtualization
Version: unspecified
Kernel Version: guest: 3.10, host: 3.10
Hardware: x86-64
OS: Linux
https://bugzilla.kernel.org/show_bug.cgi?id=60642
--- Comment #1 from Folkert van Heusden folk...@vanheusden.com ---
In the console of the guest I see:
[sched_delayed] sched: RT throttling activated
I have the same issue, with 3.9.1 (3.9.0 too) it hangs right after seabios...
(no problem in 3.8.11)
qemu-1.4.1
seabios-1.7.2.1
after setting emulate_invalid_guest_state=0 everything works just fine.
virsh # qemu-monitor-command vm-jack --hmp x/8i $pc
0x000fc46b: lgdtw %cs:-0x2c60
On Wed, May 08, 2013 at 11:22:01AM +0000, Tomas Papan wrote:
I have the same issue, with 3.9.1 (3.9.0 too) it hangs right after seabios...
(no problem in 3.8.11)
qemu-1.4.1
seabios-1.7.2.1
Is there anything interesting in libvirt logfile?
Also please send the output of qemu-monitor-command vm-jack --hmp info
registers
And, just in case, can
On Wed, May 08, 2013 at 02:08:55PM +0200, Tomas Papan wrote:
Hi,
I found this in the libvirt log (but those messages are the same in 3.8.x)
anakin libvirt # cat libvirtd.log
2013-05-08 11:59:29.645+0000: 3750: info : libvirt version: 1.0.5
2013-05-08 11:59:29.645+0000: 3750: error :
Hi,
No, nothing. I checked all logs (even syslog)
1) virsh # qemu-monitor-command vm-jack --hmp info status
VM status: running
2) morpheus@anakin ~ $ ps aux | grep vm-jack
qemu 3822 0.5 0.1 8952256 23600 ? Sl 13:59 0:08
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name vm-jack
Sorry, I didn't write that well, I checked that log too... nothing is there...
anakin qemu # cat vm-jack.log
2013-05-08 13:02:52.358+0000: starting up
LC_ALL=C
PATH=/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin
HOME=/root USER=root
On Wed, May 08, 2013 at 02:51:48PM +0200, Tomas Papan wrote:
Hi,
No, nothing. I checked all logs (even syslog)
Yeah, since the status of the vm is running you are not supposed to see
anything there.
1) virsh # qemu-monitor-command vm-jack --hmp info status
VM status: running
2) morpheus@anakin
Ok, the cpu stays at 0% when it hangs, there is only one 100% cpu peak
which happens when the vm starts ( I think this is quite normal).
However I ran the following command, and I stopped it right when it hung:
anakin trace2 # virsh start vm-jack; pid=`virsh qemu-monitor-command
vm-jack --hmp info cpus
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
patch is working :)
Thank you very much Gleb.
Regards
Tomas
On Wed, May 08, 2013 at 04:52:52PM +0200, Tomas Papan wrote:
patch is working :)
Thank you very much Gleb.
Thank you for your patience. Curious bug it was.
--
Gleb.
The unresponsiveness has to do with the fact that arch_local_irq_restore()
does not guarantee to hard enable interrupts.
Could you elaborate? If the saved IRQ state was enabled, why
wouldn't arch_local_irq_restore() hard-enable IRQs? The last thing it
does
-Original Message-
From: Wood Scott-B07421
Sent: Friday, May 03, 2013 11:15 PM
To: Caraman Mihai Claudiu-B02008
Cc: Wood Scott-B07421; kvm-...@vger.kernel.org; kvm@vger.kernel.org;
linuxppc-...@lists.ozlabs.org
Subject: Re: [PATCH] KVM: PPC: Book3E 64: Fix IRQs warnings and hangs
On Fri, 2013-05-03 at 18:24 +0200, Alexander Graf wrote:
There is no reason to exit guest with soft_enabled == 1, a
local_irq_enable()
call will do this for us so get rid of kvmppc_lazy_ee() calls. With this fix
we eliminate irqs_disabled() warnings and some guest and host hangs revealed
under stress tests, but guests still exhibit some unresponsiveness.
-Original Message-
From: Wood Scott-B07421
Sent: Saturday, May 04, 2013 1:07 AM
To: Caraman Mihai Claudiu-B02008
Cc: Wood Scott-B07421; kvm-...@vger.kernel.org; kvm@vger.kernel.org;
linuxppc-...@lists.ozlabs.org
Subject: Re: [PATCH] KVM: PPC: Book3E 64: Fix IRQs warnings and hangs
I replaced the two calls to kvmppc_lazy_ee_enable() with calls to
hard_irq_disable(), and it seems to be working fine.
Please take a look at the 'KVM: PPC64: booke: Hard disable interrupts when
entering guest' RFC thread and see
https://bugzilla.kernel.org/show_bug.cgi?id=50921
--- Comment #21 from Gleb g...@redhat.com 2013-02-03 08:43:20 ---
It is queued for 3.8.
--
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.
https://bugzilla.kernel.org/show_bug.cgi?id=50921
--- Comment #22 from Gleb g...@redhat.com 2013-02-03 08:48:16 ---
(In reply to comment #21)
It is queued for 3.8.
Sorry, for 3.9
https://bugzilla.kernel.org/show_bug.cgi?id=50921
--- Comment #20 from Lucio Crusca lu...@sulweb.org 2013-02-02 22:32:39 ---
Did this fix go into vanilla kernels? Is 3.7.5 patched?
On 01/14/2013 01:24 PM, Andrew Clayton wrote:
On Mon, 14 Jan 2013 15:27:36 +0200, Gleb Natapov wrote:
On Sun, Jan 13, 2013 at 10:29:58PM +0000, Andrew Clayton wrote:
When running qemu-kvm under 64bit Fedora 16 under current 3.8, it
just hangs at start up. Doing a ps -ef hangs the process
On Tue, 15 Jan 2013 11:48:39 -0500, Rik van Riel wrote:
On 01/14/2013 01:24 PM, Andrew Clayton wrote:
[snip]
bashS 88013b2b0d00 0 3203 3133 0x
880114dabe58 0082 800113558065
880114dabfd8 880114dabfd8 4000
On Tue, 15 Jan 2013 17:17:56 +, Andrew Clayton wrote:
On Tue, 15 Jan 2013 11:48:39 -0500, Rik van Riel wrote:
On 01/14/2013 01:24 PM, Andrew Clayton wrote:
[snip]
bashS 88013b2b0d00 0 3203 3133 0x
880114dabe58 0082
On Tue, 15 Jan 2013, Andrew Clayton wrote:
bashS 88013b2b0d00 0 3203 3133 0x
880114dabe58 0082 800113558065
880114dabfd8 880114dabfd8 4000 88013b0c5b00
88013b2b0d00 880114dabd88 8109067d
On Tue, 15 Jan 2013 19:41:32 +0100 (CET), Jiri Kosina wrote:
[snip]
Could you please try the patch below and report back? Thanks.
From: Jiri Kosina jkos...@suse.cz
Subject: [PATCH] lockdep, rwsem: fix down_write_nest_lock()
if !CONFIG_DEBUG_LOCK_ALLOC
Commit 1b963c81b1 (lockdep,
On Sun, Jan 13, 2013 at 10:29:58PM +0000, Andrew Clayton wrote:
When running qemu-kvm under 64bit Fedora 16 under current 3.8, it just
hangs at start up. Doing a ps -ef hangs the process at the point where it
would display the qemu process (trying to list the qemu-kvm /proc pid
directory
Copying linux-mm.
On Mon, Jan 14, 2013 at 06:24:49PM +0000, Andrew Clayton wrote:
On Mon, 14 Jan 2013 15:27:36 +0200, Gleb Natapov wrote:
On Sun, Jan 13, 2013 at 10:29:58PM +0000, Andrew Clayton wrote:
When running qemu-kvm under 64bit Fedora 16 under current 3.8, it
just hangs
When running qemu-kvm under 64bit Fedora 16 under current 3.8, it just
hangs at start up. Doing a ps -ef hangs the process at the point where it
would display the qemu process (trying to list the qemu-kvm /proc pid
directory contents just hangs ls).
I also noticed some other weirdness
-Original Message-
From: Gleb Natapov [mailto:g...@redhat.com]
Sent: Monday, January 07, 2013 5:21 PM
To: Ren, Yongjie
Cc: Stefan Pietsch; kvm@vger.kernel.org
Subject: Re: Installation of Windows 8 hangs with KVM
On Mon, Jan 07, 2013 at 09:13:37AM +0000, Ren, Yongjie wrote:
* Ren, Yongjie yongjie@intel.com [2013-01-07 09:38]:
you met this issue only with 32-bit Win8 (not 64-bit Win8), right?
I think it's the same issue as the below bug I reported.
https://bugs.launchpad.net/qemu/+bug/1007269
You can try with '-cpu coreduo' or '-cpu core2duo' in qemu-kvm command
-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org]
On Behalf Of Stefan Pietsch
Sent: Monday, January 07, 2013 2:25 AM
To: Gleb Natapov
Cc: kvm@vger.kernel.org
Subject: Re: Installation of Windows 8 hangs with KVM
* Gleb Natapov g...@redhat.com
Subject: Re: Installation of Windows 8 hangs with KVM
* Gleb Natapov g...@redhat.com [2013-01-06 11:11]:
On Fri, Jan 04, 2013 at 10:58:33PM +0100, Stefan Pietsch wrote:
Hi all,
when I run KVM with this command the Windows 8 installation stops
with
error code 0x005D:
kvm -m 1024 -hda win8.img -cdrom windows_8_x86.iso
-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org]
On Behalf Of Gleb Natapov
Sent: Monday, January 07, 2013 4:54 PM
To: Ren, Yongjie
Cc: Stefan Pietsch; kvm@vger.kernel.org
Subject: Re: Installation of Windows 8 hangs with KVM
On Mon, Jan 07
Subject: Re: Installation of Windows 8 hangs with KVM
On Mon, Jan 07, 2013 at 08:38:59AM +0000, Ren, Yongjie wrote:
the option -cpu host the installation proceeds to a black
screen and hangs.
With Virtualbox the installation succeeds.
The host CPU is an Intel Core Duo L2400.
Do you have any suggestions?
What is your kernel/qemu version?
I'm using Debian unstable.
qemu-kvm 1.1.2+dfsg-3
Hi all,
when I run KVM with this command the Windows 8 installation stops with
error code 0x005D:
kvm -m 1024 -hda win8.img -cdrom windows_8_x86.iso
After adding the option -cpu host the installation proceeds to a black
screen and hangs.
With Virtualbox the installation succeeds.
The host CPU is an Intel Core Duo L2400.
https://bugzilla.kernel.org/show_bug.cgi?id=50921
Lucio Crusca lu...@sulweb.org changed:
What|Removed |Added
Status|NEW |RESOLVED
https://bugzilla.kernel.org/show_bug.cgi?id=50921
--- Comment #18 from Gleb g...@redhat.com 2012-12-05 14:15:30 ---
Created an attachment (id=88501)
-- (https://bugzilla.kernel.org/attachment.cgi?id=88501)
patch to implement aad (b5) instruction.
Can you see if this patch helps?
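The patch body itself is not included in this archive; the attachment implements the legacy x86 BCD instruction `aad` (opcode D5, which the attachment title initially mislabeled b5, corrected below). For reference, the arithmetic the emulator has to perform for `aad` is small; a sketch of its documented semantics:

```python
# Semantics of x86 `aad imm8` (opcode D5): AL = (AL + AH * imm8) & 0xFF, AH = 0.
# This is only the operation the emulator must implement -- not Gleb's patch,
# whose body is not included in this archive.
def aad(ax, imm8=10):
    """Return the new AX value after `aad imm8` (AH is cleared)."""
    al, ah = ax & 0xFF, (ax >> 8) & 0xFF
    return (al + ah * imm8) & 0xFF  # new AH == 0, so AX equals the new AL

# `aad` folds two unpacked BCD digits (AH = tens, AL = ones) into binary in AL:
print(aad(0x0307))  # digits 3 and 7 -> 37
```

With the default base of 10, AH=3/AL=7 becomes 37 in AL, which is why ancient real-mode code (here, a win2000-era guest) uses it for decimal conversion.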
--
https://bugzilla.kernel.org/show_bug.cgi?id=50921
Gleb g...@redhat.com changed:
What|Removed |Added
Attachment #88501|patch to implement aad (b5) |patch to implement aad (D5)
https://bugzilla.kernel.org/show_bug.cgi?id=50921
--- Comment #17 from Gleb g...@redhat.com 2012-11-27 08:22:15 ---
(In reply to comment #16)
@xiaoguangrong: YOU ARE THE MAN! 'emulate_invalid_guest_state = 0' did the
trick, now I have win2000 running in a 3.6.7 kvm guest! Thanks.
Still
https://bugzilla.kernel.org/show_bug.cgi?id=50921
Alan a...@lxorguk.ukuu.org.uk changed:
What|Removed |Added
CC||a...@lxorguk.ukuu.org.uk
https://bugzilla.kernel.org/show_bug.cgi?id=50921
--- Comment #13 from Lucio Crusca lu...@sulweb.org 2012-11-26 13:13:56 ---
@Alan: see comment #5, since then I've always tested with and without vbox
modules.
@Gleb: can't run on 3.5.0 right now, I'll take the stack trace ASAP.
--