Marcelo Tosatti wrote:
> On Sat, May 16, 2009 at 10:38:25AM +0200, Hans de Bruin wrote:
>> I ran memtest for 11 hours and it completed 4.7 passes with no problems.
>> But then memtest only exercises CPU and memory interaction. If the
>> problem is disk related, there is also disk/chipset/DMA and memory
>> interaction. I could degrade my system by turning DMA off for disk I/O,
>> or I could have a closer look at kvm-autotest.
>
> Hans,
>
> It would be helpful if you could capture a few more KVM oopses, then.



v2.6.30-rc6-144-g5805977
kvm-86-122-ge2478f5
kvm modules from the kernel
Simultaneously booting two w2k8 guests. One VM dies:

[  167.829503] rmap_remove:  ffff8801079a7a00 800000017c5c6047 1->BUG
[  167.829510] ------------[ cut here ]------------
[  167.829513] kernel BUG at arch/x86/kvm/mmu.c:582!
[  167.829515] invalid opcode: 0000 [#1] SMP
[  167.829518] last sysfs file: /sys/devices/pci0000:00/0000:00:10.0/0000:01:09.0/resource
[  167.829520] CPU 1
[  167.829522] Modules linked in:
[  167.829526] Pid: 2908, comm: qemu-system-x86 Not tainted 2.6.30-rc6 #5 System Product Name
[  167.829528] RIP: 0010:[<ffffffff80216fff>]  [<ffffffff80216fff>] rmap_remove+0xdf/0x200
[  167.829536] RSP: 0018:ffff8801a13f19f8  EFLAGS: 00010292
[  167.829538] RAX: 0000000000000049 RBX: 800000017c5c6047 RCX: ffffffff809a3b40
[  167.829541] RDX: ffff88002804d000 RSI: 0000000000000046 RDI: ffffffff809a3a30
[  167.829543] RBP: ffff8801a13f1a28 R08: 000000000000ae32 R09: 00000000ffffffff
[  167.829545] R10: 0000000000000000 R11: 0000000000000000 R12: 000000000017c5c6
[  167.829548] R13: ffff8801079a7a00 R14: ffff8801094f8580 R15: ffff8801a121c000
[  167.829551] FS:  000000004239a950(0063) GS:ffff88002804d000(0000) knlGS:000007fffffd6000
[  167.829553] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  167.829556] CR2: 00007fd70812e540 CR3: 00000001a3fb8000 CR4: 00000000000006e0
[  167.829558] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  167.829560] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  167.829563] Process qemu-system-x86 (pid: 2908, threadinfo ffff8801a13f0000, task ffff8801ae9d1c20)
[  167.829565] Stack:
[  167.829566]  ffff8801a13f1a28 0000000000035b00 0000000000035b2e 88e0000035b2e867
[  167.829570]  0000000000000a00 ffff8801094f8580 ffff8801a13f1ac8 ffffffff8021ad8d
[  167.829574]  0000000000000000 ffff880100000000 000000000003633a 000000000017d5ea
[  167.829578] Call Trace:
[  167.829580]  [<ffffffff8021ad8d>] paging64_sync_page+0x9d/0x1a0
[  167.829585]  [<ffffffff80218825>] ? rmap_write_protect+0xd5/0x150
[  167.829589]  [<ffffffff8021890b>] kvm_sync_page+0x6b/0x90
[  167.829592]  [<ffffffff8021a1ad>] mmu_sync_children+0xcd/0x120
[  167.829596]  [<ffffffff8021c242>] ? x86_decode_insn+0x412/0xf10
[  167.829600]  [<ffffffff8021a2c2>] mmu_sync_roots+0xc2/0xd0
[  167.829603]  [<ffffffff8021a658>] kvm_mmu_load+0x138/0x200
[  167.829606]  [<ffffffff8022821a>] ? handle_exit+0x14a/0x2c0
[  167.829610]  [<ffffffff80213873>] kvm_arch_vcpu_ioctl_run+0x863/0xaa0
[  167.829615]  [<ffffffff8020b5d5>] ? kvm_vm_ioctl+0x165/0x910
[  167.829618]  [<ffffffff8027cde9>] ? do_futex+0x689/0x9c0
[  167.829623]  [<ffffffff8020cad3>] kvm_vcpu_ioctl+0x5d3/0x790
[  167.829626]  [<ffffffff8022b88e>] ? common_interrupt+0xe/0x13
[  167.829630]  [<ffffffff8024eaeb>] ? __dequeue_entity+0x2b/0x50
[  167.829633]  [<ffffffff802d8fa1>] vfs_ioctl+0x31/0x90
[  167.829638]  [<ffffffff802d92f1>] do_vfs_ioctl+0x2f1/0x4e0
[  167.829641]  [<ffffffff802d9562>] sys_ioctl+0x82/0xa0
[  167.829645]  [<ffffffff8022af6b>] system_call_fastpath+0x16/0x1b
[  167.829649] Code: 48 85 c0 0f 84 81 00 00 00 a8 01 75 3d 4c 39 e8 0f 84 f4 00 00 00 49 8b 55 00 4c 89 ee 48 c7 c7 f0 2e 7f 80 31 c0 e8 a1 29 04 00 <0f> 0b eb fe 4c 89 e7 e8 f5 2d ff ff eb 9b 48 89 c7 e8 cb 52 ff
[  167.829675] RIP  [<ffffffff80216fff>] rmap_remove+0xdf/0x200
[  167.829678]  RSP <ffff8801a13f19f8>
[  167.829681] ---[ end trace bee56bd865cfd2e1 ]---
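
For what it's worth, the "1->BUG" in the first line is the consistency
check in rmap_remove() in arch/x86/kvm/mmu.c (mmu.c:582 in this build):
the reverse-map entry for the guest frame claims exactly one spte maps
it, but that spte is not the one being removed, i.e. the rmap and the
shadow page tables disagree. A minimal sketch of that check, paraphrased
from the 2.6.30-era source (not verbatim; arguments and the descriptor
chain handling are elided):

	static void rmap_remove(struct kvm *kvm, u64 *spte)
	{
		/* *rmapp points directly at a single spte when its low
		 * bit is clear, or at a kvm_rmap_desc chain when set. */
		unsigned long *rmapp = gfn_to_rmap(kvm, ...);

		if (!*rmapp) {
			printk(KERN_ERR "rmap_remove: %p %llx 0->BUG\n",
			       spte, *spte);
			BUG();
		} else if (!(*rmapp & 1)) {
			/* Exactly one spte recorded for this gfn ... */
			if ((u64 *)*rmapp != spte) {
				/* ... but not the one being removed:
				 * the "1->BUG" case hit above. */
				printk(KERN_ERR "rmap_remove:  %p %llx 1->BUG\n",
				       spte, *spte);
				BUG();
			}
			*rmapp = 0;
		} else {
			/* multi-mapping descriptor chain case elided */
		}
	}

The trigger path in the trace is the shadow page sync (kvm_mmu_load ->
mmu_sync_roots -> mmu_sync_children -> kvm_sync_page ->
paging64_sync_page) dropping a stale spte and tripping over the
inconsistent rmap.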