Re: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...

2009-08-25 Thread Avi Kivity

On 08/21/2009 10:14 AM, Xu, Jiajun wrote:

I found that the migration failure was caused by a configuration mistake on our
testing machine. Now 64-bit migration works well.
But I found that on a PAE host, migration triggers a host kernel call trace:

Pid: 12053, comm: qemu-system-x86 Tainted: G  D (2.6.31-rc2 #1)
EIP: 0060:[<c043e023>] EFLAGS: 00210202 CPU: 0
EIP is at lock_hrtimer_base+0x11/0x33
EAX: f5d1541c EBX: 00000010 ECX: 000004a9 EDX: f5c1bc7c
ESI: f5d1541c EDI: f5c1bc7c EBP: f5c1bc74 ESP: f5c1bc68
 DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process qemu-system-x86 (pid: 12053, ti=f5c1b000 task=f61cb410
task.ti=f5c1b000)
Stack:
 f5d1541c 00000000 000004a9 f5c1bc8c c043e097 f9b7f7cb f5d1541c 00000000
 000004a9 f5c1bc98 c043e0f0 f5d153d0 f5c1bcb0 f9b9b4df 00000000 bfd8a102
 f3c1e000 f5d15440 f5c1bcc0 f9b9b56d bfd8a10c f3c1e000 f5c1bda0 f9b8c26b
Call Trace:
 [<c043e097>] ? hrtimer_try_to_cancel+0x16/0x62
 [<f9b7f7cb>] ? kvm_flush_remote_tlbs+0xd/0x1a [kvm]
 [<c043e0f0>] ? hrtimer_cancel+0xd/0x18
 [<f9b9b4df>] ? pit_load_count+0x98/0x9e [kvm]
 [<f9b9b56d>] ? kvm_pit_load_count+0x21/0x35 [kvm]
 [...]


Marcelo, any idea? Looks like the PIT was reloaded, but the hrtimer 
wasn't initialized?
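
The faulting address supports that reading: CR2: 00000010 is what a load
through a junk base pointer would produce if the hrtimer embedded in the PIT
state was never passed through hrtimer_init(). A minimal userspace sketch of
the suspected pattern (fake_hrtimer, clock_base, and the 0x10 junk value are
illustrative stand-ins, not the real hrtimer or kvm structures):

#include <stdio.h>
#include <string.h>

/*
 * Userspace analogue of the suspected bug (illustrative only).
 * lock_hrtimer_base() follows timer->base without knowing whether
 * hrtimer_init() ever ran, so a timer inside freshly (re)loaded PIT
 * state that was never initialized hands it a garbage pointer.
 */

struct clock_base {
    int lock;                    /* stands in for base->cpu_base->lock */
};

struct fake_hrtimer {
    long expires;
    struct clock_base *base;     /* set only by fake_hrtimer_init() */
};

static struct clock_base global_base;

static void fake_hrtimer_init(struct fake_hrtimer *t)
{
    memset(t, 0, sizeof(*t));
    t->base = &global_base;
}

static int fake_hrtimer_cancel(struct fake_hrtimer *t)
{
    /*
     * The kernel does the moral equivalent of t->base->lock here.
     * If t was never initialized, t->base is junk and the load
     * faults (the oops above faulted at CR2: 00000010).
     */
    if (t->base != &global_base) {
        fprintf(stderr, "cancel on uninitialized timer: base=%p (would oops)\n",
                (void *)t->base);
        return -1;
    }
    t->base->lock = 1;           /* "take the lock": safe after init */
    t->base->lock = 0;
    return 0;
}

int main(void)
{
    struct fake_hrtimer t = { .expires = 0,
                              .base = (struct clock_base *)0x10 };

    fake_hrtimer_cancel(&t);     /* mimics pit_load_count() before init */
    fake_hrtimer_init(&t);
    fake_hrtimer_cancel(&t);     /* fine once the timer is initialized */
    return 0;
}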


--
error compiling committee.c: too many arguments to function



RE: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...

2009-08-21 Thread Xu, Jiajun
On Wednesday, August 19, 2009 4:09 PM Avi Kivity wrote:

 On 08/19/2009 05:14 AM, Xu, Jiajun wrote:
 I tried this with the latest commit; sometimes a Linux guest can
 migrate with more than 4G of memory.
 But sometimes the guest hangs after migration, and the host console
 prints "Unknown savevm section type 40, load of migration failed".
 
 Have you seen this issue? I have sometimes hit the error with both
 Linux and Windows guests.
 
 
 I haven't seen it.  How many migrations does it take to reproduce?

I found that the migration failure was caused by a configuration mistake on our
testing machine. Now 64-bit migration works well.
But I found that on a PAE host, migration triggers a host kernel call trace:

Pid: 12053, comm: qemu-system-x86 Tainted: G  D (2.6.31-rc2 #1)
EIP: 0060:[<c043e023>] EFLAGS: 00210202 CPU: 0
EIP is at lock_hrtimer_base+0x11/0x33
EAX: f5d1541c EBX: 00000010 ECX: 000004a9 EDX: f5c1bc7c
ESI: f5d1541c EDI: f5c1bc7c EBP: f5c1bc74 ESP: f5c1bc68
 DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process qemu-system-x86 (pid: 12053, ti=f5c1b000 task=f61cb410
task.ti=f5c1b000)
Stack:
 f5d1541c 00000000 000004a9 f5c1bc8c c043e097 f9b7f7cb f5d1541c 00000000
 000004a9 f5c1bc98 c043e0f0 f5d153d0 f5c1bcb0 f9b9b4df 00000000 bfd8a102
 f3c1e000 f5d15440 f5c1bcc0 f9b9b56d bfd8a10c f3c1e000 f5c1bda0 f9b8c26b
Call Trace:
 [<c043e097>] ? hrtimer_try_to_cancel+0x16/0x62
 [<f9b7f7cb>] ? kvm_flush_remote_tlbs+0xd/0x1a [kvm]
 [<c043e0f0>] ? hrtimer_cancel+0xd/0x18
 [<f9b9b4df>] ? pit_load_count+0x98/0x9e [kvm]
 [<f9b9b56d>] ? kvm_pit_load_count+0x21/0x35 [kvm]
 [<f9b8c26b>] ? kvm_arch_vm_ioctl+0x91e/0x9f5 [kvm]
 [<f9b7f3b4>] ? kvm_set_memory_region+0x2f/0x37 [kvm]
 [<f9b809c7>] ? kvm_vm_ioctl+0xafb/0xb45 [kvm]
 [<c043ddf8>] ? enqueue_hrtimer+0x5d/0x68
 [<c043e258>] ? __hrtimer_start_range_ns+0x15d/0x168
 [<c043e272>] ? hrtimer_start+0xf/0x11
 [<f9cd51cd>] ? vmx_vcpu_put+0x8/0xa [kvm_intel]
 [<f9b83e8b>] ? kvm_arch_vcpu_put+0x16/0x19 [kvm]
 [<f9b8b943>] ? kvm_arch_vcpu_ioctl+0x7d5/0x7df [kvm]
 [<c041f1e5>] ? kmap_atomic+0x14/0x16
 [<c046ec2f>] ? get_page_from_freelist+0x27c/0x2d2
 [<c046ed72>] ? __alloc_pages_nodemask+0xd7/0x402
 [<c04714a6>] ? lru_cache_add_lru+0x22/0x24
 [<f9b7f6b5>] ? kvm_dev_ioctl+0x22d/0x250 [kvm]
 [<f9b7fecc>] ? kvm_vm_ioctl+0x0/0xb45 [kvm]
 [<c049a9ab>] ? vfs_ioctl+0x22/0x67
 [<c049af1d>] ? do_vfs_ioctl+0x46c/0x4b7
 [<c05fb0fb>] ? sys_recv+0x18/0x1a
 [<c0446bef>] ? sys_futex+0xed/0x103
 [<c049afa8>] ? sys_ioctl+0x40/0x5a
 [<c04028a4>] ? sysenter_do_call+0x12/0x22
Code: c0 ff 45 e4 83 45 dc 24 83 7d e4 02 0f 85 cf fe ff ff 8d 65 f4 5b 5e 5f
5d c3 55 89 e5 57 89 d7 56 89 c6 53 8b 5e 20 85 db 74 17 8b 03 e8 0e dd 23 00
89 07 3b 5e 20 74 0d 89 c2 8b 03 e8 8a dd
EIP: [<c043e023>] lock_hrtimer_base+0x11/0x33 SS:ESP 0068:f5c1bc68
CR2: 00000010
---[ end trace f747f57e7d1b76c8 ]---


Best Regards
Jiajun


RE: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...

2009-08-18 Thread Xu, Jiajun
On Monday, August 17, 2009 8:18 PM Avi Kivity wrote:

 On 08/17/2009 04:32 AM, Xu, Jiajun wrote:
 5. failure to migrate guests with more than 4GB of RAM
 
 https://sourceforge.net/tracker/index.php?func=detail&aid=1971512&group_id=180599&atid=893831
 
 
 Now that I have a large host, I tested this, and it works well.  When
 was this most recently tested?

I tried this with the latest commit; sometimes a Linux guest can migrate with
more than 4G of memory.
But sometimes the guest hangs after migration, and on the host console it
prints "Unknown savevm section type 40, load of migration failed".
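
That message comes from the stream framing: each savevm section opens with a
one-byte type tag, so if a device's save and load handlers disagree about a
section's length, the reader lands mid-payload and misreads a data byte as the
next tag. A minimal sketch of that failure mode (the tag constants mirror
QEMU's, but the parser and byte layout here are simplified and illustrative):

#include <stdio.h>
#include <stdint.h>

enum {
    VM_EOF           = 0,
    VM_SECTION_START = 1,
    VM_SECTION_PART  = 2,
    VM_SECTION_END   = 3,
    VM_SECTION_FULL  = 4,
};

static int load_stream(const uint8_t *buf, size_t len)
{
    size_t i = 0;

    while (i < len) {
        uint8_t type = buf[i++];

        switch (type) {
        case VM_EOF:
            return 0;
        case VM_SECTION_START:
        case VM_SECTION_PART:
        case VM_SECTION_END:
        case VM_SECTION_FULL: {
            /* toy payload: <id byte><len byte><len bytes of data>;
             * a save/load length mismatch shifts 'i' off the real
             * section boundary */
            if (i + 2 > len)
                return -1;
            uint8_t plen = buf[i + 1];
            i += 2 + plen;
            break;
        }
        default:
            fprintf(stderr, "Unknown savevm section type %d, "
                    "load of migration failed\n", type);
            return -1;
        }
    }
    return 0;
}

int main(void)
{
    /* well-formed stream: one FULL section (id=7, 2 payload bytes), EOF */
    const uint8_t good[] = { VM_SECTION_FULL, 7, 2, 0xaa, 0xbb, VM_EOF };
    /* same stream with the length byte off by one: the reader lands on
     * 0x28 (decimal 40) where it expects the next tag */
    const uint8_t bad[]  = { VM_SECTION_FULL, 7, 1, 0xaa, 0x28, VM_EOF };

    printf("good stream: %d\n", load_stream(good, sizeof(good)));
    printf("bad stream:  %d\n", load_stream(bad, sizeof(bad)));
    return 0;
}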

Have you seen this issue? I have sometimes hit the error with both Linux and
Windows guests.


Best Regards
Jiajun


Re: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...

2009-08-17 Thread Avi Kivity

On 08/17/2009 04:32 AM, Xu, Jiajun wrote:

5. failure to migrate guests with more than 4GB of RAM
https://sourceforge.net/tracker/index.php?func=detail&aid=1971512&group_id=180599&atid=893831


Now that I have a large host, I tested this, and it works well.  When 
was this most recently tested?
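
For anyone repeating the test, it is the standard live-migration flow; the
image name, host name, and port below are placeholders:

# destination host: start qemu listening for the incoming migration
qemu-system-x86_64 -m 6144 -hda guest.img -incoming tcp:0:4444

# source host: run the guest as usual, then from the qemu monitor
(qemu) migrate -d tcp:dest-host:4444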


--
error compiling committee.c: too many arguments to function
