Re: [PATCH 1/3] KVM: x86 emulator: Make one byte instruction emulation separate

2011-03-10 Thread Avi Kivity
On 03/10/2011 09:35 AM, Takuya Yoshikawa wrote: x86_emulate_insn() is too long and has many confusing goto statements. This patch is the first part of a work which tries to split it into a few meaningful functions: just encapsulates the switch statement for the one byte instruction emulation as

Re: [PATCHv2] fix regression caused by e48672fa25e879f7ae21785c7efd187738139593

2011-03-10 Thread Avi Kivity
On 03/10/2011 12:36 AM, Nikola Ciprich wrote: commit 387b9f97750444728962b236987fbe8ee8cc4f8c moved kvm_request_guest_time_update(vcpu), breaking 32bit SMP guests using kvm-clock. Fix this by moving (new) clock update function to proper place. Applied, thanks. -- error compiling

Re: FreeBSD boot hangs on qemu-kvm on AMD host

2011-03-10 Thread Avi Kivity
On 03/09/2011 07:11 PM, Michael Tokarev wrote: 09.03.2011 19:34, Avi Kivity wrote: [] Sorry, I misread. So kvm.git works, we just have to identify what patch fixed it. 2.6.38-rc8 does not work - verified by Dominik. But kvm.git works. There's some chance it's e5d135f80b98b0.

Re: [PATCH 1/3] KVM: x86 emulator: Make one byte instruction emulation separate

2011-03-10 Thread Takuya Yoshikawa
On Thu, 10 Mar 2011 11:05:38 +0200 Avi Kivity a...@redhat.com wrote: On 03/10/2011 09:35 AM, Takuya Yoshikawa wrote: x86_emulate_insn() is too long and has many confusing goto statements. This patch is the first part of a work which tries to split it into a few meaningful functions: just

Re: [PATCH 1/3] KVM: x86 emulator: Make one byte instruction emulation separate

2011-03-10 Thread Avi Kivity
On 03/10/2011 11:26 AM, Takuya Yoshikawa wrote: The plan is to migrate all of the contents of the switch statements into -execute() callbacks. This way, all of the information about an instruction is present in the decode tables. I see. I'm looking forward to the completion of the plan!

Re: [PATCH 1/3] KVM: x86 emulator: Make one byte instruction emulation separate

2011-03-10 Thread Takuya Yoshikawa
On Thu, 10 Mar 2011 11:27:30 +0200 Avi Kivity a...@redhat.com wrote: On 03/10/2011 11:26 AM, Takuya Yoshikawa wrote: I don't know if anyone is working on it, so feel free to send patches! Yes, I'm interested in it. So I will take a look and try! I was doing some live migration tests using

Re: [PATCH v2 4/4] KVM: MMU: cleanup pte write path

2011-03-10 Thread Avi Kivity
On 03/09/2011 09:43 AM, Xiao Guangrong wrote: This patch does: - call vcpu->arch.mmu.update_pte directly - use gfn_to_pfn_atomic in update_pte path The suggestion is from Avi. - mmu_guess_page_from_pte_write(vcpu, gpa, gentry); + mmu_seq = vcpu->kvm->mmu_notifier_seq; +

Re: [PATCH v2 1/4] KVM: fix rcu usage in init_rmode_* functions

2011-03-10 Thread Avi Kivity
On 03/09/2011 09:41 AM, Xiao Guangrong wrote: fix: [ 3494.671786] stack backtrace: [ 3494.671789] Pid: 10527, comm: qemu-system-x86 Not tainted 2.6.38-rc6+ #23 [ 3494.671790] Call Trace: [ 3494.671796] [] ? lockdep_rcu_dereference+0x9d/0xa5 [ 3494.671826] [] ? kvm_memslots+0x6b/0x73 [kvm] [

Re: VNC and SDL/VGA simultaneously?

2011-03-10 Thread Avi Kivity
On 03/09/2011 11:31 PM, Erik Rull wrote: Hi all, is it possible to parameterize qemu in a way where the VNC port and the VGA output is available in parallel? Not really, though it should be possible to do it with some effort. My system screen remains dark if I run it with the -vnc :0

[PATCH 1/2] sockets: add qemu_socketpair()

2011-03-10 Thread Corentin Chary
Signed-off-by: Corentin Chary corentin.ch...@gmail.com --- osdep.c | 83 + qemu_socket.h | 1 + 2 files changed, 84 insertions(+), 0 deletions(-) diff --git a/osdep.c b/osdep.c index 327583b..93bfbe0 100644 --- a/osdep.c +++

[PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Corentin Chary
The threaded VNC servers messed up with QEMU fd handlers without any kind of locking, and that can cause some nasty race conditions. Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(), which will wait for the current job queue to finish, can be called with the iothread lock held.

Re: [PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Paolo Bonzini
On 03/10/2011 01:59 PM, Corentin Chary wrote: Instead, we now store the data in a temporary buffer, and use a socket pair to notify the main thread that new data is available. You can use a bottom half for this instead of a special socket. Signaling a bottom half is async-signal- and

Re: [PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Anthony Liguori
On 03/10/2011 07:06 AM, Paolo Bonzini wrote: On 03/10/2011 01:59 PM, Corentin Chary wrote: Instead, we now store the data in a temporary buffer, and use a socket pair to notify the main thread that new data is available. You can use a bottom half for this instead of a special socket.

Re: [PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Peter Lieven
On 10.03.2011 13:59, Corentin Chary wrote: The threaded VNC servers messed up with QEMU fd handlers without any kind of locking, and that can cause some nasty race conditions. Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(), which will wait for the current job queue to finish,

Re: [PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Corentin Chary
On Thu, Mar 10, 2011 at 1:45 PM, Anthony Liguori aligu...@us.ibm.com wrote: On 03/10/2011 07:06 AM, Paolo Bonzini wrote: On 03/10/2011 01:59 PM, Corentin Chary wrote: Instead, we now store the data in a temporary buffer, and use a socket pair to notify the main thread that new data is

Re: [PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Paolo Bonzini
On 03/10/2011 02:45 PM, Anthony Liguori wrote: On 03/10/2011 07:06 AM, Paolo Bonzini wrote: On 03/10/2011 01:59 PM, Corentin Chary wrote: Instead, we now store the data in a temporary buffer, and use a socket pair to notify the main thread that new data is available. You can use a bottom

Re: [PATCH 2/2] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Paolo Bonzini
On 03/10/2011 02:54 PM, Corentin Chary wrote: You can use a bottom half for this instead of a special socket. Signaling a bottom half is async-signal- and thread-safe. Bottom halves are thread safe? I don't think so. The bottom halves API is not thread safe, but calling

[PATCH v5] vnc: don't mess up with iohandlers in the vnc thread

2011-03-10 Thread Corentin Chary
The threaded VNC servers messed up with QEMU fd handlers without any kind of locking, and that can cause some nasty race conditions. Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(), which will wait for the current job queue to finish, can be called with the iothread lock held.

Re: Network performance with small packets - continued

2011-03-10 Thread Tom Lendacky
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote: On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote: As for which CPU the interrupt gets pinned to, that doesn't matter - see below. So what hurts us the most is that the IRQ jumps between the VCPUs? Yes, it

Re: Network performance with small packets - continued

2011-03-10 Thread Michael S. Tsirkin
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote: On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote: On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote: As for which CPU the interrupt gets pinned to, that doesn't matter - see below. So what hurts

Re: Network performance with small packets - continued

2011-03-10 Thread Tom Lendacky
On Thursday, March 10, 2011 09:34:22 am Michael S. Tsirkin wrote: On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote: On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote: On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote: As for which CPU the interrupt

Re: VNC and SDL/VGA simultaneously?

2011-03-10 Thread Erik Rull
Avi Kivity wrote: On 03/09/2011 11:31 PM, Erik Rull wrote: Hi all, is it possible to parameterize qemu in a way where the VNC port and the VGA output is available in parallel? Not really, though it should be possible to do it with some effort. My system screen remains dark if I run it

virtio_net: remove recv refill work?

2011-03-10 Thread Shirley Ma
Hello Rusty, What's the reason to use refill work in receiving path? static void refill_work(struct work_struct *work) { struct virtnet_info *vi; bool still_empty; vi = container_of(work, struct virtnet_info, refill.work); napi_disable(&vi->napi);

Re: virtio_net: remove recv refill work?

2011-03-10 Thread Shirley Ma
Never mind, the refill work is needed only for OOM. It can't be replaced. thanks Shirley -- To unsubscribe from this list: send the line unsubscribe kvm in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: VNC and SDL/VGA simultaneously?

2011-03-10 Thread Bitman Zhou
We tried before with both spice (QXL) and VNC enabled at the same time for the same VM. It works a little: the VNC session can hold for some time. I use gtk-vnc, and it looks like the qemu vnc implementation sends some special packets that break gtk-vnc. BR Bitman Zhou On Thu, 2011-03-10 at 20:15

Re: Network performance with small packets

2011-03-10 Thread Rusty Russell
On Tue, 08 Mar 2011 20:21:18 -0600, Andrew Theurer haban...@linux.vnet.ibm.com wrote: On Tue, 2011-03-08 at 13:57 -0800, Shirley Ma wrote: On Wed, 2011-02-09 at 11:07 +1030, Rusty Russell wrote: I've finally read this thread... I think we need to get more serious with our stats gathering

Re: [PATCH v2 4/4] KVM: MMU: cleanup pte write path

2011-03-10 Thread Xiao Guangrong
On 03/10/2011 06:21 PM, Avi Kivity wrote: On 03/09/2011 09:43 AM, Xiao Guangrong wrote: This patch does: - call vcpu->arch.mmu.update_pte directly - use gfn_to_pfn_atomic in update_pte path The suggestion is from Avi. -mmu_guess_page_from_pte_write(vcpu, gpa, gentry); +mmu_seq =

Re: [PATCH] kvm: ppc: Fix breakage of kvm_arch_pre_run/process_irqchip_events

2011-03-10 Thread Alexander Graf
On 17.02.2011, at 22:01, Jan Kiszka wrote: On 2011-02-07 12:19, Jan Kiszka wrote: We do not check them, and the only arch with non-empty implementations always returns 0 (this is also true for qemu-kvm). Signed-off-by: Jan Kiszka jan.kis...@siemens.com CC: Alexander Graf ag...@suse.de

Re: TX from KVM guest virtio_net to vhost issues

2011-03-10 Thread Rusty Russell
On Wed, 09 Mar 2011 13:46:36 -0800, Shirley Ma mashi...@us.ibm.com wrote: Since we have lots of performance discussions about virtio_net and vhost communication. I think it's better to have a common understandings of the code first, then we can seek the right directions to improve it. We also

Re: [PATCH] kvm: ppc: Fix breakage of kvm_arch_pre_run/process_irqchip_events

2011-03-10 Thread Stefan Hajnoczi
On Fri, Mar 11, 2011 at 5:55 AM, Alexander Graf ag...@suse.de wrote: On 17.02.2011, at 22:01, Jan Kiszka wrote: On 2011-02-07 12:19, Jan Kiszka wrote: We do not check them, and the only arch with non-empty implementations always returns 0 (this is also true for qemu-kvm). Signed-off-by:

Re: [Autotest] [PATCH 1/7] KVM test: Move test utilities to client/tools

2011-03-10 Thread Amos Kong
On Wed, Mar 09, 2011 at 06:21:04AM -0300, Lucas Meneghel Rodrigues wrote: The programs cd_hash, html_report, scan_results can be used by other users of autotest, so move them to the tools directory inside the client directory. Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com ---

Re: [PATCH 12/15] kvm: Align kvm_arch_handle_exit to kvm_cpu_exec changes

2011-03-10 Thread Alexander Graf
On 04.03.2011, at 11:20, Jan Kiszka wrote: Make the return code of kvm_arch_handle_exit directly usable for kvm_cpu_exec. This is straightforward for x86 and ppc, just s390 would require more work. Avoid this for now by pushing the return code translation logic into s390's

Re: [PATCH] kvm: ppc: Fix breakage of kvm_arch_pre_run/process_irqchip_events

2011-03-10 Thread Alexander Graf
On 11.03.2011, at 07:26, Stefan Hajnoczi wrote: On Fri, Mar 11, 2011 at 5:55 AM, Alexander Graf ag...@suse.de wrote: On 17.02.2011, at 22:01, Jan Kiszka wrote: On 2011-02-07 12:19, Jan Kiszka wrote: We do not check them, and the only arch with non-empty implementations always returns 0

Re: [PATCH 12/15] kvm: Align kvm_arch_handle_exit to kvm_cpu_exec changes

2011-03-10 Thread Jan Kiszka
On 2011-03-11 07:50, Alexander Graf wrote: On 04.03.2011, at 11:20, Jan Kiszka wrote: Make the return code of kvm_arch_handle_exit directly usable for kvm_cpu_exec. This is straightforward for x86 and ppc, just s390 would require more work. Avoid this for now by pushing the return code

Re: [PATCH 12/15] kvm: Align kvm_arch_handle_exit to kvm_cpu_exec changes

2011-03-10 Thread Alexander Graf
On 11.03.2011, at 08:13, Jan Kiszka wrote: On 2011-03-11 07:50, Alexander Graf wrote: On 04.03.2011, at 11:20, Jan Kiszka wrote: Make the return code of kvm_arch_handle_exit directly usable for kvm_cpu_exec. This is straightforward for x86 and ppc, just s390 would require more work.

Re: [PATCH 12/15] kvm: Align kvm_arch_handle_exit to kvm_cpu_exec changes

2011-03-10 Thread Jan Kiszka
On 2011-03-11 08:26, Alexander Graf wrote: On 11.03.2011, at 08:13, Jan Kiszka wrote: On 2011-03-11 07:50, Alexander Graf wrote: On 04.03.2011, at 11:20, Jan Kiszka wrote: Make the return code of kvm_arch_handle_exit directly usable for kvm_cpu_exec. This is straightforward for x86 and

Re: [PATCH 12/15] kvm: Align kvm_arch_handle_exit to kvm_cpu_exec changes

2011-03-10 Thread Alexander Graf
On 11.03.2011, at 08:13, Jan Kiszka wrote: On 2011-03-11 07:50, Alexander Graf wrote: On 04.03.2011, at 11:20, Jan Kiszka wrote: Make the return code of kvm_arch_handle_exit directly usable for kvm_cpu_exec. This is straightforward for x86 and ppc, just s390 would require more work.

Re: [PATCH 12/15] kvm: Align kvm_arch_handle_exit to kvm_cpu_exec changes

2011-03-10 Thread Alexander Graf
On 11.03.2011, at 08:33, Jan Kiszka wrote: On 2011-03-11 08:26, Alexander Graf wrote: On 11.03.2011, at 08:13, Jan Kiszka wrote: On 2011-03-11 07:50, Alexander Graf wrote: On 04.03.2011, at 11:20, Jan Kiszka wrote: Make the return code of kvm_arch_handle_exit directly usable for