On 03/10/2011 09:35 AM, Takuya Yoshikawa wrote:
x86_emulate_insn() is too long and has many confusing goto statements.
This patch is the first part of a work which tries to split it into
a few meaningful functions: just encapsulates the switch statement for
the one byte instruction emulation as
On 03/10/2011 12:36 AM, Nikola Ciprich wrote:
commit 387b9f97750444728962b236987fbe8ee8cc4f8c moved
kvm_request_guest_time_update(vcpu),
breaking 32-bit SMP guests using kvm-clock. Fix this by moving the (new)
clock update function to the proper place.
Applied, thanks.
On 03/09/2011 07:11 PM, Michael Tokarev wrote:
09.03.2011 19:34, Avi Kivity wrote:
[]
Sorry, I misread. So kvm.git works, we just have to identify what
patch
fixed it.
2.6.38-rc8 does not work - verified by Dominik.
But kvm.git works.
There's some chance it's e5d135f80b98b0.
On Thu, 10 Mar 2011 11:05:38 +0200
Avi Kivity a...@redhat.com wrote:
On 03/10/2011 09:35 AM, Takuya Yoshikawa wrote:
x86_emulate_insn() is too long and has many confusing goto statements.
This patch is the first part of a work which tries to split it into
a few meaningful functions: just
On 03/10/2011 11:26 AM, Takuya Yoshikawa wrote:
The plan is to migrate all of the contents of the switch statements into
->execute() callbacks. This way, all of the information about an
instruction is present in the decode tables.
I see.
I'm looking forward to the completion of the plan!
On Thu, 10 Mar 2011 11:27:30 +0200
Avi Kivity a...@redhat.com wrote:
On 03/10/2011 11:26 AM, Takuya Yoshikawa wrote:
I don't know if anyone is working on it, so feel free to send patches!
Yes, I'm interested in it. So I will take a look and try!
I was doing some live migration tests using
On 03/09/2011 09:43 AM, Xiao Guangrong wrote:
This patch does:
- call vcpu->arch.mmu.update_pte directly
- use gfn_to_pfn_atomic in update_pte path
The suggestion is from Avi.
- mmu_guess_page_from_pte_write(vcpu, gpa, gentry);
+ mmu_seq = vcpu->kvm->mmu_notifier_seq;
+
On 03/09/2011 09:41 AM, Xiao Guangrong wrote:
fix:
[ 3494.671786] stack backtrace:
[ 3494.671789] Pid: 10527, comm: qemu-system-x86 Not tainted 2.6.38-rc6+ #23
[ 3494.671790] Call Trace:
[ 3494.671796] [] ? lockdep_rcu_dereference+0x9d/0xa5
[ 3494.671826] [] ? kvm_memslots+0x6b/0x73 [kvm]
[
On 03/09/2011 11:31 PM, Erik Rull wrote:
Hi all,
is it possible to parameterize qemu in a way where the VNC port and
the VGA output are available in parallel?
Not really, though it should be possible to do it with some effort.
My system screen remains dark if I run it with the -vnc :0
Signed-off-by: Corentin Chary corentin.ch...@gmail.com
---
 osdep.c       | 83 +
 qemu_socket.h |  1 +
 2 files changed, 84 insertions(+), 0 deletions(-)
diff --git a/osdep.c b/osdep.c
index 327583b..93bfbe0 100644
--- a/osdep.c
+++
The threaded VNC server messed with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(),
which will wait for the current job queue to finish, can be called with
the iothread lock held.
On 03/10/2011 01:59 PM, Corentin Chary wrote:
Instead, we now store the data in a temporary buffer, and use a socket
pair to notify the main thread that new data is available.
You can use a bottom half for this instead of a special socket.
Signaling a bottom half is async-signal- and thread-safe.
On 03/10/2011 07:06 AM, Paolo Bonzini wrote:
On 03/10/2011 01:59 PM, Corentin Chary wrote:
Instead, we now store the data in a temporary buffer, and use a socket
pair to notify the main thread that new data is available.
You can use a bottom half for this instead of a special socket.
On 10.03.2011 13:59, Corentin Chary wrote:
The threaded VNC server messed with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(),
which will wait for the current job queue to finish,
On Thu, Mar 10, 2011 at 1:45 PM, Anthony Liguori aligu...@us.ibm.com wrote:
On 03/10/2011 07:06 AM, Paolo Bonzini wrote:
On 03/10/2011 01:59 PM, Corentin Chary wrote:
Instead, we now store the data in a temporary buffer, and use a socket
pair to notify the main thread that new data is available.
On 03/10/2011 02:45 PM, Anthony Liguori wrote:
On 03/10/2011 07:06 AM, Paolo Bonzini wrote:
On 03/10/2011 01:59 PM, Corentin Chary wrote:
Instead, we now store the data in a temporary buffer, and use a socket
pair to notify the main thread that new data is available.
You can use a bottom half for this instead of a special socket.
On 03/10/2011 02:54 PM, Corentin Chary wrote:
You can use a bottom half for this instead of a special socket. Signaling
a bottom half is async-signal- and thread-safe.
Bottom halves are thread safe?
I don't think so.
The bottom halves API is not thread safe, but calling
The threaded VNC server messed with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(),
which will wait for the current job queue to finish, can be called with
the iothread lock held.
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts us the most is that the IRQ jumps between the VCPUs?
Yes, it
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote:
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts
On Thursday, March 10, 2011 09:34:22 am Michael S. Tsirkin wrote:
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote:
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt
Avi Kivity wrote:
On 03/09/2011 11:31 PM, Erik Rull wrote:
Hi all,
is it possible to parameterize qemu in a way where the VNC port and
the VGA output are available in parallel?
Not really, though it should be possible to do it with some effort.
My system screen remains dark if I run it
Hello Rusty,
What's the reason to use the refill work in the receiving path?
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi;
bool still_empty;
vi = container_of(work, struct virtnet_info, refill.work);
napi_disable(&vi->napi);
Never mind, the refill work is needed only for OOM. It can't be
replaced.
thanks
Shirley
We tried before with both Spice (QXL) and VNC enabled at the same time
for the same VM. It works a little: the VNC session holds for some
time. I use gtk-vnc, and it looks like the qemu VNC implementation
sends some special packets that break gtk-vnc.
BR
Bitman Zhou
On Thu, 2011-03-10 at 20:15
On Tue, 08 Mar 2011 20:21:18 -0600, Andrew Theurer
haban...@linux.vnet.ibm.com wrote:
On Tue, 2011-03-08 at 13:57 -0800, Shirley Ma wrote:
On Wed, 2011-02-09 at 11:07 +1030, Rusty Russell wrote:
I've finally read this thread... I think we need to get more serious
with our stats gathering
On 03/10/2011 06:21 PM, Avi Kivity wrote:
On 03/09/2011 09:43 AM, Xiao Guangrong wrote:
This patch does:
- call vcpu->arch.mmu.update_pte directly
- use gfn_to_pfn_atomic in update_pte path
The suggestion is from Avi.
-mmu_guess_page_from_pte_write(vcpu, gpa, gentry);
+mmu_seq =
On 17.02.2011, at 22:01, Jan Kiszka wrote:
On 2011-02-07 12:19, Jan Kiszka wrote:
We do not check them, and the only arch with non-empty implementations
always returns 0 (this is also true for qemu-kvm).
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
CC: Alexander Graf ag...@suse.de
On Wed, 09 Mar 2011 13:46:36 -0800, Shirley Ma mashi...@us.ibm.com wrote:
Since we have had lots of performance discussions about virtio_net and
vhost communication, I think it's better to have a common understanding
of the code first; then we can seek the right directions to improve it. We
also
On Fri, Mar 11, 2011 at 5:55 AM, Alexander Graf ag...@suse.de wrote:
On 17.02.2011, at 22:01, Jan Kiszka wrote:
On 2011-02-07 12:19, Jan Kiszka wrote:
We do not check them, and the only arch with non-empty implementations
always returns 0 (this is also true for qemu-kvm).
Signed-off-by:
On Wed, Mar 09, 2011 at 06:21:04AM -0300, Lucas Meneghel Rodrigues wrote:
The programs cd_hash, html_report, scan_results can be
used by other users of autotest, so move them to the
tools directory inside the client directory.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
On 04.03.2011, at 11:20, Jan Kiszka wrote:
Make the return code of kvm_arch_handle_exit directly usable for
kvm_cpu_exec. This is straightforward for x86 and ppc, just s390
would require more work. Avoid this for now by pushing the return code
translation logic into s390's
On 11.03.2011, at 07:26, Stefan Hajnoczi wrote:
On Fri, Mar 11, 2011 at 5:55 AM, Alexander Graf ag...@suse.de wrote:
On 17.02.2011, at 22:01, Jan Kiszka wrote:
On 2011-02-07 12:19, Jan Kiszka wrote:
We do not check them, and the only arch with non-empty implementations
always returns 0
On 2011-03-11 07:50, Alexander Graf wrote:
On 04.03.2011, at 11:20, Jan Kiszka wrote:
Make the return code of kvm_arch_handle_exit directly usable for
kvm_cpu_exec. This is straightforward for x86 and ppc, just s390
would require more work. Avoid this for now by pushing the return code
On 11.03.2011, at 08:13, Jan Kiszka wrote:
On 2011-03-11 07:50, Alexander Graf wrote:
On 04.03.2011, at 11:20, Jan Kiszka wrote:
Make the return code of kvm_arch_handle_exit directly usable for
kvm_cpu_exec. This is straightforward for x86 and ppc, just s390
would require more work.
On 2011-03-11 08:26, Alexander Graf wrote:
On 11.03.2011, at 08:13, Jan Kiszka wrote:
On 2011-03-11 07:50, Alexander Graf wrote:
On 04.03.2011, at 11:20, Jan Kiszka wrote:
Make the return code of kvm_arch_handle_exit directly usable for
kvm_cpu_exec. This is straightforward for x86 and
On 11.03.2011, at 08:13, Jan Kiszka wrote:
On 2011-03-11 07:50, Alexander Graf wrote:
On 04.03.2011, at 11:20, Jan Kiszka wrote:
Make the return code of kvm_arch_handle_exit directly usable for
kvm_cpu_exec. This is straightforward for x86 and ppc, just s390
would require more work.
On 11.03.2011, at 08:33, Jan Kiszka wrote:
On 2011-03-11 08:26, Alexander Graf wrote:
On 11.03.2011, at 08:13, Jan Kiszka wrote:
On 2011-03-11 07:50, Alexander Graf wrote:
On 04.03.2011, at 11:20, Jan Kiszka wrote:
Make the return code of kvm_arch_handle_exit directly usable for