From: Jan Kiszka jan.kis...@siemens.com
Provide arch-independent kvm_on_sigbus* stubs to remove the #ifdef'ery
from cpus.c. This patch also fixes the --disable-kvm build by providing the
missing kvm_on_sigbus_vcpu kvm-stub.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Reviewed-by: Paolo Bonzini
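A minimal sketch of what the kvm-stub.c additions could look like (signatures simplified; the real functions take a CPUState argument, stubbed as an opaque type here): with KVM compiled out, the SIGBUS hooks simply report "not handled" so cpus.c can call them unconditionally.

```c
/* Hypothetical stub shapes, assuming QEMU's convention that a non-zero
 * return means the SIGBUS was not consumed by KVM. */
typedef struct CPUState CPUState;

int kvm_on_sigbus_vcpu(CPUState *env, int code, void *addr)
{
    (void)env; (void)code; (void)addr;
    return 1;   /* not handled; caller falls back to default action */
}

int kvm_on_sigbus(int code, void *addr)
{
    (void)code; (void)addr;
    return 1;   /* not handled */
}
```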
From: Jan Kiszka jan.kis...@siemens.com
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer-related
signals also on !CONFIG_IOTHREAD and process those
From: Jan Kiszka jan.kis...@siemens.com
A pending vmstop request is also a reason to leave the inner main loop.
So far it was not checked there, so pending stop requests issued over VCPU
threads were simply ignored.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
vl.c | 14 +-
1 files
From: Jan Kiszka jan.kis...@siemens.com
Reported by Stefan Hajnoczi.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
target-i386/kvm.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 8e8880a..05010bb 100644
---
From: Jan Kiszka jan.kis...@siemens.com
Currently, we only configure and process MCE-related SIGBUS events if
CONFIG_IOTHREAD is enabled. The groundwork is already laid; we just need to
factor out the required handler registration and system configuration.
Signed-off-by: Jan Kiszka
From: Jan Kiszka jan.kis...@siemens.com
Pure interface cosmetics: Ensure that only kvm core services (as
declared in kvm.h) start with kvm_. Prepend qemu_ to those that
violate this rule in cpus.c. Also rename the corresponding tcg functions
for the sake of consistency.
Signed-off-by: Jan Kiszka
From: Jan Kiszka jan.kis...@siemens.com
Introduce qemu_cpu_kick_self to send SIG_IPI to the calling VCPU
context. First user will be kvm.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c| 21 +
qemu-common.h |1 +
2 files changed, 22 insertions(+), 0
From: Jan Kiszka jan.kis...@siemens.com
We do not use the timeout, so drop its logic. As we always poll our
signals, we do not need to drop the global lock. Removing those calls
allows some further simplifications. Also fix the error handling of
sigpending while at it.
Signed-off-by: Jan
From: Jan Kiszka jan.kis...@siemens.com
KVM requires us to reenter the kernel after I/O exits in order to complete
instruction emulation. Failing to do so will leave the kernel state
inconsistent. To ensure that we get back as soon as possible, we issue a
self-signal that will cause KVM_RUN to return
On Tue, Feb 01, 2011 at 01:09:45PM -0800, Shirley Ma wrote:
On Mon, 2011-01-31 at 17:30 -0800, Sridhar Samudrala wrote:
Yes. It definitely should be 'out'. 'in' should be 0 in the tx path.
I tried a simpler version of this patch without any tunables by
delaying the signaling until we
On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
Confused. We compare capacity to skb frags, no?
That's sg I think ...
The current guest kernel uses indirect buffers; num_free returns how many
descriptors are available, not skb frags. So it's wrong here.
Shirley
On Tue, 2011-02-01 at 23:24 +0200, Michael S. Tsirkin wrote:
My theory is that the issue is not signalling.
Rather, our queue fills up, then host handles
one packet and sends an interrupt, and we
immediately wake the queue. So the vq
once it gets full, stays full.
From the printk debugging
On Tue, 2011-02-01 at 23:56 +0200, Michael S. Tsirkin wrote:
There are flags for bytes, buffers and packets.
Try playing with any one of them :)
Just be sure to use v2.
I would like to change it to
half of the ring size instead for signaling. Is that OK?
Shirley
Sure that
On Tue, 2011-02-01 at 17:52 +0200, Michael S. Tsirkin wrote:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue is full again.
Maybe the following will help it stabilize? By default
On Tue, 2011-02-01 at 15:07 -0800, Sridhar Samudrala wrote:
I think the counters that exceed the limits need to be reset to 0
here.
Otherwise we keep signaling for every buffer once we hit this
condition.
I will modify the patch to rerun the test to see the difference.
Shirley
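Sridhar's point can be sketched like this (names hypothetical, not the posted patch): the host is signaled only every `limit` buffers, and the counter must be reset when the limit is hit, otherwise every buffer after the first trigger signals again.

```c
/* Coalescing counter for tx signaling; `limit` plays the role of a
 * tunable such as tx_bufs_coalesce. */
struct tx_coalesce {
    unsigned int buffered;  /* buffers queued since the last signal */
    unsigned int limit;
};

static int tx_should_signal(struct tx_coalesce *c)
{
    if (++c->buffered >= c->limit) {
        c->buffered = 0;    /* the reset: without it we signal on
                             * every buffer once the limit is hit */
        return 1;
    }
    return 0;
}
```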
On Wed, 2011-02-02 at 06:40 +0200, Michael S. Tsirkin wrote:
Just tweak the parameters with sysfs, you do not have to edit the code:
echo 64 > /sys/module/vhost_net/parameters/tx_bufs_coalesce
Or in a similar way for tx_packets_coalesce (since we use indirect,
packets will typically use 1
On Tue, 2011-02-01 at 22:05 -0800, Shirley Ma wrote:
The way I am changing it: only when the netif queue has stopped do we
start to count num_free descriptors to decide when to send the signal to
wake the netif queue.
I forgot to mention, the code change I am making is in the guest kernel,
in the xmit callback only
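The guest-side condition described here could be sketched as follows (names are hypothetical, not the actual virtio_net patch): once the netif queue has been stopped, it is woken only after at least half of the ring's descriptors are free again, rather than on the first freed slot.

```c
/* Snapshot of the tx virtqueue state as seen by the xmit callback. */
struct txq_state {
    unsigned int num_free;   /* free descriptors reported by the vq */
    unsigned int ring_size;
    int stopped;             /* is the netif queue currently stopped? */
};

/* Wake the queue only when it was stopped and at least half the ring
 * has drained, so one freed slot does not immediately refill the vq. */
static int should_wake_queue(const struct txq_state *q)
{
    return q->stopped && q->num_free >= q->ring_size / 2;
}
```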
On Tue, 2011-02-01 at 23:14 -0800, Shirley Ma wrote:
With the guest change, I played around with the parameters; for example, I
could get 3.7Gb/s with 42% CPU, the BW increasing from 2.5Gb/s, for 1K
message size;
with dropping packets, I was able to get up to 6.2Gb/s with similar CPU
usage.
I meant w/o guest
Shirley Ma mashi...@us.ibm.com wrote:
I have tried this before. There are a couple of issues:
1. the free count will not reduce until you run free_old_xmit_skbs,
which will not run anymore since the tx queue is stopped.
2. You cannot call free_old_xmit_skbs directly as it races with