On Thu, 14 Apr 2011 19:03:59 +0300, Michael S. Tsirkin m...@redhat.com
wrote:
On Thu, Apr 14, 2011 at 08:58:41PM +0930, Rusty Russell wrote:
They have to offer the feature, so if they have some way of allocating
non-page-aligned amounts of memory, they'll have to add those extra 2
bytes.
On Tue, 12 Apr 2011 23:01:12 +0300, Michael S. Tsirkin m...@redhat.com
wrote:
On Thu, Mar 10, 2011 at 12:19:42PM +1030, Rusty Russell wrote:
Here's an old patch where I played with implementing this:
...
virtio: put last_used and last_avail index into ring itself.
Generally, the
On Thu, Apr 14, 2011 at 08:58:41PM +0930, Rusty Russell wrote:
On Tue, 12 Apr 2011 23:01:12 +0300, Michael S. Tsirkin m...@redhat.com
wrote:
On Thu, Mar 10, 2011 at 12:19:42PM +1030, Rusty Russell wrote:
Here's an old patch where I played with implementing this:
...
virtio:
On Thu, Mar 10, 2011 at 12:19:42PM +1030, Rusty Russell wrote:
Here's an old patch where I played with implementing this:
...
virtio: put last_used and last_avail index into ring itself.
Generally, the other end of the virtio ring doesn't need to see where
you're up to in consuming the
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts us the most is that the IRQ jumps between the VCPUs?
Yes, it
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote:
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts
On Thursday, March 10, 2011 09:34:22 am Michael S. Tsirkin wrote:
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote:
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt
On Tue, 08 Mar 2011 20:21:18 -0600, Andrew Theurer
haban...@linux.vnet.ibm.com wrote:
On Tue, 2011-03-08 at 13:57 -0800, Shirley Ma wrote:
On Wed, 2011-02-09 at 11:07 +1030, Rusty Russell wrote:
I've finally read this thread... I think we need to get more serious
with our stats gathering
On Tue, 2011-03-08 at 20:21 -0600, Andrew Theurer wrote:
Tom L has started using Rusty's patches and found some interesting
results, sent yesterday:
http://marc.info/?l=kvm&m=129953710930124&w=2
Thanks. Very good experiment. I have been struggling with guest/vhost
optimization work for a
On Wed, 2011-03-09 at 09:15 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..ebe3337 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -514,11 +514,11 @@ static unsigned int free_old_xmit_skbs(struct
On Wednesday, March 09, 2011 01:15:58 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM. I have a different setup than what Steve D.
was using so I
On Wed, Mar 09, 2011 at 07:45:43AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 09:15 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..ebe3337 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@
On Wednesday, March 09, 2011 01:17:44 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
I used the uperf tool to do this after verifying the results against
netperf. Uperf allows the specification of the number of connections as
a parameter in an XML
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
This spread out the kick_notify but still resulted in a lot of them. I
decided to build on the delayed Tx buffer freeing and code up an
ethtool-like coalescing patch in order to delay the kick_notify until
there were at least
On Wed, 2011-03-09 at 18:10 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..4477b9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -514,11 +514,11 @@ static unsigned int free_old_xmit_skbs(struct
On Wed, Mar 09, 2011 at 10:09:26AM -0600, Tom Lendacky wrote:
On Wednesday, March 09, 2011 01:15:58 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM.
On Wed, Mar 09, 2011 at 08:25:34AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 18:10 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..4477b9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@
On Wed, 2011-03-09 at 18:32 +0200, Michael S. Tsirkin wrote:
I think your issues are with TX overrun.
Besides delaying IRQ on TX, I don't have many ideas.
The one interesting thing is that you see better speed
if you drop packets. The netdev crowd says this should not happen,
so could be an
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
Vhost is receiving a lot of notifications for packets that are to be
transmitted (over 60% of the packets generate a kick_notify).
This is the guest TX send notification when vhost enables notification.
In the TCP_STREAM test, vhost exits
On Wed, Mar 09, 2011 at 08:51:33AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
Vhost is receiving a lot of notifications for packets that are to be
transmitted (over 60% of the packets generate a kick_notify).
This is the guest TX send notification
On Wed, 2011-03-09 at 19:16 +0200, Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 08:51:33AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
Vhost is receiving a lot of notifications for packets that are to be
transmitted (over 60% of the packets
Here are the results again with the addition of the interrupt rate that
occurred on the guest virtio_net device:
Here is the KVM baseline (average of six runs):
Txn Rate: 87,070.34 Txn/Sec, Pkt Rate: 172,992 Pkts/Sec
Exits: 148,444.58 Exits/Sec
TxCPU: 2.40% RxCPU: 99.35%
Virtio1-input
On Wed, Mar 09, 2011 at 02:11:07PM -0600, Tom Lendacky wrote:
Here are the results again with the addition of the interrupt rate that
occurred on the guest virtio_net device:
Here is the KVM baseline (average of six runs):
Txn Rate: 87,070.34 Txn/Sec, Pkt Rate: 172,992 Pkts/Sec
Exits:
Hello Tom,
Do you also have Rusty's virtio stat patch results for both send queue
and recv queue to share here?
Thanks
Shirley
On Wednesday, March 09, 2011 10:09:26 am Tom Lendacky wrote:
On Wednesday, March 09, 2011 01:15:58 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM.
On Wednesday, March 09, 2011 04:45:12 pm Shirley Ma wrote:
Hello Tom,
Do you also have Rusty's virtio stat patch results for both send queue
and recv queue to share here?
Let me see what I can do about getting the data extracted, averaged and in a
form that I can put in an email.
Thanks
On Wednesday, March 09, 2011 03:56:15 pm Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 02:11:07PM -0600, Tom Lendacky wrote:
Here are the results again with the addition of the interrupt rate that
occurred on the guest virtio_net device:
Here is the KVM baseline (average of six
On Wed, 2011-03-09 at 23:56 +0200, Michael S. Tsirkin wrote:
Txn Rate: 153,696.59 Txn/Sec, Pkt Rate: 305,358 Pkts/Sec
Exits: 62,603.37 Exits/Sec
TxCPU: 3.73% RxCPU: 98.52%
Virtio1-input Interrupts/Sec (CPU0/CPU1): 11,564/0
Virtio1-output Interrupts/Sec (CPU0/CPU1): 0/0
On Wed, 2011-03-09 at 16:59 -0800, Shirley Ma wrote:
In theory, for lots of TCP_RR streams, the guest should be able to keep
sending xmit skbs to the send vq, so vhost should be able to disable
notification most of the time; then the number of guest exits should be
significantly reduced. Why we saw
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts us the most is that the IRQ jumps between the VCPUs?
On Wed, 2011-02-09 at 11:07 +1030, Rusty Russell wrote:
I've finally read this thread... I think we need to get more serious
with our stats gathering to diagnose these kinds of performance issues.
This is a start; it should tell us what is actually happening to the
virtio ring(s) without
On Tue, 2011-03-08 at 13:57 -0800, Shirley Ma wrote:
On Wed, 2011-02-09 at 11:07 +1030, Rusty Russell wrote:
I've finally read this thread... I think we need to get more serious
with our stats gathering to diagnose these kinds of performance issues.
This is a start; it should tell us what
Hi Tom,
My two cents. Please look for [Chaks]
snip
Comparing the transmit path to the receive path, the guest disables
notifications after the first kick and vhost re-enables notifications after
completing processing of the tx ring. Can a similar thing be done for the
receive path? Once vhost
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM. I have a different setup than what Steve D. was
using so I re-baselined things on the kvm.git kernel on both the host and
guest
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
I used the uperf tool to do this after verifying the results against netperf.
Uperf allows the specification of the number of connections as a parameter in
an XML file as opposed to launching, in this case, 100 separate
On Wed, 2 Feb 2011 03:12:22 pm Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 10:09:18AM +0530, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com 02/02/2011 03:11 AM
On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:21 +0200, Michael S.
On Wed, Feb 09, 2011 at 11:07:20AM +1030, Rusty Russell wrote:
On Wed, 2 Feb 2011 03:12:22 pm Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 10:09:18AM +0530, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com 02/02/2011 03:11 AM
On Tue, Feb 01, 2011 at 01:28:45PM -0800,
On Wed, 9 Feb 2011 11:23:45 am Michael S. Tsirkin wrote:
On Wed, Feb 09, 2011 at 11:07:20AM +1030, Rusty Russell wrote:
On Wed, 2 Feb 2011 03:12:22 pm Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 10:09:18AM +0530, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com 02/02/2011
On Wed, Feb 09, 2011 at 12:09:35PM +1030, Rusty Russell wrote:
On Wed, 9 Feb 2011 11:23:45 am Michael S. Tsirkin wrote:
On Wed, Feb 09, 2011 at 11:07:20AM +1030, Rusty Russell wrote:
On Wed, 2 Feb 2011 03:12:22 pm Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 10:09:18AM +0530,
On Wed, Feb 9, 2011 at 1:55 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Feb 09, 2011 at 12:09:35PM +1030, Rusty Russell wrote:
On Wed, 9 Feb 2011 11:23:45 am Michael S. Tsirkin wrote:
On Wed, Feb 09, 2011 at 11:07:20AM +1030, Rusty Russell wrote:
On Wed, 2 Feb 2011 03:12:22 pm
On Thu, 2011-02-03 at 08:13 +0200, Michael S. Tsirkin wrote:
Initial TCP_STREAM performance results I got for guest to local host:
4.2Gb/s for 1K message size (vs. 2.5Gb/s),
6.2Gb/s for 2K message size (vs. 3.8Gb/s), and
9.8Gb/s for 4K message size (vs. 5.xGb/s).
What is the average
On Thu, Feb 03, 2011 at 07:58:00AM -0800, Shirley Ma wrote:
On Thu, 2011-02-03 at 08:13 +0200, Michael S. Tsirkin wrote:
Initial TCP_STREAM performance results I got for guest to local host:
4.2Gb/s for 1K message size (vs. 2.5Gb/s),
6.2Gb/s for 2K message size (vs. 3.8Gb/s), and
On Thu, 2011-02-03 at 18:20 +0200, Michael S. Tsirkin wrote:
Just a thought: does it help to make tx queue len of the
virtio device smaller?
Yes, that's what I did before: reducing txqueuelen will cause qdisc to drop
the packet early. But it's hard to control by using tx queuelen for
performance
On Wed, 2011-02-02 at 12:48 +0200, Michael S. Tsirkin wrote:
Yes, I think doing this in the host is much simpler,
just send an interrupt after there's a decent amount
of space in the queue.
Having said that, the simple heuristic that I coded
might be a bit too simple.
From the debugging out
On Wed, 2011-02-02 at 12:49 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 11:33:49PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:14 -0800, Shirley Ma wrote:
w/i guest change, I played around with the parameters, for example: I could
get 3.7Gb/s with 42% CPU BW increasing from
On Wed, Feb 02, 2011 at 07:39:45AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 12:48 +0200, Michael S. Tsirkin wrote:
Yes, I think doing this in the host is much simpler,
just send an interrupt after there's a decent amount
of space in the queue.
Having said that, the simple
On Wed, Feb 02, 2011 at 07:42:51AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 12:49 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 11:33:49PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:14 -0800, Shirley Ma wrote:
w/i guest change, I played around with the parameters, for
On Wed, 2011-02-02 at 17:47 +0200, Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 07:39:45AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 12:48 +0200, Michael S. Tsirkin wrote:
Yes, I think doing this in the host is much simpler,
just send an interrupt after there's a decent amount
On Wed, 2011-02-02 at 17:48 +0200, Michael S. Tsirkin wrote:
And this is with sndbuf=0 in host, yes?
And do you see a lot of tx interrupts?
How many packets per interrupt?
Nope, sndbuf doesn't matter since I never hit the sock wmem
condition in vhost. I am still playing around, let me know
On Wed, Feb 02, 2011 at 09:10:35AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 17:47 +0200, Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 07:39:45AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 12:48 +0200, Michael S. Tsirkin wrote:
Yes, I think doing this in the host is much
On Wed, 2011-02-02 at 19:32 +0200, Michael S. Tsirkin wrote:
OK, but this should have no effect with a vhost patch
which should ensure that we don't get an interrupt
until the queue is at least half empty.
Right?
There should be some coordination between guest and vhost. We shouldn't
count
On Wed, Feb 02, 2011 at 07:42:51AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 12:49 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 11:33:49PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:14 -0800, Shirley Ma wrote:
w/i guest change, I played around with the parameters, for
On Wed, 2011-02-02 at 20:20 +0200, Michael S. Tsirkin wrote:
How many packets and bytes per interrupt are sent?
Also, what about other values for the counters and other counters?
What does your patch do? Just drop packets instead of
stopping the interface?
To have an understanding when
On Wed, Feb 02, 2011 at 10:11:51AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 19:32 +0200, Michael S. Tsirkin wrote:
OK, but this should have no effect with a vhost patch
which should ensure that we don't get an interrupt
until the queue is at least half empty.
Right?
There should
On Tue, Jan 25, 2011 at 03:09:34PM -0600, Steve Dobbelstein wrote:
I am working on a KVM network performance issue found in our lab running
the DayTrader benchmark. The benchmark throughput takes a significant hit
when running the application server in a KVM guest versus on bare metal.
We
Michael S. Tsirkin m...@redhat.com wrote on 02/02/2011 12:38:47 PM:
On Tue, Jan 25, 2011 at 03:09:34PM -0600, Steve Dobbelstein wrote:
I am working on a KVM network performance issue found in our lab
running
the DayTrader benchmark. The benchmark throughput takes a significant hit
when
On Wed, 2011-02-02 at 20:27 +0200, Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 10:11:51AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 19:32 +0200, Michael S. Tsirkin wrote:
OK, but this should have no effect with a vhost patch
which should ensure that we don't get an interrupt
On Wed, Feb 02, 2011 at 11:29:35AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 20:27 +0200, Michael S. Tsirkin wrote:
On Wed, Feb 02, 2011 at 10:11:51AM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 19:32 +0200, Michael S. Tsirkin wrote:
OK, but this should have no effect with a
On Wed, 2011-02-02 at 22:17 +0200, Michael S. Tsirkin wrote:
Well, this is also the only case where the queue is stopped, no?
Yes. I got some debugging data; I saw that sometimes there were so many
packets waiting to be freed in the guest between vhost_signal and the guest
xmit callback. Looks like the
On Wed, Feb 02, 2011 at 01:03:05PM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 22:17 +0200, Michael S. Tsirkin wrote:
Well, this is also the only case where the queue is stopped, no?
Yes. I got some debugging data; I saw that sometimes there were so many
packets waiting to be freed in
On Wed, 2011-02-02 at 23:20 +0200, Michael S. Tsirkin wrote:
On Wed, 2011-02-02 at 22:17 +0200, Michael S. Tsirkin wrote:
Well, this is also the only case where the queue is stopped, no?
Yes. I got some debugging data; I saw that sometimes there were so many
packets waiting to be freed
On Wed, 2011-02-02 at 23:20 +0200, Michael S. Tsirkin wrote:
I think I need to define the test matrix to collect data for TX xmit
from guest to host here for different tests.
Data to be collected:
-
1. kvm_stat for VM, I/O exits
2. cpu utilization for both guest
On Wed, Feb 02, 2011 at 01:41:33PM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 23:20 +0200, Michael S. Tsirkin wrote:
On Wed, 2011-02-02 at 22:17 +0200, Michael S. Tsirkin wrote:
Well, this is also the only case where the queue is stopped, no?
Yes. I got some debugging data; I saw
On Thu, 2011-02-03 at 07:59 +0200, Michael S. Tsirkin wrote:
Let's look at the sequence here:
  guest start_xmit()
    xmit_skb()
    if ring is full, enable_cb()
  guest skb_xmit_done()
    disable_cb,
    printk free_old_xmit_skbs -- it was between more
On Wed, Feb 02, 2011 at 09:05:56PM -0800, Shirley Ma wrote:
On Wed, 2011-02-02 at 23:20 +0200, Michael S. Tsirkin wrote:
I think I need to define the test matrix to collect data for TX xmit
from guest to host here for different tests.
Data to be collected:
-
On Wed, Feb 02, 2011 at 10:09:14PM -0800, Shirley Ma wrote:
On Thu, 2011-02-03 at 07:59 +0200, Michael S. Tsirkin wrote:
Let's look at the sequence here:
  guest start_xmit()
    xmit_skb()
    if ring is full, enable_cb()
  guest skb_xmit_done()
On Tue, 2011-02-01 at 22:17 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 12:09:03PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 19:23 +0200, Michael S. Tsirkin wrote:
On Thu, Jan 27, 2011 at 01:30:38PM -0800, Shirley Ma wrote:
On Thu, 2011-01-27 at 13:02 -0800, David
On Mon, 2011-01-31 at 17:30 -0800, Sridhar Samudrala wrote:
Yes. It definitely should be 'out'. 'in' should be 0 in the tx path.
I tried a simpler version of this patch without any tunables by
delaying the signaling until we come out of the for loop.
It definitely reduced the number of
On Tue, Feb 01, 2011 at 12:25:08PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 22:17 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 12:09:03PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 19:23 +0200, Michael S. Tsirkin wrote:
On Thu, Jan 27, 2011 at 01:30:38PM -0800,
On Tue, Feb 01, 2011 at 01:09:45PM -0800, Shirley Ma wrote:
On Mon, 2011-01-31 at 17:30 -0800, Sridhar Samudrala wrote:
Yes. It definitely should be 'out'. 'in' should be 0 in the tx path.
I tried a simpler version of this patch without any tunables by
delaying the signaling until we
On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
Confused. We compare capacity to skb frags, no?
That's sg I think ...
The current guest kernel uses indirect buffers; num_free returns how many
available descriptors, not skb frags. So it's wrong here.
Shirley
On Tue, 2011-02-01 at 23:24 +0200, Michael S. Tsirkin wrote:
My theory is that the issue is not signalling.
Rather, our queue fills up, then host handles
one packet and sends an interrupt, and we
immediately wake the queue. So the vq,
once it gets full, stays full.
From the printk debugging
On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
Confused. We compare capacity to skb frags, no?
That's sg I think ...
The current guest kernel uses indirect buffers; num_free returns how many
available descriptors, not skb
On Tue, Feb 01, 2011 at 01:32:35PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:24 +0200, Michael S. Tsirkin wrote:
My theory is that the issue is not signalling.
Rather, our queue fills up, then host handles
one packet and sends an interrupt, and we
immediately wake the queue. So
On Tue, 2011-02-01 at 23:42 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 01:32:35PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:24 +0200, Michael S. Tsirkin wrote:
My theory is that the issue is not signalling.
Rather, our queue fills up, then host handles
one packet
On Tue, 2011-02-01 at 23:56 +0200, Michael S. Tsirkin wrote:
There are flags for bytes, buffers and packets.
Try playing with any one of them :)
Just be sure to use v2.
I would like to change it to
half of the ring size instead for signaling. Is that OK?
Shirley
Sure that
Michael S. Tsirkin m...@redhat.com 02/02/2011 03:11 AM
On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
Confused. We compare capacity to skb frags, no?
That's sg I think ...
The current guest kernel uses indirect
On Tue, Feb 01, 2011 at 02:59:57PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:56 +0200, Michael S. Tsirkin wrote:
There are flags for bytes, buffers and packets.
Try playing with any one of them :)
Just be sure to use v2.
I would like to change it to
half of the ring size
On Wed, Feb 02, 2011 at 10:09:18AM +0530, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com 02/02/2011 03:11 AM
On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
Confused. We compare capacity to skb frags,
On Wed, 2011-02-02 at 06:40 +0200, Michael S. Tsirkin wrote:
Just tweak the parameters with sysfs, you do not have to edit the code:
echo 64 > /sys/module/vhost_net/parameters/tx_bufs_coalesce
Or in a similar way for tx_packets_coalesce (since we use indirect,
packets will typically use 1
On Tue, 2011-02-01 at 22:05 -0800, Shirley Ma wrote:
The way I am changing it is: only when the netif queue has stopped, we
start to count num_free descriptors to send the signal to wake the netif
queue.
I forgot to mention, the code change I am making is in the guest kernel, in
the xmit callback only
On Tue, Feb 01, 2011 at 10:19:09PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 22:05 -0800, Shirley Ma wrote:
The way I am changing it is: only when the netif queue has stopped, we
start to count num_free descriptors to send the signal to wake the netif
queue.
I forgot to mention, the
On Tue, 2011-02-01 at 22:05 -0800, Shirley Ma wrote:
The way I am changing it is: only when the netif queue has stopped, we
start to count num_free descriptors to send the signal to wake the netif
queue.
I forgot to mention, the code change I am making is in the guest kernel, in
the xmit callback
On Wed, 2011-02-02 at 12:04 +0530, Krishna Kumar2 wrote:
On Tue, 2011-02-01 at 22:05 -0800, Shirley Ma wrote:
The way I am changing it is: only when the netif queue has stopped, we
start to count num_free descriptors to send the signal to wake the netif
queue.
I forgot to mention, the
On Wed, 2011-02-02 at 08:29 +0200, Michael S. Tsirkin wrote:
On Tue, Feb 01, 2011 at 10:19:09PM -0800, Shirley Ma wrote:
On Tue, 2011-02-01 at 22:05 -0800, Shirley Ma wrote:
The way I am changing it is: only when the netif queue has stopped, we
start to count num_free descriptors to
On Tue, 2011-02-01 at 23:14 -0800, Shirley Ma wrote:
w/i guest change, I played around with the parameters, for example: I could
get 3.7Gb/s with 42% CPU BW increasing from 2.5Gb/s for 1K message size,
w/i dropping packet, I was able to get up to 6.2Gb/s with similar CPU
usage.
I meant w/o guest
Shirley Ma mashi...@us.ibm.com wrote:
I have tried this before. There are a couple of issues:
1. The free count will not reduce until you run free_old_xmit_skbs,
which will not run anymore since the tx queue is stopped.
2. You cannot call free_old_xmit_skbs directly as it races with
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue is full again.
Maybe the following will help it stabilize?
By
On Mon, 2011-01-31 at 18:24 -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue
On Mon, Jan 31, 2011 at 06:24:34PM -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the
On Mon, Jan 31, 2011 at 05:30:38PM -0800, Sridhar Samudrala wrote:
On Mon, 2011-01-31 at 18:24 -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one
On Thu, Jan 27, 2011 at 01:30:38PM -0800, Shirley Ma wrote:
On Thu, 2011-01-27 at 13:02 -0800, David Miller wrote:
Interesting. Could this be a variant of the now famous
bufferbloat then?
Sigh, bufferbloat is the new global warming... :-/
Yep, some places become colder, some
mashi...@linux.vnet.ibm.com wrote on 01/27/2011 02:15:05 PM:
On Thu, 2011-01-27 at 22:05 +0200, Michael S. Tsirkin wrote:
One simple theory is that the guest net stack became faster
and so the host can't keep up.
Yes, that's what I think here. Some qdisc code has been changed
recently.
I ran
ste...@us.ibm.com wrote on 01/28/2011 12:29:37 PM:
On Thu, 2011-01-27 at 22:05 +0200, Michael S. Tsirkin wrote:
One simple theory is that the guest net stack became faster
and so the host can't keep up.
Yes, that's what I think here. Some qdisc code has been changed
recently.
I ran a
On Wed, 2011-01-26 at 17:17 +0200, Michael S. Tsirkin wrote:
I am seeing a similar problem, and am trying to fix that.
My current theory is that this is a variant of a receive livelock:
if the application isn't fast enough to process
incoming data, the guest net stack switches
from prequeue
On Thu, Jan 27, 2011 at 10:44:34AM -0800, Shirley Ma wrote:
On Wed, 2011-01-26 at 17:17 +0200, Michael S. Tsirkin wrote:
I am seeing a similar problem, and am trying to fix that.
My current theory is that this is a variant of a receive livelock:
if the application isn't fast enough to
On Thu, 2011-01-27 at 21:00 +0200, Michael S. Tsirkin wrote:
Interesting. In particular running vhost and the transmitting guest
on the same host would have the effect of slowing down TX.
Does it double the BW for you too?
Running vhost and the TX guest on the same host doesn't seem good enough to
On Thu, Jan 27, 2011 at 11:09:00AM -0800, Shirley Ma wrote:
On Thu, 2011-01-27 at 21:00 +0200, Michael S. Tsirkin wrote:
Interesting. In particular running vhost and the transmitting guest
on the same host would have the effect of slowing down TX.
Does it double the BW for you too?
On Thu, 2011-01-27 at 21:31 +0200, Michael S. Tsirkin wrote:
Well, slowing down the guest does not sound hard - for example we can
request guest notifications, or send extra interrupts :)
A slightly more sophisticated thing to try is to
poll the vq a bit more aggressively.
For example if we
On Thu, Jan 27, 2011 at 11:45:47AM -0800, Shirley Ma wrote:
On Thu, 2011-01-27 at 21:31 +0200, Michael S. Tsirkin wrote:
Well, slowing down the guest does not sound hard - for example we can
request guest notifications, or send extra interrupts :)
A slightly more sophisticated thing to try