On Wed, Nov 03, 2010 at 10:38:46PM -0700, Shirley Ma wrote:
On Wed, 2010-11-03 at 12:48 +0200, Michael S. Tsirkin wrote:
I mean in practice, you see a benefit from this patch?
Yes, I tested it. It does benefit the performance.
My concern here is whether checking only at setup would be sufficient
for security?
On Thu, 2010-11-04 at 11:30 +0200, Michael S. Tsirkin wrote:
One thing to note is that deferred signalling needs to be
benchmarked with old guests which don't orphan skbs on xmit
(or disable orphaning in both networking stack and virtio-net).
Yes, we need to run more tests.
OK, so I guess I'll …
My concern here is whether checking only at setup would be sufficient
for security?
It had better be sufficient, because the …
On Thu, Oct 28, 2010 at 10:14:22AM -0700, Shirley Ma wrote:
Two ideas:
1. How about writing out used, just delaying the signal?
This way we don't have to queue separately.
This improves performance somewhat, but not as much as delaying both
used and signal, since delaying used buffers …
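The two ideas can be sketched with a toy model. This is not the actual vhost code; the struct, field names, and 3/4 threshold placement are made up for illustration only:

```c
/* Toy model comparing "delay signal only" vs "delay used write-back
 * and signal" for a virtqueue-like ring.  Purely illustrative. */
#include <stdbool.h>

struct ring {
    unsigned int num;        /* ring size */
    unsigned int used_idx;   /* last used index written to the guest */
    unsigned int pending;    /* completions not yet written back */
    unsigned int signalled;  /* used index at the last guest notify */
};

/* Idea 1: write each used entry immediately, defer only the notify. */
static bool complete_signal_deferred(struct ring *r)
{
    r->used_idx++;                      /* used entry visible at once */
    /* notify only once 3/4 of the ring completed since last signal */
    if (r->used_idx - r->signalled >= r->num / 4 * 3) {
        r->signalled = r->used_idx;
        return true;                    /* raise the guest interrupt */
    }
    return false;
}

/* Idea 2: defer both the used write-back and the notify. */
static bool complete_all_deferred(struct ring *r)
{
    r->pending++;
    if (r->pending >= r->num / 4 * 3) {
        r->used_idx += r->pending;      /* batch the used write-back */
        r->pending = 0;
        r->signalled = r->used_idx;
        return true;
    }
    return false;
}
```

In idea 1 the guest can still poll progress through the used index even when no interrupt fires; in idea 2 nothing is visible to the guest until the batch flushes, which is the stronger (and riskier) form of deferral discussed in the thread.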
On Fri, 2010-10-29 at 10:10 +0200, Michael S. Tsirkin wrote:
Hmm. I don't yet understand. We are still doing copies into the per-vq
buffer, and the data copied is really small. Is it about cache line
bounces? Could you try figuring it out?
per-vq buffer is much less expensive than 3 …
On Thu, 2010-10-28 at 07:20 +0200, Michael S. Tsirkin wrote:
My concern is this can delay signalling for unlimited time.
Could you pls test this with guests that do not have
2b5bbe3b8bee8b38bdc27dd9c0270829b6eb7eeb
b0c39dbdc204006ef3558a66716ff09797619778
that is 2.6.31 and older?
I will …
On Thu, 2010-10-28 at 12:32 -0700, Shirley Ma wrote:
Also I found a big TX regression for old guest and new guest. For old
guest, I am able to get almost 11Gb/s for 2K message size, but for the
new guest kernel, I can only get 3.5 Gb/s with the patch and same
host.
I will dig into why.
On Thu, 2010-10-28 at 14:04 -0700, Sridhar Samudrala wrote:
It would be some change in the virtio-net driver that may have improved
the latency of small messages, which in turn would have reduced the
bandwidth, as TCP could not accumulate and send large packets.
I will check out any latency …
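Sridhar's point can be illustrated with back-of-envelope arithmetic. The per-packet cost below is an assumed, arbitrary number for illustration, not a measurement from this thread:

```c
/* Toy model: with a roughly fixed per-packet processing cost, the
 * achievable bandwidth scales with packet size, so anything that stops
 * TCP from accumulating large packets caps throughput at the packet
 * rate.  The microseconds-per-packet figure is an assumption. */
static double gbit_per_sec(unsigned payload_bytes, double per_packet_us)
{
    double packets_per_sec = 1e6 / per_packet_us;      /* packets/s  */
    return packets_per_sec * payload_bytes * 8.0 / 1e9; /* Gbit/s    */
}
```

At an assumed 5 us per packet, 2K payloads cap out near 3.3 Gb/s, while 64K accumulated (GSO-sized) packets would reach far higher in this model; the bottleneck is packet rate, not bytes, which matches the observed drop when small messages go out individually.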
This patch changes the vhost TX used buffer guest signal from one-by-one
to 3/4 of the vring size. This change improves vhost TX bandwidth and
CPU utilization for 256-byte to 8K message sizes without inducing any
regression.
Signed-off-by: Shirley Ma x...@us.ibm.com
---
Resubmit this patch for fixing some minor error (white space, typo).
Signed-off-by: Shirley Ma x...@us.ibm.com
---
drivers/vhost/net.c   | 20 +++-
drivers/vhost/vhost.c | 32 …
drivers/vhost/vhost.h |  3 +++
3 files changed, 54 …
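Michael's concern about old guests can be sketched as a toy simulation. Everything here is illustrative (the socket-buffer limit, the numbers, and the function names are assumptions, not the real code): a guest that does not orphan skbs on xmit blocks its send path on completions, and with signaling deferred to 3/4 of the ring, a send buffer smaller than the threshold never triggers a signal.

```c
/* Toy model of the old-guest concern: the sender blocks once
 * `wmem_packets` buffers are in flight, and the host signals only
 * after 3/4 of the ring has completed.  All names and numbers are
 * made up for illustration. */
static unsigned packets_sent(unsigned ring_size, unsigned wmem_packets,
                             unsigned want)
{
    unsigned threshold = ring_size / 4 * 3; /* deferred-signal point  */
    unsigned in_flight = 0, sent = 0;

    while (sent < want) {
        if (in_flight == wmem_packets) {
            if (in_flight < threshold)
                break;          /* no signal will come: sender stalls */
            in_flight = 0;      /* signal fires, buffers reclaimed    */
        }
        in_flight++;
        sent++;
    }
    return sent;                /* packets sent before any stall      */
}
```

With a 256-entry ring, a guest that can keep only 64 packets in flight stalls after 64 sends in this model, while one that can fill the whole ring keeps going; this is why the thread asks for benchmarks on pre-2.6.32 guests without the skb-orphaning commits.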
On Wed, Oct 27, 2010 at 09:40:04PM -0700, Shirley Ma wrote:
Resubmit this patch for fixing some minor error (white space, typo).
Signed-off-by: Shirley Ma x...@us.ibm.com