On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts us the most is that the IRQ jumps between the VCPUs?
Yes, it
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote:
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts us the most is that the IRQ jumps between the VCPUs?
On Thursday, March 10, 2011 09:34:22 am Michael S. Tsirkin wrote:
On Thu, Mar 10, 2011 at 09:23:42AM -0600, Tom Lendacky wrote:
On Thursday, March 10, 2011 12:54:58 am Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see below.
On Wed, 2011-03-09 at 09:15 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..ebe3337 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -514,11 +514,11 @@ static unsigned int free_old_xmit_skbs(struct
On Wednesday, March 09, 2011 01:15:58 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM. I have a different setup than what Steve D.
was using so I re-baselined things on the kvm.git kernel on both the host and
guest with a 10GbE adapter.
On Wed, Mar 09, 2011 at 07:45:43AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 09:15 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..ebe3337 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@
On Wednesday, March 09, 2011 01:17:44 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
I used the uperf tool to do this after verifying the results against
netperf. Uperf allows the specification of the number of connections as
a parameter in an XML
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
This spread out the kick_notify calls but still resulted in a lot of them.
I decided to build on the delayed Tx buffer freeing and code up an
ethtool-like coalescing patch in order to delay the kick_notify until
there were at least
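The batching idea described there can be sketched roughly in C. The
tx_pending and tx_coalesce_frames fields are invented for illustration (the
actual coalescing patch is not shown in this thread); xmit_skb(),
virtqueue_kick() and MAX_SKB_FRAGS follow the driver helpers of that era.

/*
 * Sketch only, not the actual patch: queue skbs as usual but defer the
 * kick_notify until several packets are pending or the ring is nearly full.
 */
static netdev_tx_t virtnet_xmit_coalesced(struct virtnet_info *vi,
                                          struct sk_buff *skb)
{
        int capacity = xmit_skb(vi, skb);       /* add skb to the send vq */

        if (unlikely(capacity < 0))
                return NETDEV_TX_BUSY;          /* ring full, try again later */

        vi->tx_pending++;                       /* hypothetical counter */

        /*
         * Kick (notify vhost) only when enough frames have accumulated or
         * the ring is close to full; otherwise let a later xmit flush the
         * batch.  Trades a little latency for far fewer kick_notify exits.
         */
        if (vi->tx_pending >= vi->tx_coalesce_frames ||
            capacity < 2 + MAX_SKB_FRAGS) {
                virtqueue_kick(vi->svq);
                vi->tx_pending = 0;
        }

        return NETDEV_TX_OK;
}

A real version would also need a flush timer so a final partial batch does
not sit unkicked when traffic goes quiet.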
On Wed, 2011-03-09 at 18:10 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..4477b9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -514,11 +514,11 @@ static unsigned int free_old_xmit_skbs(struct
On Wed, Mar 09, 2011 at 10:09:26AM -0600, Tom Lendacky wrote:
On Wednesday, March 09, 2011 01:15:58 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM.
On Wed, Mar 09, 2011 at 08:25:34AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 18:10 +0200, Michael S. Tsirkin wrote:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..4477b9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@
On Wed, 2011-03-09 at 18:32 +0200, Michael S. Tsirkin wrote:
I think your issues are with TX overrun.
Besides delaying IRQ on TX, I don't have many ideas.
The one interesting thing is that you see better speed
if you drop packets. netdev crowd says this should not happen,
so could be an
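For context on "delaying IRQ on TX": the guest driver of that era already
kept TX-completion callbacks disabled most of the time, reclaiming sent skbs
opportunistically on the next xmit and only asking for an interrupt when the
ring forced it to stop the queue. A simplified paraphrase of that policy
(not exact upstream code):

static void virtnet_tx_irq_policy(struct virtnet_info *vi, int capacity)
{
        if (capacity < 2 + MAX_SKB_FRAGS) {
                /* Ring nearly full: stop the queue and request a completion
                 * interrupt so the queue can be restarted later. */
                netif_stop_queue(vi->dev);
                if (unlikely(!virtqueue_enable_cb(vi->svq))) {
                        /* Completions raced in; reclaim them and, if that
                         * freed enough room, keep going without an IRQ. */
                        capacity += free_old_xmit_skbs(vi);
                        if (capacity >= 2 + MAX_SKB_FRAGS) {
                                netif_start_queue(vi->dev);
                                virtqueue_disable_cb(vi->svq);
                        }
                }
        }
        /* Otherwise no interrupt is requested at all; the next start_xmit
         * reclaims completed buffers without an IRQ round trip. */
}

Any further delaying therefore has to come from batching the reclaim and the
kick, as discussed above.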
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
Vhost is receiving a lot of notifications for packets that are to be
transmitted (over 60% of the packets generate a kick_notify).
This is guest TX send notification when vhost enables notification.
In TCP_STREAM test, vhost exits
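To unpack "when vhost enables notification": a virtqueue_kick() from the
guest only turns into a real notification (and an exit) if the host has not
set VRING_USED_F_NO_NOTIFY in the used ring, which is exactly the flag vhost
toggles while it drains the TX ring. A simplified, userspace-compilable
sketch of that check (the real logic, including barriers and the newer
event-index scheme, lives in drivers/virtio/virtio_ring.c):

#include <stdbool.h>
#include <stdint.h>

#define VRING_USED_F_NO_NOTIFY 1        /* host: "don't kick me right now" */

/* Simplified view of the used-ring header the guest consults. */
struct vring_used_hdr {
        uint16_t flags;
        uint16_t idx;
};

static bool guest_should_kick(const volatile struct vring_used_hdr *used)
{
        /* If vhost left notification enabled (flag clear), the guest must
         * kick and take the exit; if vhost set the flag while it is busy
         * draining the ring, the kick is skipped and no exit happens. */
        return !(used->flags & VRING_USED_F_NO_NOTIFY);
}

Read that way, a >60% kick rate suggests vhost had usually finished draining
and re-enabled notification before the guest queued the next packet.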
On Wed, Mar 09, 2011 at 08:51:33AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
Vhost is receiving a lot of notifications for packets that are to be
transmitted (over 60% of the packets generate a kick_notify).
This is guest TX send notification when vhost enables notification.
On Wed, 2011-03-09 at 19:16 +0200, Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 08:51:33AM -0800, Shirley Ma wrote:
On Wed, 2011-03-09 at 10:09 -0600, Tom Lendacky wrote:
Vhost is receiving a lot of notifications for packets that are to be
transmitted (over 60% of the packets generate a kick_notify).
Here are the results again with the addition of the interrupt rate that
occurred on the guest virtio_net device:
Here is the KVM baseline (average of six runs):
Txn Rate: 87,070.34 Txn/Sec, Pkt Rate: 172,992 Pkts/Sec
Exits: 148,444.58 Exits/Sec
TxCPU: 2.40% RxCPU: 99.35%
Virtio1-input
On Wed, Mar 09, 2011 at 02:11:07PM -0600, Tom Lendacky wrote:
Here are the results again with the addition of the interrupt rate that
occurred on the guest virtio_net device:
Here is the KVM baseline (average of six runs):
Txn Rate: 87,070.34 Txn/Sec, Pkt Rate: 172,992 Pkts/Sec
Exits:
Hello Tom,
Do you also have Rusty's virtio stat patch results for both send queue
and recv queue to share here?
Thanks
Shirley
On Wednesday, March 09, 2011 10:09:26 am Tom Lendacky wrote:
On Wednesday, March 09, 2011 01:15:58 am Michael S. Tsirkin wrote:
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM.
On Wednesday, March 09, 2011 04:45:12 pm Shirley Ma wrote:
Hello Tom,
Do you also have Rusty's virtio stat patch results for both send queue
and recv queue to share here?
Let me see what I can do about getting the data extracted, averaged and in a
form that I can put in an email.
Thanks
On Wednesday, March 09, 2011 03:56:15 pm Michael S. Tsirkin wrote:
On Wed, Mar 09, 2011 at 02:11:07PM -0600, Tom Lendacky wrote:
Here are the results again with the addition of the interrupt rate that
occurred on the guest virtio_net device:
Here is the KVM baseline (average of six runs):
On Wed, 2011-03-09 at 23:56 +0200, Michael S. Tsirkin wrote:
Txn Rate: 153,696.59 Txn/Sec, Pkt Rate: 305,358 Pkts/Sec
Exits: 62,603.37 Exits/Sec
TxCPU: 3.73% RxCPU: 98.52%
Virtio1-input Interrupts/Sec (CPU0/CPU1): 11,564/0
Virtio1-output Interrupts/Sec (CPU0/CPU1): 0/0
On Wed, 2011-03-09 at 16:59 -0800, Shirley Ma wrote:
In theory, for lots of TCP_RR streams, the guest should be able to keep
sending xmit skbs to send vq, so vhost should be able to disable
notification most of the time, then number of guest exits should be
significantly reduced? Why we saw
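The disable/enable dance being referred to, as a schematic of the vhost TX
service loop. Names follow drivers/vhost/, but the signatures are simplified
and tx_ring_has_work()/transmit_one() are placeholder helpers, not real
functions:

static void handle_tx_schematic(struct vhost_virtqueue *vq)
{
        for (;;) {
                /* While actively draining the ring, mask per-packet kicks
                 * from the guest. */
                vhost_disable_notify(vq);

                while (tx_ring_has_work(vq))            /* placeholder */
                        transmit_one(vq);               /* placeholder */

                /* Ring looks empty: re-enable notification, then re-check
                 * for work the guest queued before the flag was visible. */
                if (vhost_enable_notify(vq))
                        continue;       /* more arrived: mask and keep polling */

                break;                  /* truly idle: wait for the next kick */
        }
}

In a many-stream TCP_RR test the inner loop should rarely run dry, so most
iterations ought to stay in the notification-disabled state, which is why the
measured kick and exit rates are surprising.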
On Wed, Mar 09, 2011 at 05:25:11PM -0600, Tom Lendacky wrote:
As for which CPU the interrupt gets pinned to, that doesn't matter - see
below.
So what hurts us the most is that the IRQ jumps between the VCPUs?
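For anyone reproducing the pinning experiment, the usual way to nail the
virtio interrupt to one VCPU from inside the guest is the standard
/proc/irq/<n>/smp_affinity interface; a minimal sketch, where the IRQ number
and mask are placeholders:

#include <stdio.h>

/* Pin an IRQ (e.g. the virtio1-input vector) to the CPUs in cpu_mask by
 * writing a hex bitmask to /proc/irq/<irq>/smp_affinity. */
static int pin_irq(int irq, unsigned int cpu_mask)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%x\n", cpu_mask);   /* e.g. 0x1 pins to CPU0 */
        return fclose(f);
}

(irqbalance, if running, may move the interrupt back afterwards.)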
Hi Tom,
My two cents. Please look for [Chaks]
snip
Comparing the transmit path to the receive path, the guest disables
notifications after the first kick and vhost re-enables notifications after
completing processing of the tx ring. Can a similar thing be done for the
receive path? Once vhost
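For comparison, the interrupt side of the receive path already uses the same
disable-while-busy / re-enable-and-recheck handshake through NAPI; a
simplified paraphrase of that era's virtnet_poll(), not exact upstream code:

static int virtnet_poll_sketch(struct napi_struct *napi, int budget)
{
        struct virtnet_info *vi = container_of(napi, struct virtnet_info, napi);
        unsigned int len, received = 0;
        void *buf;

        /* Drain completed receive buffers with RX callbacks still masked. */
        while (received < budget &&
               (buf = virtqueue_get_buf(vi->rvq, &len)) != NULL) {
                receive_buf(vi->dev, buf, len);
                received++;
        }

        if (received < budget) {
                /* Ring ran dry: stop polling and re-enable the callback,
                 * but if a buffer slipped in meanwhile, mask again and
                 * reschedule instead of taking an interrupt for it. */
                napi_complete(napi);
                if (unlikely(!virtqueue_enable_cb(vi->rvq)) &&
                    napi_schedule_prep(napi)) {
                        virtqueue_disable_cb(vi->rvq);
                        __napi_schedule(napi);
                }
        }
        return received;
}

The open question in the thread is whether the notification (kick) direction
of the receive queue can get the same treatment.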
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
We've been doing some more experimenting with the small packet network
performance problem in KVM. I have a different setup than what Steve D. was
using so I re-baselined things on the kvm.git kernel on both the host and
guest with a 10GbE adapter.
On Mon, Mar 07, 2011 at 04:31:41PM -0600, Tom Lendacky wrote:
I used the uperf tool to do this after verifying the results against netperf.
Uperf allows the specification of the number of connections as a parameter in
an XML file as opposed to launching, in this case, 100 separate
We've been doing some more experimenting with the small packet network
performance problem in KVM. I have a different setup than what Steve D. was
using so I re-baselined things on the kvm.git kernel on both the host and
guest with a 10GbE adapter. I also made use of the virtio-stats patch.