On Wed, 2011-02-02 at 23:20 +0200, Michael S. Tsirkin wrote:
> > On Wed, 2011-02-02 at 22:17 +0200, Michael S. Tsirkin wrote:
> > > Well, this is also the only case where the queue is stopped, no?
> > Yes. I got some debugging data. I saw that sometimes there were so
> > many packets waiting to be freed in the guest between vhost_signal
> > and the guest xmit callback.
>
> What does this mean?
Let's look at the sequence here:
guest start_xmit()
        xmit_skb()
        if ring is full
                enable_cb()

guest skb_xmit_done()
        disable_cb()
        printk free_old_xmit_skbs   <-- this was between more than 1/2
                                        and the full ring size
        printk vq->num_free

vhost handle_tx()
        if (guest interrupt is enabled)
                signal guest to free xmit buffers
So between the point where the guest fills the ring, stops the queue and
enables the callback, and the point where it receives the callback from
the host and runs free_old_xmit_skbs, between half a ring and a full ring
of descriptors had already been completed and were waiting to be freed. I
had expected only a few. (I disabled your vhost patch for this test.)
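
For reference, the guest path I'm describing looks roughly like this. It
is only a simplified sketch following the virtio_net driver of this era
(drivers/net/virtio_net.c); the capacity recheck after enable_cb and most
error handling are trimmed, and the debug printks are mine, not mainline
code:

static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct virtnet_info *vi = netdev_priv(dev);
        int capacity;

        /* Reclaim buffers the host has already consumed. */
        free_old_xmit_skbs(vi);

        /* Queue this skb on the TX virtqueue and kick the host. */
        capacity = xmit_skb(vi, skb);
        virtqueue_kick(vi->svq);

        if (capacity < 2 + MAX_SKB_FRAGS) {
                /* Ring is (nearly) full: stop the queue and ask for an
                 * interrupt once the host has used more entries. */
                netif_stop_queue(dev);
                virtqueue_enable_cb(vi->svq);
        }
        return NETDEV_TX_OK;
}

/* TX virtqueue callback: runs after the host signals the guest. */
static void skb_xmit_done(struct virtqueue *svq)
{
        struct virtnet_info *vi = svq->vdev->priv;

        /* Suppress further interrupts while we clean up. */
        virtqueue_disable_cb(svq);

        /* My debug printks sit here: count what free_old_xmit_skbs()
         * reclaims and print vq->num_free; in these runs it was between
         * half and a full ring. */
        netif_wake_queue(vi->dev);
}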
> > Looks like too much time is spent between vhost_signal and the guest
> > xmit callback?
>
>
>
> > > > I tried to accumulate multiple guest to host notifications for TX
> > > > xmits, it did help multiple streams TCP_RR results;
> > > I don't see a point to delay used idx update, do you?
> >
> > It might cause each vhost handle_tx run to process more packets.
>
> I don't understand. It's a couple of writes - what is the issue?
Oh, handle_tx could process more packets per run in the multiple-streams
TCP_RR case. I need to print out how many packets each run processes to
confirm this.
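
Something like the following would confirm it. This is only a debug
sketch against the handle_tx loop in drivers/vhost/net.c; the pkts
counter and the pr_debug line are additions for this measurement, not
existing code, and the sendmsg details are elided:

/* Inside handle_tx(), drivers/vhost/net.c -- debug-only sketch. */
unsigned int pkts = 0;

for (;;) {
        head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
                                 ARRAY_SIZE(vq->iov),
                                 &out, &in, NULL, NULL);
        if (head == vq->num)
                break;          /* ring drained, nothing more to send */

        /* ... build the msghdr and call sock->ops->sendmsg() as usual ... */

        vhost_add_used_and_signal(&net->dev, vq, head, 0);
        pkts++;
}

/* How many packets this handle_tx run drained from the TX ring. */
pr_debug("handle_tx: processed %u packets this run\n", pkts);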
Shirley