On Tue, 2011-11-29 at 16:58 +0200, Avi Kivity wrote:
> On 11/29/2011 04:54 PM, Michael S. Tsirkin wrote:
> > > 
> > > Which is actually strange, weren't indirect buffers introduced to make
> > > the performance *better*? From what I see it's pretty much the
> > > same/worse for virtio-blk.
> >
> > I know they were introduced to allow adding very large bufs.
> > See 9fa29b9df32ba4db055f3977933cd0c1b8fe67cd
> > Mark, you wrote the patch, could you tell us which workloads
> > benefit the most from indirect bufs?
> >
> 
> Indirects are really for block devices with many spindles, since there
> the limiting factor is the number of requests in flight.  Network
> interfaces are limited by bandwidth, it's better to increase the ring
> size and use direct buffers there (so the ring size more or less
> corresponds to the buffer size).
> 
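For anyone following the descriptor-layout argument above: here is a rough,
self-contained userspace sketch of what an indirect descriptor table looks
like on the split ring. The struct vring_desc layout and the VRING_DESC_F_*
flag values follow linux/virtio_ring.h; the segment count and buffer sizes
are made up purely for illustration, and this is just a picture of the data
structure, not the actual driver path.

/*
 * Sketch of a split-ring indirect descriptor table.
 * Layout/flags follow linux/virtio_ring.h; segment count and
 * buffer sizes are invented for illustration only.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define VRING_DESC_F_NEXT     1  /* chain continues via 'next' */
#define VRING_DESC_F_WRITE    2  /* device writes into this buffer */
#define VRING_DESC_F_INDIRECT 4  /* 'addr' points at a descriptor table */

struct vring_desc {
	uint64_t addr;   /* guest-physical address of the buffer */
	uint32_t len;    /* buffer length in bytes */
	uint16_t flags;
	uint16_t next;   /* index of the next descriptor in the chain */
};

int main(void)
{
	/* Pretend we have a request made of 3 buffer segments. */
	enum { NSEGS = 3 };
	char *segs[NSEGS];
	size_t seg_len = 4096;

	/*
	 * With VIRTIO_RING_F_INDIRECT_DESC negotiated, the driver
	 * allocates a separate table with one descriptor per segment...
	 */
	struct vring_desc *table = calloc(NSEGS, sizeof(*table));
	for (int i = 0; i < NSEGS; i++) {
		segs[i] = malloc(seg_len);
		table[i].addr  = (uintptr_t)segs[i]; /* really a guest-phys addr */
		table[i].len   = (uint32_t)seg_len;
		table[i].flags = (i < NSEGS - 1) ? VRING_DESC_F_NEXT : 0;
		table[i].next  = (uint16_t)(i + 1);
	}

	/*
	 * ...while the ring itself holds a single descriptor pointing
	 * at that table, so one ring slot covers the whole request
	 * instead of NSEGS slots.
	 */
	struct vring_desc ring_slot = {
		.addr  = (uintptr_t)table,
		.len   = NSEGS * sizeof(*table),
		.flags = VRING_DESC_F_INDIRECT,
	};

	printf("request uses 1 ring slot for %d segments (%u bytes of table)\n",
	       NSEGS, (unsigned)ring_slot.len);

	for (int i = 0; i < NSEGS; i++)
		free(segs[i]);
	free(table);
	return 0;
}

So the win is in ring-slot consumption for large multi-segment requests (the
many-spindle block case Avi describes), not in per-byte cost, which would fit
the netperf numbers below coming out close to even.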

I did some testing of indirect descriptors under different workloads.

All tests were run on a 2-vCPU guest with vhost on, using simple netperf
TCP_STREAM runs.

Indirect desc off:
guest -> host, 1 stream: ~4600 Mbit/s
host -> guest, 1 stream: ~5900 Mbit/s
guest -> host, 8 streams: ~620 Mbit/s (on average)
host -> guest, 8 streams: ~600 Mbit/s (on average)

Indirect desc on:
guest -> host, 1 stream: ~4900 Mbit/s
host -> guest, 1 stream: ~5400 Mbit/s
guest -> host, 8 streams: ~620 Mbit/s (on average)
host -> guest, 8 streams: ~600 Mbit/s (on average)

So for a single stream, guest-to-host throughput improves (~4600 -> ~4900
Mbit/s) while host-to-guest drops (~5900 -> ~5400 Mbit/s) when indirect
descriptors are on; the 8-stream numbers are essentially unchanged.

-- 

Sasha.
