> 
> > > Can you comment on that? Can a user also reduce the problem by
> > > configuring
> > > a) a larger virtio Tx queue size (up to 1K) in Qemu, or
> >
> > Is this possible right now without modifying QEMU src? I think the size is
> > hardcoded to 256 at the moment although it may become configurable in the
> > future. If/when it does, we can test and update the docs if it does solve
> > the problem. I don’t think we should suggest modifying the QEMU src as a
> > workaround now.
> 
> The possibility to configure the tx queue size has been upstreamed in Qemu
> 2.10:
> 
> commit 9b02e1618cf26aa52cf786f215d757506dda14f8
> Author: Wei Wang <[email protected]>
> Date:   Wed Jun 28 10:37:59 2017 +0800
> 
>     virtio-net: enable configurable tx queue size
> 
>     This patch enables the virtio-net tx queue size to be configurable
>     between 256 (the default queue size) and 1024 by the user when the
>     vhost-user backend is used....
> 
> So you should be able to test larger tx queue sizes with Qemu 2.10.

That's good news, thanks for sharing the details.
I tested with tx_queue_size=1024 and it didn't resolve the issue completely, 
but it allowed a greater number of txq descriptors for the NIC:
For the default QEMU VQ size = 256, max n_txq_desc value = 256
For QEMU VQ size = 1024, max n_txq_desc value = 512
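
In case anyone wants to reproduce this, the relevant part of the QEMU command
line looks roughly as follows (illustrative only: the chardev/netdev ids, the
socket path and the rest of the VM arguments are placeholders, and a vhost-user
port also needs the usual shared-memory backend options):

  -chardev socket,id=char0,path=/tmp/dpdkvhostuser0
  -netdev type=vhost-user,id=net0,chardev=char0,vhostforce
  -device virtio-net-pci,netdev=net0,tx_queue_size=1024

The NIC Tx descriptor count on the OVS side can then be set with something
like (port name is a placeholder):

  ovs-vsctl set Interface dpdk0 options:n_txq_desc=512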

> 
> >
> > > b) a larger mempool for packets in Tx direction inside the guest (driver?)
> >
> > Using the DPDK driver in the guest & generating traffic via testpmd I
> > modified the number of descriptors given to the virtio device from
> > 512 (default) to 2048 & 4096 but it didn't resolve the issue unfortunately.
> 
> I re-read the virtio 1.0 spec and it states that the total number of virtio
> descriptors per virtqueue equals the size of the virtqueue. Descriptors just
> point to guest mbufs. The mempool the guest driver uses for mbufs is
> irrelevant. OVS as virtio device needs to return the virtio descriptors to the
> guest driver. That means the virtio queue size sets the limit on the packets
> in flight in OVS and physical NICs.
> 
> I would like to add a statement in the documentation that explains this
> dependency between Qemu Tx queue size and maximum physical NIC Tx
> queue size when using the vhost zero copy feature on a port.

I will put my findings above in the documentation.
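
As a rough sketch of what that section could show (port names and values are
just examples, and I'm assuming the dq-zero-copy option name from the
vhost-user docs for enabling the feature):

  # enable dequeue zero copy on the vhost-user port
  ovs-vsctl set Interface dpdkvhostuserclient0 options:dq-zero-copy=true
  # keep the physical NIC Tx queue size within the limit imposed by the
  # virtio Tx queue size when zero copy is enabled
  ovs-vsctl set Interface dpdk0 options:n_txq_desc=128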

> 
> > > > > And what about increased packet drop risk due to shortened tx
> > > > > queues?
> > > >
> > > > I guess this could be an issue. If I had some data to back this up I
> > > > would include it in the documentation and mention the risk.
> > > > If the risk is unacceptable to the user they may choose to not enable
> > > > the feature. It's disabled by default so shouldn't introduce an issue
> > > > for the standard case.
> > >
> > > Yes, but it would be good to understand the potential drawback for a
> > > better judgement of the trade-off between better raw throughput and
> > > higher loss risk.
> >
> > I ran RFC2544 0% packet loss tests for ZC on & off (64B PVP) and observed
> > the following:
> >
> > Max rate (pps) with 0% loss
> > ZC Off 2599518
> > ZC On  1678758
> >
> > As you suspected, there is a trade-off. I can mention this in the docs.
> 
> That degradation looks severe.
> It would be cool if you could re-run the test with a 1K queue size configured
> in Qemu 2.10 and NIC

I ran a couple of configurations, again 64B RFC2544 PVP:

NIC-TXD    Virtio-TXD    ZC     Mpps
2048       256           off    2.105    # default case
128        256           off    2.162    # checking effect of modifying NIC TXD (positive)
2048       1024          off    2.455    # checking effect of modifying Virtio TXD (positive)
128        256           on     1.587    # default zero copy case
512        1024          on     0.321    # checking effect of modifying NIC & Virtio TXD (negative)

For the default non-zero copy case, it seems increasing the virtio queue size
in the guest has a positive effect wrt packet loss, but it has the opposite
effect in the zero copy case.
It looks like the zero copy feature may increase the likelihood of packet loss,
which I guess is the trade-off for the increased pps you get with the feature.
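
For reference, the Virtio-TXD values above correspond to the descriptor counts
passed to testpmd in the guest, roughly along these lines (core/memory EAL
arguments are placeholders):

  testpmd -l 0-2 -n 4 -- -i --txd=1024 --rxd=1024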

Thanks,
Ciara


> 
> Regards,
> Jan