Hi Ilya,

> >
> > With a simple pvp setup of mine.
> > 1c/2t poll two physical ports.
> > 1c/2t poll four vhost ports with 16 queues each.
> >   Only one queue is enabled on each virtio device attached by the guest.
> >   The first two virtio devices are bound to the virtio kmod.
> > The last two virtio devices are bound to vfio-pci and used to forward
> > incoming traffic with testpmd.
> >
> > The forwarding zeroloss rate goes from 5.2Mpps (polling all 64 vhost
> > queues) to 6.2Mpps (polling only the 4 enabled vhost queues).
> 
> That's interesting. However, this doesn't look like a realistic scenario.
> In practice you'll need many more PMD threads to handle so many queues.
> If you add more threads, the zeroloss test could show even worse results if
> one of the idle VMs periodically changes its number of queues. Periodic
> latency spikes will cause queue overruns and subsequent packet drops on hot
> Rx queues. This could be partially solved by allowing n_rxq to grow only.
> However, I'd be happy to have a different solution that does not hide the
> number of queues from the datapath.
> 

I am afraid it is not a valid assumption that there will be nearly as many 
OVS PMD threads as there are queues.

In OpenStack deployments, OVS is typically statically configured to use a few 
dedicated host CPUs for PMD threads (perhaps 2-8).

Typical Telco VNF VMs, on the other hand, are very large (12-20 vCPUs or even 
more). If an instance is enabled for multi-queue in Nova, Nova (in its eternal 
wisdom) will set up every vhostuser port with one queue pair per vCPU. A 
(real-world) VM with 20 vCPUs and 6 ports would have 120 queue pairs, even if 
only one or two high-traffic ports can actually profit from multi-queue. Even 
on those ports it is unlikely that the application will use all 16 queues. And 
often there would be another such VM on the second NUMA node.

So, as soon as a VNF enables MQ in OpenStack, there will typically be a vast 
number of unused queue pairs in OVS, and it makes a lot of sense to minimize 
the run-time impact of having these around.
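
Just to illustrate the direction (a minimal sketch, not the actual patch under
discussion): rte_vhost already tells the backend which rings the guest has
enabled through the vring_state_changed() callback, so the datapath could keep
a per-queue flag and skip disabled rings in its polling loop. The
MAX_QUEUE_PAIRS bound and the rxq_enabled[] bookkeeping below are illustrative
only:

/* Sketch only: remember which rings the guest has enabled so the PMD loop
 * can skip the rest.  vring_state_changed() and
 * rte_vhost_driver_callback_register() are existing rte_vhost API; the
 * bookkeeping is simplified (per-device state, locking etc. omitted). */
#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>

#define MAX_QUEUE_PAIRS 16                  /* illustrative bound */

static bool rxq_enabled[MAX_QUEUE_PAIRS];   /* per vhost device in real code */

static int
vring_state_changed(int vid, uint16_t queue_id, int enable)
{
    uint16_t qpair = queue_id / VIRTIO_QNUM;

    (void)vid;
    /* The ring the guest transmits on (VIRTIO_TXQ) is the one the host
     * polls for receive, so that is the state worth tracking here. */
    if (queue_id % VIRTIO_QNUM == VIRTIO_TXQ && qpair < MAX_QUEUE_PAIRS) {
        rxq_enabled[qpair] = !!enable;
    }
    return 0;
}

static const struct vhost_device_ops ops = {
    .vring_state_changed = vring_state_changed,
};

/* Register against the vhost-user socket path of the port. */
static int
register_enabled_queue_tracking(const char *path)
{
    return rte_vhost_driver_callback_register(path, &ops);
}

The receive loop can then check rxq_enabled[q] before calling
rte_vhost_dequeue_burst(), rather than polling rings that will always be
empty.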

We have had discussions earlier with Red Hat about how a vhostuser backend 
like OVS could negotiate the number of queue pairs with QEMU down to a 
reasonable value (e.g. the number of PMD threads available for polling) 
*before* QEMU actually starts the guest. The guest would then not have to 
guess at the optimal number of queue pairs to activate.
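
For reference, the vhost-user protocol already has a hook in this direction:
when the VHOST_USER_PROTOCOL_F_MQ protocol feature is negotiated, QEMU asks
the backend how many queues it supports (VHOST_USER_GET_QUEUE_NUM). One
conceivable approach is for the backend to report a smaller maximum there.
The helper below is purely hypothetical (rte_vhost answers that message
internally today); it only illustrates the policy, not an existing OVS or
DPDK API:

/* Hypothetical policy sketch, not an existing API: cap the queue pairs
 * advertised to QEMU at what the PMD threads can realistically poll. */
#include <stdint.h>

static uint32_t
advertised_queue_pairs(uint32_t backend_max_qpairs, uint32_t n_pmd_threads)
{
    uint32_t cap = n_pmd_threads ? n_pmd_threads : 1;
    return backend_max_qpairs < cap ? backend_max_qpairs : cap;
}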

BR, Jan