On 10.04.2019 16:12, Jan Scheurich wrote:
> Hi Ilya,
> 
>>>
>>> With a simple pvp setup of mine.
>>> 1c/2t poll two physical ports.
>>> 1c/2t poll four vhost ports with 16 queues each.
>>>   Only one queue is enabled on each virtio device attached by the guest.
>>>   The first two virtio devices are bound to the virtio kmod.
>>>   The last two virtio devices are bound to vfio-pci and used to
>>>   forward incoming traffic with testpmd.
>>>
>>> The forwarding zeroloss rate goes from 5.2Mpps (polling all 64 vhost
>>> queues) to 6.2Mpps (polling only the 4 enabled vhost queues).
>>
>> That's interesting. However, this doesn't look like a realistic scenario.
>> In practice you'll need many more PMD threads to handle so many queues.
>> If you add more threads, a zeroloss test could show even worse results if
>> one of the idle VMs periodically changes its number of queues. Periodic
>> latency spikes will cause queue overruns and subsequent packet drops on
>> hot Rx queues. This could be partially solved by allowing n_rxq only to
>> grow. However, I'd be happy to have a different solution that does not
>> hide the number of queues from the datapath.
>>
> 
> I am afraid it is not a valid assumption that there will be a similarly
> large number of OVS PMD threads as there are queues.
> 
> In OpenStack deployments, OVS is typically statically configured to use a
> few dedicated host CPUs for PMDs (perhaps 2-8).
> 
> Typical Telco VNF VMs, on the other hand, are very large (12-20 vCPUs or
> even more). If multi-queue is enabled for an instance in Nova, Nova (in
> its eternal wisdom) will set up every vhostuser port with #vCPU queue
> pairs.

For me, it's an issue of Nova. It's pretty easy to limit the maximum number
of queue pairs to some sane value (a value that can be handled by your
number of available PMD threads). It would be one config option and a small
patch to nova-compute. With a bit more work you could make this per-port
configurable and finally stop wasting HW resources.
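
Roughly something like the sketch below (hypothetical Python, not actual
nova-compute code; the option name and helper are made up just to illustrate
the clamping idea):

# Hypothetical illustration only, not real Nova code.
MAX_QUEUE_PAIRS = 4  # assumed config option, e.g. the number of PMD threads

def effective_queue_pairs(vcpus, per_port_request=None,
                          max_queue_pairs=MAX_QUEUE_PAIRS):
    """Queue pairs to request for one vhost-user port.

    Today the guest vCPU count is used directly; here it is clamped to a
    configured maximum so unused queues are not created in OVS/QEMU.
    """
    requested = per_port_request if per_port_request is not None else vcpus
    return max(1, min(requested, max_queue_pairs))

# A 20-vCPU guest gets 4 queue pairs per port instead of 20.
print(effective_queue_pairs(vcpus=20))                      # -> 4
print(effective_queue_pairs(vcpus=20, per_port_request=1))  # -> 1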

> A (real world) VM with 20 vCPUs and 6 ports would have 120 queue pairs,
> even if only one or two high-traffic ports can actually profit from
> multi-queue. Even on those ports it is unlikely that the application will
> use all 16 queues. And often there would be another such VM on the second
> NUMA node.

By limiting the number of queue pairs in Nova to 4 (as described above),
you'd have just 24 queue pairs for 6 ports instead of 120. If you make the
limit per-port, you'll be able to restrict it to even saner values.
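
Just to spell out the arithmetic (a trivial Python check, using the numbers
from your example):

ports = 6
vcpus = 20   # queue pairs per port today: Nova uses the vCPU count
cap = 4      # queue pairs per port with the proposed limit

print(ports * vcpus)  # 120 queue pairs configured today
print(ports * cap)    # 24 queue pairs with the limit applied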

> 
> So, as soon as a VNF enables MQ in OpenStack, there will typically be a
> vast number of unused queue pairs in OVS, and it makes a lot of sense to
> minimize the run-time impact of having these around.

To me this does not look like an OVS, DPDK or QEMU issue. The orchestrator
should configure sane values in the first place. It's totally unclear why
we're changing OVS instead of changing Nova.

> 
> We have had discussions earlier with RedHat as to how a vhostuser backend
> like OVS could negotiate the number of queue pairs with Qemu down to a
> reasonable value (e.g. the number of PMDs available for polling) *before*
> Qemu would actually start the guest. The guest would then not have to
> guess at the optimal number of queue pairs to actually activate.
> 
> BR, Jan
> 