Hello,
On Wed, May 15, 2019 at 12:04 PM Ilya Maximets <[email protected]>
wrote:
> On 25.04.2019 18:22, David Marchand wrote:
> > We currently poll all available queues based on the max queue count
> > exchanged with the vhost peer and rely on the vhost library in DPDK to
> > check the vring status beneath.
> > This can lead to some overhead when we have a lot of unused queues.
> >
> > To improve the situation, we can skip the disabled queues.
> > On rxq state notifications, we bump the netdev's change_seq number so
> > that the pmd thread main loop can periodically refresh its cached view
> > of the queue states.
> >
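[Editorial note, not part of the original message: below is a standalone,
simplified sketch of the mechanism described above -- a "vring state
changed" notification flips a per-rxq enable flag and bumps a change
sequence number, and the polling loop refreshes its cached flags only when
that number has moved. In OVS/DPDK terms the notification would arrive via
the vhost vring_state_changed() callback and the invalidation via the
netdev change_seq mentioned above; the struct, field and function names
below are illustrative, not the actual OVS symbols.]

    /* Simplified model of "skip disabled queues + change_seq caching". */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_RXQ 16

    struct sketch_vhost_port {
        atomic_uint_fast64_t change_seq;   /* bumped on any state change */
        atomic_bool rxq_enabled[MAX_RXQ];  /* per-rxq enable state */
    };

    /* Notification path: record the new queue state and tell pollers
     * that something changed. */
    static void
    sketch_set_rxq_state(struct sketch_vhost_port *port, int qid, bool enable)
    {
        atomic_store(&port->rxq_enabled[qid], enable);
        atomic_fetch_add(&port->change_seq, 1);
    }

    /* One iteration of the polling loop: re-read the enable flags only
     * when the sequence number changed since the last refresh, then poll
     * only the queues cached as enabled. */
    static void
    sketch_poll_iteration(struct sketch_vhost_port *port, uint64_t *cached_seq,
                          bool cached_enabled[MAX_RXQ])
    {
        uint64_t seq = atomic_load(&port->change_seq);

        if (seq != *cached_seq) {
            for (int q = 0; q < MAX_RXQ; q++) {
                cached_enabled[q] = atomic_load(&port->rxq_enabled[q]);
            }
            *cached_seq = seq;
        }
        for (int q = 0; q < MAX_RXQ; q++) {
            if (cached_enabled[q]) {
                /* rx burst on queue q would go here */
            }
        }
    }

    int
    main(void)
    {
        struct sketch_vhost_port port = { 0 };
        uint64_t cached_seq = UINT64_MAX;  /* force an initial refresh */
        bool cached[MAX_RXQ] = { false };

        sketch_set_rxq_state(&port, 0, true); /* queue 0 gets enabled */
        sketch_poll_iteration(&port, &cached_seq, cached);
        printf("rxq 0 enabled: %d\n", cached[0]);
        return 0;
    }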
> > $ ovs-appctl dpif-netdev/pmd-rxq-show
> > pmd thread numa_id 0 core_id 1:
> >   isolated : true
> >   port: dpdk0 queue-id: 0 (enabled) pmd usage: 0 %
> > pmd thread numa_id 0 core_id 2:
> >   isolated : true
> >   port: vhost1 queue-id: 0 (enabled) pmd usage: 0 %
> >   port: vhost3 queue-id: 0 (enabled) pmd usage: 0 %
> > pmd thread numa_id 0 core_id 15:
> >   isolated : true
> >   port: dpdk1 queue-id: 0 (enabled) pmd usage: 0 %
> > pmd thread numa_id 0 core_id 16:
> >   isolated : true
> >   port: vhost0 queue-id: 0 (enabled) pmd usage: 0 %
> >   port: vhost2 queue-id: 0 (enabled) pmd usage: 0 %
> >
> > $ while true; do
> >     ovs-appctl dpif-netdev/pmd-rxq-show | awk '
> >     /port: / {
> >         tot++;
> >         if ($5 == "(enabled)") {
> >             en++;
> >         }
> >     }
> >     END {
> >         print "total: " tot ", enabled: " en
> >     }'
> >     sleep 1
> > done
> >
> > total: 6, enabled: 2
> > total: 6, enabled: 2
> > ...
> >
> > # Started the vm; virtio devices are bound to the kernel driver, which
> > # enables F_MQ + all queue pairs
> > total: 6, enabled: 2
> > total: 66, enabled: 66
> > ...
> >
> > # Unbound vhost0 and vhost1 from the kernel driver
> > total: 66, enabled: 66
> > total: 66, enabled: 34
> > ...
> >
> > # Configured the kernel-bound devices to use only 1 queue pair
> > total: 66, enabled: 34
> > total: 66, enabled: 19
> > total: 66, enabled: 4
> > ...
> >
> > # While rebooting the vm
> > total: 66, enabled: 4
> > total: 66, enabled: 2
> > ...
> > total: 66, enabled: 66
> > ...
> >
> > # After shutting down the vm
> > total: 66, enabled: 66
> > total: 66, enabled: 2
> >
> > Signed-off-by: David Marchand <[email protected]>
> > ---
> >
> > Changes since v2:
> > - Ilya comments
> > - Kevin comments on "dpif-netdev/pmd-rxq-show" output
> > - Updated unit tests accordingly
> >
> > Changes since v1:
> > - only indicate disabled queues in dpif-netdev/pmd-rxq-show output
> > - Ilya comments
> > - no need for a struct as we only need a boolean per rxq
> > - "rx_q" is generic, while we only care about this in the vhost case,
> >   so it is renamed "vhost_rxq_enabled",
> > - add missing rte_free on allocation error,
> > - vhost_rxq_enabled is freed in vhost destruct only,
> > - rxq0 is enabled at the virtio device activation to accommodate
> >   legacy implementations which would not report per-queue states
> >   later (sketched in the note after this changelog),
> > - do not mix boolean with integer,
> > - do not use bit operand on boolean,
> >
> > ---
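[Editorial note, not part of the original message: a minimal sketch of the
"rxq0 is enabled at the virtio device activation" changelog item above. It
is shaped after DPDK's new_device() vhost activation callback with the
vid-to-port lookup elided; the structure is illustrative, only the
vhost_rxq_enabled name is taken from the changelog.]

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_RXQ 16

    struct sketch_vhost_dev {
        bool vhost_rxq_enabled[MAX_RXQ];  /* name from the changelog */
    };

    /* Called once the virtio device is activated: enable queue pair 0
     * unconditionally, because legacy virtio implementations may never
     * send per-queue VHOST_USER_SET_VRING_ENABLE messages afterwards. */
    static int
    sketch_device_activated(struct sketch_vhost_dev *dev)
    {
        if (dev == NULL) {
            return -1;
        }
        dev->vhost_rxq_enabled[0] = true;
        return 0;
    }

    int
    main(void)
    {
        struct sketch_vhost_dev dev = { .vhost_rxq_enabled = { false } };

        assert(sketch_device_activated(&dev) == 0);
        assert(dev.vhost_rxq_enabled[0]);
        return 0;
    }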
>
> Hi.
>
> I performed some tests on my usual setup (PVP with a bonded phy) without
> any disabled queues and saw no noticeable performance difference. So, it's
> OK for me.
>
> I have one style comment inline (which probably could be fixed while
> applying the patch).
>
> Beside that:
>
> Acked-by: Ilya Maximets <[email protected]>
>
Thanks Ilya.
What is the next step?
Are we taking the 3 patches once 18.11.2 is out?
I can send an n+1 patchset with the style fix.
--
David Marchand
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev