Hi Wan Junjie,

Not a full code review, but comments on the feature below.

On 02/03/2022 10:59, Wan Junjie wrote:
     A pmd polls all of its rxqs with no weighting. When a pmd has one
     rxq from a phy port and several from vhu ports, and there is high
     load on both rx and tx, the vhu rxqs get polled more often. The
     phy port's rx is then polled much less than the vhu ports', and
     the rx/tx loads lose balance. With traffic in both directions, rx
     is limited to a very low rate because the phy port is polled less.


That's an interesting observation.
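
Right - the pmd main loop gives every rxq in its poll list one burst per
iteration, regardless of port type. A rough sketch of that behaviour
(simplified, not the actual dpif-netdev code):

/* Simplified sketch: each rxq in the pmd's poll list gets one burst
 * per loop iteration, so a lone phy rxq shares the pmd equally with
 * every vhost rxq.  Not the actual dpif-netdev code. */
#include <stdio.h>

struct rxq_poll {
    const char *port;
    int qid;
};

static void
poll_rxq(const struct rxq_poll *p)
{
    /* Stand-in for receiving and processing one burst from the rxq. */
    printf("poll %s rxq %d\n", p->port, p->qid);
}

int
main(void)
{
    /* pmd 0 from your example below: one phy rxq, four vhost rxqs. */
    struct rxq_poll poll_list[] = {
        { "phy0", 0 }, { "vhu0", 0 }, { "vhu0", 4 },
        { "vhu0", 8 }, { "vhu0", 12 },
    };
    size_t poll_cnt = sizeof poll_list / sizeof poll_list[0];

    /* One pass of the pmd busy loop: the phy rxq gets 1 poll in 5. */
    for (size_t i = 0; i < poll_cnt; i++) {
        poll_rxq(&poll_list[i]);
    }
    return 0;
}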

     For example, the poll list for each pmd is originally like below:
     pmd 0  phy0_0 vhu0_0 vhu0_4 vhu0_8 vhu0_12
     pmd 1  phy0_1 vhu0_1 vhu0_5 vhu0_9 vhu0_13
     pmd 2  phy0_2 vhu0_2 vhu0_6 vhu0_10 vhu0_14
     pmd 3  phy0_3 vhu0_3 vhu0_7 vhu0_11 vhu0_15

     With traffic in both directions, rx is limited to 2 Mpps and
     tx is 9 Mpps.


Can you explain this a bit more?

Are you saying you have 2 independent paths?
- an ingress path of phy->ovs->vm, 2 Mpps
- a separate egress path of vm->ovs->phy, 9 Mpps

     This patch provides an option to reinforce phy port polling. It
     adds a configuration for rxq scheduling that tries to balance
     polling between the phy and vhu ports: it increases the number of
     times a phy rxq is polled by interlacing the phy rxqs with the
     vhu rxqs in the poll list.

     The scaled rxq poll list:
     pmd 0  phy0_0 vhu0_0 phy0_0 vhu0_4 phy0_0 vhu0_8 phy0_0 vhu0_12
     pmd 1  phy0_1 vhu0_1 phy0_1 vhu0_5 phy0_1 vhu0_9 phy0_1 vhu0_13
     pmd 2  phy0_2 vhu0_2 phy0_2 vhu0_6 phy0_2 vhu0_10 phy0_2 vhu0_14
     pmd 3  phy0_3 vhu0_3 phy0_3 vhu0_7 phy0_3 vhu0_11 phy0_3 vhu0_15
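
If I've read the scheme right, the scaled list above is built by
re-inserting the pmd's phy rxq before each vhost rxq in its list. A
rough sketch of that construction as I understand it (not the patch
code, and it assumes exactly one phy rxq per pmd):

/* Rough sketch of the 'scaling' list construction as I read it: the
 * phy rxqs assigned to a pmd are interlaced with its vhost rxqs, so
 * a phy rxq is re-polled before every vhost poll.  Not the patch
 * code; assumes the one-phy-rxq-per-pmd case from the example. */
#include <stdio.h>

struct rxq {
    const char *port;
    int qid;
};

static size_t
build_scaled_list(const struct rxq *phy, size_t n_phy,
                  const struct rxq *vhu, size_t n_vhu,
                  struct rxq *out)
{
    size_t n = 0;

    for (size_t i = 0; i < n_vhu; i++) {
        out[n++] = phy[i % n_phy];   /* re-insert a phy rxq */
        out[n++] = vhu[i];
    }
    return n;
}

int
main(void)
{
    /* pmd 0 from the example above. */
    struct rxq phy[] = { { "phy0", 0 } };
    struct rxq vhu[] = {
        { "vhu0", 0 }, { "vhu0", 4 }, { "vhu0", 8 }, { "vhu0", 12 },
    };
    struct rxq scaled[8];
    size_t n = build_scaled_list(phy, 1, vhu, 4, scaled);

    for (size_t i = 0; i < n; i++) {
        printf("%s_%d ", scaled[i].port, scaled[i].qid);
    }
    /* Prints: phy0_0 vhu0_0 phy0_0 vhu0_4 phy0_0 vhu0_8 phy0_0 vhu0_12 */
    printf("\n");
    return 0;
}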


This looks like a custom scheme that might suit only a very limited config,
but it is not clear what that config needs to be, or how it should work
with a different config. The scheme should either be made more generic,
with clear behaviour for any other config (ports/rxqs/cores/pinning etc.),
or be forbidden with other configs.

I did some testing and, after changing a few config items, it stopped
polling several rxqs, which results in no traffic being passed. My testing
notes are below [0].

You mentioned in the commit message above that rxq weights were missing,
and it sounded like allowing the user to set port weights would be a more
generic version of what you are proposing. However, weights would only be
relevant to the rxqs on a particular pmd, and not between rxqs on
different pmds, so I'm not sure it would work.
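
To show what I mean by weights, a purely hypothetical sketch (this is not
an existing OVS knob): each rxq carries a user-derived weight and gets
that many bursts per pass of the pmd loop that owns it.

/* Hypothetical per-rxq weights, only to illustrate the idea above;
 * not an existing OVS interface.  A weight-N rxq gets N bursts per
 * pass of the pmd loop, but only relative to rxqs on the same pmd. */
#include <stdio.h>

struct weighted_rxq {
    const char *port;
    int qid;
    int weight;                      /* hypothetical user-set weight */
};

int
main(void)
{
    struct weighted_rxq poll_list[] = {
        { "phy0", 0, 4 },            /* poll the phy rxq 4x per pass */
        { "vhu0", 0, 1 }, { "vhu0", 4, 1 },
        { "vhu0", 8, 1 }, { "vhu0", 12, 1 },
    };
    size_t n = sizeof poll_list / sizeof poll_list[0];

    for (size_t i = 0; i < n; i++) {
        for (int w = 0; w < poll_list[i].weight; w++) {
            printf("poll %s rxq %d\n", poll_list[i].port,
                   poll_list[i].qid);
        }
    }
    return 0;
}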

thanks,
Kevin.

     To enable it, run:
     'ovs-vsctl set open . other_config:pmd-rxq-schedule=scaling'
     To disable it, remove the setting or set it to 'single'.
     It works fairly well when the n_rxq of the dpdk phy port equals
     the number of dpdk pmds, i.e. one phy rxq per pmd.

Signed-off-by: Wan Junjie <[email protected]>
Reviewed-by: He Peng <[email protected]>
---
  lib/dpif-netdev.c | 133 ++++++++++++++++++++++++++++++++++++++++++----
  1 file changed, 122 insertions(+), 11 deletions(-)


[0]
- enable 'scale' and add myport dpdk phy nic and vhost port

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  1 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  2 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  3 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %


dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq dpdkvhost0 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 2 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 3 rxq dpdkvhost0 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 4 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 5 rxq dpdkvhost0 3
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 6 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 7 rxq dpdkvhost0 0


- Add another phy nic

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  1 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  2 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  3 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  0 (rescaled)  pmd usage:  0 %

dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 0 rxq urport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq dpdkvhost0 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 2 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 3 rxq dpdkvhost0 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 4 rxq urport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 5 rxq dpdkvhost0 3
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 6 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 7 rxq dpdkvhost0 0


- change number of rxqs for myport, n_rxq=4

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  1 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  2 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  3 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  3 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  0 (rescaled)  pmd usage:  0 %


dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 0 rxq myport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq dpdkvhost0 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 2 rxq urport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 3 rxq dpdkvhost0 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 4 rxq myport 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 5 rxq dpdkvhost0 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 6 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 7 rxq dpdkvhost0 3

*queue 3 for myport is not polled


- change number of rxq for urport, n_rxq=4

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  1 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  2 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  3 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  3 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  3 (rescaled)  pmd usage:  0 %


dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 0 rxq urport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq dpdkvhost0 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 2 rxq myport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 3 rxq dpdkvhost0 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 4 rxq urport 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 5 rxq dpdkvhost0 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 6 rxq urport 3
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 7 rxq dpdkvhost0 3

*myport queues 0,2,3 not polled, urport queue 0 not polled


- shutdown vm

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  3 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  3 (rescaled)  pmd usage:  0 %


dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 0 rxq urport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq myport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 2 rxq urport 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 3 rxq urport 3
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 4 rxq urport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 5 rxq myport 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 6 rxq myport 0
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 7 rxq myport 3

*dpdkvhost0 no longer polled

- change vm to 1 rxq and start vm

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  1 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  2 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  3 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  3 (rescaled)  pmd usage:  0 %


no debug output, as the vm has the same rxq setup as when it was shut down.

*rxq for vhost not polled


- change vm to 3 rxq

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  1 (enabled)   pmd usage:  0 %
  port: dpdkvhost0        queue-id:  2 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  3 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: urport            queue-id:  3 (rescaled)  pmd usage:  0 %


dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 0 rxq urport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq dpdkvhost0 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 2 rxq myport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 3 rxq dpdkvhost0 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 4 rxq urport 2
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 5 rxq dpdkvhost0 0


*myport queues 0,2,3 not polled, urport queues 0,3 not polled

- Remove urport

pmd thread numa_id 0 core_id 8:
  isolated : false
  port: dpdkvhost0        queue-id:  0 (enabled)   pmd usage:  0 %
  port: myport            queue-id:  0 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  1 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  2 (rescaled)  pmd usage:  0 %
  port: myport            queue-id:  3 (rescaled)  pmd usage:  0 %


dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 0 rxq myport 1
dpif_netdev(pmd-c08/id:8)|DBG|PMD 8: 1 rxq dpdkvhost0 0

*myport queues 0,2,3 not being polled

Maybe just a debug logging issue? Checked with gdb:

(gdb) p poll_cnt
$31 = 2
(gdb) p poll_list[0]->rxq->rx->netdev->name
$32 = 0x3bfade0 "myport"
(gdb) p poll_list[0]->rxq->rx->queue_id
$33 = 1
(gdb) p poll_list[1]->rxq->rx->netdev->name
$34 = 0x3bd6ce0 "dpdkvhost0"
(gdb) p poll_list[1]->rxq->rx->queue_id
$35 = 0
