On 22/03/2022 15:35, David Marchand wrote:
On Fri, Mar 11, 2022 at 6:06 PM Kevin Traynor <[email protected]> wrote:

Ensure that if there are no local numa pmd cores
available, pmd cores from all other non-local
numas will be used.

This could be squashed with patch 2.



I had found the bugs and tested the fixes without UTs originally, so the UTs came later.

I am ok with squashing if Ilya wants to do that at merge time, but I have a slight preference for separate commits, as that makes it quicker to verify that the UT produces a failure without the code fix: you can just reorder the commits and check out, rather than having to break up a single commit.
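Roughly, assuming the UT commit is reordered to come before the fix (commands are illustrative only):

    git rebase -i HEAD~2                  # put the UT commit before the fix
    git checkout HEAD~1                   # tree with the UT but not the fix
    make check TESTSUITEFLAGS='-k pmd'    # the new test should fail here
    git checkout -                        # back to the full series; it should pass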



Signed-off-by: Kevin Traynor <[email protected]>
---
  tests/pmd.at | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++-
  1 file changed, 61 insertions(+), 1 deletion(-)

diff --git a/tests/pmd.at b/tests/pmd.at
index a2f9d34a2..a5b0a2523 100644
--- a/tests/pmd.at
+++ b/tests/pmd.at
@@ -10,4 +10,11 @@ parse_pmd_rxq_show () {
  }

+# Given the output of `ovs-appctl dpif-netdev/pmd-rxq-show`,
+# prints the first rxq on each pmd in the form:
+# 'port:' port_name 'queue_id:' rxq_id
+parse_pmd_rxq_show_first_rxq () {
+   awk '/isolated/ {print  $4, $5, $6, $7}' | sort
+}
+
  # Given the output of `ovs-appctl dpif-netdev/pmd-rxq-show`,
  # and with queues for each core on one line, prints the rxqs
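For anyone reading along, a sketch of how the new helper is fed; the pmd-rxq-show sample below is illustrative:

    $ ovs-appctl dpif-netdev/pmd-rxq-show
    pmd thread numa_id 1 core_id 0:
      isolated : false
      port: p0  queue-id:  0 (enabled)  pmd usage: NOT AVAIL
      port: p0  queue-id:  2 (enabled)  pmd usage: NOT AVAIL

The caller below first joins each 'isolated' line with the rxq line that follows it (awk '/false$/ { printf("%s\t", $0); next } 1'), so the helper's /isolated/ pattern sees 'isolated : false<TAB>port: p0 queue-id: 0 ...' and fields $4-$7 yield 'port: p0 queue-id: 0', i.e. the first rxq of each pmd.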
@@ -200,5 +207,5 @@ OVS_VSWITCHD_STOP
  AT_CLEANUP

-AT_SETUP([PMD - pmd-cpu-mask - NUMA])
+AT_SETUP([PMD - pmd-cpu-mask - dual NUMA])
OVS_VSWITCHD_START([add-port br0 p0 -- set Interface p0 type=dummy-pmd options:n_rxq=8 options:numa_id=1 -- set Open_vSwitch . other_config:pmd-cpu-mask=1],
                     [], [], [--dummy-numa 1,1,0,0])
@@ -360,4 +367,57 @@ OVS_VSWITCHD_STOP
  AT_CLEANUP

+AT_SETUP([PMD - pmd-cpu-mask - multi NUMA])
+OVS_VSWITCHD_START([add-port br0 p0 \
+                    -- set Interface p0 type=dummy-pmd options:n_rxq=4 \
+                    -- set Interface p0 options:numa_id=0 \
+                    -- set Open_vSwitch . other_config:pmd-cpu-mask=0xf \
+                    -- set open_vswitch . other_config:pmd-rxq-assign=cycles],
+                   [], [], [--dummy-numa 1,2,1,2])
+
+TMP=$(($(cat ovs-vswitchd.log | wc -l | tr -d [[:blank:]])+1))
+AT_CHECK([ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=group])
+
+OVS_WAIT_UNTIL([tail -n +$TMP ovs-vswitchd.log | grep "Performing pmd to rx queue assignment using group algorithm"])
+OVS_WAIT_UNTIL([tail -n +$TMP ovs-vswitchd.log | grep "There's no available (non-isolated) pmd thread on numa node 0."])
+
+# check all pmds from both non-local numas are assigned an rxq
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | awk '/false$/ { printf("%s\t", $0); next } 1' | parse_pmd_rxq_show_first_rxq], [0], [dnl
+port: p0 queue-id: 0
+port: p0 queue-id: 1
+port: p0 queue-id: 2
+port: p0 queue-id: 3
+])
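(For context on the expected output: with --dummy-numa 1,2,1,2 the four pmd cores sit on numa nodes 1 and 2 while p0 is on numa 0, so there is no local pmd and the group assignment has to spread p0's four rxqs across both non-local nodes, one rxq per pmd.)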

As stated in the comment, we only need to make sure that the expected number of pmds are each polling some rxq.

yep

This can be done with an existing helper (which, btw, could also embed the filter on AVAIL$ itself rather than have every caller duplicate it):
AT_CHECK([test `ovs-appctl dpif-netdev/pmd-rxq-show | awk '/AVAIL$/ { printf("%s\t", $0); next } 1' | parse_pmd_rxq_show_group | wc -l` -eq 4])
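
If the filter were folded into the helper itself (its first stage becoming awk '/AVAIL$/ { printf("%s\t", $0); next } 1' ahead of the existing parsing, which is not shown here), the caller would reduce to something like:

    AT_CHECK([test `ovs-appctl dpif-netdev/pmd-rxq-show | parse_pmd_rxq_show_group | wc -l` -eq 4])

(illustrative sketch only, not a tested change)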


WDYT?


Fine to reuse that helper like this, thanks.
