On 10/19/2017 05:45 PM, Alin Gabriel Serdean wrote:
> Hi,
>
> Currently the test “pmd-cpu-mask/distribution of rx queues” is failing
> on Windows. I’m trying to figure out what we are missing on the Windows
> environment. Any help is welcomed 😊.
>
Hi Alin,

The queues have been sorted by their measured rxq processing cycles since
commit 79da1e411ba5. In this test case the measured cycles are equal for all
of the queues. On Linux the resulting sort order is 0,1,2,3,4,5,6,7, whereas
on Windows you are seeing 1,2,3,4,5,6,7,0. Because every queue ties, that
sort order determines which PMD each queue is assigned to, so the rx queues
are distributed differently and the test fails.

Currently the comparison function (rxq_cycle_sort) always selects a winner,
even when the measured cycles are equal. I have submitted a patch which
(amongst other things) changes that so it simply reports them as equal. I've
sent a v2, https://patchwork.ozlabs.org/patch/835385/ - can you try that and
see if it fixes the issue on Windows?
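For illustration only, here is a minimal standalone sketch of the two
comparator behaviours (toy names, not the actual OVS code, and the real
patch does more than just this):

/*
 * toy_rxq_sort.c: with all cycle counts equal, a comparator that never
 * returns 0 claims "a before b" and "b before a" at the same time, so the
 * final order is whatever the platform's qsort() happens to produce, which
 * is why Linux and Windows can disagree.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_rxq {
    int queue_id;       /* rx queue number */
    uint64_t cycles;    /* measured processing cycles */
};

/* Old behaviour: always picks a winner, even on a tie. */
static int
cmp_pick_winner(const void *a, const void *b)
{
    const struct toy_rxq *qa = a;
    const struct toy_rxq *qb = b;

    return qa->cycles >= qb->cycles ? -1 : 1;   /* inconsistent when equal */
}

/* Fixed behaviour: report ties as equal. */
static int
cmp_report_ties(const void *a, const void *b)
{
    const struct toy_rxq *qa = a;
    const struct toy_rxq *qb = b;

    if (qa->cycles == qb->cycles) {
        return 0;
    }
    return qa->cycles > qb->cycles ? -1 : 1;
}

static void
sort_and_print(const char *label, int (*cmp)(const void *, const void *))
{
    struct toy_rxq rxqs[8];

    for (int i = 0; i < 8; i++) {
        rxqs[i].queue_id = i;
        rxqs[i].cycles = 0;     /* all equal, as in this test case */
    }
    qsort(rxqs, 8, sizeof rxqs[0], cmp);

    printf("%s:", label);
    for (int i = 0; i < 8; i++) {
        printf(" %d", rxqs[i].queue_id);
    }
    putchar('\n');
}

int
main(void)
{
    sort_and_print("pick-winner order", cmp_pick_winner);
    sort_and_print("report-ties order", cmp_report_ties);
    return 0;
}

Note that qsort() still makes no promise about the relative order of equal
elements; the point of the change is that the comparator stops inventing a
winner when two queues have identical cycle counts.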
thanks,
Kevin.

> ## ------------------------------ ##
> ## openvswitch 2.8.90 test suite. ##
> ## ------------------------------ ##
>
> 1007. pmd.at:106: testing PMD - pmd-cpu-mask/distribution of rx queues ...
> ./pmd.at:107: ovsdb-tool create conf.db $abs_top_srcdir/vswitchd/vswitch.ovsschema
> ./pmd.at:107: ovsdb-server --detach --no-chdir --pidfile --log-file --remote=punix:$OVS_RUNDIR/db.sock
> stderr:
> ./pmd.at:107: sed < stderr '
> /vlog|INFO|opened log file/d
> /ovsdb_server|INFO|ovsdb-server (Open vSwitch)/d'
> ./pmd.at:107: ovs-vsctl --no-wait init
> ./pmd.at:107: ovs-vswitchd --enable-dummy --disable-system --dummy-numa="0,0,0,0" --detach --no-chdir --pidfile --log-file -vvconn -vofproto_dpif -vunixctl
> stderr:
> ./pmd.at:107: sed < stderr '
> /ovs_numa|INFO|Discovered /d
> /vlog|INFO|opened log file/d
> /vswitchd|INFO|ovs-vswitchd (Open vSwitch)/d
> /reconnect|INFO|/d
> /ofproto|INFO|using datapath ID/d
> /netdev_linux|INFO|.*device has unknown hardware address family/d
> /ofproto|INFO|datapath ID changed to fedcba9876543210/d
> /dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable/d
> /netdev: Flow API/d
> /tc: Using policy/d'
> ./pmd.at:107: add_of_br 0 add-port br0 p0 -- set Interface p0 type=dummy-pmd options:n_rxq=8
> 2017-10-19T16:31:19.771Z|00003|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 CPU cores
> 2017-10-19T16:31:20.030Z|00024|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
> ./pmd.at:111: test "$N_THREADS" -gt 0
> ./pmd.at:113: ovs-appctl dpif/show | sed 's/\(tx_queues=\)[0-9]*/\1<cleared>/g'
> ./pmd.at:120: ovs-appctl dpif-netdev/pmd-rxq-show | sed "s/\(numa_id \)[0-9]*\( core_id \)[0-9]*:/\1<cleared>\2<cleared>:/"
> ./pmd.at:127: ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3
> 2017-10-19T16:31:20.618Z|00041|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
> ./pmd.at:128: test "$N_THREADS" -eq "2"
> ./pmd.at:130: ovs-appctl dpif-netdev/pmd-rxq-show | sed "s/\(numa_id \)[0-9]*\( core_id \)[0-9]*:/\1<cleared>\2<cleared>:/;s/\(queue-id: \)1 2 5 6/\1<cleared>/;s/\(queue-id: \)0 3 4 7/\1<cleared>/"
> --- -  2017-10-19 19:31:20 +0300
> +++ /c/_2017/october/19/ovs/tests/testsuite.dir/at-groups/1007/stdout  2017-10-19 19:31:21 +0300
> @@ -1,7 +1,7 @@
>  pmd thread numa_id <cleared> core_id <cleared>:
>    isolated : false
> -  port: p0  queue-id: <cleared>
> +  port: p0  queue-id: 0 1 4 5
>  pmd thread numa_id <cleared> core_id <cleared>:
>    isolated : false
> -  port: p0  queue-id: <cleared>
> +  port: p0  queue-id: 2 3 6 7
>
> ovsdb-server.log:
>> 2017-10-19T16:31:19.440Z|00001|vlog|INFO|opened log file c:/_2017/october/19/ovs/tests/testsuite.dir/1007/ovsdb-server.log
>> 2017-10-19T16:31:19.446Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.8.90
>
> ovs-vswitchd.log:
>> 2017-10-19T16:31:19.769Z|00001|vlog|INFO|opened log file c:/_2017/october/19/ovs/tests/testsuite.dir/1007/ovs-vswitchd.log
>> 2017-10-19T16:31:19.771Z|00002|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
>> 2017-10-19T16:31:19.771Z|00003|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 CPU cores
>> 2017-10-19T16:31:19.771Z|00004|reconnect|INFO|unix:c:/_2017/october/19/ovs/tests/testsuite.dir/1007/db.sock: connecting...
>> 2017-10-19T16:31:19.771Z|00005|reconnect|INFO|unix:c:/_2017/october/19/ovs/tests/testsuite.dir/1007/db.sock: connected
>> 2017-10-19T16:31:19.774Z|00006|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.8.90
>> 2017-10-19T16:31:20.028Z|00007|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports recirculation
>> 2017-10-19T16:31:20.028Z|00008|ofproto_dpif|INFO|dummy@ovs-dummy: VLAN header stack length probed as 1
>> 2017-10-19T16:31:20.028Z|00009|ofproto_dpif|INFO|dummy@ovs-dummy: MPLS label stack length probed as 3
>> 2017-10-19T16:31:20.028Z|00010|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports truncate action
>> 2017-10-19T16:31:20.028Z|00011|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports unique flow ids
>> 2017-10-19T16:31:20.028Z|00012|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports clone action
>> 2017-10-19T16:31:20.028Z|00013|ofproto_dpif|INFO|dummy@ovs-dummy: Max sample nesting level probed as 10
>> 2017-10-19T16:31:20.028Z|00014|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports eventmask in conntrack action
>> 2017-10-19T16:31:20.028Z|00015|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_state
>> 2017-10-19T16:31:20.029Z|00016|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_zone
>> 2017-10-19T16:31:20.029Z|00017|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_mark
>> 2017-10-19T16:31:20.029Z|00018|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_label
>> 2017-10-19T16:31:20.029Z|00019|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_state_nat
>> 2017-10-19T16:31:20.029Z|00020|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_orig_tuple
>> 2017-10-19T16:31:20.029Z|00021|ofproto_dpif|INFO|dummy@ovs-dummy: Datapath supports ct_orig_tuple6
>> 2017-10-19T16:31:20.029Z|00022|bridge|INFO|bridge br0: added interface br0 on port 65534
>> 2017-10-19T16:31:20.030Z|00023|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 0 created.
>> 2017-10-19T16:31:20.030Z|00024|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
>> 2017-10-19T16:31:20.030Z|00025|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 1 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00026|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 2 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00027|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 3 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00028|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 4 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00029|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 5 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00030|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 6 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00031|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 7 (measured processing cycles 0).
>> 2017-10-19T16:31:20.030Z|00032|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 0 (measured processing cycles 0).
>> 2017-10-19T16:31:20.031Z|00033|bridge|INFO|bridge br0: added interface p0 on port 1
>> 2017-10-19T16:31:20.031Z|00034|bridge|INFO|bridge br0: using datapath ID fedcba9876543210
>> 2017-10-19T16:31:20.031Z|00035|connmgr|INFO|br0: added service controller "punix:c:/_2017/october/19/ovs/tests/testsuite.dir/1007/br0.mgmt"
>> 2017-10-19T16:31:20.309Z|00036|unixctl|DBG|received request dpif/show[], id=0
>> 2017-10-19T16:31:20.309Z|00037|unixctl|DBG|replying with success, id=0: "dummy@ovs-dummy: hit:0 missed:0
>>   br0:
>>     br0 65534/100: (dummy-internal)
>>     p0 1/1: (dummy-pmd: configured_rx_queues=8, configured_tx_queues=1, requested_rx_queues=8, requested_tx_queues=1)
>> "
>> 2017-10-19T16:31:20.416Z|00038|unixctl|DBG|received request dpif-netdev/pmd-rxq-show[], id=0
>> 2017-10-19T16:31:20.417Z|00039|unixctl|DBG|replying with success, id=0: "pmd thread numa_id 0 core_id 0:
>>   isolated : false
>>   port: p0  queue-id: 0 1 2 3 4 5 6 7
>> "
>> 2017-10-19T16:31:20.617Z|00040|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 1 created.
>> 2017-10-19T16:31:20.618Z|00041|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
>> 2017-10-19T16:31:20.618Z|00042|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 1 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00043|dpif_netdev|INFO|Core 1 on numa node 0 assigned port 'p0' rx queue 2 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00044|dpif_netdev|INFO|Core 1 on numa node 0 assigned port 'p0' rx queue 3 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00045|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 4 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00046|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 5 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00047|dpif_netdev|INFO|Core 1 on numa node 0 assigned port 'p0' rx queue 6 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00048|dpif_netdev|INFO|Core 1 on numa node 0 assigned port 'p0' rx queue 7 (measured processing cycles 0).
>> 2017-10-19T16:31:20.618Z|00049|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'p0' rx queue 0 (measured processing cycles 0).
>> 2017-10-19T16:31:20.805Z|00050|unixctl|DBG|received request dpif-netdev/pmd-rxq-show[], id=0
>> 2017-10-19T16:31:20.805Z|00051|unixctl|DBG|replying with success, id=0: "pmd thread numa_id 0 core_id 0:
>>   isolated : false
>>   port: p0  queue-id: 0 1 4 5
>> pmd thread numa_id 0 core_id 1:
>>   isolated : false
>>   port: p0  queue-id: 2 3 6 7
>> "
>
> 1007. pmd.at:106: FAILED (pmd.at:130)
>
> Thanks,
> Alin.

_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
