> -----Original Message-----
> From: Ilya Maximets <[email protected]>
> Sent: Wednesday, July 17, 2024 10:43 AM
> To: Phelan, Michael <[email protected]>
> Cc: [email protected]; ovs-dev <[email protected]>
> Subject: Re: [ovs-build] |fail| pw1960612 [ovs-dev,2/2] Prepare for post-3.4.0 (3.4.90).
> 
> On 7/17/24 10:36, Phelan, Michael wrote:
> >
> >> -----Original Message-----
> >> From: Ilya Maximets <[email protected]>
> >> Sent: Tuesday, July 16, 2024 12:25 PM
> >> To: Phelan, Michael <[email protected]>
> >> Cc: [email protected]; ovs-dev <[email protected]>
> >> Subject: Re: [ovs-build] |fail| pw1960612 [ovs-dev,2/2] Prepare for post-3.4.0 (3.4.90).
> >>
> >> On 7/15/24 22:23, [email protected] wrote:
> >>> Test-Label: intel-ovs-compilation
> >>> Test-Status: fail
> >>> http://patchwork.ozlabs.org/api/patches/1960612/
> >>>
> >>> AVX-512_compilation: failed
> >>> DPLCS Test: fail
> >>> DPIF Test: fail
> >>> MFEX Test: fail
> >>> Actions Test: fail
> >>> Errors in DPCLS test:
> >>> make check-dpdk
> >>> make  all-am
> >>> make[1]: Entering directory '/root/ovs-dev'
> >>> make[1]: Leaving directory '/root/ovs-dev'
> >>> set /bin/bash './tests/system-dpdk-testsuite' -C tests \
> >>>   AUTOTEST_PATH='utilities:vswitchd:ovsdb:vtep:tests:ipsec::'; \
> >>> "$@" -j1 || (test X'' = Xyes && "$@" --recheck)
> >>>
> >>> ## ------------------------------ ##
> >>> ## openvswitch 3.4.90 test suite. ##
> >>> ## ------------------------------ ##
> >>>
> >>> OVS-DPDK unit tests
> >>>
> >>>   1: OVS-DPDK - EAL init                             ok
> >>>   2: OVS-DPDK - add standard DPDK port               ok
> >>>   3: OVS-DPDK - add vhost-user-client port           ok
> >>>   4: OVS-DPDK - ping vhost-user ports                FAILED (ovs-macros.at:242)
> >>>   5: OVS-DPDK - ping vhost-user-client ports         FAILED (ovs-macros.at:242)
> >>>   6: OVS-DPDK - Ingress policing create delete phy port ok
> >>>   7: OVS-DPDK - Ingress policing create delete vport port ok
> >>>   8: OVS-DPDK - Ingress policing no policing rate    ok
> >>>   9: OVS-DPDK - Ingress policing no policing burst   ok
> >>>  10: OVS-DPDK - QoS create delete phy port           ok
> >>>  11: OVS-DPDK - QoS create delete vport port         ok
> >>>  12: OVS-DPDK - QoS no cir                           ok
> >>>  13: OVS-DPDK - QoS no cbs                           ok
> >>>  14: OVS-DPDK - MTU increase phy port                ok
> >>>  15: OVS-DPDK - MTU decrease phy port                ok
> >>>  16: OVS-DPDK - MTU increase vport port              FAILED (ovs-macros.at:242)
> >>>  17: OVS-DPDK - MTU decrease vport port              FAILED (ovs-macros.at:242)
> >>>  18: OVS-DPDK - MTU upper bound phy port             ok
> >>>  19: OVS-DPDK - MTU lower bound phy port             ok
> >>>  20: OVS-DPDK - MTU upper bound vport port           FAILED (ovs-macros.at:242)
> >>>  21: OVS-DPDK - MTU lower bound vport port           FAILED (ovs-macros.at:242)
> >>>  22: OVS-DPDK - user configured mempool              ok
> >>
> >> Hi, Michael.  Could you please check the reason why these tests are failing?
> >> The logs are truncated a little too much, so it's hard to tell what went wrong.
> > Hi Ilya,
> > The output in the logs is:
> >
> > ./system-dpdk.at:109: ovs-vsctl add-br br10 -- set bridge br10
> > datapath_type=netdev
> > ./system-dpdk.at:110: ovs-vsctl add-port br10 dpdkvhostuser0 -- set
> > Interface dpdkvhostuser0 type=dpdkvhostuser
> > stderr:
> > stdout:
> > system-dpdk.at:110: waiting until grep "VHOST_CONFIG: ($OVS_RUNDIR/dpdkvhostuser0) vhost-user server: socket created" ovs-vswitchd.log...
> > 2024-07-17T08:32:51.860Z|00062|dpdk|INFO|VHOST_CONFIG:
> > (/root/ovs-dev/tests/system-dpdk-testsuite.dir/004/dpdkvhostuser0)
> > vhost-user server: socket created, fd: 95
> > system-dpdk.at:110: wait succeeded immediately
> > system-dpdk.at:110: waiting until grep "Socket $OVS_RUNDIR/dpdkvhostuser0 created for vhost-user port dpdkvhostuser0" ovs-vswitchd.log...
> > 2024-07-17T08:32:51.860Z|00063|netdev_dpdk|INFO|Socket
> > /root/ovs-dev/tests/system-dpdk-testsuite.dir/004/dpdkvhostuser0
> > created for vhost-user port dpdkvhostuser0
> > system-dpdk.at:110: wait succeeded immediately
> > system-dpdk.at:110: waiting until grep "VHOST_CONFIG: ($OVS_RUNDIR/dpdkvhostuser0) binding succeeded" ovs-vswitchd.log...
> > 2024-07-17T08:32:51.861Z|00064|dpdk|INFO|VHOST_CONFIG:
> > (/root/ovs-dev/tests/system-dpdk-testsuite.dir/004/dpdkvhostuser0)
> > binding succeeded
> > system-dpdk.at:110: wait succeeded immediately
> > ./system-dpdk.at:111: ovs-vsctl show
> > stdout:
> > 45be2775-e3c2-45bc-9d5e-2eb502748c78
> >     Bridge br10
> >         datapath_type: netdev
> >         Port br10
> >             Interface br10
> >                 type: internal
> >         Port dpdkvhostuser0
> >             Interface dpdkvhostuser0
> >                 type: dpdkvhostuser
> > Cannot remove namespace file "/run/netns/ns1": No such file or directory
> > ./system-dpdk.at:114: ip netns add ns1 || return 77
> > net.netfilter.nf_conntrack_helper = 0
> > Cannot remove namespace file "/run/netns/ns2": No such file or directory
> > ./system-dpdk.at:114: ip netns add ns2 || return 77
> > net.netfilter.nf_conntrack_helper = 0
> > ./system-dpdk.at:117: ip link add tap1 type veth peer name ovs-tap1 || return 77
> > ./system-dpdk.at:117: ethtool -K tap1 tx off
> > stderr:
> > stdout:
> > Actual changes:
> > tx-checksumming: off
> >     tx-checksum-ip-generic: off
> >     tx-checksum-sctp: off
> > tcp-segmentation-offload: off
> >     tx-tcp-segmentation: off [requested on]
> >     tx-tcp-ecn-segmentation: off [requested on]
> >     tx-tcp-mangleid-segmentation: off [requested on]
> >     tx-tcp6-segmentation: off [requested on]
> > ./system-dpdk.at:117: ethtool -K tap1 txvlan off
> > stderr:
> > stdout:
> > ./system-dpdk.at:117: ip link set tap1 netns ns2
> > ./system-dpdk.at:117: ip link set dev ovs-tap1 up
> > ./system-dpdk.at:117: ovs-vsctl add-port br10 ovs-tap1 -- \
> >                 set interface ovs-tap1 external-ids:iface-id="tap1" -- \
> >                 set interface ovs-tap1 type=dpdk -- \
> >                 set interface ovs-tap1
> > options:dpdk-devargs=net_af_xdptap1,iface=ovs-tap1
> > ./system-dpdk.at:117: ip netns exec ns2 sh << NS_EXEC_HEREDOC
> > ip addr add "172.31.110.12/24" dev tap1
> > NS_EXEC_HEREDOC
> > ./system-dpdk.at:117: ip netns exec ns2 sh << NS_EXEC_HEREDOC
> > ip link set dev tap1 up
> > NS_EXEC_HEREDOC
> > ./system-dpdk.at:119: lscpu
> > stdout:
> > Architecture:                       x86_64
> > CPU op-mode(s):                     32-bit, 64-bit
> > Byte Order:                         Little Endian
> > Address sizes:                      46 bits physical, 57 bits virtual
> > CPU(s):                             96
> > On-line CPU(s) list:                0-95
> > Thread(s) per core:                 2
> > Core(s) per socket:                 24
> > Socket(s):                          2
> > NUMA node(s):                       2
> > Vendor ID:                          GenuineIntel
> > CPU family:                         6
> > Model:                              106
> > Model name:                         Intel(R) Xeon(R) Gold 6336Y CPU @ 2.40GHz
> > Stepping:                           6
> > CPU MHz:                            800.138
> > CPU max MHz:                        3600.0000
> > CPU min MHz:                        800.0000
> > BogoMIPS:                           4800.00
> > Virtualization:                     VT-x
> > L1d cache:                          2.3 MiB
> > L1i cache:                          1.5 MiB
> > L2 cache:                           60 MiB
> > L3 cache:                           72 MiB
> > NUMA node0 CPU(s):                  0-23,48-71
> > NUMA node1 CPU(s):                  24-47,72-95
> > Vulnerability Gather data sampling: Mitigation; Microcode
> > Vulnerability Itlb multihit:        Not affected
> > Vulnerability L1tf:                 Not affected
> > Vulnerability Mds:                  Not affected
> > Vulnerability Meltdown:             Not affected
> > Vulnerability Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
> > Vulnerability Retbleed:             Not affected
> > Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
> > Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
> > Vulnerability Spectre v2:           Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop
> > Vulnerability Srbds:                Not affected
> > Vulnerability Tsx async abort:      Not affected
> > Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
> > ./system-dpdk.at:119: cat stdout | grep "NUMA node(s)" | awk '{c=1; while (c++<$(3)) {printf "512,"}; print "512"}' > NUMA_NODE
> > system-dpdk.at:122: waiting until grep "virtio is now ready for processing" ovs-vswitchd.log...
> > 2024-07-17T08:32:52.948Z|00067|dpdk|INFO|VHOST_CONFIG: (/root/ovs-dev/tests/system-dpdk-testsuite.dir/004/dpdkvhostuser0) virtio is now ready for processing.
> > system-dpdk.at:122: wait succeeded after 1 seconds
> > system-dpdk.at:123: waiting until ip link show dev tap0 | grep -qw LOWER_UP...
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > Device "tap0" does not exist.
> > system-dpdk.at:123: wait failed after 30 seconds
> > ./ovs-macros.at:242: hard failure
> > /root/ovs-dev/tests/system-dpdk-testsuite.dir/004/cleanup: line 1: kill: (155084) - No such process
> > 4. system-dpdk.at:102: 4. OVS-DPDK - ping vhost-user ports (system-dpdk.at:102): FAILED (ovs-macros.at:242)
> >
> > Do you see an obvious problem here?
> 
> Looks like testpmd wasn't able to create the tap0 interface.
> Are there any errors in testpmd.log ?

The output in the testpmd.log is:

EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: No legacy callbacks, legacy socket not created
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and 
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=907456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

> 
> >>
> >>> 2024-07-15T19:14:11Z|00007|dpdk|INFO|Using DPDK 23.11.0
> >>
> >> I remember that there were some vhost issues in 23.11.0, we may need
> >> to upgrade to 23.11.1 here.  We use it in GitHub Actions as well.
> >
> > We can upgrade to 23.11.1 no problem if we need to.
> 
> I think we should do that, since that's the version we officially recommend
> now.
Sure, I'll do that now.
> 
> >
> > Thanks,
> > Michael.
> >>
> >> Best regards, Ilya Maximets.

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
