On Mon, Sep 4, 2023 at 10:04 PM <[email protected]> wrote:
> > 2023-09-04T19:18:23.154Z|00024|dpif_netdev_impl|INFO|Default DPIF 
> > implementation is dpif_scalar.
> > 2023-09-04T19:18:23.166Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports recirculation
> > 2023-09-04T19:18:23.166Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN 
> > header stack length probed as 1
> > 2023-09-04T19:18:23.167Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS 
> > label stack length probed as 3
> > 2023-09-04T19:18:23.167Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports truncate action
> > 2023-09-04T19:18:23.167Z|00029|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports unique flow ids
> > 2023-09-04T19:18:23.167Z|00030|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports clone action
> > 2023-09-04T19:18:23.167Z|00031|ofproto_dpif|INFO|netdev@ovs-netdev: Max 
> > sample nesting level probed as 10
> > 2023-09-04T19:18:23.167Z|00032|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports eventmask in conntrack action
> > 2023-09-04T19:18:23.167Z|00033|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_clear action
> > 2023-09-04T19:18:23.167Z|00034|ofproto_dpif|INFO|netdev@ovs-netdev: Max 
> > dp_hash algorithm probed to be 1
> > 2023-09-04T19:18:23.167Z|00035|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports check_pkt_len action
> > 2023-09-04T19:18:23.167Z|00036|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports timeout policy in conntrack action
> > 2023-09-04T19:18:23.167Z|00037|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_zero_snat
> > 2023-09-04T19:18:23.167Z|00038|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports add_mpls action
> > 2023-09-04T19:18:23.167Z|00039|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_state
> > 2023-09-04T19:18:23.167Z|00040|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_zone
> > 2023-09-04T19:18:23.167Z|00041|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_mark
> > 2023-09-04T19:18:23.167Z|00042|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_label
> > 2023-09-04T19:18:23.167Z|00043|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_state_nat
> > 2023-09-04T19:18:23.167Z|00044|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_orig_tuple
> > 2023-09-04T19:18:23.168Z|00045|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports ct_orig_tuple6
> > 2023-09-04T19:18:23.168Z|00046|ofproto_dpif|INFO|netdev@ovs-netdev: 
> > Datapath supports IPv6 ND Extensions
> > 2023-09-04T19:18:23.168Z|00047|ofproto_dpif_upcall|INFO|Setting 
> > n-handler-threads to 71, setting n-revalidator-threads to 25
> > 2023-09-04T19:18:23.168Z|00048|ofproto_dpif_upcall|INFO|Starting 96 threads
> > 2023-09-04T19:18:23.224Z|00049|dpif_netdev|INFO|pmd-rxq-affinity isolates 
> > PMD core
> > 2023-09-04T19:18:23.224Z|00050|dpif_netdev|INFO|PMD auto load balance 
> > interval set to 1 mins
> > 2023-09-04T19:18:23.224Z|00051|dpif_netdev|INFO|PMD auto load balance 
> > improvement threshold set to 25%
> > 2023-09-04T19:18:23.225Z|00052|dpif_netdev|INFO|PMD auto load balance load 
> > threshold set to 95%
> > 2023-09-04T19:18:23.225Z|00053|dpif_netdev|INFO|PMD auto load balance is 
> > disabled.
> > 2023-09-04T19:18:23.225Z|00054|dpif_netdev|INFO|PMD max sleep request is 0 
> > usecs.
> > 2023-09-04T19:18:23.225Z|00055|dpif_netdev|INFO|PMD load based sleeps are 
> > disabled.
> > 2023-09-04T19:18:23.232Z|00056|bridge|INFO|bridge br0: added interface br0 
> > on port 65534
> > 2023-09-04T19:18:23.232Z|00057|bridge|INFO|bridge br0: using datapath ID 
> > 0000361cb25c9446
> > 2023-09-04T19:18:23.232Z|00058|connmgr|INFO|br0: added service controller 
> > "punix:/root/ovs-dev/tests/system-dpdk-testsuite.dir/026/br0.mgmt"
> > 2023-09-04T19:18:23.254Z|00059|vconn|DBG|unix#0: sent (Success): OFPT_HELLO 
> > (OF1.5) (xid=0x1):
> >  version bitmap: 0x01, 0x02, 0x03, 0x04, 0x05, 0x06
> > 2023-09-04T19:18:23.254Z|00060|vconn|DBG|unix#0: received: OFPT_HELLO 
> > (xid=0x1):
> >  version bitmap: 0x01
> > 2023-09-04T19:18:23.254Z|00061|vconn|DBG|unix#0: negotiated OpenFlow 
> > version 0x01 (we support version 0x06 and earlier, peer supports version 
> > 0x01)
> > 2023-09-04T19:18:23.254Z|00062|vconn|DBG|unix#0: received: OFPST_TABLE 
> > request (xid=0x2):
> > 2023-09-04T19:18:23.255Z|00063|vconn|DBG|unix#0: sent (Success): 
> > OFPST_TABLE reply (xid=0x2):
> >   table 0:
> >     active=0, lookup=0, matched=0
> >     max_entries=1000000
> >     matching:
> >       exact match or wildcard: in_port eth_{src,dst,type} vlan_{vid,pcp} 
> > ip_{src,dst} nw_{proto,tos} tcp_{src,dst}
> >
> >   tables 1...253: ditto
> > 2023-09-04T19:18:23.259Z|00064|vconn|DBG|unix#1: sent (Success): OFPT_HELLO 
> > (OF1.5) (xid=0x2):
> >  version bitmap: 0x01, 0x02, 0x03, 0x04, 0x05, 0x06
> > 2023-09-04T19:18:23.259Z|00065|vconn|DBG|unix#1: received: OFPT_HELLO 
> > (xid=0x3):
> >  version bitmap: 0x01
> > 2023-09-04T19:18:23.259Z|00066|vconn|DBG|unix#1: negotiated OpenFlow 
> > version 0x01 (we support version 0x06 and earlier, peer supports version 
> > 0x01)
> > 2023-09-04T19:18:23.259Z|00067|vconn|DBG|unix#1: received: 
> > OFPT_FEATURES_REQUEST (xid=0x4):
> > 2023-09-04T19:18:23.260Z|00068|vconn|DBG|unix#1: sent (Success): 
> > OFPT_FEATURES_REPLY (xid=0x4): dpid:0000361cb25c9446
> > n_tables:254, n_buffers:0
> > capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> > actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
> > mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
> >  LOCAL(br0): addr:36:1c:b2:5c:94:46
> >      config:     PORT_DOWN
> >      state:      LINK_DOWN
> >      current:    10MB-FD COPPER
> >      speed: 10 Mbps now, 0 Mbps max
> > 2023-09-04T19:18:23.260Z|00069|vconn|DBG|unix#2: sent (Success): OFPT_HELLO 
> > (OF1.5) (xid=0x3):
> >  version bitmap: 0x01, 0x02, 0x03, 0x04, 0x05, 0x06
> > 2023-09-04T19:18:23.260Z|00070|vconn|DBG|unix#2: received: OFPT_HELLO 
> > (xid=0x5):
> >  version bitmap: 0x01
> > 2023-09-04T19:18:23.260Z|00071|vconn|DBG|unix#2: negotiated OpenFlow 
> > version 0x01 (we support version 0x06 and earlier, peer supports version 
> > 0x01)
> > 2023-09-04T19:18:23.261Z|00072|vconn|DBG|unix#2: received: OFPT_FLOW_MOD 
> > (xid=0x6): ADD actions=NORMAL
> > 2023-09-04T19:18:23.261Z|00073|vconn|DBG|unix#2: received: 
> > OFPT_BARRIER_REQUEST (xid=0x7):
> > 2023-09-04T19:18:23.261Z|00074|vconn|DBG|unix#2: sent (Success): 
> > OFPT_BARRIER_REPLY (xid=0x7):
> > 2023-09-04T19:18:23.261Z|00075|connmgr|INFO|br0<->unix#2: 1 flow_mods in 
> > the last 0 s (1 adds)
> > 2023-09-04T19:18:23.419Z|00076|netdev_dpdk|INFO|Device 
> > 'net_af_xdpp0,iface=ovs-p0' attached to DPDK
> > 2023-09-04T19:18:23.432Z|00077|dpif_netdev|INFO|PMD thread on numa_id: 0, 
> > core id: 21 created.
> > 2023-09-04T19:18:23.441Z|00001|dpdk(pmd-c21/id:102)|INFO|PMD thread uses 
> > DPDK lcore 1.
> > 2023-09-04T19:18:23.442Z|00078|dpif_netdev|INFO|PMD thread on numa_id: 1, 
> > core id: 88 created.
> > 2023-09-04T19:18:23.442Z|00079|dpif_netdev|INFO|There are 1 pmd threads on 
> > numa node 1
> > 2023-09-04T19:18:23.442Z|00080|dpif_netdev|INFO|There are 1 pmd threads on 
> > numa node 0
> > 2023-09-04T19:18:23.442Z|00081|dpdk|INFO|Device with port_id=0 already 
> > stopped
> > 2023-09-04T19:18:23.443Z|00001|dpdk(pmd-c88/id:103)|INFO|PMD thread uses 
> > DPDK lcore 2.
> > 2023-09-04T19:18:23.599Z|00082|netdev_dpdk|WARN|Rx checksum offload is not 
> > supported on port 0
> > 2023-09-04T19:18:23.605Z|00083|netdev_afxdp|WARN|libbpf: can't get next 
> > link: Invalid argument
> > 2023-09-04T19:18:23.607Z|00084|netdev_dpdk|INFO|Port 0: ce:04:db:c7:09:5d
> > 2023-09-04T19:18:23.607Z|00085|netdev_dpdk|INFO|ovs-p0: rx-steering: 
> > default rss
> > 2023-09-04T19:18:23.607Z|00086|dpif_netdev|INFO|Performing pmd to rx queue 
> > assignment using cycles algorithm.
> > 2023-09-04T19:18:23.607Z|00087|dpif_netdev|INFO|Core 21 on numa node 0 
> > assigned port 'ovs-p0' rx queue 0 (measured processing cycles 0).
> > 2023-09-04T19:18:23.607Z|00088|bridge|INFO|bridge br0: added interface 
> > ovs-p0 on port 1
> > 2023-09-04T19:18:23.742Z|00089|netdev_dpdk|INFO|Device 
> > 'net_af_xdpp1,iface=ovs-p1' attached to DPDK
> > 2023-09-04T19:18:23.742Z|00090|dpdk|INFO|Device with port_id=1 already 
> > stopped
> > 2023-09-04T19:18:23.742Z|00091|netdev_dpdk|WARN|Rx checksum offload is not 
> > supported on port 1
> > 2023-09-04T19:18:23.751Z|00092|netdev_afxdp|WARN|libbpf: can't get next 
> > link: Invalid argument

A bit surprising to see the netdev_afxdp log prefix here, since no AFXDP
netdev is used in this test, but it comes from:
https://github.com/openvswitch/ovs/blob/master/lib/netdev-afxdp.c#L1198

I could not reproduce this warning, either on my Fedora 37 or in GHA
Ubuntu 20.04 (I ran a DPDK job with --enable-afxdp).
All I see are some INFO-level logs.

The DPDK net/af_xdp driver does not call libbpf_set_print(), so this
warning may be an existing, previously unnoticed issue.
On the other hand, I would expect a real problem to result in the
net/af_xdp port failing to initialise; since I can't reproduce the
warning, I can't tell from the log above whether packets are flowing.

For now, I would go with moving the call to libbpf_set_print() into
netdev_afxdp_construct(), guarded by a once check; this will at least
avoid the confusion when OVS netdev-afxdp is not used.
Opinions?

> > 2023-09-04T19:18:23.754Z|00093|netdev_dpdk|INFO|Port 1: 32:72:f3:28:96:b4
> > 2023-09-04T19:18:23.754Z|00094|netdev_dpdk|INFO|ovs-p1: rx-steering: 
> > default rss
> > 2023-09-04T19:18:23.754Z|00095|dpif_netdev|INFO|Performing pmd to rx queue 
> > assignment using cycles algorithm.
> > 2023-09-04T19:18:23.754Z|00096|dpif_netdev|INFO|Core 21 on numa node 0 
> > assigned port 'ovs-p0' rx queue 0 (measured processing cycles 0).
> > 2023-09-04T19:18:23.754Z|00097|dpif_netdev|INFO|Core 21 on numa node 0 
> > assigned port 'ovs-p1' rx queue 0 (measured processing cycles 0).
> > 2023-09-04T19:18:23.754Z|00098|bridge|INFO|bridge br0: added interface 
> > ovs-p1 on port 2
> > 2023-09-04T19:18:23.798Z|00002|dpif_lookup_avx512_gather(pmd-c21/id:102)|INFO|Using
> >  non-specialized AVX512 lookup for subtable (5,0) and possibly others.
> 26. system-traffic.at:3: 26. datapath - ping between two ports 
> (system-traffic.at:3): FAILED (system-traffic.at:23)


-- 
David Marchand

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
