On 9/26/23 16:06, David Marchand wrote:
> Since v23.07, DPDK provides a per packet flag that indicates if a call
> to the optional rte_flow_restore_tunnel_info() is necessary.
> There is, then, no need at runtime to discover if a driver supports
> this feature.
Hi, David.  Thanks for the patch.

Do you have some performance data comparing the previous and the new
code for non-offloaded traffic?

Currently we have one atomic read per packet arriving from a port that
doesn't support tunnel offloading.  With this change we have two atomic
reads and an indirect function call per packet.  That might have some
measurable performance impact.

One more comment below.

>
> Link: https://git.dpdk.org/dpdk/commit/?id=fca8cba4f1f1
> Signed-off-by: David Marchand <[email protected]>
> ---
>  lib/dpif-netdev.c    | 15 +++++----------
>  lib/netdev-dpdk.c    | 10 ++++++++++
>  lib/netdev-offload.c | 18 +-----------------
>  lib/netdev-offload.h |  1 -
>  lib/netdev.c         |  1 -
>  5 files changed, 16 insertions(+), 29 deletions(-)
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 157694bcf0..aa51e0a67d 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -8207,16 +8207,11 @@ dp_netdev_hw_flow(const struct dp_netdev_pmd_thread *pmd,
>  #ifdef ALLOW_EXPERIMENTAL_API /* Packet restoration API required. */
>      /* Restore the packet if HW processing was terminated before completion. */
>      struct dp_netdev_rxq *rxq = pmd->ctx.last_rxq;
> -    bool miss_api_supported;
> -
> -    atomic_read_relaxed(&rxq->port->netdev->hw_info.miss_api_supported,
> -                        &miss_api_supported);
> -    if (miss_api_supported) {
> -        int err = netdev_hw_miss_packet_recover(rxq->port->netdev, packet);
> -        if (err && err != EOPNOTSUPP) {
> -            COVERAGE_INC(datapath_drop_hw_miss_recover);
> -            return -1;
> -        }
> +    int err = netdev_hw_miss_packet_recover(rxq->port->netdev, packet);
> +
> +    if (err && err != EOPNOTSUPP) {
> +        COVERAGE_INC(datapath_drop_hw_miss_recover);
> +        return -1;
>      }
>  #endif
>
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index 55700250df..379c50399d 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -1223,6 +1223,10 @@ dpdk_eth_flow_ctrl_setup(struct netdev_dpdk *dev) OVS_REQUIRES(dev->mutex)
>          }
>      }
>
> +#ifdef ALLOW_EXPERIMENTAL_API
> +static uint64_t netdev_dpdk_restore_info_flag;
> +#endif
> +
>  static void
>  dpdk_eth_dev_init_rx_metadata(struct netdev_dpdk *dev)
>  {
> @@ -1251,6 +1255,8 @@ dpdk_eth_dev_init_rx_metadata(struct netdev_dpdk *dev)
>      if (!(rx_metadata & RTE_ETH_RX_METADATA_TUNNEL_ID)) {
>          VLOG_DBG("%s: The NIC will not provide per-packet TUNNEL_ID",
>                   netdev_get_name(&dev->up));
> +    } else if (!netdev_dpdk_restore_info_flag) {
> +        netdev_dpdk_restore_info_flag = rte_flow_restore_info_dynflag();
>      }
>  #endif /* ALLOW_EXPERIMENTAL_API */
>      } else {
> @@ -6230,6 +6236,10 @@ netdev_dpdk_rte_flow_get_restore_info(struct netdev *netdev,
>          return -1;
>      }
>
> +    if (!(p->mbuf.ol_flags & netdev_dpdk_restore_info_flag)) {
> +        return -EOPNOTSUPP;

I'm not sure this is a good error code for the situation.

Can we find a good way to check the flags higher up the stack?
Logically, the recovery API should not even be called if the flags
are not set.  It seems tricky to do with dynamic flags though.

Best regards,
Ilya Maximets.
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
