> -----Original Message-----
> From: Yuanhan Liu [mailto:y...@fridaylinux.org]
> Sent: Friday, January 26, 2018 12:48 PM
> To: d...@openvswitch.org
> Cc: Stokes, Ian <ian.sto...@intel.com>; Yuanhan Liu <y...@fridaylinux.org>
> Subject: [PATCH v6 0/6] OVS-DPDK flow offload with rte_flow
>
> Hi,
>
> Here is joint work from Mellanox and Napatech to enable flow hw
> offload with the DPDK generic flow interface (rte_flow).
>
Thanks for this Yuanhan. I'll try to get to reviewing it; unfortunately, there
is a separate pressing issue, related to the mempool for netdev-dpdk, that I
have to investigate first. I'll post feedback on this series when I get a
chance.

> The basic idea is to associate the flow with a mark id (a uint32_t
> number). Later, we then get the flow directly from the mark id, which
> bypasses some heavy CPU operations, including but not limited to miniflow
> extract, EMC lookup, dpcls lookup, etc.
>
> The association is done with a CMAP in patch 1. The CPU workload bypassing
> is done in patch 2. The flow offload is done in patch 3, which mainly does
> two things:
>
> - translate the OVS match to DPDK rte_flow patterns
> - bind those patterns with an RSS + MARK action
>
> Patch 5 makes the offload work happen in another thread, to keep the
> datapath as light as possible.
>
> PHY-PHY forwarding with 1000 megaflows (udp,tp_src=1000-1999) and 1
> million streams (tp_src=1000-1999, tp_dst=2000-2999) shows a performance
> boost of more than 260%.
>
> Note that it is disabled by default; it can be enabled with:
>
>     $ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
>
> v6: - fixed a sparse warning
>     - added documentation
>     - used hash_int to compute the mark-to-flow hash
>     - added more comments
>     - added a lock for port lookup
>     - rebased on top of the latest code
>
> v5: - fixed an issue where flow add/remove took too long when done
>       repeatedly
>     - removed an unused mutex lock
>     - turned most of the log messages down to DBG level
>     - rebased on top of the latest code
>
> v4: - used the RSS action instead of the QUEUE action with MARK
>     - made it work with multiple queues (see patch 1)
>     - rebased on top of the latest code
>
> v3: - the mark and id association is done with an array instead of a CMAP
>     - added a thread to do the hw offload operations
>     - removed macros completely
>     - dropped the patch to set FDIR_CONF, which is a workaround for some
>       Intel NICs
>     - added a debug patch to show all flow patterns we have created
>     - misc fixes
>
> v2: - worked around the queue action issue
>     - fixed the tcp_flags being skipped issue, which also fixed the
>       build warnings
>     - fixed L2 patterns for Intel NICs
>     - converted some macros to functions
>     - did not hardcode the max number of flows/actions
>     - rebased on top of the latest code
>
> Thanks.
>
>     --yliu
>
> ---
> Finn Christensen (1):
>   netdev-dpdk: implement flow offload with rte flow
>
> Yuanhan Liu (5):
>   dpif-netdev: associate flow with a mark id
>   dpif-netdev: retrieve flow directly from the flow mark
>   netdev-dpdk: add debug for rte flow patterns
>   dpif-netdev: do hw flow offload in a thread
>   Documentation: document ovs-dpdk flow offload
>
>  Documentation/howto/dpdk.rst |  17 +
>  NEWS                         |   1 +
>  lib/dp-packet.h              |  13 +
>  lib/dpif-netdev.c            | 494 ++++++++++++++++++++++++++++-
>  lib/flow.c                   | 155 +++++++--
>  lib/flow.h                   |   1 +
>  lib/netdev-dpdk.c            | 732 ++++++++++++++++++++++++++++++++++++++++++-
>  lib/netdev.h                 |   6 +
>  8 files changed, 1380 insertions(+), 39 deletions(-)
>
> --
> 2.7.4

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev