Hi Ian,

Thanks for the review. My responses are inline.
> -----Original Message-----
> From: Stokes, Ian <[email protected]>
> Sent: Tuesday 1 June 2021 19:59
> To: Ferriter, Cian <[email protected]>; [email protected]; Finn,
> Emma <[email protected]>; Van Haaren, Harry
> <[email protected]>
> Cc: [email protected]
> Subject: RE: [ovs-dev] [v12 05/16] dpif-avx512: Add HWOL support to avx512
> dpif.
>
> > From: Harry van Haaren <[email protected]>
> >
> > Partial hardware offload is implemented in a very similar way to the
> > scalar dpif.
> >
> > Signed-off-by: Harry van Haaren <[email protected]>
>
> Thanks for the patch Harry/Cian.
>
> Given its size, I wonder would it make sense to merge with patch 4 of the
> series? Was the intention to do this eventually but just
> separate it here for reviewing purposes?
>

We added support for HWOL after adding support for DPCLS, EMC and SMC, and
left it as a separate commit because it seemed logically separate. However,
it might make sense to combine it with patch 4, especially since patch 4
already references HWOL via the "hwol_emc_smc_hitmask" variable.

I'll squash this patch into patch 04 in the next version.

> I'd like to get a test run on this also going forward on my side, so not fully
> finished on review.
>
> @Finn, Emma would you be able to give this a run when you have a chance?
>
> Minor comments below.
>
> >
> > ---
> >  lib/dpif-netdev-avx512.c | 28 +++++++++++++++++++++++++---
> >  1 file changed, 25 insertions(+), 3 deletions(-)
> >
> > diff --git a/lib/dpif-netdev-avx512.c b/lib/dpif-netdev-avx512.c
> > index 91f51c479..98638de15 100644
> > --- a/lib/dpif-netdev-avx512.c
> > +++ b/lib/dpif-netdev-avx512.c
> > @@ -27,6 +27,7 @@
> >  #include "dpif-netdev-private-dpcls.h"
> >  #include "dpif-netdev-private-flow.h"
> >  #include "dpif-netdev-private-thread.h"
> > +#include "dpif-netdev-private-hwol.h"
> >
> >  #include "dp-packet.h"
> >  #include "netdev.h"
> > @@ -112,9 +113,32 @@ dp_netdev_input_outer_avx512(struct dp_netdev_pmd_thread *pmd,
> >          uint32_t i = __builtin_ctz(iter);
> >          iter = _blsr_u64(iter);
> >
> > -        /* Initialize packet md and do miniflow extract. */
> > +        /* Get packet pointer from bitmask and packet md. */
> >          struct dp_packet *packet = packets->packets[i];
> >          pkt_metadata_init(&packet->md, in_port);
> > +
> > +        struct dp_netdev_flow *f = NULL;
> > +
> > +        /* Check for partial hardware offload mark. */
> > +        uint32_t mark;
> > +        if (dp_packet_has_flow_mark(packet, &mark)) {
> > +            f = mark_to_flow_find(pmd, mark);
> > +            if (f) {
> > +                rules[i] = &f->cr;
> > +
> > +                /* This is nasty - instead of using the HWOL provided flow,
> > +                 * parse the packet data anyway to find the location of the TCP
> > +                 * header to extract the TCP flags for the rule.
> > +                 */
>
> Is it a case that this info is just not available in the HWOL rule provided?

Yes, it's not provided by the hardware, and would usually be done in
miniflow_extract(), so we have to call parse_tcp_flags() to get the
tcp_flags in software.

> This overhead would have an effect on performance but only for the AVX512
> DPIF implementation, correct? No effect on existing HWOL
> and scalar?

This overhead is present for both the AVX512 and scalar DPIFs. I think I'll
remove the comment, since it's calling out an issue which isn't specific to
the AVX512 DPIF, and the comment might be misleading.
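To make that concrete, here is a rough, self-contained sketch of the kind of
work that software fallback has to do. This is illustrative only, not the
actual OVS implementation: the sketch_* names are made up for the example, it
assumes an untagged IPv4/TCP frame, and the real parse_tcp_flags() also has
to cope with VLAN tags, IPv6, and so on.

/* Hypothetical stand-in for parse_tcp_flags(); not OVS code. Walks the
 * Ethernet and IPv4 headers to reach the TCP control bits, which is the
 * per-packet cost the flow mark can't save us from. */
#include <stdint.h>
#include <stddef.h>

#define SKETCH_ETH_HLEN 14
#define SKETCH_ETH_TYPE_IPV4 0x0800
#define SKETCH_IPPROTO_TCP 6

static uint16_t
sketch_parse_tcp_flags(const uint8_t *frame, size_t len)
{
    /* Need at least an Ethernet header plus a minimal IPv4 header. */
    if (len < SKETCH_ETH_HLEN + 20) {
        return 0;
    }
    uint16_t eth_type = (uint16_t) (frame[12] << 8 | frame[13]);
    if (eth_type != SKETCH_ETH_TYPE_IPV4) {
        return 0;   /* VLAN and IPv6 handling omitted in this sketch. */
    }
    const uint8_t *ip = frame + SKETCH_ETH_HLEN;
    size_t ihl = (size_t) (ip[0] & 0x0f) * 4;   /* IPv4 header length. */
    if (ihl < 20 || ip[9] != SKETCH_IPPROTO_TCP
        || len < SKETCH_ETH_HLEN + ihl + 14) {
        return 0;
    }
    const uint8_t *tcp = ip + ihl;
    /* The TCP flags live in the low 12 bits of bytes 12-13 of the TCP
     * header; return them in host order. */
    return (uint16_t) ((tcp[12] << 8 | tcp[13]) & 0x0fff);
}

Whichever dpif found the flow, that header walk still happens once per marked
packet, which is why the cost shows up in the scalar path as well as the
AVX512 one.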
> BR
> Ian
>
> > +                pkt_meta[i].tcp_flags = parse_tcp_flags(packet);
> > +
> > +                pkt_meta[i].bytes = dp_packet_size(packet);
> > +                hwol_emc_smc_hitmask |= (1 << i);
> > +                continue;
> > +            }
> > +        }
> > +
> > +        /* Do miniflow extract into keys. */
> >          struct netdev_flow_key *key = &keys[i];
> >          miniflow_extract(packet, &key->mf);
> >
> > @@ -125,8 +149,6 @@ dp_netdev_input_outer_avx512(struct dp_netdev_pmd_thread *pmd,
> >          key->len = netdev_flow_key_size(miniflow_n_values(&key->mf));
> >          key->hash = dpif_netdev_packet_get_rss_hash_orig_pkt(packet, &key->mf);
> >
> > -        struct dp_netdev_flow *f = NULL;
> > -
> >          if (emc_enabled) {
> >              f = emc_lookup(&cache->emc_cache, key);
> >
> > --
> > 2.31.1
> >
> > _______________________________________________
> > dev mailing list
> > [email protected]
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
