Thank you, I will address the additional patch 3/4 and patch 4/4 comments
in a future cleanup patch.

On Wed, Mar 4, 2026 at 7:59 AM Stephen Hemminger <[email protected]>
wrote:

> On Wed,  4 Mar 2026 04:50:29 +0000
> "Jasper Tran O'Leary" <[email protected]> wrote:
>
> > This patch series adds flow steering support to the Google Virtual
> > Ethernet (gve) driver. This functionality allows traffic to be directed
> > to specific receive queues based on user-specified flow patterns.
> >
> > The series includes foundational support for extended admin queue
> > commands needed to handle flow rules, the specific admin queue
> > commands for flow rule management, and the integration with the DPDK
> > rte_flow API. The series adds support for flow matching on the
> > following protocols: IPv4, IPv6, TCP, UDP, SCTP, ESP, and AH.
> >
> > Patch Overview:
> >
> > 1. "net/gve: add flow steering device option" checks for and enables
> >    the flow steering capability in the device options during
> >    initialization.
> > 2. "net/gve: introduce extended adminq command" adds infrastructure
> >    for sending extended admin queue commands. These commands use a
> >    flexible buffer descriptor format required for flow rule management.
> > 3. "net/gve: add adminq commands for flow steering" implements the
> >    specific admin queue commands to add and remove flow rules on the
> >    device, including handling of rule IDs and parameters.
> > 4. "net/gve: add rte flow API integration" exposes the flow steering
> >    functionality via the DPDK rte_flow API. This includes strict
> >    pattern validation, rule parsing, and lifecycle management (create,
> >    destroy, flush). It ensures thread-safe access to the flow subsystem
> >    and proper resource cleanup during device reset.
> >
> > Jasper Tran O'Leary (2):
> >   net/gve: add adminq commands for flow steering
> >   net/gve: add rte flow API integration
> >
> > Vee Agarwal (2):
> >   net/gve: add flow steering device option
> >   net/gve: introduce extended adminq command
> >
> >  doc/guides/nics/features/gve.ini       |  12 +
> >  doc/guides/nics/gve.rst                |  27 +
> >  doc/guides/rel_notes/release_26_03.rst |   1 +
> >  drivers/net/gve/base/gve.h             |   3 +-
> >  drivers/net/gve/base/gve_adminq.c      | 118 ++++-
> >  drivers/net/gve/base/gve_adminq.h      |  57 +++
> >  drivers/net/gve/gve_ethdev.c           |  83 +++-
> >  drivers/net/gve/gve_ethdev.h           |  46 ++
> >  drivers/net/gve/gve_flow_rule.c        | 658 +++++++++++++++++++++++++
> >  drivers/net/gve/gve_flow_rule.h        |  65 +++
> >  drivers/net/gve/meson.build            |   1 +
> >  11 files changed, 1066 insertions(+), 5 deletions(-)
> >  create mode 100644 drivers/net/gve/gve_flow_rule.c
> >  create mode 100644 drivers/net/gve/gve_flow_rule.h
> >
>
>
> Applied to next-net
>
> The detailed review report, if you are interested:
>
> Reviewed the v4 series. Overall this is well-structured — locking
> discipline is sound, create/destroy/flush paths handle errors
> correctly with proper resource cleanup, and the bitmap slot is
> restored on adminq failure. Patches 1/4 and 2/4 are clean.
>
> A few items on 3/4 and 4/4:
>
> Patch 3/4:
>
>   [Warning] struct gve_flow_spec has a padding hole after the
>   tos/tclass u8 field (37 bytes of data, padded to 40 by the
>   compiler). Callers zero-initialize today so no live bug, but
>   consider adding GVE_CHECK_STRUCT_LEN for gve_flow_spec and
>   gve_flow_rule_params to guard against future changes, consistent
>   with other adminq structures.
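A compile-time guard along these lines would catch future drift. This is only a sketch: the field layout below is a hypothetical reconstruction from the sizes quoted in the review (37 bytes of member data padded to 40), and the exact macro body is an assumption modeled on the usual GVE_CHECK_STRUCT_LEN convention, not copied from the tree.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical layout matching the sizes in the review: 37 bytes of
 * member data, padded by the compiler to sizeof == 40. The real
 * gve_flow_spec in the tree may differ.
 */
struct gve_flow_spec {
	uint32_t src_ip[4];	/* 16 bytes */
	uint32_t dst_ip[4];	/* 16 bytes */
	uint16_t src_port;	/*  2 bytes */
	uint16_t dst_port;	/*  2 bytes */
	uint8_t tos;		/*  1 byte; 3 bytes of trailing padding */
};

/*
 * Size guard in the style of the existing adminq checks; the exact
 * macro definition in the driver may differ.
 */
#define GVE_CHECK_STRUCT_LEN(n, X) \
	static_assert(sizeof(struct X) == (n), \
		      "Size of struct " #X " should be " #n)

GVE_CHECK_STRUCT_LEN(40, gve_flow_spec);
```

With the guard in place, adding or reordering a field without updating the expected size becomes a compile error rather than a silent wire-format change.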
>
> Patch 4/4:
>
>   [Warning] In gve_setup_flow_subsystem, the rte_zmalloc failure
>   path does goto free_flow_rule_bmp which calls
>   gve_flow_free_bmp(priv). This is safe (rte_free(NULL) is a
>   no-op) but misleading — the label says "free" when there's
>   nothing to free. Cleaner to just return -ENOMEM directly on the
>   first failure.
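For illustration, a sketch of the suggested shape. All names here are stand-ins rather than the driver's actual symbols, and calloc() substitutes for rte_zmalloc() so the fragment is self-contained: return -ENOMEM directly when the first allocation fails, and reach a cleanup label only once there is something to free.

```c
#include <errno.h>
#include <stdlib.h>

/* Stand-in for the driver's private flow state; fields are illustrative. */
struct flow_state {
	void *rule_bitmap;
	void *rules_cache;
};

static int setup_flow_subsystem(struct flow_state *st, size_t bmp_sz,
				size_t cache_sz)
{
	st->rule_bitmap = calloc(1, bmp_sz);
	if (st->rule_bitmap == NULL)
		return -ENOMEM;	/* nothing allocated yet: no cleanup needed */

	st->rules_cache = calloc(1, cache_sz);
	if (st->rules_cache == NULL)
		goto free_bitmap;	/* label now matches what it frees */

	return 0;

free_bitmap:
	free(st->rule_bitmap);
	st->rule_bitmap = NULL;
	return -ENOMEM;
}
```

The behavior is unchanged (rte_free(NULL) is already a no-op), but the control flow now reads the way it executes.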
>
>   [Warning] gve_dev_reset tears down the flow subsystem and
>   re-initializes via gve_init_priv, but does not destroy/recreate
>   flow_rule_lock. This works today because
>   gve_teardown_flow_subsystem doesn't destroy the mutex (only
>   gve_dev_close does), but it's worth a comment to document this
>   invariant.
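The comment might look something like the sketch below. The struct and function names are stand-ins (pthread_mutex_t substitutes for the driver's lock type) used only to show where the invariant would be documented.

```c
#include <pthread.h>

/* Stand-in for gve_priv; only the fields relevant to the invariant. */
struct gve_priv_sketch {
	pthread_mutex_t flow_rule_lock;
	int flow_subsystem_ready;
};

/*
 * Frees per-rule resources but intentionally does NOT destroy
 * flow_rule_lock: device reset runs teardown and re-init back to
 * back and relies on the mutex staying initialized across the
 * reset. Only the final dev_close() path destroys the lock.
 */
static void teardown_flow_subsystem(struct gve_priv_sketch *priv)
{
	priv->flow_subsystem_ready = 0;
}
```

Documenting the lifetime at the teardown site means anyone later adding a mutex-destroy there will trip over the stated invariant first.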
>
