On 8/8/2025 8:56 AM, Michal Kubiak wrote:
> This series modernizes the Rx path in the ice driver by removing legacy
> code and switching to the Page Pool API. The changes follow the same
> direction as previously done for the iavf driver, and aim to simplify
> buffer management, improve maintainability, and prepare for future
> infrastructure reuse.
> 
> An important motivation for this work was addressing reports of poor
> performance in XDP_TX mode when IOMMU is enabled. The legacy Rx model
> incurred significant overhead due to per-frame DMA mapping, which
> limited throughput in virtualized environments. This series eliminates
> those bottlenecks by adopting Page Pool and bi-directional DMA mapping.
> 
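For anyone following along: the key point here is that page_pool maps each
page once and keeps it mapped, so the hotpath never has to call
dma_map_single()/dma_unmap_single() per frame. A minimal sketch of that kind
of pool setup (field values and helper name are illustrative, not the actual
ice code; header location depends on kernel version):

	#include <net/page_pool/helpers.h>
	#include <linux/dma-mapping.h>

	static struct page_pool *rxq_create_pool(struct device *dev, u32 ring_size)
	{
		struct page_pool_params pp = {
			/* map pages at allocation and let the pool sync for the device */
			.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
			.order		= 0,
			.pool_size	= ring_size,
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
			/* bi-directional so XDP_TX can reuse the same mapping */
			.dma_dir	= DMA_BIDIRECTIONAL,
			.max_len	= PAGE_SIZE,
			.offset		= 0,
		};

		return page_pool_create(&pp);
	}
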
> The first patch removes the legacy Rx path, which relied on manual skb
> allocation and header copying. This path has become obsolete due to the
> availability of build_skb() and the increasing complexity of supporting
> features like XDP and multi-buffer.
> 
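For context, build_skb() wraps the existing Rx buffer as skb->data, so only
the skb metadata is allocated and no header copy is needed. A rough sketch
(helper name and parameters are made up for illustration, not the ice code):

	static struct sk_buff *rx_buf_to_skb(void *va, u32 headroom, u32 size,
					     u32 truesize)
	{
		struct sk_buff *skb;

		/* reuse the existing data buffer, no memcpy of headers */
		skb = build_skb(va, truesize);
		if (unlikely(!skb))
			return NULL;

		skb_reserve(skb, headroom);	/* e.g. XDP_PACKET_HEADROOM */
		skb_put(skb, size);		/* payload length from the Rx descriptor */
		skb_mark_for_recycle(skb);	/* hand the page back to page_pool on free */

		return skb;
	}
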
> The second patch drops the page splitting and recycling logic. While
> once used to optimize memory usage, this logic introduced significant
> complexity and hotpath overhead. Removing it simplifies the Rx flow and
> sets the stage for Page Pool adoption.
> 
> The final patch switches the driver to use the Page Pool and libeth
> APIs. It also updates the XDP implementation to use libeth_xdp helpers
> and optimizes XDP_TX by avoiding per-frame DMA mapping. This results in
> a significant performance improvement in virtualized environments with
> IOMMU enabled (over 5x gain in XDP_TX throughput). In other scenarios,
> performance remains on par with the previous implementation.
> 
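The XDP_TX gain with IOMMU enabled comes exactly from this: since the page is
already mapped DMA_BIDIRECTIONAL by the pool, the Tx side only syncs the frame
to the device instead of mapping it per frame. Roughly (again a sketch, names
are illustrative rather than the actual ice/libeth_xdp helpers):

	static void xdp_tx_prep_frame(struct device *dev, struct xdp_buff *xdp,
				      dma_addr_t *dma, u32 *len)
	{
		struct page *page = virt_to_page(xdp->data);
		u32 offset = xdp->data - page_address(page);

		*len = xdp->data_end - xdp->data;
		*dma = page_pool_get_dma_addr(page) + offset;

		/* the CPU (e.g. the XDP program) may have modified the frame;
		 * make it visible to the device before posting the descriptor
		 */
		dma_sync_single_for_device(dev, *dma, *len, DMA_BIDIRECTIONAL);
	}
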
> This conversion also aligns with the broader effort to modularize and
> unify XDP support across Intel Ethernet drivers.
> 
> Tested on various workloads including netperf and XDP modes (PASS, DROP,
> TX) with and without IOMMU. No regressions observed.
> 

Thanks for double checking again against 9K MTU :D

> Last but not least, it is suspected that this series may also help
> mitigate the memory consumption issues recently reported in the driver.
> For further details, see:
> 
> https://lore.kernel.org/intel-wired-lan/cak8ffz4hy6gujnenz3wy9jaylzxgfpr7dnzxzgmyoe44car...@mail.gmail.com/
> 

I believe we already resolved the memory leak itself, but if this series
also helps reduce the per-queue memory overhead, that's good too.

> Thanks,
> Michal
> 
> ---
> 
> v2:
>  - Fix the traffic hang seen in iperf3 testing with MTU=9K (Jake).
>  - Fix crashes with MTU=9K during iperf3 testing (Jake).
>  - Improve the logic in the Rx path after it was integrated with libeth
>    (Jake & Olek).
>  - Remove unused variables and structure members (Jake).
>  - Extract the fix for using a bad allocation counter into a separate
>    patch targeted at "net" (Paul).
> 
> 
> v1: 
> https://lore.kernel.org/intel-wired-lan/[email protected]/
> 
> Michal Kubiak (3):
>   ice: remove legacy Rx and construct SKB
>   ice: drop page splitting and recycling
>   ice: switch to Page Pool
> 
>  drivers/net/ethernet/intel/Kconfig            |   1 +
>  drivers/net/ethernet/intel/ice/ice.h          |   3 +-
>  drivers/net/ethernet/intel/ice/ice_base.c     | 124 ++--
>  drivers/net/ethernet/intel/ice/ice_ethtool.c  |  22 +-
>  drivers/net/ethernet/intel/ice/ice_lib.c      |   1 -
>  drivers/net/ethernet/intel/ice/ice_main.c     |  21 +-
>  drivers/net/ethernet/intel/ice/ice_txrx.c     | 645 +++---------------
>  drivers/net/ethernet/intel/ice/ice_txrx.h     |  41 +-
>  drivers/net/ethernet/intel/ice/ice_txrx_lib.c |  65 +-
>  drivers/net/ethernet/intel/ice/ice_txrx_lib.h |   9 -
>  drivers/net/ethernet/intel/ice/ice_virtchnl.c |   5 +-
>  drivers/net/ethernet/intel/ice/ice_xsk.c      | 146 +---
>  drivers/net/ethernet/intel/ice/ice_xsk.h      |   6 +-
>  13 files changed, 215 insertions(+), 874 deletions(-)
> 

Nice to continue seeing significant code size reductions with efforts
like these.

For the series:

Reviewed-by: Jacob Keller <[email protected]>

