> -----Original Message-----
> From: Intel-wired-lan <[email protected]> On Behalf Of Michal Kubiak
> Sent: Thursday, September 25, 2025 2:53 PM
> To: [email protected]
> Cc: Fijalkowski, Maciej <[email protected]>; Lobakin, Aleksander
> <[email protected]>; Keller, Jacob E <[email protected]>;
> Zaremba, Larysa <[email protected]>; [email protected];
> Kitszel, Przemyslaw <[email protected]>; [email protected];
> Nguyen, Anthony L <[email protected]>; Kubiak, Michal <[email protected]>
> Subject: [Intel-wired-lan] [PATCH iwl-next v3 3/3] ice: switch to Page Pool
>
> This patch completes the transition of the ice driver to the Page Pool
> and libeth APIs, following the same direction as commit 5fa4caff59f2
> ("iavf: switch to Page Pool"). With the legacy page splitting and
> recycling logic already removed, the driver is now in a clean state to
> adopt the modern memory model.
>
> The Page Pool integration simplifies buffer management by offloading
> DMA mapping and recycling to the core infrastructure. This eliminates
> the need for driver-specific handling of headroom, buffer sizing, and
> page order. The libeth helper is used for CPU-side processing, while
> DMA-for-device is handled by the Page Pool core.

>
> Additionally, this patch extends the conversion to cover XDP support.
> The driver now uses libeth_xdp helpers for Rx buffer processing, and
> optimizes XDP_TX by skipping per-frame DMA mapping. Instead, all
> buffers are mapped as bi-directional up front, leveraging Page Pool's
> lifecycle management. This significantly reduces overhead in
> virtualized environments.
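
For readers less familiar with the technique: the up-front bi-directional
mapping the commit message describes can be sketched with the upstream
page_pool API roughly as below. This is an illustrative sketch only, not
code from the patch (the actual conversion goes through libeth's
fill-queue helpers rather than open-coding the pool setup), and the
function name and field values are hypothetical.

```c
/* Illustrative sketch, NOT the actual ice/libeth code: let the Page
 * Pool core own DMA mapping so XDP_TX can transmit straight from Rx
 * pages without a per-frame dma_map_single()/dma_unmap_single() pair.
 */
#include <net/page_pool/helpers.h>
#include <linux/dma-mapping.h>

static struct page_pool *rxq_create_pool(struct device *dev, u32 ring_len)
{
	struct page_pool_params pp = {
		/* Core maps each page once at allocation and syncs the
		 * used region back to the device on recycle. */
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		/* Bi-directional mapping is what makes the XDP_TX
		 * fast path possible: the same page is valid for both
		 * Rx DMA and Tx DMA for its whole pool lifetime. */
		.dma_dir	= DMA_BIDIRECTIONAL,
		.dev		= dev,
		.pool_size	= ring_len,	/* illustrative sizing */
		.max_len	= PAGE_SIZE,
	};

	return page_pool_create(&pp);
}
```

With an IOMMU enabled, each map/unmap is an IOTLB-affecting operation,
which is why eliminating the per-frame mapping accounts for the large
XDP_TX gain reported below.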

>
> Performance observations:
>
> - In typical scenarios (netperf, XDP_PASS, XDP_DROP), performance
>   remains on par with the previous implementation.
> - In XDP_TX mode:
>   * With IOMMU enabled, performance improves dramatically (over a 5x
>     increase) due to reduced DMA mapping overhead and better memory
>     reuse.
>   * With IOMMU disabled, performance remains comparable to the
>     previous implementation, with no significant changes observed.
> - In XDP_DROP mode:
>   * For small MTUs (where multiple buffers can be allocated on a
>     single memory page), a performance drop of approximately 20% is
>     observed. According to 'perf top' analysis, the bottleneck is the
>     atomic reference counter increments in the Page Pool.
>   * For normal MTUs (where only one buffer can be allocated within a
>     single memory page), performance remains comparable to baseline
>     levels.
>
> This change is also a step toward a more modular and unified XDP
> implementation across Intel Ethernet drivers, aligning with ongoing
> efforts to consolidate and streamline feature support.

>
> Suggested-by: Maciej Fijalkowski <[email protected]>
> Suggested-by: Alexander Lobakin <[email protected]>
> Reviewed-by: Alexander Lobakin <[email protected]>
> Reviewed-by: Jacob Keller <[email protected]>
> Signed-off-by: Michal Kubiak <[email protected]>
> ---
>  drivers/net/ethernet/intel/Kconfig            |   1 +
>  drivers/net/ethernet/intel/ice/ice_base.c     |  91 ++--
>  drivers/net/ethernet/intel/ice/ice_ethtool.c  |  17 +-
>  drivers/net/ethernet/intel/ice/ice_lib.c      |   1 -
>  drivers/net/ethernet/intel/ice/ice_main.c     |  10 +-
>  drivers/net/ethernet/intel/ice/ice_txrx.c     | 442 ++++--------------
>  drivers/net/ethernet/intel/ice/ice_txrx.h     |  37 +-
>  drivers/net/ethernet/intel/ice/ice_txrx_lib.c |  65 ++-
>  drivers/net/ethernet/intel/ice/ice_txrx_lib.h |   9 -
>  drivers/net/ethernet/intel/ice/ice_xsk.c      |  76 +--
>  drivers/net/ethernet/intel/ice/ice_xsk.h      |   6 +-
>  11 files changed, 203 insertions(+), 552 deletions(-)
>


Tested-by: Saritha Sanigani <[email protected]> (A Contingent Worker at Intel)
