Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool

2025-08-08 Thread Michal Kubiak
On Mon, Jul 14, 2025 at 04:35:26PM +0200, Alexander Lobakin wrote:
> From: Jacob Keller 
> Date: Thu, 10 Jul 2025 15:43:20 -0700
> 
> > 
> > 
> > On 7/7/2025 4:36 PM, Jacob Keller wrote:
> 
> [...]
> 
> > I got this to work with the following diff:
> > 
> > diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.h w/drivers/net/ethernet/intel/ice/ice_txrx.h
> > index 42e74925b9df..6b72608a20ab 100644
> > --- i/drivers/net/ethernet/intel/ice/ice_txrx.h
> > +++ w/drivers/net/ethernet/intel/ice/ice_txrx.h
> > @@ -342,7 +342,6 @@ struct ice_rx_ring {
> > struct ice_tx_ring *xdp_ring;
> > struct ice_rx_ring *next;   /* pointer to next ring in q_vector */
> > struct xsk_buff_pool *xsk_pool;
> > -   u32 nr_frags;
> > u16 rx_buf_len;
> > dma_addr_t dma; /* physical address of ring */
> > u8 dcb_tc;  /* Traffic class of ring */
> > diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.c w/drivers/net/ethernet/intel/ice/ice_txrx.c
> > index 062291dac99c..403b5c54fd2a 100644
> > --- i/drivers/net/ethernet/intel/ice/ice_txrx.c
> > +++ w/drivers/net/ethernet/intel/ice/ice_txrx.c
> > @@ -831,8 +831,7 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> > 
> > /* retrieve a buffer from the ring */
> > rx_buf = &rx_ring->rx_fqes[ntc];
> > -   if (!libeth_xdp_process_buff(xdp, rx_buf, size))
> > -   break;
> > +   libeth_xdp_process_buff(xdp, rx_buf, size);
> > 
> > if (++ntc == cnt)
> > ntc = 0;
> > @@ -852,25 +851,18 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> > 
> > xdp->data = NULL;
> > rx_ring->first_desc = ntc;
> > -   rx_ring->nr_frags = 0;
> > continue;
> >  construct_skb:
> > skb = xdp_build_skb_from_buff(&xdp->base);
> > +   xdp->data = NULL;
> > +   rx_ring->first_desc = ntc;
> > 
> > /* exit if we failed to retrieve a buffer */
> > if (!skb) {
> > -   rx_ring->ring_stats->rx_stats.alloc_page_failed++;
> > -   xdp_verdict = ICE_XDP_CONSUMED;
> > -   xdp->data = NULL;
> > -   rx_ring->first_desc = ntc;
> > -   rx_ring->nr_frags = 0;
> > +   rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
> > break;
> > }
> > 
> > -   xdp->data = NULL;
> > -   rx_ring->first_desc = ntc;
> > -   rx_ring->nr_frags = 0;
> > -
> > stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
> > if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
> >   stat_err_bits))) {
> 
> More or less. I'm taking over this series since Michał's on vacation;
> I'll double-check everything (against iavf and idpf as well).
> 
> Anyway, thanks for the fix.
> 
> > 
> > 
> > --->8---
> > 
> > The essential change is to not break if libeth_xdp_process_buff returns
> > false, since we still need to move the ring forward in this case, and
> > the usual reason it returns false is the zero-length descriptor we
> > sometimes get when using larger MTUs.
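For reference, the first hunk above leaves the buffer-processing step shaped
roughly like this (a simplified sketch based only on the quoted diff, not the
full ice_clean_rx_irq() body):

	/* retrieve a buffer from the ring */
	rx_buf = &rx_ring->rx_fqes[ntc];

	/*
	 * May legitimately return false, e.g. for the zero-length
	 * descriptor seen with larger MTUs; do not break out here,
	 * the ring index still has to advance past that descriptor.
	 */
	libeth_xdp_process_buff(xdp, rx_buf, size);

	if (++ntc == cnt)
		ntc = 0;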
> > 
> > I also dropped some of the updates and re-ordered how we assign
> > xdp->data, and fixed the bug where the ring stats bumped alloc_page_failed
> > instead of alloc_buf_failed as they should have. I think this could be
> > further improved or cleaned up, but it might be better to wait for the
> > full conversion to the XDP helpers.
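And the corresponding piece of the second hunk, again only a sketch mirroring
the quoted diff, with the reordering described above:

construct_skb:
	skb = xdp_build_skb_from_buff(&xdp->base);
	/*
	 * Reset the descriptor bookkeeping whether or not the skb build
	 * succeeded, so the next frame starts from a clean state.
	 */
	xdp->data = NULL;
	rx_ring->first_desc = ntc;

	/* exit if we failed to retrieve a buffer */
	if (!skb) {
		rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
		break;
	}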
> > 
> > Regardless, we need something like this to fix the issues with larger MTU.
> 
> Thanks,
> Olek


Dear Jake and Olek,

Thanks for your support, detailed testing and fixes!

I successfully reproduced the crash while stress-testing the series
using:
 - MTU == 9k,
 - iperf3 (for UDP traffic),
 - a heavy HTTP workload running 20 threads and 10 connections.

After applying the fixes for v2, I observed no issues.

Thanks,
Michal



Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool

2025-07-14 Thread Alexander Lobakin
From: Jacob Keller 
Date: Thu, 10 Jul 2025 15:43:20 -0700

> 
> 
> On 7/7/2025 4:36 PM, Jacob Keller wrote:

[...]

> I got this to work with the following diff:
> 
> diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.h w/drivers/net/ethernet/intel/ice/ice_txrx.h
> index 42e74925b9df..6b72608a20ab 100644
> --- i/drivers/net/ethernet/intel/ice/ice_txrx.h
> +++ w/drivers/net/ethernet/intel/ice/ice_txrx.h
> @@ -342,7 +342,6 @@ struct ice_rx_ring {
> struct ice_tx_ring *xdp_ring;
> struct ice_rx_ring *next;   /* pointer to next ring in q_vector */
> struct xsk_buff_pool *xsk_pool;
> -   u32 nr_frags;
> u16 rx_buf_len;
> dma_addr_t dma; /* physical address of ring */
> u8 dcb_tc;  /* Traffic class of ring */
> diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.c w/drivers/net/ethernet/intel/ice/ice_txrx.c
> index 062291dac99c..403b5c54fd2a 100644
> --- i/drivers/net/ethernet/intel/ice/ice_txrx.c
> +++ w/drivers/net/ethernet/intel/ice/ice_txrx.c
> @@ -831,8 +831,7 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> 
> /* retrieve a buffer from the ring */
> rx_buf = &rx_ring->rx_fqes[ntc];
> -   if (!libeth_xdp_process_buff(xdp, rx_buf, size))
> -   break;
> +   libeth_xdp_process_buff(xdp, rx_buf, size);
> 
> if (++ntc == cnt)
> ntc = 0;
> @@ -852,25 +851,18 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> 
> xdp->data = NULL;
> rx_ring->first_desc = ntc;
> -   rx_ring->nr_frags = 0;
> continue;
>  construct_skb:
> skb = xdp_build_skb_from_buff(&xdp->base);
> +   xdp->data = NULL;
> +   rx_ring->first_desc = ntc;
> 
> /* exit if we failed to retrieve a buffer */
> if (!skb) {
> -   rx_ring->ring_stats->rx_stats.alloc_page_failed++;
> -   xdp_verdict = ICE_XDP_CONSUMED;
> -   xdp->data = NULL;
> -   rx_ring->first_desc = ntc;
> -   rx_ring->nr_frags = 0;
> +   rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
> break;
> }
> 
> -   xdp->data = NULL;
> -   rx_ring->first_desc = ntc;
> -   rx_ring->nr_frags = 0;
> -
> stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
> if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
>   stat_err_bits))) {

More or less. I'm taking over this series since Michał's on vacation;
I'll double-check everything (against iavf and idpf as well).

Anyway, thanks for the fix.

> 
> 
> --->8---
> 
> The essential change is to not break if libeth_xdp_process_buff returns
> false, since we still need to move the ring forward in this case, and
> the usual reason it returns false is the zero-length descriptor we
> sometimes get when using larger MTUs.
> 
> I also dropped some of the updates and re-ordered how we assign
> xdp->data, and fixed the bug where the ring stats bumped alloc_page_failed
> instead of alloc_buf_failed as they should have. I think this could be
> further improved or cleaned up, but it might be better to wait for the
> full conversion to the XDP helpers.
> 
> Regardless, we need something like this to fix the issues with larger MTU.

Thanks,
Olek


Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool

2025-07-10 Thread Jacob Keller


On 7/7/2025 4:36 PM, Jacob Keller wrote:
>> I tried to apply these and test them, but I ran into several issues :(
>>
>> The iperf3 session starts with some traffic and then very quickly dies
>> to zero:
>>
>>> [  5]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>>> [  8]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 10]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 12]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 14]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>>> [SUM]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [  5]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>>> [  8]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 10]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 12]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 14]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>>> [SUM]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [  5]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>>> [  8]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 10]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 12]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 14]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>>> [SUM]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [  5]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>>> [  8]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 10]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 12]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 14]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>>> [SUM]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [  5]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>>> [  8]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 10]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 12]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>>> [ 14]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>>> [SUM]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [  5]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>>> [  8]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>>> [ 10]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>>> [ 12]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>>> [ 14]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>>> [SUM]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>>

I got this to work with the following diff:

diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.h w/drivers/net/ethernet/intel/ice/ice_txrx.h
index 42e74925b9df..6b72608a20ab 100644
--- i/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ w/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -342,7 +342,6 @@ struct ice_rx_ring {
struct ice_tx_ring *xdp_ring;
struct ice_rx_ring *next;   /* pointer to next ring in q_vector */
struct xsk_buff_pool *xsk_pool;
-   u32 nr_frags;
u16 rx_buf_len;
dma_addr_t dma; /* physical address of ring */
u8 dcb_tc;  /* Traffic class of ring */
diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.c w/drivers/net/ethernet/intel/ice/ice_txrx.c
index 062291dac99c..403b5c54fd2a 100644
--- i/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ w/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -831,8 +831,7 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)

/* retrieve a buffer from the ring */
rx_buf = &rx_ring->rx_fqes[ntc];
-   if (!libeth_xdp_process_buff(xdp, rx_buf, size))
-   break;
+   libeth_xdp_process_buff(xdp, rx_buf, size);

if (++ntc == cnt)
ntc = 0;
@@ -852,25 +851,18 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)

xdp->data = NULL;
rx_ring->first_desc = ntc;
-   rx_ring->nr_frags = 0;
continue;
 construct_skb:
skb = xdp_build_skb_from_buff(&xdp->base);
+   xdp->data = NULL;
+   rx_ring->first_desc = ntc;

/* exit if we failed to retrieve a buffer */
if (!skb) {
-   rx_ring->ring_stats->rx_stats.alloc_page_failed++;
-   xdp_verdict = ICE_XDP_CONSUMED;
-   xdp->data = NULL;
-   rx_ring->first_desc = ntc;
-   rx_ring->nr_frags = 0;
+   rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
break;
}

-   xdp->data = NULL;
-   rx_ring->first_desc = ntc;
-   rx_ring->nr_frags = 0;
-
stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
  stat_err_bits))) {


--->8---

The essential

Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool

2025-07-07 Thread Jacob Keller


On 7/7/2025 4:32 PM, Jacob Keller wrote:
> 
> 
> On 7/4/2025 9:18 AM, Michal Kubiak wrote:
>> This series modernizes the Rx path in the ice driver by removing legacy
>> code and switching to the Page Pool API. The changes follow the same
>> direction as previously done for the iavf driver, and aim to simplify
>> buffer management, improve maintainability, and prepare for future
>> infrastructure reuse.
>>
>> An important motivation for this work was addressing reports of poor
>> performance in XDP_TX mode when IOMMU is enabled. The legacy Rx model
>> incurred significant overhead due to per-frame DMA mapping, which
>> limited throughput in virtualized environments. This series eliminates
>> those bottlenecks by adopting Page Pool and bi-directional DMA mapping.
>>
>> The first patch removes the legacy Rx path, which relied on manual skb
>> allocation and header copying. This path has become obsolete due to the
>> availability of build_skb() and the increasing complexity of supporting
>> features like XDP and multi-buffer.
>>
>> The second patch drops the page splitting and recycling logic. While
>> once used to optimize memory usage, this logic introduced significant
>> complexity and hotpath overhead. Removing it simplifies the Rx flow and
>> sets the stage for Page Pool adoption.
>>
>> The final patch switches the driver to use the Page Pool and libeth
>> APIs. It also updates the XDP implementation to use libeth_xdp helpers
>> and optimizes XDP_TX by avoiding per-frame DMA mapping. This results in
>> a significant performance improvement in virtualized environments with
>> IOMMU enabled (over 5x gain in XDP_TX throughput). In other scenarios,
>> performance remains on par with the previous implementation.
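
(For context: the "bi-directional DMA mapping" above is the standard
page_pool behaviour of mapping every pooled page once with DMA_BIDIRECTIONAL,
so XDP_TX can transmit straight out of an Rx buffer without a per-frame
dma_map_single(). The series itself goes through the libeth wrappers rather
than raw page_pool calls; the snippet below is only a generic, minimal sketch
of the idea, and the example_* names are made up for illustration.)

	#include <linux/dma-mapping.h>
	#include <linux/netdevice.h>
	#include <linux/numa.h>
	#include <net/page_pool/helpers.h>
	#include <net/page_pool/types.h>

	/* Hypothetical helper, not part of ice or libeth. */
	static struct page_pool *example_create_rx_pool(struct device *dev,
							struct napi_struct *napi,
							u32 ring_size)
	{
		struct page_pool_params pp = {
			.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
			.order		= 0,
			.pool_size	= ring_size,
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
			.napi		= napi,
			.dma_dir	= DMA_BIDIRECTIONAL,	/* Rx and XDP_TX */
			.max_len	= PAGE_SIZE,
			.offset		= 0,
		};

		/* All pool pages are DMA-mapped once here, not per frame. */
		return page_pool_create(&pp);
	}

	/* Refill and XDP_TX then only look up the already-mapped address. */
	static struct page *example_get_buf(struct page_pool *pool,
					    dma_addr_t *dma)
	{
		struct page *page = page_pool_dev_alloc_pages(pool);

		if (page)
			*dma = page_pool_get_dma_addr(page);

		return page;
	}

	/* Buffers go back to the pool via page_pool_put_full_page(). */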
>>
>> This conversion also aligns with the broader effort to modularize and
>> unify XDP support across Intel Ethernet drivers.
>>
>> Tested on various workloads including netperf and XDP modes (PASS, DROP,
>> TX) with and without IOMMU. No regressions observed.
>>
>> Last but not least, it is suspected that this series may also help
>> mitigate the memory consumption issues recently reported in the driver.
>> For further details, see:
>>
>> https://lore.kernel.org/intel-wired-lan/cak8ffz4hy6gujnenz3wy9jaylzxgfpr7dnzxzgmyoe44car...@mail.gmail.com/
>>
> 
> I tried to apply these and test them, but I ran into several issues :(
> 
> The iperf3 session starts with some traffic and then very quickly dies
> to zero:
> 
>> [  5]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>> [  8]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 10]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 12]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 14]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>> [SUM]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [  5]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>> [  8]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 10]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 12]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 14]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>> [SUM]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [  5]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>> [  8]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 10]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 12]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 14]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>> [SUM]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [  5]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>> [  8]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 10]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 12]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 14]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>> [SUM]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [  5]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>> [  8]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 10]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 12]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>> [ 14]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>> [SUM]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [  5]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>> [  8]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>> [ 10]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>> [ 12]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>> [ 14]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
>> [SUM]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> 
> I eventually got a crash:
> 
> 
>> jekeller-stp-glorfindel login: [  326.338776] [ cut here ]
>> [  326.343440] WARNING: CPU: 109 PID: 0 at include/net/page_pool/helpers.h:297 libeth_rx_recycle_slow+0x2f/0x4f [libeth]
>> [  326.354082] Modules linked in: ice gnss libeth_xdp libeth cfg80211

Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool

2025-07-07 Thread Jacob Keller


On 7/4/2025 9:18 AM, Michal Kubiak wrote:
> This series modernizes the Rx path in the ice driver by removing legacy
> code and switching to the Page Pool API. The changes follow the same
> direction as previously done for the iavf driver, and aim to simplify
> buffer management, improve maintainability, and prepare for future
> infrastructure reuse.
> 
> An important motivation for this work was addressing reports of poor
> performance in XDP_TX mode when IOMMU is enabled. The legacy Rx model
> incurred significant overhead due to per-frame DMA mapping, which
> limited throughput in virtualized environments. This series eliminates
> those bottlenecks by adopting Page Pool and bi-directional DMA mapping.
> 
> The first patch removes the legacy Rx path, which relied on manual skb
> allocation and header copying. This path has become obsolete due to the
> availability of build_skb() and the increasing complexity of supporting
> features like XDP and multi-buffer.
> 
> The second patch drops the page splitting and recycling logic. While
> once used to optimize memory usage, this logic introduced significant
> complexity and hotpath overhead. Removing it simplifies the Rx flow and
> sets the stage for Page Pool adoption.
> 
> The final patch switches the driver to use the Page Pool and libeth
> APIs. It also updates the XDP implementation to use libeth_xdp helpers
> and optimizes XDP_TX by avoiding per-frame DMA mapping. This results in
> a significant performance improvement in virtualized environments with
> IOMMU enabled (over 5x gain in XDP_TX throughput). In other scenarios,
> performance remains on par with the previous implementation.
> 
> This conversion also aligns with the broader effort to modularize and
> unify XDP support across Intel Ethernet drivers.
> 
> Tested on various workloads including netperf and XDP modes (PASS, DROP,
> TX) with and without IOMMU. No regressions observed.
> 
> Last but not least, it is suspected that this series may also help
> mitigate the memory consumption issues recently reported in the driver.
> For further details, see:
> 
> https://lore.kernel.org/intel-wired-lan/cak8ffz4hy6gujnenz3wy9jaylzxgfpr7dnzxzgmyoe44car...@mail.gmail.com/
> 

I tried to apply these and test them, but I ran into several issues :(

The iperf3 session starts with some traffic and then very quickly dies
to zero:

> [  5]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> [  8]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> [ 10]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> [ 12]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> [ 14]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> [SUM]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [  5]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> [  8]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> [ 10]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> [ 12]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> [ 14]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> [SUM]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [  5]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> [  8]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> [ 10]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> [ 12]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> [ 14]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> [SUM]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [  5]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> [  8]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> [ 10]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> [ 12]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> [ 14]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> [SUM]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [  5]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> [  8]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> [ 10]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> [ 12]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> [ 14]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> [SUM]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [  5]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> [  8]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> [ 10]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> [ 12]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> [ 14]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> [SUM]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec

I eventually got a crash:


> jekeller-stp-glorfindel login: [  326.338776] [ cut here ]
> [  326.343440] WARNING: CPU: 109 PID: 0 at include/net/page_pool/helpers.h:297 libeth_rx_recycle_slow+0x2f/0x4f [libeth]
> [  326.354082] Modules linked in: ice gnss libeth_xdp libeth cfg80211 rfkill 
> nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 
> nf_reject_ipv6 nft_reject nft_ct nft_chain_nat ebtable_

Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool

2025-07-04 Thread Michal Kubiak
Hi all,

Just a quick heads-up that I’ll be on vacation for the next three
weeks and won’t be able to actively respond to comments on my
patch series during that time.

For any urgent issues, Olek Lobakin has kindly agreed to cover
for me.

Thanks for your understanding!
Michal