On 5/7/25 1:00 AM, Michal Kubiak wrote:
On Tue, May 06, 2025 at 10:31:59PM -0700, Jesse Brandeburg wrote:
On 4/22/25 8:36 AM, Michal Kubiak wrote:
Hi,

Some of our customers have reported a crash problem when trying to load
the XDP program on machines with a large number of CPU cores. After
extensive debugging, it became clear that the root cause of the problem
lies in the Tx scheduler implementation, which does not seem to be able
to handle the creation of a large number of Tx queues (even though this
number does not exceed the number of available queues reported by the
FW).
This series addresses this problem.

Hi Michal,

Unfortunately this version of the series seems to reintroduce the original
problem (error -22).
Hi Jesse,

Thanks for testing and reporting!

I will take a look at the problem and try to reproduce it locally. I would also
have a few questions inline.

First, was your original problem not the failure with error -5? Or did you have
both (-5 and -22), depending on the scenario/environment?
I am asking because it seems that these two errors occurred at different
initialization stages of the Tx scheduler. Of course, the series
was intended to address both of these issues.


We had a few issues to work through; I just confirmed the original problem we had was -22, with more than 320 CPUs.

I double-checked the patches; they appeared to be applied in our test build
(version 2025.5.8), which contained a 6.12.26 kernel with this series
applied (all 3 patches).

Our setup reports a maximum of 252 combined queues but runs 384 CPUs. By
default it loads an XDP program, then reduces the number of queues to 192
using ethtool. After that we get error -22 and the link goes down.
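For reference, the sequence above boils down to something like the following (interface name "eth0" and the XDP object file are placeholders, not our actual setup; the channel counts are the ones from our machines):

```
# Attach an XDP program (xdp_prog.o is a placeholder object file).
ip link set dev eth0 xdp obj xdp_prog.o sec xdp

# Reduce the number of combined channels while XDP stays attached.
ethtool -L eth0 combined 192

# At this point we see error -22 in the driver and the link goes down.
dmesg | tail
ip link show dev eth0
```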

To be honest, I did not test the scenario in which the number of queues is
reduced while the XDP program is running. This is the first thing I will check.

Cool, I hope it will help your repro, but see below.

Can you please confirm that you did that step on both the current
and the draft version of the series?
It would also be interesting to check what happens if the queue count is
reduced before loading the XDP program.

We noticed we had a difference between the testing of the draft and current series. We have a patch against the kernel that helped us work around this issue, which looked like this:


diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
index ad82ff7d1995..622d409efbce 100644
--- a/drivers/net/ethernet/intel/ice/ice_irq.c
+++ b/drivers/net/ethernet/intel/ice/ice_irq.c
@@ -126,6 +126,10 @@ static void ice_reduce_msix_usage(struct ice_pf *pf, int v_remain)
     }
 }

+static int num_msix = -1;
+module_param(num_msix, int, 0644);
+MODULE_PARM_DESC(num_msix, "Default limit of MSI-X vectors for LAN");
+
 /**
  * ice_ena_msix_range - Request a range of MSIX vectors from the OS
  * @pf: board private structure
@@ -156,7 +160,16 @@ static int ice_ena_msix_range(struct ice_pf *pf)
     v_wanted = v_other;

     /* LAN traffic */
-    pf->num_lan_msix = num_cpus;
+    /* Cloudflare: allocate the msix vector count based on module param
+     * num_msix. Alternately, default to half the number of CPUs or 128,
+     * whichever is smallest, and should the number of CPUs be 2, 1, or
+     * 0, then default to 2 vectors
+     */
+    if (num_msix != -1)
+        pf->num_lan_msix = num_msix;
+    else
+        pf->num_lan_msix = min_t(u16, (num_cpus / 2) ?: 2, 128);
+
     v_wanted += pf->num_lan_msix;

     /* RDMA auxiliary driver */


The module parameter helped us limit the number of vectors, which allowed our machines to finish booting before your new patches were available.

The new series failed when this value was set to 252, and the "draft" series also fails in this configuration (this is new info as of today).


The original version you had sent us was working fine when we tested it, so
the problem seems to be between those two versions. I suppose it could be
possible (but unlikely, because I used git to apply the patches) that there
was something wrong with the source code, but I sincerely doubt it, as the
patches applied cleanly.

So the problem is also related to the initial number of queues the driver starts with. The reason it worked fine was that we tested "draft" (and now the new patches too) with the module parameter set to 384 queues (with 384 CPUs), or let it default to 128 queues; both worked with the new and old series. 252 seems to be some magic failure-causing number with both patch sets, and we don't know why.


Thanks for your patience while we worked through the testing differences here today. Hope this helps; let me know if you have more questions.


Jesse
