On Thu, May 07, 2026 at 10:51:05PM -0700, Shradha Gupta wrote:

...

> > > We can definitely get our throughput run results on other suggestions
> > > you have. And about that, I just needed a bit more clarity on what to
> > > test against. Are you suggesting, with irq_setup() intact and in use, we
> > > configure the non-mana IRQs to say CPU0 and capture the numbers?
> > 
> > Can you try this:
> > 
> >        while(len--)
> >                // Or cpu_online_mask or cpu_all_mask?
> >                irq_set_affinity_and_hint(*irqs++, NULL);
> > 
> > And compare it to the linear version under your vCPU scenario?
> > 
> > Can you run your throughput test alone and in parallel with some
> > IRQ torture test?
> > 
> >         stress-ng --timer 4 --timeout 60s
> > 
> > And maybe pin the stress test to the default CPU. Assuming it's 0:
> > 
> >         taskset -c 0 stress-ng --timer 4 --timeout 60s
> > 
> > Unless the 'linear' version is significantly faster, I'd stick to the
> > above.
> > 
> > Thanks,
> > Yury
> 
> Hey Yury,
> 
> We tried a few tests with your suggestion, and throughput is the same
> as with the linear distribution approach. We stressed CPU0 in both
> cases and the results were similar: no IRQ migration was observed in
> either case and no throughput drop.
>  
> But one observation I had was that "irq_set_affinity_and_hint(*irqs++,
> NULL);" is essentially a no-op and we end up relying on the initial
> placement from pci_alloc_irq_vectors().

Yes, you are, assuming you're not binding them earlier in your call chain.

> Even though we were not able to reproduce it in these tests, with this
> distribution there is a chance we end up clustering the mana queue
> IRQs while other vCPUs are not running any network load.

That sounds like an IRQ balancer bug which you're unable to reproduce. 

> It's because the placement depends on
> system-wide IRQ state at allocation time.

I don't understand this point. The 

        irq_set_affinity_and_hint(*irqs++, NULL);

simply means: I trust the system IRQ balancer to pick the best CPU for
my IRQ at runtime. It doesn't refer to any "IRQ state at allocation
time".
  
> The linear approach however guarantees each queue IRQ lands on a
> distinct vCPU regardless of system state. Even after stressing the
> CPUs using stress-ng, we did not observe any significant throughput
> drop.

If you just do nothing, it would lead to the same numbers, right? What
does that "non-significant throughput drop" mean? It sounds like the
linear approach is slightly worse.

--

So, as you can't demonstrate solid benefit for the 'linear' IRQ placement,
I would just stick to the no-affinity logic.

Thanks,
Yury
