On 2024-12-22 2:23 a.m., Shay Drori wrote:
On 18/12/2024 18:58, Ahmed Zaki wrote:
Move IRQ affinity management into the napi struct. All drivers that
already use netif_napi_set_irq() are converted to the new API, except
mlx5, which implements IRQ pools; converting it does not look trivial.
Tested on bnxt, ice and idpf.
---
Open question: is cpu_online_mask the best default mask? Drivers
handle this differently.
cpu_online_mask is not the best default mask for IRQ affinity management.
Here are two reasons:
- Performance gains from driver-specific CPU assignment: many drivers
  assign a different CPU to each IRQ to optimize performance, and this
  assignment directly affects CPU utilization.
- Impact of NUMA node distance on traffic performance: NUMA topology
  plays a crucial role in IRQ performance. Assigning IRQs to CPUs on
  the same NUMA node as the associated device minimizes the latency
  caused by remote memory access. [1]
[1] For more details on the NUMA preference, see commit
2acda57736de1e486036b90a648e67a3599080a1.
Thanks for replying.
I will use cpumask_local_spread() (which now considers NUMA distances)
in the next iteration.