On Wed, Apr 22, 2026 at 02:52:08PM -0400, Aaron Tomlin wrote:
> From: Daniel Wagner <[email protected]>
>
> Ensure that IRQ affinity setup also respects the queue-to-CPU mapping
> constraints provided by the block layer. This allows the NVMe driver
> to avoid assigning interrupts to CPUs that the block layer has excluded
> (e.g., isolated CPUs).
>
> Signed-off-by: Daniel Wagner <[email protected]>
> Reviewed-by: Martin K. Petersen <[email protected]>
> Reviewed-by: Hannes Reinecke <[email protected]>
> Signed-off-by: Aaron Tomlin <[email protected]>
> ---
>  drivers/nvme/host/pci.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index db5fc9bf6627..daa041d15d3c 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2862,6 +2862,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
>  		.pre_vectors	= 1,
>  		.calc_sets	= nvme_calc_irq_sets,
>  		.priv		= dev,
> +		.mask		= blk_mq_possible_queue_affinity(),
>  	};
>  	unsigned int irq_queues, poll_queues;
>  	unsigned int flags = PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY;
> --
> 2.51.0
Hi Daniel, Martin, Hannes,

I think we can drop this patch, along with the other similar changes
[1][2]. In the next iteration of patch 12 [3] in my queue,
irq_create_affinity_masks() has been modified to respect the housekeeping
CPU mask. By intersecting the base affinity mask with the HK_TYPE_IO_QUEUE
mask prior to topological distribution (group_mask_cpus_evenly()), we
ensure that managed interrupts are kept off isolated CPUs.

[1]: https://lore.kernel.org/lkml/[email protected]/
[2]: https://lore.kernel.org/lkml/[email protected]/
[3]: https://lore.kernel.org/lkml/[email protected]/

--
Aaron Tomlin