On Wed, Sep 30, 2015 at 6:45 AM, Keith Busch <keith.bu...@intel.com> wrote:
> On Tue, 29 Sep 2015, Ming Lei wrote:
>>
>> Yes, I thought of that before, but it has the following cons:
>>
>> - some drivers/devices may need different IRQ affinity policy, such as
>>   virtio devices which have their own set-affinity handler (see
>>   virtqueue_set_affinity()),
>
> That's not a very good example to support your cause; virtio_scsi's use
> is a perfect example for one that would benefit from letting blk-mq
> handle affinity. virtio_scsi sets affinity only when there is a 1:1
> mapping of cpu's to queue's, but this driver doesn't know the mapping
> that blk-mq used, creating a potentially less than optimal mapping.
The 1:1 mapping was introduced before blk-mq, and that doesn't mean we
have to do the same for blk-mq. Actually, what I mean is that virtio-scsi
just lets the 1st CPU of the cpumask handle the virt-queue's irq, instead
of all CPUs mapped to the hw queue (virt-queue); a rough sketch of what I
mean is appended at the end of this mail.

>> - block core has to get the irq vector information, which has to be
>>   setup/finalized before blk-mq uses it for setting irq affinity; for
>>   example, in the case of NVMe's admin queue, its vector can be changed
>>   after the admin queue's initialization.
>
> Why do you want to put a hint on the admin queue's irq?

No, I don't want to, and it is just one example; I mean other
drivers/devices may have this kind of situation too.

--
Ming Lei
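Just to make the virtio-scsi point concrete, here is a rough, untested
sketch: take the first CPU of the blk-mq hw queue's cpumask and use it as
the irq affinity hint for the matching virt-queue. The helper name is made
up for illustration; only cpumask_first() and virtqueue_set_affinity() are
real interfaces, and this assumes the current virtqueue_set_affinity(vq,
int cpu) signature.

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/virtio.h>

/*
 * Sketch only: pin the virt-queue's irq to the first CPU of the hw
 * queue's cpumask instead of spreading it over every CPU mapped to
 * that hw queue.
 */
static void sketch_set_vq_affinity(struct blk_mq_hw_ctx *hctx,
				   struct virtqueue *vq)
{
	unsigned int cpu = cpumask_first(hctx->cpumask);

	/* the hw queue's cpumask may be empty; skip the hint then */
	if (cpu < nr_cpu_ids)
		virtqueue_set_affinity(vq, cpu);
}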