On 03/09/2016 03:45 PM, wenxi...@linux.vnet.ibm.com wrote:
> From: Wen Xiong <wenxi...@linux.vnet.ibm.com>
> 
> Add scsi multi-queue support in ipr driver. This feature
> is enabled by setting ipr_use_blk_mq=1 when loading ipr
> module and scsi blk-mq is enabled in the kernel.
> 
> Signed-off-by: Wen Xiong <wenxi...@linux.vnet.ibm.com>
> ---
>  drivers/scsi/ipr.c | 15 +++++++++++++--
>  1 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
> index 468c690..e50ed51 100644
> --- a/drivers/scsi/ipr.c
> +++ b/drivers/scsi/ipr.c
> @@ -230,6 +230,8 @@ MODULE_PARM_DESC(number_of_msix, "Specify the number of MSIX interrupts to use o
>  module_param_named(fast_reboot, ipr_fast_reboot, int, S_IRUGO | S_IWUSR);
>  MODULE_PARM_DESC(fast_reboot, "Skip adapter shutdown during reboot. Set to 1 to enable. (default: 0)");
>  module_param_named(cpu_map, ipr_cpu_map, int, 0);
> +module_param_named(use_blk_mq, ipr_use_blk_mq, uint, S_IRUGO);
> +MODULE_PARM_DESC(use_blk_mq, "ipr use block mq enable/disable. (default:1)");
>  MODULE_PARM_DESC(cpu_map, "Enable CPU affinity per adapter. (default:1)");
>  MODULE_LICENSE("GPL");
>  MODULE_VERSION(IPR_DRIVER_VERSION);
> @@ -6396,6 +6398,7 @@ static int ipr_queuecommand(struct Scsi_Host *shost,
>  	int rc;
>  	struct ipr_hrr_queue *hrrq;
>  	int hrrq_id;
> +	uint32_t tag;
Can you change this to u32 instead, to be consistent with the rest of the code?

> 
>  	ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
> 
> @@ -6409,7 +6412,12 @@ static int ipr_queuecommand(struct Scsi_Host *shost,
>  		return rc;
>  	}
> 
> -	hrrq_id = ipr_get_hrrq_index(ioa_cfg);
> +	if (ioa_cfg->use_blk_mq && shost_use_blk_mq(scsi_cmd->device->host)) {
> +		tag = blk_mq_unique_tag(scsi_cmd->request);
> +		hrrq_id = blk_mq_unique_tag_to_hwq(tag);
> +	} else
> +		hrrq_id = ipr_get_hrrq_index(ioa_cfg);
> +

Have you tried this patch without this section? It seems like this might result
in less optimal irq affinity. Without this section of the patch, we'd select an
HRRQ based on the CPU we are running on, thereby ensuring cache affinity. With
this section of the patch, I'm not seeing how our IRQ affinity is utilized by
the block layer to map an mq_hw_ctx to IRQ/CPU affinity binding. Maybe I need
to look closer...

>  	hrrq = &ioa_cfg->hrrq[hrrq_id];
> 
>  	spin_lock_irqsave(hrrq->lock, hrrq_flags);
> @@ -10331,6 +10339,8 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
>  			(unsigned int)num_online_cpus(),
>  			(unsigned int)IPR_MAX_HRRQ_NUM);
> 
> +	host->nr_hw_queues = ioa_cfg->hrrq_num;
> +
>  	if ((rc = ipr_save_pcix_cmd_reg(ioa_cfg)))
>  		goto out_msi_disable;
> 
> @@ -10567,7 +10577,7 @@ static int ipr_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
>  		return rc;
>  	}
> 
> -	rc = scsi_add_host(ioa_cfg->host, &pdev->dev);
> +	rc = scsi_add_host_with_dma(ioa_cfg->host, &pdev->dev, &pdev->dev);
> 
>  	if (rc) {
>  		__ipr_remove(pdev);
> @@ -10596,6 +10606,7 @@ static int ipr_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
> 
>  	scsi_scan_host(ioa_cfg->host);
>  	ioa_cfg->iopoll_weight = ioa_cfg->chip_cfg->iopoll_weight;
> +	ioa_cfg->use_blk_mq = ipr_use_blk_mq;
> 
>  	if (ioa_cfg->iopoll_weight && ioa_cfg->sis64 && ioa_cfg->nvectors > 1) {
>  		for (i = 1; i < ioa_cfg->hrrq_num; i++) {

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center
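P.S. In case it helps the discussion, here is a minimal stand-alone sketch of
what the blk_mq_unique_tag()/blk_mq_unique_tag_to_hwq() pair appears to hand
back. The helper names below (make_unique_tag, unique_tag_to_hwq) and the
16-bit split are my own illustration of how I read BLK_MQ_UNIQUE_TAG_BITS in
include/linux/blk-mq.h, not the kernel code itself:

#include <stdint.h>
#include <stdio.h>

/* Illustration only: mirrors the BLK_MQ_UNIQUE_TAG_BITS/MASK split
 * from include/linux/blk-mq.h (a 16-bit split is assumed here). */
#define UNIQUE_TAG_BITS 16
#define UNIQUE_TAG_MASK ((1u << UNIQUE_TAG_BITS) - 1)

/* Roughly what blk_mq_unique_tag() hands back: the hw queue number
 * in the high bits, the per-queue tag in the low bits. */
static uint32_t make_unique_tag(uint16_t hwq, uint16_t tag)
{
	return ((uint32_t)hwq << UNIQUE_TAG_BITS) | (tag & UNIQUE_TAG_MASK);
}

/* Roughly what blk_mq_unique_tag_to_hwq() recovers, i.e. the hrrq_id
 * the patch would use to pick an HRRQ. */
static uint16_t unique_tag_to_hwq(uint32_t unique_tag)
{
	return (uint16_t)(unique_tag >> UNIQUE_TAG_BITS);
}

int main(void)
{
	uint32_t t = make_unique_tag(3, 42);

	printf("unique tag 0x%08x -> hwq %u\n", t, unique_tag_to_hwq(t));
	return 0;
}

If that reading is right, the hrrq_id the patch derives is simply whichever hw
queue blk-mq dispatched the request on, so the open question above is whether
that mapping lines up with the per-HRRQ IRQ affinity we set up.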