On Mon, Feb 01, 2021 at 10:30:17AM -0800, Jianxiong Gao wrote:
> @@ -868,12 +871,24 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
>       if (!iod->nents)
>               goto out_free_sg;
>  
> +     offset_ret = dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
> +     if (offset_ret) {
> +             dev_warn(dev->dev, "dma_set_min_align_mask failed to set offset\n");
> +             goto out_free_sg;
> +     }
> +
>       if (is_pci_p2pdma_page(sg_page(iod->sg)))
>               nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
>                               iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN);
>       else
>               nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
>                                            rq_dma_dir(req), DMA_ATTR_NO_WARN);
> +
> +     offset_ret = dma_set_min_align_mask(dev->dev, 0);
> +     if (offset_ret) {
> +             dev_warn(dev->dev, "dma_set_min_align_mask failed to reset offset\n");
> +             goto out_free_sg;
> +     }
>       if (!nr_mapped)
>               goto out_free_sg;

Why is this setting being done and undone on each IO? Wouldn't it be
more efficient to set it once during device initialization?

And more importantly, this isn't thread-safe: one CPU may be resetting the
device's DMA min alignment mask to 0 while another CPU still expects it to
be NVME_CTRL_PAGE_SIZE - 1 for a mapping in progress.
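
Something like the following rough sketch, assuming the
dma_set_min_align_mask() helper this series introduces, would set the
mask once next to the existing DMA mask setup in nvme_probe() (the exact
call site is only illustrative):

	/*
	 * In nvme_probe(), after dma_set_mask_and_coherent(): request
	 * NVME_CTRL_PAGE_SIZE-aligned bounce buffering once for the
	 * device's lifetime, so the IO path never has to toggle it.
	 */
	if (dma_set_min_align_mask(&pdev->dev, NVME_CTRL_PAGE_SIZE - 1))
		dev_warn(&pdev->dev,
			 "failed to set DMA min alignment mask\n");

That would also drop the reset-to-0 call and its error handling from
nvme_map_data(), and there would be no window where the mask is wrong
for a mapping already in flight.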