On Tue, Sep 02, 2025 at 03:59:37PM -0600, Keith Busch wrote:
> On Tue, Sep 02, 2025 at 10:49:48PM +0200, Marek Szyprowski wrote:
> > On 19.08.2025 19:36, Leon Romanovsky wrote:
> > > @@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
> > >  static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> > >  		struct blk_dma_iter *iter, struct phys_vec *vec)
> > >  {
> > > -	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
> > > -			offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
> > > +	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
> > > +			rq_dma_dir(req), 0);
> > >  	if (dma_mapping_error(dma_dev, iter->addr)) {
> > >  		iter->status = BLK_STS_RESOURCE;
> > >  		return false;
> >
> > I wonder where the corresponding dma_unmap_page() call is, and its
> > change to dma_unmap_phys()...
>
> You can't do that in the generic layer, so it's up to the caller. The
> dma addrs that blk_dma_iter yields are used in a caller specific
> structure. For example, for NVMe, it goes into an NVMe PRP. The generic
> layer doesn't know what that is, so the driver has to provide the
> unmapping.
To be specific, I think it is this hunk in another patch that matches
the above:

@@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;

+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-			       iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+			       iod->dma_vecs[i].len, rq_dma_dir(req), attrs);

And it is functionally fine to split the series like this, because
dma_unmap_page_attrs() is just a thin wrapper around dma_unmap_phys():

void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	if (unlikely(attrs & DMA_ATTR_MMIO))
		return;
	dma_unmap_phys(dev, addr, size, dir, attrs);
}
EXPORT_SYMBOL(dma_unmap_page_attrs);

Jason
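
Putting the two halves together, the caller-side pairing Keith describes
looks roughly like the sketch below. This is illustrative only, not code
from the series: dma_map_phys()/dma_unmap_phys(), iter.addr and
rq_dma_dir() come from the hunks quoted above, blk_rq_dma_map_iter_start()
and blk_rq_dma_map_iter_next() are the block layer iterator under
discussion, and struct my_dma_vec with my_map_data()/my_unmap_data() are
made-up driver bookkeeping standing in for something like iod->dma_vecs[]
in nvme-pci (the IOVA-coalesced path and the MMIO attrs handling are left
out for brevity).

/* Hypothetical driver-side bookkeeping, standing in for iod->dma_vecs[]. */
struct my_dma_vec {
	dma_addr_t	addr;
	u32		len;
};

static blk_status_t my_map_data(struct request *req, struct device *dma_dev,
		struct dma_iova_state *state, struct my_dma_vec *vecs,
		unsigned int *nr_vecs)
{
	struct blk_dma_iter iter;
	unsigned int n = 0;

	if (!blk_rq_dma_map_iter_start(req, dma_dev, state, &iter))
		return iter.status;
	do {
		/*
		 * The generic layer does not remember iter.addr; the driver
		 * stores it in its own structure (e.g. an NVMe PRP list).
		 */
		vecs[n].addr = iter.addr;
		vecs[n].len = iter.len;
		n++;
	} while (blk_rq_dma_map_iter_next(req, dma_dev, state, &iter));
	if (iter.status != BLK_STS_OK)
		return iter.status;	/* real code would unwind vecs[0..n) */

	*nr_vecs = n;
	return BLK_STS_OK;
}

static void my_unmap_data(struct request *req, struct device *dma_dev,
		struct my_dma_vec *vecs, unsigned int nr_vecs)
{
	unsigned int i;

	/* Mirrors the nvme_free_prps() hunk: per-segment dma_unmap_phys(). */
	for (i = 0; i < nr_vecs; i++)
		dma_unmap_phys(dma_dev, vecs[i].addr, vecs[i].len,
			       rq_dma_dir(req), 0);
}

This also makes it clear why the unmap cannot live in the generic layer:
by the time the request completes, only the driver-owned vecs[] (the
PRP/SGL in NVMe's case) still knows which addresses were handed out.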