On Fri, Jun 27, 2025 at 08:02:13PM +0300, Leon Romanovsky wrote:
> On Fri, Jun 27, 2025 at 03:44:10PM +0200, Marek Szyprowski wrote:
> > On 25.06.2025 15:18, Leon Romanovsky wrote:
> > > This series refactors the DMA mapping to use physical addresses
> > > as the primary interface instead of page+offset parameters. This
> > > change aligns the DMA API with the underlying hardware reality where
> > > DMA operations work with physical addresses, not page structures.
> > >
> > > The series consists of 8 patches that progressively convert the DMA
> > > mapping infrastructure from page-based to physical address-based APIs:
> > >
> > > The series maintains backward compatibility by keeping the old
> > > page-based API as wrapper functions around the new physical
> > > address-based implementations.
> >
> > Thanks for this rework! I assume that the next step is to add a map_phys
> > callback also to dma_map_ops and teach the various dma-mapping providers
> > to use it, to avoid more phys-to-page-to-phys conversions.
>
> Probably Christoph will say yes, however I personally don't see any
> benefit in this. Maybe I'm wrong here, but none of the platforms with an
> existing .map_page() implementation support p2p anyway, so they won't
> benefit from such a conversion.
>
> > I only wonder if this newly introduced dma_map_phys()/dma_unmap_phys()
> > API is also suitable for the recently discussed PCI P2P DMA? While
> > adding a new API, maybe we should take this into account?
>
> First, an immediate user (not related to p2p) is the blk layer:
> https://lore.kernel.org/linux-nvme/bcdcb5eb-17ed-412f-bf5c-303079798...@nvidia.com/T/#m7e715697d4b2e3997622a3400243477c75cab406
>
> +static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> +		struct blk_dma_iter *iter, struct phys_vec *vec)
> +{
> +	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
> +			offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
> +	if (dma_mapping_error(dma_dev, iter->addr)) {
> +		iter->status = BLK_STS_RESOURCE;
> +		return false;
> +	}
> +	iter->len = vec->len;
> +	return true;
> +}
>
> The block layer has started to store phys addresses instead of struct pages,
> so this phys_to_page() conversion in the data path will be avoided.
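To make the blk side concrete: this is only how I imagine blk_dma_map_direct()
ending up once the iterator carries phys addresses (a sketch, not the actual
blk patch), assuming the same dma_map_phys() calling convention as in the vfio
sketch further down:

        static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
                        struct blk_dma_iter *iter, struct phys_vec *vec)
        {
                /* Map the physical range directly, no phys_to_page() /
                 * offset_in_page() round trip anymore. */
                iter->addr = dma_map_phys(dma_dev, vec->paddr, 0, vec->len,
                                rq_dma_dir(req), 0);
                if (dma_mapping_error(dma_dev, iter->addr)) {
                        iter->status = BLK_STS_RESOURCE;
                        return false;
                }
                iter->len = vec->len;
                return true;
        }

The only change compared to the quoted hunk above is that the page conversion
disappears; the error handling stays the same.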
I have almost completed the main user of this dma_map_phys() callback. It is a
rewrite of this patch:

[PATCH v3 3/3] vfio/pci: Allow MMIO regions to be exported through dma-buf
https://lore.kernel.org/all/20250307052248.405803-4-vivek.kasire...@intel.com/

The whole populate_sgt()->dma_map_resource() block looks different now and
relies on dma_map_phys(), as we are exporting memory without struct pages.
It will be something like this:

        for (i = 0; i < priv->nr_ranges; i++) {
                phys = pci_resource_start(priv->vdev->pdev,
                                          dma_ranges[i].region_index);
                phys += dma_ranges[i].offset;

                if (priv->bus_addr) {
                        addr = pci_p2pdma_bus_addr_map(&p2pdma_state, phys);
                        fill_sg_entry(sgl, dma_ranges[i].length, addr);
                        sgl = sg_next(sgl);
                } else if (dma_use_iova(&priv->state)) {
                        ret = dma_iova_link(attachment->dev, &priv->state, phys,
                                            priv->mapped_len,
                                            dma_ranges[i].length, dir, attrs);
                        if (ret)
                                goto err_unmap_dma;

                        priv->mapped_len += dma_ranges[i].length;
                } else {
                        addr = dma_map_phys(attachment->dev, phys, 0,
                                            dma_ranges[i].length, dir, attrs);
                        ret = dma_mapping_error(attachment->dev, addr);
                        if (ret)
                                goto err_unmap_dma;

                        fill_sg_entry(sgl, dma_ranges[i].length, addr);
                        sgl = sg_next(sgl);
                }
        }

        if (dma_use_iova(&priv->state) && !priv->bus_addr) {
                ret = dma_iova_sync(attachment->dev, &priv->state, 0,
                                    priv->mapped_len);
                if (ret)
                        goto err_unmap_dma;

                fill_sg_entry(sgl, priv->mapped_len, priv->state.addr);
        }

> > My main concern is the lack of the source phys addr passed to the
> > dma_unmap_phys() function, and I'm aware that this might complicate
> > the code conversion from the old dma_map/unmap_page() API a bit.

It is not needed for now; all the p2p logic is external to the DMA API.

Thanks

> > Best regards
> > --
> > Marek Szyprowski, PhD
> > Samsung R&D Institute Poland
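On the unmap question, a minimal sketch of the teardown I have in mind for the
dma-buf case (sgt/sgl/i/dir/attrs come from the unmap callback context, and I
assume dma_unmap_phys() mirrors dma_unmap_page() in taking the dma_addr_t,
size, direction and attrs), which is why the source phys addr does not have to
be carried to the unmap side:

        if (priv->bus_addr) {
                /* Bus addresses from pci_p2pdma_bus_addr_map() need no unmap. */
        } else if (dma_use_iova(&priv->state)) {
                /* Tear down the whole linked IOVA range in one go. */
                dma_iova_destroy(attachment->dev, &priv->state,
                                 priv->mapped_len, dir, attrs);
        } else {
                for_each_sgtable_dma_sg(sgt, sgl, i)
                        dma_unmap_phys(attachment->dev, sg_dma_address(sgl),
                                       sg_dma_len(sgl), dir, attrs);
        }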