On Tue, Aug 19, 2025 at 08:36:53PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leo...@nvidia.com>
>
> Extend base DMA page API to handle MMIO flow and follow
> existing dma_map_resource() implementation to rely on dma_map_direct()
> only to take DMA direct path.
I would reword this a little bit to:

 dma-mapping: implement DMA_ATTR_MMIO for dma_(un)map_page_attrs()

 Make dma_map_page_attrs() and dma_unmap_page_attrs() respect
 DMA_ATTR_MMIO. DMA_ATTR_MMIO makes the functions behave the same as
 dma_(un)map_resource():

  - No swiotlb is possible
  - Legacy dma_ops arches use ops->map_resource()
  - No kmsan
  - No arch_dma_map_phys_direct()

 The prior patches have made the internal functions called here support
 DMA_ATTR_MMIO.

 This is also preparation for turning dma_map_resource() into an inline
 calling dma_map_phys(DMA_ATTR_MMIO) to consolidate the flows.

> @@ -166,14 +167,25 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
>  		return DMA_MAPPING_ERROR;
>  
>  	if (dma_map_direct(dev, ops) ||
> -	    arch_dma_map_phys_direct(dev, phys + size))
> +	    (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
>  		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);

PPC is the only user of arch_dma_map_phys_direct() and it looks like it
should be called on MMIO memory. Seems like another inconsistency with
map_resource. I'd leave it like the above though for this series.

>  	else if (use_dma_iommu(dev))
>  		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
> -	else
> +	else if (is_mmio) {
> +		if (!ops->map_resource)
> +			return DMA_MAPPING_ERROR;

Probably written like:

	if (ops->map_resource)
		addr = ops->map_resource(dev, phys, size, dir, attrs);
	else
		addr = DMA_MAPPING_ERROR;

As I think some of the design here is to run the trace even on the
failure path?

Otherwise looks OK

Reviewed-by: Jason Gunthorpe <j...@nvidia.com>

Jason
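
P.S. For what it's worth, I'd expect the consolidated dma_map_resource()
to end up as something like the below -- just a sketch, assuming
dma_map_phys() ends up mirroring the dma_map_resource() argument list;
the actual signature in the series may differ:

	/* Sketch only: dma_map_phys() is assumed to take the same
	 * arguments as dma_map_resource(), with DMA_ATTR_MMIO selecting
	 * the MMIO flow (no swiotlb, no kmsan, ops->map_resource() for
	 * legacy dma_ops arches). */
	static inline dma_addr_t dma_map_resource(struct device *dev,
			phys_addr_t phys_addr, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
	{
		return dma_map_phys(dev, phys_addr, size, dir,
				    attrs | DMA_ATTR_MMIO);
	}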