On Mon, Aug 04, 2025 at 03:42:43PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leo...@nvidia.com>
>
> Extend base DMA page API to handle MMIO flow.
I would mention here that this follows the long ago agreement that we
don't need to enable P2P in the legacy dma_ops area. Simply failing when
getting an ATTR_MMIO is OK.

> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -158,6 +158,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
>  {
>  	const struct dma_map_ops *ops = get_dma_ops(dev);
>  	phys_addr_t phys = page_to_phys(page) + offset;
> +	bool is_mmio = attrs & DMA_ATTR_MMIO;
>  	dma_addr_t addr;
>
>  	BUG_ON(!valid_dma_direction(dir));
> @@ -166,12 +167,23 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
>  		return DMA_MAPPING_ERROR;
>
>  	if (dma_map_direct(dev, ops) ||
> -	    arch_dma_map_phys_direct(dev, phys + size))
> +	    (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
>  		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);

I don't know this area, maybe explain a bit in the commit message how
you see ATTR_MMIO interacting with arch_dma_map_phys_direct?

>  	else if (use_dma_iommu(dev))
>  		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
> -	else
> +	else if (is_mmio) {
> +		if (!ops->map_resource)
> +			return DMA_MAPPING_ERROR;
> +
> +		addr = ops->map_resource(dev, phys, size, dir, attrs);
> +	} else {
> +		/*
> +		 * All platforms which implement .map_page() don't support
> +		 * non-struct page backed addresses.
> +		 */
>  		addr = ops->map_page(dev, page, offset, size, dir, attrs);

The comment could be clearer, maybe just:

  The dma_ops API contract for ops->map_page() requires kmappable
  memory, while ops->map_resource() does not.

But this approach looks good to me, it prevents non-kmappable phys from
going down to the legacy dma_ops map_page where it cannot work.
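To make that failure mode concrete, a minimal caller-side sketch - only
DMA_ATTR_MMIO and the mapping calls are from this series, bar_page is a
made-up variable standing in for some MMIO-backed page:

	/*
	 * Sketch: map one MMIO page. On legacy dma_ops without
	 * ->map_resource() this now fails cleanly instead of feeding
	 * non-kmappable phys into ->map_page().
	 */
	dma_addr_t dma = dma_map_page_attrs(dev, bar_page, 0, PAGE_SIZE,
					    DMA_TO_DEVICE, DMA_ATTR_MMIO);

	if (dma_mapping_error(dev, dma))
		return -EIO;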