On Thu, Mar 12, 2026 at 09:26:45AM -0300, Jason Gunthorpe wrote:
> On Wed, Mar 11, 2026 at 09:08:51PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <[email protected]>
> >
> > HMM mirroring can work on coherent systems without SWIOTLB path only.
> > Until introduction of DMA_ATTR_REQUIRE_COHERENT, there was no reliable
> > way to indicate that and various approximation was done:
>
> HMM is fundamentally about allowing a sophisticated device to
> independently DMA to a process's memory concurrently with the CPU
> accessing the same memory. It is similar to SVA but does not rely on
> IOMMU support. Since the entire use model is concurrent access to the
> same memory it becomes fatally broken as a uAPI if SWIOTLB is
> replacing the memory, or the CPU caches are incoherent with DMA.
>
> Till now there was no reliable way to indicate that and various
> approximation was done:
>
> > int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map,
> > 		      size_t nr_entries, size_t dma_entry_size)
> > {
> > <...>
> > 	/*
> > 	 * The HMM API violates our normal DMA buffer ownership rules and can't
> > 	 * transfer buffer ownership. The dma_addressing_limited() check is a
> > 	 * best approximation to ensure no swiotlb buffering happens.
> > 	 */
> > 	dma_need_sync = !dev->dma_skip_sync;
> > 	if (dma_need_sync || dma_addressing_limited(dev))
> > 		return -EOPNOTSUPP;
>
> Can it get dropped now then?
Better not: it allows us to reject the caller much earlier than the DMA
mapping flow does. It is much saner to fail during UMEM ODP creation than
to start failing on ODP page faults.

> > So let's mark mapped buffers with DMA_ATTR_REQUIRE_COHERENT attribute
> > to prevent DMA debugging warnings for cache overlapped entries.
>
> Well, that isn't the main motivation, this prevents silent data
> corruption if someone tries to use hmm in a system with swiotlb or
> incoherent DMA,
>
> Looks OK otherwise
>
> Reviewed-by: Jason Gunthorpe <[email protected]>
>
> Jason
