On 2022-11-24 17:24, Daniel Vetter wrote:
On Thu, Nov 24, 2022 at 11:11:21AM +0000, Robin Murphy wrote:
On 2022-11-23 17:28, Daniel Vetter wrote:
This code was added in b65e64f7ccd4 ("drm/cma: Use
dma_mmap_writecombine() to mmap buffer"), but that commit does not
explain why it's needed.

It should be entirely unnecessary, because remap_pfn_range(), which is
what the various dma_mmap functions are built on top of, already
adjusts the vma flags unconditionally:

Not all dma_mmap_*() implementations use remap_pfn_range(), though. For
instance, on ARM one set of DMA ops uses vm_map_pages(), and AFAICS not
all the relevant drivers would set VM_MIXEDMAP to avoid reaching the
BUG_ON(vma->vm_flags & VM_PFNMAP) in there.
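
(For reference, the check in question sits in vm_insert_page(), which the
vm_map_pages() path ends up calling; roughly, condensed from mm/memory.c
around v6.1, details elided:)

int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
		   struct page *page)
{
	...
	if (!(vma->vm_flags & VM_MIXEDMAP)) {
		BUG_ON(mmap_read_trylock(vma->vm_mm));
		/* inserting struct pages into a VM_PFNMAP vma is a bug */
		BUG_ON(vma->vm_flags & VM_PFNMAP);
		vma->vm_flags |= VM_MIXEDMAP;
	}
	return insert_page(vma, addr, page, vma->vm_page_prot);
}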

Uh a dma_mmap which does not use VM_PFNMAP has pretty good chances of
being busted, since that allows get_user_pages on these memory
allocations. And I'm really not sure that's a bright idea ...
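
(For context, get_user_pages() explicitly refuses VM_PFNMAP vmas; roughly,
condensed from check_vma_flags() in mm/gup.c, so take the exact shape with
a grain of salt:)

static int check_vma_flags(struct vm_area_struct *vma,
			   unsigned long gup_flags)
{
	vm_flags_t vm_flags = vma->vm_flags;
	...
	/* GUP must not touch PFN mappings or MMIO ranges */
	if (vm_flags & (VM_IO | VM_PFNMAP))
		return -EFAULT;
	...
}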

Can you please point me at these dma-ops so that I can try and understand
what they're trying to do?

See arm_iommu_mmap_attrs(), but also one of the paths in iommu_dma_mmap() for both arm64 and x86. These aren't using remap_pfn_range() because they're mapping a potentially-disjoint set of arbitrary pages, not a single physically-contiguous range. And for the avoidance of doubt, yes, in those cases they will always be real kernel pages. dma_mmap_attrs() can be relied upon to do the right thing for whatever dma_alloc_attrs() did; what isn't reliable is trying to second-guess from outside exactly what that might be.
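
(To illustrate the shape of the arm64/x86 path: roughly, condensed from
iommu_dma_mmap() in drivers/iommu/dma-iommu.c, with the bounds checks and
DMA_ATTR handling elided:)

static int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
		void *cpu_addr, dma_addr_t dma_addr, size_t size,
		unsigned long attrs)
{
	unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long pfn;
	...
	if (is_vmalloc_addr(cpu_addr)) {
		struct page **pages = dma_common_find_pages(cpu_addr);

		/* non-contiguous allocation: insert the real pages */
		if (pages)
			return vm_map_pages(vma, pages, nr_pages);
		pfn = vmalloc_to_pfn(cpu_addr);
	} else {
		pfn = page_to_pfn(virt_to_page(cpu_addr));
	}

	/* contiguous allocation: a single PFN-based remap */
	return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}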

I also forgot to mention that removing the VM_DONTEXPAND line will seemingly just reintroduce the annoying warning spam it was added to silence in the first place (and 59f39bfa6553 documents this same reasoning).

Thanks,
Robin.

-Daniel


Robin.

https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
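
(For reference, the relevant line in remap_pfn_range_notrack(), which
remap_pfn_range() wraps, looks roughly like this as of v6.1:)

	/* Physically remapped pages are special; tell the rest of the world: */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;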

More importantly, it does unconditionally set VM_PFNMAP, so clearing
that does not make much sense.

Patch motivated by discussions around enforcing VM_PFNMAP semantics for
all dma-buf users, where Thomas asked why the dma helpers will work with
that dma_buf_mmap() contract.

References: https://lore.kernel.org/dri-devel/[email protected]/
Cc: Laurent Pinchart <[email protected]>
Cc: Dave Airlie <[email protected]>
Cc: Maarten Lankhorst <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Thomas Zimmermann <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: "Christian K�nig" <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
---
   drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
   1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
index 1e658c448366..637a5cc62457 100644
--- a/drivers/gpu/drm/drm_gem_dma_helper.c
+++ b/drivers/gpu/drm/drm_gem_dma_helper.c
@@ -525,13 +525,10 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
        int ret;
        /*
-        * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-        * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-        * the whole buffer.
+        * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
+        * want to map the whole buffer.
         */
        vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
-       vma->vm_flags &= ~VM_PFNMAP;
-       vma->vm_flags |= VM_DONTEXPAND;
        if (dma_obj->map_noncoherent) {
                vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
