On Fri, 28 Nov 2025 19:52:44 +0100
Loïc Molinari <[email protected]> wrote:

> Attempt a PMD-sized PFN insertion into the VMA if the faulting
> address handled by the fault handler is part of a huge page.
> 
> On builds with CONFIG_TRANSPARENT_HUGEPAGE enabled, if the mmap() user
> address is PMD-size aligned, if the GEM object is backed by shmem
> buffers on mountpoints setting the 'huge=' option, and if the shmem
> backing store manages to allocate a huge folio, CPU mappings then
> benefit from significantly increased memcpy() performance. When these
> conditions are met on a system with 2 MiB huge pages and 4 KiB base
> pages, an aligned 2 MiB copy raises a single page fault instead of 512.
> 
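For context, the PMD-alignment condition is under user-space control.
A minimal, illustrative sketch of how a caller could obtain a 2 MiB
aligned mapping of a GEM buffer (the helper name and the
reserve-then-MAP_FIXED trick are mine, not something this patch
mandates):

  #include <stdint.h>
  #include <sys/mman.h>

  #define PMD_SIZE_2M (2UL << 20)

  /* Map a GEM buffer at a 2 MiB aligned user address. 'fd' is the DRM
   * device FD, 'offset' the fake offset returned by the driver (e.g.
   * via DRM_IOCTL_MODE_MAP_DUMB). */
  static void *map_pmd_aligned(int fd, uint64_t offset, size_t size)
  {
          /* Reserve an oversized PROT_NONE window... */
          void *hint = mmap(NULL, size + PMD_SIZE_2M, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          uintptr_t addr;

          if (hint == MAP_FAILED)
                  return MAP_FAILED;

          /* ...and overmap the buffer at the first 2 MiB boundary
           * inside it (the unused slack could be munmap()'ed). */
          addr = ((uintptr_t)hint + PMD_SIZE_2M - 1) & ~(PMD_SIZE_2M - 1);

          return mmap((void *)addr, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_FIXED, fd, offset);
  }
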
> v4:
> - implement map_pages instead of huge_fault
> 
> v6:
> - get rid of map_pages handler for now (keep it for another series
>   along with arm64 contpte support)
> 
> Signed-off-by: Loïc Molinari <[email protected]>
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 55 +++++++++++++++++++++-----
>  1 file changed, 46 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index be89be1c804c..81f4ac7cb8f6 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -567,31 +567,68 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
>  
> +static bool drm_gem_shmem_fault_is_valid(struct drm_gem_object *obj,
> +                                      pgoff_t pgoff)

AFAICT, extracting the fault_is_valid() logic into a helper is
orthogonal to the huge-page mapping changes, and I don't see it used
anywhere else in the series (I guess it was when you were introducing
map_pages() support). Maybe this should be done in a separate patch, or
postponed until there's a second caller checking fault validity, dunno.

> +{
> +     struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +
> +     if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> +         pgoff >= (obj->size >> PAGE_SHIFT) ||
> +         shmem->madv < 0)
> +             return false;
> +
> +     return true;
> +}
> +
> +static bool drm_gem_shmem_map_pmd(struct vm_fault *vmf, unsigned long addr,
> +                               struct page *page)

nit: could we name that one drm_gem_shmem_try_map_pmd()?

With my two nits addressed, the patch is

Reviewed-by: Boris Brezillon <[email protected]>

> +{
> +#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
> +     unsigned long pfn = page_to_pfn(page);
> +     unsigned long paddr = pfn << PAGE_SHIFT;
> +     bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
> +
> +     if (aligned &&
> +         pmd_none(*vmf->pmd) &&
> +         folio_test_pmd_mappable(page_folio(page))) {
> +             pfn &= PMD_MASK >> PAGE_SHIFT;
> +             if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
> +                     return true;
> +     }
> +#endif
> +
> +     return false;
> +}
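
Side note for other readers: the 'aligned' check above verifies that
the virtual and physical addresses are congruent modulo PMD_SIZE, i.e.
that they share the same offset inside a PMD-sized region, which is
what allows a single PMD entry to cover the folio. With made-up values
and 2 MiB PMDs (~PMD_MASK == 0x1fffff):

  addr  = 0x7f2a00200000  ->  addr  & ~PMD_MASK == 0x0
  paddr = 0x000180400000  ->  paddr & ~PMD_MASK == 0x0

Same intra-PMD offset, so vmf_insert_pfn_pmd() can map the whole folio
once the pfn has been rounded down to the PMD boundary.
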
> +
>  static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  {
>       struct vm_area_struct *vma = vmf->vma;
>       struct drm_gem_object *obj = vma->vm_private_data;
>       struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> -     loff_t num_pages = obj->size >> PAGE_SHIFT;
> -     vm_fault_t ret;
> -     struct page *page;
> +     struct page **pages = shmem->pages;
>       pgoff_t page_offset;
> +     unsigned long pfn;
> +     vm_fault_t ret;
>  
>       /* Offset to faulty address in the VMA (without the fake offset). */
>       page_offset = vmf->pgoff - vma->vm_pgoff;
>  
>       dma_resv_lock(shmem->base.resv, NULL);
>  
> -     if (page_offset >= num_pages ||
> -         drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> -         shmem->madv < 0) {
> +     if (unlikely(!drm_gem_shmem_fault_is_valid(obj, page_offset))) {
>               ret = VM_FAULT_SIGBUS;
> -     } else {
> -             page = shmem->pages[page_offset];
> +             goto out;
> +     }
>  
> -             ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> +     if (drm_gem_shmem_map_pmd(vmf, vmf->address, pages[page_offset])) {
> +             ret = VM_FAULT_NOPAGE;
> +             goto out;
>       }
>  
> +     pfn = page_to_pfn(pages[page_offset]);
> +     ret = vmf_insert_pfn(vma, vmf->address, pfn);
> +
> + out:
>       dma_resv_unlock(shmem->base.resv);
>  
>       return ret;
