On Tue, 27 Jan 2026 14:16:37 +0100
Thomas Zimmermann <[email protected]> wrote:

> Invoke folio_mark_accessed() in mmap page faults to add the folio to
> the memory manager's LRU list. Userspace invokes mmap to get the memory
> for software rendering. Later, compositors will do the same to create the
> final on-screen image, so keeping the pages on the LRU makes sense. This
> avoids paging out graphics buffers under memory pressure.
> 
> In page_mkwrite, further invoke folio_mark_dirty() to mark the folio
> for writeback, should the underlying file be paged out from system
> memory. This rarely happens in practice, but without the dirty flag it
> would corrupt the buffer content.
> 
> This has little effect on a system's hardware-accelerated rendering,
> which only mmaps buffers for the initial setup of textures, meshes,
> shaders, etc.
> 
> Signed-off-by: Thomas Zimmermann <[email protected]>

Reviewed-by: Boris Brezillon <[email protected]>
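
For readers unfamiliar with the MM side of this: folio_mark_accessed()
is the hint that tells reclaim a folio is actively used, so it ages
toward the active LRU list. A minimal sketch of the same idea in a
generic fault handler (hypothetical example_object and example_fault,
not this patch's code):

static vm_fault_t example_fault(struct vm_fault *vmf)
{
	/* Hypothetical backing object; stands in for the GEM object. */
	struct example_object *obj = vmf->vma->vm_private_data;
	struct page *page = obj->pages[vmf->pgoff];
	struct folio *folio = page_folio(page);

	get_page(page);
	folio_lock(folio);

	/* Mark the folio as recently used so reclaim evicts it last. */
	folio_mark_accessed(folio);

	vmf->page = page;
	return VM_FAULT_LOCKED;	/* core MM unlocks the folio */
}

The patch applies exactly this pattern in both the PMD and the PTE
fault paths below.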

> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index b6ddabbfcc52..30cd34d3a111 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -562,8 +562,10 @@ static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
>  
>               if (folio_test_pmd_mappable(folio)) {
>                       /* Read-only mapping; split upon write fault */
> -                     if (vmf_insert_folio_pmd(vmf, folio, false) == VM_FAULT_NOPAGE)
> +                     if (vmf_insert_folio_pmd(vmf, folio, false) == VM_FAULT_NOPAGE) {
> +                             folio_mark_accessed(folio);
>                               return true;
> +                     }
>               }
>       }
>  #endif
> @@ -605,6 +607,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>               get_page(page);
>  
>               folio_lock(folio);
> +             folio_mark_accessed(folio);
>  
>               vmf->page = page;
>               ret = VM_FAULT_LOCKED;
> @@ -653,10 +656,23 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>       drm_gem_vm_close(vma);
>  }
>  
> +static vm_fault_t drm_gem_shmem_page_mkwrite(struct vm_fault *vmf)
> +{
> +     struct folio *folio = page_folio(vmf->page);
> +
> +     file_update_time(vmf->vma->vm_file);
> +
> +     folio_lock(folio);
> +     folio_mark_dirty(folio);
> +
> +     return VM_FAULT_LOCKED;
> +}
> +
>  const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>       .fault = drm_gem_shmem_fault,
>       .open = drm_gem_shmem_vm_open,
>       .close = drm_gem_shmem_vm_close,
> +     .page_mkwrite = drm_gem_shmem_page_mkwrite,
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>  
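
As a side note for driver authors: nothing needs to change on the
driver side. Any driver whose GEM funcs point at the shmem helpers
picks up the new page_mkwrite handler automatically. A sketch of that
wiring (hypothetical my_gem_funcs; the helper wrappers are the real
drm_gem_shmem_object_* ones):

static const struct drm_gem_object_funcs my_gem_funcs = {
	.free = drm_gem_shmem_object_free,
	.print_info = drm_gem_shmem_object_print_info,
	.pin = drm_gem_shmem_object_pin,
	.unpin = drm_gem_shmem_object_unpin,
	.get_sg_table = drm_gem_shmem_object_get_sg_table,
	.vmap = drm_gem_shmem_object_vmap,
	.vunmap = drm_gem_shmem_object_vunmap,
	.mmap = drm_gem_shmem_object_mmap,
	/* fault/open/close, and now page_mkwrite, come from here: */
	.vm_ops = &drm_gem_shmem_vm_ops,
};

With that in place, every writable CPU mapping of such an object goes
through the new handler and dirties its folios before the first write.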
