Hi Boris,

Thanks for reviewing the series.

On 09.02.26 at 15:23, Boris Brezillon wrote:
On Mon,  9 Feb 2026 14:27:14 +0100
Thomas Zimmermann <[email protected]> wrote:

Invoke folio_mark_accessed() in mmap page faults to add the folio to
the memory manager's LRU list. Userspace invokes mmap to get the memory
for software rendering, and compositors do the same when creating the
final on-screen image, so keeping the pages on the LRU makes sense. This
avoids paging out graphics buffers when under memory pressure.

In pfn_mkwrite, also invoke folio_mark_dirty() to mark the folio for
writeback in case the underlying file is paged out of system memory.
This rarely happens in practice, yet it would corrupt the buffer content.

This has little effect on a system's hardware-accelerated rendering,
which only uses mmap for the initial setup of textures, meshes, shaders, etc.

v3:
- rewrite for VM_PFNMAP
v2:
- adapt to changes in drm_gem_shmem_try_mmap_pmd()

Signed-off-by: Thomas Zimmermann <[email protected]>
Reviewed-by: Boris Brezillon <[email protected]>
---
  drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
  1 file changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index c3a054899ba3..0c86ad40a049 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
        if (ret != VM_FAULT_NOPAGE)
                ret = vmf_insert_pfn(vma, vmf->address, pfn);
+ if (likely(!(ret & VM_FAULT_ERROR)))
Can't we just go

        if (ret == VM_FAULT_NOPAGE)

here?

After reviewing the code in vmf_insert_pfn(), I think so. All we'll see are _OOM and _SIGBUS; or _NOPAGE on success. I'll change it then.



+               folio_mark_accessed(folio);
+
  out:
        dma_resv_unlock(obj->resv);
@@ -638,10 +641,27 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
        drm_gem_vm_close(vma);
  }
+static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
+{
+       struct vm_area_struct *vma = vmf->vma;
+       struct drm_gem_object *obj = vma->vm_private_data;
+       struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+       pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+       struct page *page = shmem->pages[page_offset];
Should we have a

        if (WARN_ON(!shmem->pages ||
                    page_offset >= (obj->size >> PAGE_SHIFT)))
                return VM_FAULT_SIGBUS;


?

I left it out because it doesn't seem necessary. In the fault handler, drm_gem_shmem_fault(), I can see that we could get an OOB access. But we only call pfn_mkwrite() after going through _fault() first. I don't see a way of getting here unless we've already tested for the page in _fault().

Best regards
Thomas


+       struct folio *folio = page_folio(page);
+
+       file_update_time(vma->vm_file);
+
+       folio_mark_dirty(folio);
+
+       return 0;
+}
+
  const struct vm_operations_struct drm_gem_shmem_vm_ops = {
        .fault = drm_gem_shmem_fault,
        .open = drm_gem_shmem_vm_open,
        .close = drm_gem_shmem_vm_close,
+       .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
  };
  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)