Hi Tommaso,

> -----Original Message-----
> From: Tommaso Merciai <[email protected]>
> Sent: 12 March 2026 17:37
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> 
> Hi Thomas,
> Thanks for your patch.
> 
> I'm working on DSI support for RZ/G3E. Since rebasing on top of
> next-20260311 this morning, I'm seeing weston hang on my side.
> 
> Reverting this patch fixes the issue.
> (git revert 28e3918179aa)
> 
> I'm wondering if anyone else has encountered this issue?
> Thanks in advance.


I am also seeing the same issue on RZ/G3L with weston.

Cheers,
Biju



> 
> Kind Regards,
> Tommaso
> 
> On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
> > Invoke folio_mark_accessed() in mmap page faults to add the folio to
> > the memory manager's LRU list. Userspace invokes mmap to get the
> > memory for software rendering. Compositors do the same when creating
> > the final on-screen image, so keeping the pages in LRU makes sense.
> > Avoids paging out graphics buffers when under memory pressure.
> >
> > In pfn_mkwrite, further invoke folio_mark_dirty() to mark the folio for
> > writeback should the underlying file be paged out from system memory.
> > This rarely happens in practice, yet without it the buffer content would
> > be corrupted.
> >
> > This has little effect on a system's hardware-accelerated rendering,
> > which only mmaps for an initial setup of textures, meshes, shaders, etc.
> >
> > v4:
> > - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
> > - test page-array bounds in mkwrite handler (Boris)
> > v3:
> > - rewrite for VM_PFNMAP
> > v2:
> > - adapt to changes in drm_gem_shmem_try_mmap_pmd()
> >
> > Signed-off-by: Thomas Zimmermann <[email protected]>
> > Reviewed-by: Boris Brezillon <[email protected]>
> > ---
> >  drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
> >  1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index cefa50eaf7a4..1ab2bbd3860c 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> >     if (ret != VM_FAULT_NOPAGE)
> >             ret = vmf_insert_pfn(vma, vmf->address, pfn);
> >
> > +   if (ret == VM_FAULT_NOPAGE)
> > +           folio_mark_accessed(folio);
> > +
> >  out:
> >     dma_resv_unlock(obj->resv);
> >
> > @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> >     drm_gem_vm_close(vma);
> >  }
> >
> > +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
> > +{
> > +   struct vm_area_struct *vma = vmf->vma;
> > +   struct drm_gem_object *obj = vma->vm_private_data;
> > +   struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > +   loff_t num_pages = obj->size >> PAGE_SHIFT;
> > +   pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> > +
> > +   if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > +           return VM_FAULT_SIGBUS;
> > +
> > +   file_update_time(vma->vm_file);
> > +
> > +   folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > +
> > +   return 0;
> > +}
> > +
> >  const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> >     .fault = drm_gem_shmem_fault,
> >     .open = drm_gem_shmem_vm_open,
> >     .close = drm_gem_shmem_vm_close,
> > +   .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> >  };
> >  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> >
> > --
> > 2.52.0
> >
