On Thu, 19 Mar 2026 14:17:31 +0000
Biju Das <[email protected]> wrote:

> Hi All,
> 
> > -----Original Message-----
> > From: Biju Das
> > Sent: 14 March 2026 09:43
> > Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty 
> > status in mmap
> > 
> > Hi Boris Brezillon,
> >   
> > > -----Original Message-----
> > > From: dri-devel <[email protected]> On Behalf Of
> > > Boris Brezillon
> > > Sent: 13 March 2026 17:46
> > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> > > status in mmap
> > >
> > > On Fri, 13 Mar 2026 13:55:21 +0100
> > > Boris Brezillon <[email protected]> wrote:
> > >  
> > > > On Fri, 13 Mar 2026 13:43:28 +0100
> > > > Boris Brezillon <[email protected]> wrote:
> > > >  
> > > > > On Fri, 13 Mar 2026 13:18:35 +0100 Boris Brezillon
> > > > > <[email protected]> wrote:
> > > > >  
> > > > > > On Fri, 13 Mar 2026 12:04:25 +0000 Biju Das
> > > > > > <[email protected]> wrote:
> > > > > >  
> > > > > > > > -----Original Message-----
> > > > > > > > From: dri-devel <[email protected]> On
> > > > > > > > Behalf Of Boris Brezillon
> > > > > > > > Sent: 13 March 2026 11:57
> > > > > > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > > > > > > accessed/dirty status in mmap
> > > > > > > >
> > > > > > > > On Fri, 13 Mar 2026 11:29:47 +0100 Thomas Zimmermann
> > > > > > > > <[email protected]> wrote:
> > > > > > > >  
> > > > > > > > > Hi
> > > > > > > > >
> > > > > > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > > > > > [...]  
> > > > > > > > > >>>>> +       if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > > > > > >>>>> +               return VM_FAULT_SIGBUS;
> > > > > > > > > >>>>> +
> > > > > > > > > >>>>> +       file_update_time(vma->vm_file);
> > > > > > > > > >>>>> +
> > > > > > > > > >>>>> +       folio_mark_dirty(page_folio(shmem->pages[page_offset]));  
> > > > > > > > > > Do we need a folio_mark_dirty_lock() here?  
> > > > > > > > >
> > > > > > > > > There is a helper for that with some documentation. [1]  
> > > > > > > >
> > > > > > > > This [1] seems to solve the problem for me. Still unsure
> > > > > > > > about the folio_mark_dirty_lock vs folio_mark_dirty though.
> > > > > > > >
> > > > > > > > [1] https://yhbt.net/lore/dri-devel/[email protected]/  
> > > > > > >
> > > > > > > FYI, I used folio_mark_dirty_lock(), but it still does not solve 
> > > > > > > the weston hang issue.  
> > > > > >
> > > > > > The patch I pointed to has nothing to do with
> > > > > > folio_mark_dirty_lock(); it fixes a bug caused by the huge page 
> > > > > > mapping changes.  
> > > > >
> > > > > Scratch that. I had a bunch of other changes on top, and it hangs
> > > > > again now that I dropped those.  
> > > >
> > > > Seems like it's the combination of huge pages and mkwrite that's
> > > > causing issues; if I disable huge pages, it doesn't hang...  
> > >
> > > I managed to have it working with the following diff. I still need to
> > > check why the "map-RO-split+RW-on-demand" approach doesn't work
> > > (races between huge_fault and pfn_mkwrite?), but I think it's okay to
> > > map the real thing writable on the first attempt anyway (we're not
> > > trying to do CoW here, since we're always pointing to the same page,
> > > it's just the permissions that change). Note that there's still the
> > > race fixed by
> > > https://yhbt.net/lore/dri-devel/[email protected]/
> > > in this diff, I just tried to keep the diffstat minimal.  
> > 
> > I confirm with this diff, weston is now coming up on RZ/V2L SMARC EVK.
> > 
> > I am just using drm-misc-next + this diff + arm64 defconfig + Poky
> > (Yocto Project Reference Distro) 5.0.11 + Mesa
> > 
> > Cheers,
> > Biju
> >   
> > >  
> > > --->8---  
> > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > index 4500deef4127..4efdce5a60f0 100644
> > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
> > >         bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
> > >
> > >         if (aligned && pmd_none(*vmf->pmd)) {
> > > -               /* Read-only mapping; split upon write fault */
> > >                 pfn &= PMD_MASK >> PAGE_SHIFT;
> > > -               return vmf_insert_pfn_pmd(vmf, pfn, false);
> > > +               return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
> > >         }
> > >  #endif
> > >
> > > @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> > >
> > >         pfn = page_to_pfn(page);
> > >
> > > -       if (folio_test_pmd_mappable(folio))
> > > +       if (folio_test_pmd_mappable(folio)) {
> > >                 ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> > > +               if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> > > +                       folio_mark_dirty(folio);
> > > +       }
> > > +
> > >         if (ret != VM_FAULT_NOPAGE)
> > >                 ret = vmf_insert_pfn(vma, vmf->address, pfn);
> > >  
> 
> Is there any patch available yet to fix the weston issue? The issue is 
> still present with drm-misc-next.
> Please let me know.

Not yet. I want to land [1] first. I'll try to post an official version
of the proposed fix tomorrow, though I'd really like to have some
input from the MM maintainers before claiming this is the right fix...

[1]https://yhbt.net/lore/dri-devel/[email protected]/
