> On Wed, Aug 09, 2017 at 12:17:09PM -0400, jgli...@redhat.com wrote:
> > From: Jérôme Glisse <jgli...@redhat.com>
> > 
> > MMU notifiers can sleep, but in try_to_unmap_one() we call
> > mmu_notifier_invalidate_page() under page table lock.
> > 
> > Let's instead use mmu_notifier_invalidate_range() outside
> > page_vma_mapped_walk() loop.
> > 
> > Signed-off-by: Jérôme Glisse <jgli...@redhat.com>
> > Cc: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
> > Cc: Andrew Morton <a...@linux-foundation.org>
> > Fixes: c7ab0d2fdc84 ("mm: convert try_to_unmap_one() to use
> > page_vma_mapped_walk()")
> > ---
> >  mm/rmap.c | 36 +++++++++++++++++++++---------------
> >  1 file changed, 21 insertions(+), 15 deletions(-)
> > 
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index aff607d5f7d2..d60e887f1cda 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1329,7 +1329,8 @@ static bool try_to_unmap_one(struct page *page,
> > struct vm_area_struct *vma,
> >     };
> >     pte_t pteval;
> >     struct page *subpage;
> > -   bool ret = true;
> > +   bool ret = true, invalidation_needed = false;
> > +   unsigned long end = address + PAGE_SIZE;
> 
> I think it should be 'address + (PAGE_SIZE << compound_order(page))'.

Can't address point to something other than the first page of a huge page?
Also, I used end as an optimization: maybe not all the PTEs in the range
are valid, so they don't all need to be invalidated; by tracking the last
one that needs invalidation I am limiting the range.

But it is a small optimization, so I am not attached to it.

Jérôme
