On Fri, Jun 09, 2017 at 03:45:54PM +0100, Will Deacon wrote:
> On Wed, Jun 07, 2017 at 06:15:02PM +0200, Peter Zijlstra wrote:
> > Commit:
> > 
> >   af2c1401e6f9 ("mm: numa: guarantee that tlb_flush_pending updates are visible before page table updates")
> > 
> > added smp_mb__before_spinlock() to set_tlb_flush_pending(). I think we
> > can solve the same problem without this barrier.
> > 
> > If instead we mandate that mm_tlb_flush_pending() is used while
> > holding the PTL we're guaranteed to observe prior
> > set_tlb_flush_pending() instances.
> > 
> > For this to work we need to rework migrate_misplaced_transhuge_page()
> > a little and move the test up into do_huge_pmd_numa_page().
> > 
> > Cc: Mel Gorman <[email protected]>
> > Cc: Rik van Riel <[email protected]>
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > ---
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -527,18 +527,16 @@ static inline cpumask_t *mm_cpumask(stru
> >   */
> >  static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
> >  {
> > -   barrier();
> > +   /*
> > +    * Must be called with PTL held; such that our PTL acquire will have
> > +    * observed the store from set_tlb_flush_pending().
> > +    */
> >     return mm->tlb_flush_pending;
> >  }
> >  static inline void set_tlb_flush_pending(struct mm_struct *mm)
> >  {
> >     mm->tlb_flush_pending = true;
> > -
> > -   /*
> > -    * Guarantee that the tlb_flush_pending store does not leak into the
> > -    * critical section updating the page tables
> > -    */
> > -   smp_mb__before_spinlock();
> > +   barrier();
> 
> Why do you need the barrier() here? Isn't the ptl unlock sufficient?

General paranoia, I think. I'll have another look.
