On Mon, May 13, 2019 at 05:38:04PM +0100, Will Deacon wrote:
> On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > index 99740e1..469492d 100644
> > --- a/mm/mmu_gather.c
> > +++ b/mm/mmu_gather.c
> > @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
> >  {
> >     /*
> >      * If parallel threads are doing PTE changes on the same range
> > +    * under a non-exclusive lock (e.g. mmap_sem read-side) but defer
> > +    * the TLB flush by batching, one thread may end up seeing
> > +    * inconsistent PTEs and hence stale TLB entries.  So flush the
> > +    * TLB forcefully if we detect parallel PTE batching threads.
> > +    *
> > +    * However, some syscalls, e.g. munmap(), may free page tables;
> > +    * these need everything in the given range force-flushed.
> > +    * Otherwise stale TLB entries may be left behind on architectures,
> > +    * e.g. aarch64, that can invalidate at a specific page-table level.
> >      */
> > +   if (mm_tlb_flush_nested(tlb->mm) && !tlb->fullmm) {
> > +           /*
> > +            * Since we can't tell what we actually should have
> > +            * flushed, flush everything in the given range.
> > +            */
> > +           tlb->freed_tables = 1;
> > +           tlb->cleared_ptes = 1;
> > +           tlb->cleared_pmds = 1;
> > +           tlb->cleared_puds = 1;
> > +           tlb->cleared_p4ds = 1;
> > +
> > +           /*
> > +            * Some architectures, e.g. ARM, that have range invalidation
> > +            * and care about VM_EXEC for I-cache invalidation, need
> > +            * vma_exec forced on here.
> > +            */
> > +           tlb->vma_exec = 1;
> > +
> > +           /* Force vma_huge clear to guarantee a safer flush */
> > +           tlb->vma_huge = 0;
> > +
> > +           tlb->start = start;
> > +           tlb->end = end;
> >     }
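
To spell out why all of those bits need forcing: the arch side derives
its invalidation level and stride from them, so any bit we fail to set
turns into an under-flush.  Roughly, for a range-invalidating arch
(sketch only; flush_tlb_range_stride() is made up for illustration and
the real per-arch hooks differ):

	static inline void tlb_flush(struct mmu_gather *tlb)
	{
		unsigned long stride;

		if (tlb->fullmm) {
			/* Nuke the whole address space. */
			flush_tlb_mm(tlb->mm);
			return;
		}

		/*
		 * Pick the stride from the smallest level seen cleared,
		 * and do a last-level-only invalidation (leaving the walk
		 * cache alone) unless page tables were freed.  If a
		 * parallel thread cleared PTEs or freed tables behind our
		 * back, both decisions are wrong; hence forcing the bits
		 * above.
		 */
		stride = tlb->cleared_ptes ? PAGE_SIZE : PMD_SIZE;
		flush_tlb_range_stride(tlb->mm, tlb->start, tlb->end,
				       stride, !tlb->freed_tables);
	}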
> 
> Whilst I think this is correct, it would be interesting to see whether
> or not it's actually faster than just nuking the whole mm, as I mentioned
> before.
> 
> At least in terms of getting a short-term fix, I'd prefer the diff below
> if it's not measurably worse.

So what's the point? General paranoia? Either change should allow PPC to
get rid of its magic mushrooms; the below would be a little bit easier
for them because they already do full invalidates correctly.
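
For reference, the nested test both variants key off is just the
pending-gather counter on the mm; IIRC it is simply:

	static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
	{
		/*
		 * More than one mmu_gather in flight against this mm means
		 * someone else is batching PTE changes on it concurrently.
		 */
		return atomic_read(&mm->tlb_flush_pending) > 1;
	}

So neither diff changes when the slow path triggers, only what gets
flushed once it does.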

> --->8
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 99740e1dd273..cc251422d307 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -251,8 +251,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
>        * forcefully if we detect parallel PTE batching threads.
>        */
>       if (mm_tlb_flush_nested(tlb->mm)) {
> +             tlb->fullmm = 1;
>               __tlb_reset_range(tlb);
> -             __tlb_adjust_range(tlb, start, end - start);
> +             tlb->freed_tables = 1;
>       }
>  
>       tlb_flush_mmu(tlb);
