On 08/27/2018 04:00 AM, Peter Zijlstra wrote:
>
> The one obvious thing SH and ARM want is a sensible default for
> tlb_start_vma(). (also: https://lkml.org/lkml/2004/1/15/6 )
>
> The below make tlb_start_vma() default to flush_cache_range(), which
> should be right and sufficient. The only exceptions that I found where
> (oddly):
>
>   - m68k-mmu
>   - sparc64
>   - unicore
>
> Those architectures appear to have a non-NOP flush_cache_range(), but
> their current tlb_start_vma() does not call it.

So indeed we follow DaveM's 2004 insight about tlb_{start,end}_vma(): on ARC they
are no-ops for the general case, and for the historic VIPT aliasing dcache they do
what the 2004 link above says they should. I presume that is all hunky dory with
you?

> Furthermore, I think tlb_flush() is broken on arc and parisc; in
> particular they don't appear to have any TLB invalidate for the
> shift_arg_pages() case, where we do not call tlb_*_vma() and fullmm=0.

Care to explain this issue a bit more?
That seems independent of the current discussion anyway.

> Possibly shift_arg_pages() should be fixed instead.
>
> Some archs (nds32,sparc32) avoid this by having an unconditional
> flush_tlb_mm() in tlb_flush(), which seems somewhat suboptimal if you
> have flush_tlb_range().  TLB_FLUSH_VMA() might be an option, however
> hideous it is.
>
> ---
>
> diff --git a/arch/arc/include/asm/tlb.h b/arch/arc/include/asm/tlb.h
> index a9db5f62aaf3..7af2b373ebe7 100644
> --- a/arch/arc/include/asm/tlb.h
> +++ b/arch/arc/include/asm/tlb.h
> @@ -23,15 +23,6 @@ do {                                               \
>   *
>   * Note, read http://lkml.org/lkml/2004/1/15/6
>   */
> -#ifndef CONFIG_ARC_CACHE_VIPT_ALIASING
> -#define tlb_start_vma(tlb, vma)
> -#else
> -#define tlb_start_vma(tlb, vma)                                      \
> -do {                                                                 \
> -     if (!tlb->fullmm)                                               \
> -             flush_cache_range(vma, vma->vm_start, vma->vm_end);     \
> -} while(0)
> -#endif
>  
>  #define tlb_end_vma(tlb, vma)                                        \
>  do {                                                                 \

[snip..]

> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index e811ef7b8350..1d037fd5bb7a 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -181,19 +181,21 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
>   * the vmas are adjusted to only cover the region to be torn down.
>   */
>  #ifndef tlb_start_vma
> -#define tlb_start_vma(tlb, vma) do { } while (0)
> +#define tlb_start_vma(tlb, vma)                                      \
> +do {                                                                 \
> +     if (!tlb->fullmm)                                               \
> +             flush_cache_range(vma, vma->vm_start, vma->vm_end);     \
> +} while (0)
>  #endif

So, for non-aliasing arches to be unaffected, this relies on flush_cache_range()
being a no-op?

> -#define __tlb_end_vma(tlb, vma)                                      \
> -     do {                                                    \
> -             if (!tlb->fullmm && tlb->end) {                 \
> -                     tlb_flush(tlb);                         \
> -                     __tlb_reset_range(tlb);                 \
> -             }                                               \
> -     } while (0)
> -
>  #ifndef tlb_end_vma
> -#define tlb_end_vma  __tlb_end_vma
> +#define tlb_end_vma(tlb, vma)                                        \
> +     do {                                                            \
> +             if (!tlb->fullmm && tlb->end) {                         \
> +                     tlb_flush(tlb);                                 \
> +                     __tlb_reset_range(tlb);                         \
> +             }                                                       \
> +     } while (0)
>  #endif
>  
>  #ifndef __tlb_remove_tlb_entry

And this one is for shift_arg_pages(), but it will also cause extraneous flushes
in the other cases - which is not happening currently!

-Vineet
