Nicholas Piggin <npig...@gmail.com> writes:

> The POWER5 < DD2.1 issue is that slbie needs to be issued more than
> once. It came in with this change:
>
> ChangeSet@1.1608, 2004-04-29 07:12:31-07:00, da...@gibson.dropbear.id.au
>   [PATCH] POWER5 erratum workaround
>
>   Early POWER5 revisions (<DD2.1) have a problem requiring slbie
>   instructions to be repeated under some circumstances.  The patch below
>   adds a workaround (patch made by Anton Blanchard).

Thanks for extracting this. Can we add this detail to the code? Also, I'm
not sure what "repeated" means here. Is it that we just need one extra
slbie (hence only applicable to offset == 1), or is it that we need to
make sure there is always one extra slbie? The code does the former. Do
you have a link for that email patch?
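
Maybe something along these lines next to the workaround, as an in-code
record of the erratum (just a sketch; wording to be checked against
Anton's original patch):

	/*
	 * POWER5 < DD2.1 erratum: slbie must be issued more than once
	 * under some circumstances (workaround by Anton Blanchard,
	 * ChangeSet 1.1608, 2004-04-29).
	 */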


>
> The extra slbie in switch_slb is done even for the case where slbia is
> called (slb_flush_and_rebolt). I don't believe that is required
> because there are other slb_flush_and_rebolt callers which do not
> issue the workaround slbie, which would be broken if it was required.
>
> It also seems to be fine for the workaround slbie to be issued inside
> the same isync section as the first slbie, as is done in the kernel
> stack switch code.
>
> So move this workaround to where it is required. This is not much of
> an optimisation because this is the fast path, but it makes the code
> more understandable and neater.
>
> Signed-off-by: Nicholas Piggin <npig...@gmail.com>
> ---
>  arch/powerpc/mm/slb.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 1c7128c63a4b..d952ece3abf7 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -226,7 +226,6 @@ static inline int esids_match(unsigned long addr1, unsigned long addr2)
>  void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>  {
>       unsigned long offset;
> -     unsigned long slbie_data = 0;
>       unsigned long pc = KSTK_EIP(tsk);
>       unsigned long stack = KSTK_ESP(tsk);
>       unsigned long exec_base;
> @@ -241,7 +240,9 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>       offset = get_paca()->slb_cache_ptr;
>       if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
>           offset <= SLB_CACHE_ENTRIES) {
> +             unsigned long slbie_data;
>               int i;
> +
>               asm volatile("isync" : : : "memory");
>               for (i = 0; i < offset; i++) {
>                       slbie_data = (unsigned long)get_paca()->slb_cache[i]
> @@ -251,15 +252,14 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>                       slbie_data |= SLBIE_C; /* C set for user addresses */
>                       asm volatile("slbie %0" : : "r" (slbie_data));
>               }
> -             asm volatile("isync" : : : "memory");
> -     } else {
> -             __slb_flush_and_rebolt();
> -     }
>  
> -     if (!cpu_has_feature(CPU_FTR_ARCH_207S)) {
>               /* Workaround POWER5 < DD2.1 issue */
> -             if (offset == 1 || offset > SLB_CACHE_ENTRIES)
> +             if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
>                       asm volatile("slbie %0" : : "r" (slbie_data));
> +
> +             asm volatile("isync" : : : "memory");
> +     } else {
> +             __slb_flush_and_rebolt();
>       }
>  
>       get_paca()->slb_cache_ptr = 0;
> -- 
> 2.18.0
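
For reference, my reading of how the SLB cache path ends up after this
patch (reconstructed from the hunks above; the slbie_data computation in
the loop is elided, and this is untested):

	offset = get_paca()->slb_cache_ptr;
	if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
	    offset <= SLB_CACHE_ENTRIES) {
		unsigned long slbie_data;
		int i;

		asm volatile("isync" : : : "memory");
		for (i = 0; i < offset; i++) {
			/* ... build slbie_data from slb_cache[i] ... */
			asm volatile("slbie %0" : : "r" (slbie_data));
		}

		/* Workaround POWER5 < DD2.1 issue */
		if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
			asm volatile("slbie %0" : : "r" (slbie_data));

		asm volatile("isync" : : : "memory");
	} else {
		__slb_flush_and_rebolt();
	}

	get_paca()->slb_cache_ptr = 0;

i.e. the extra slbie now only happens on the slbie path, and only when a
single entry was flushed, which matches the changelog.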
