Re: [RFC PATCH 4/5] KVM: PPC: Book3S HV: handle need_tlb_flush in C before low-level guest entry

2018-04-14 Thread Nicholas Piggin
On Tue, 10 Apr 2018 22:48:41 +1000
Nicholas Piggin  wrote:

> Move this flushing out of assembly and have it use Linux TLB
> flush implementations introduced earlier. This allows powerpc:tlbie
> trace events to be used.

Ah, I kept wondering why my guests were locking up with stale PWC
after this patch... Turns out I made a significant oversight -- radix
wants to flush the LPID's process-scoped translations here, to deal
with vCPU migration. My patch had it flushing the partition-scoped
translations instead.

Radix and HPT have quite different requirements here, and HPT has to
flush in real mode. So I'll update the patch and leave the asm flush
in place for HPT only, and do the right thing here for radix.

We already have other gaps with tracing HPT flushes, so they will all
have to be dealt with some other way if we want that capability.

After this is fixed, the series has been stable here so far. I'll
re-send all the tlb stuff tomorrow if it holds up.

> 
> Signed-off-by: Nicholas Piggin 
> ---
>  arch/powerpc/kvm/book3s_hv.c| 21 +++-
>  arch/powerpc/kvm/book3s_hv_rmhandlers.S | 43 +
>  2 files changed, 21 insertions(+), 43 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 81e2ea882d97..5d4783b5b47a 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2680,7 +2680,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
>   int sub;
>   bool thr0_done;
>   unsigned long cmd_bit, stat_bit;
> - int pcpu, thr;
> + int pcpu, thr, tmp;
>   int target_threads;
>   int controlled_threads;
>   int trap;
> @@ -2780,6 +2780,25 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
>   return;
>   }
>  
> + /*
> +  * Do we need to flush the TLB for the LPAR? (see TLB comment above)
> +  * On POWER9, individual threads can come in here, but the
> +  * TLB is shared between the 4 threads in a core, hence
> +  * invalidating on one thread invalidates for all.
> +  * Thus we make all 4 threads use the same bit here.
> +  */
> + tmp = pcpu;
> + if (cpu_has_feature(CPU_FTR_ARCH_300))
> + tmp &= ~0x3UL;
> + if (cpumask_test_cpu(tmp, &vc->kvm->arch.need_tlb_flush)) {
> + if (kvm_is_radix(vc->kvm))
> + radix__local_flush_tlb_lpid(vc->kvm->arch.lpid);
> + else
> + hash__local_flush_tlb_lpid(vc->kvm->arch.lpid);
> + /* Clear the bit after the TLB flush */
> + cpumask_clear_cpu(tmp, &vc->kvm->arch.need_tlb_flush);
> + }
> +
>   kvmppc_clear_host_core(pcpu);
>  
>   /* Decide on micro-threading (split-core) mode */
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index bd63fa8a08b5..6a23a0f3ceea 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -647,49 +647,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>   mtspr   SPRN_LPID,r7
>   isync
>  
> - /* See if we need to flush the TLB */
> - lhz r6,PACAPACAINDEX(r13)   /* test_bit(cpu, need_tlb_flush) */
> -BEGIN_FTR_SECTION
> - /*
> -  * On POWER9, individual threads can come in here, but the
> -  * TLB is shared between the 4 threads in a core, hence
> -  * invalidating on one thread invalidates for all.
> -  * Thus we make all 4 threads use the same bit here.
> -  */
> - clrrdi  r6,r6,2
> -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
> - clrldi  r7,r6,64-6  /* extract bit number (6 bits) */
> - srdi r6,r6,6 /* doubleword number */
> - sldi r6,r6,3 /* address offset */
> - add r6,r6,r9
> - addi r6,r6,KVM_NEED_FLUSH /* dword in kvm->arch.need_tlb_flush */
> - li  r8,1
> - sld r8,r8,r7
> - ld  r7,0(r6)
> - and. r7,r7,r8
> - beq 22f
> - /* Flush the TLB of any entries for this LPID */
> - lwz r0,KVM_TLB_SETS(r9)
> - mtctr   r0
> - li  r7,0x800 /* IS field = 0b10 */
> - ptesync
> - li  r0,0 /* RS for P9 version of tlbiel */
> - bne cr7, 29f
> -28:  tlbiel  r7  /* On P9, rs=0, RIC=0, PRS=0, R=0 */
> - addi r7,r7,0x1000
> - bdnz 28b
> - b   30f
> -29:  PPC_TLBIEL(7,0,2,1,1)   /* for radix, RIC=2, PRS=1, R=1 */
> - addi r7,r7,0x1000
> - bdnz 29b
> -30:  ptesync
> -23:  ldarx   r7,0,r6 /* clear the bit after TLB flushed */
> - andc r7,r7,r8
> - stdcx.  r7,0,r6
> - bne 23b
> -
>   /* Add timebase offset onto timebase */
> -22:  ld  r8,VCORE_TB_OFFSET(r5)
> + ld  r8,VCORE_TB_OFFSET(r5)
>   cmpdi   r8,0
>   beq 37f
> mftb r6  /* 

Re: [RFC PATCH 4/5] KVM: PPC: Book3S HV: handle need_tlb_flush in C before low-level guest entry

2018-04-10 Thread Nicholas Piggin
On Wed, 11 Apr 2018 11:32:12 +1000
Benjamin Herrenschmidt  wrote:

> On Tue, 2018-04-10 at 22:48 +1000, Nicholas Piggin wrote:
> >  
> > +   /*
> > +    * Do we need to flush the TLB for the LPAR? (see TLB comment above)
> > +    * On POWER9, individual threads can come in here, but the
> > +    * TLB is shared between the 4 threads in a core, hence
> > +    * invalidating on one thread invalidates for all.
> > +    * Thus we make all 4 threads use the same bit here.
> > +    */  
> 
> This might be true of P9 implementation but isn't architecturally
> correct. From an ISA perspective, the threads could have dedicated
> tagged TLB entries. Do we need to be careful here vs. backward
> compatibility?

I think so. I noticed that too; this patch was just trying to do a
like-for-like replacement.

Yes, it should have a feature-bit test for this optimisation IMO. That
can be expanded if other CPUs have the same ability... Is it even
a worthwhile optimisation to do at this point, I wonder? I didn't see
it being hit a lot in traces.

> Also this won't flush ERAT entries for another thread afaik.

Yeah, I'm still not entirely clear exactly when ERATs get invalidated.
I would like to see more commentary here to show why it's okay.

> 
> > +   tmp = pcpu;
> > +   if (cpu_has_feature(CPU_FTR_ARCH_300))
> > +   tmp &= ~0x3UL;
> > +   if (cpumask_test_cpu(tmp, &vc->kvm->arch.need_tlb_flush)) {
> > +   if (kvm_is_radix(vc->kvm))
> > +   radix__local_flush_tlb_lpid(vc->kvm->arch.lpid);
> > +   else
> > +   hash__local_flush_tlb_lpid(vc->kvm->arch.lpid);
> > +   /* Clear the bit after the TLB flush */
> > +   cpumask_clear_cpu(tmp, &vc->kvm->arch.need_tlb_flush);
> > +   }
> > +  
> 



Re: [RFC PATCH 4/5] KVM: PPC: Book3S HV: handle need_tlb_flush in C before low-level guest entry

2018-04-10 Thread Benjamin Herrenschmidt
On Tue, 2018-04-10 at 22:48 +1000, Nicholas Piggin wrote:
>  
> +   /*
> +    * Do we need to flush the TLB for the LPAR? (see TLB comment above)
> +    * On POWER9, individual threads can come in here, but the
> +    * TLB is shared between the 4 threads in a core, hence
> +    * invalidating on one thread invalidates for all.
> +    * Thus we make all 4 threads use the same bit here.
> +    */

This might be true of P9 implementation but isn't architecturally
correct. From an ISA perspective, the threads could have dedicated
tagged TLB entries. Do we need to be careful here vs. backward
compatibility?

Also this won't flush ERAT entries for another thread afaik.

> +   tmp = pcpu;
> +   if (cpu_has_feature(CPU_FTR_ARCH_300))
> +   tmp &= ~0x3UL;
> +   if (cpumask_test_cpu(tmp, &vc->kvm->arch.need_tlb_flush)) {
> +   if (kvm_is_radix(vc->kvm))
> +   radix__local_flush_tlb_lpid(vc->kvm->arch.lpid);
> +   else
> +   hash__local_flush_tlb_lpid(vc->kvm->arch.lpid);
> +   /* Clear the bit after the TLB flush */
> +   cpumask_clear_cpu(tmp, &vc->kvm->arch.need_tlb_flush);
> +   }
> +