Re: [RFC PATCH 05/12] [WIP] powerpc/tm: Reclaim/recheckpoint on entry/exit

2018-02-19 Thread Cyril Bur
On Tue, 2018-02-20 at 16:25 +1100, Michael Neuling wrote:
> > > > @@ -1055,6 +1082,8 @@ void restore_tm_state(struct pt_regs *regs)
> > > > msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
> > > > msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;
> > > >  
> > > > +   tm_recheckpoint(&current->thread);
> > > > +
> > > 
> > > So why do we do tm_recheckpoint at all? Shouldn't most of the tm_blah 
> > > code go
> > > away in process.c after all this?
> > > 
> > 
> > I'm not sure I follow, we need to recheckpoint because we're going back
> > to userspace? Or would you rather call the tm.S code directly from
> > the exception return path?
> 
> Yeah, that's what I was thinking the point of this series was.  We do
> tm_reclaim right on entry and tm_recheckpoint right on exit.
> 

Yeah, that's the ultimate goal. Considering I haven't been attacked or
offered more drugs, I feel like what I've done isn't crazy. Your
feedback is great, thanks.
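[Editorial note: the entry/exit split discussed above can be sketched at the C level. Everything below is illustrative only — the MSR bit positions and helper names are assumptions (the real definitions live in arch/powerpc/include/asm/reg.h), and this models just the decision logic, not the actual TM state save/restore.]

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed MSR bit layout (see arch/powerpc/include/asm/reg.h for the
 * real one): */
#define MSR_PR      (1ULL << 14)   /* problem state, i.e. interrupted userspace */
#define MSR_TM      (1ULL << 32)   /* TM facility enabled */
#define MSR_TS_MASK (3ULL << 33)   /* 2-bit TS field: 0 none, else suspended/transactional */

/* TM_KERNEL_ENTRY's test: reclaim only when we interrupted userspace
 * with a transaction suspended or active. */
static bool need_reclaim_on_entry(uint64_t interrupted_msr)
{
	return (interrupted_msr & MSR_PR) && (interrupted_msr & MSR_TS_MASK);
}

/* restore_tm_state()'s side: recheckpoint when the thread's saved MSR
 * says TM was on, but never while the CPU's TS field is still non-zero,
 * since that would mean a reclaim was missed on the way in. */
static bool need_recheckpoint_on_exit(uint64_t thread_msr, uint64_t cpu_msr)
{
	return (thread_msr & MSR_TM) && !(cpu_msr & MSR_TS_MASK);
}
```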

> The bits in between (i.e. the tm_blah() calls in process.c) would mostly go away.
> 
> 
> > Yes, I hope we'll be able to have a fairly big cleanup commit of tm_
> > code in process.c at the end of this series.
> 
> Yep, agreed.
> 
> Mikey


Re: [RFC PATCH 05/12] [WIP] powerpc/tm: Reclaim/recheckpoint on entry/exit

2018-02-19 Thread Michael Neuling

> > > @@ -1055,6 +1082,8 @@ void restore_tm_state(struct pt_regs *regs)
> > >   msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
> > >   msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;
> > >  
> > > + tm_recheckpoint(&current->thread);
> > > +
> > 
> > So why do we do tm_recheckpoint at all? Shouldn't most of the tm_blah code 
> > go
> > away in process.c after all this?
> > 
> 
> I'm not sure I follow, we need to recheckpoint because we're going back
> to userspace? Or would you rather call the tm.S code directly from
> the exception return path?

Yeah, that's what I was thinking the point of this series was.  We do
tm_reclaim right on entry and tm_recheckpoint right on exit.

The bits in between (i.e. the tm_blah() calls in process.c) would mostly go away.


> Yes, I hope we'll be able to have a fairly big cleanup commit of tm_
> code in process.c at the end of this series.

Yep, agreed.

Mikey


Re: [RFC PATCH 05/12] [WIP] powerpc/tm: Reclaim/recheckpoint on entry/exit

2018-02-19 Thread Cyril Bur
On Tue, 2018-02-20 at 13:50 +1100, Michael Neuling wrote:
> On Tue, 2018-02-20 at 11:22 +1100, Cyril Bur wrote:
> 
> 
> The comment from the cover sheet should be here
> 
> > ---
> >  arch/powerpc/include/asm/exception-64s.h | 25 +
> >  arch/powerpc/kernel/entry_64.S   |  5 +
> >  arch/powerpc/kernel/process.c | 37 
> > 
> >  3 files changed, 63 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/exception-64s.h 
> > b/arch/powerpc/include/asm/exception-64s.h
> > index 471b2274fbeb..f904f19a9ec2 100644
> > --- a/arch/powerpc/include/asm/exception-64s.h
> > +++ b/arch/powerpc/include/asm/exception-64s.h
> > @@ -35,6 +35,7 @@
> >   * implementations as possible.
> >   */
> >  #include 
> > +#include 
> >  
> >  /* PACA save area offsets (exgen, exmc, etc) */
> >  #define EX_R9  0
> > @@ -127,6 +128,26 @@
> > hrfid;  \
> > b   hrfi_flush_fallback
> >  
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +#define TM_KERNEL_ENTRY
> > \
> > +   ld  r3,_MSR(r1);\
> > +   /* Probably don't need to check if coming from user/kernel */   \
> > +   /* If TM is suspended or active then we must have come from*/   \
> > +   /* userspace */ \
> > +   andi.   r0,r3,MSR_PR;   \
> > +   beq 1f; \
> > +   rldicl. r3,r3,(64-MSR_TS_LG),(64-2); /* SUSPENDED or ACTIVE*/   \
> > +   beql+   1f; /* Not SUSPENDED or ACTIVE */   \
> > +   bl  save_nvgprs;\
> > +   RECONCILE_IRQ_STATE(r10,r11);   \
> > +   li  r3,TM_CAUSE_MISC;   \
> > +   bl  tm_reclaim_current; /* uint8 cause */   \
> > +1:
> > +
> > +#else /* CONFIG_PPC_TRANSACTIONAL_MEM */
> > +#define TM_KERNEL_ENTRY
> > +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
> > +
> >  #ifdef CONFIG_RELOCATABLE
> >  #define __EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)   
> > \
> > mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
> > @@ -675,6 +696,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
> > EXCEPTION_PROLOG_COMMON(trap, area);\
> > /* Volatile regs are potentially clobbered here */  \
> > additions;  \
> > +   /* This is going to need to go somewhere else as well */\
> > +   /* See comment in tm_recheckpoint()   */\
> > +   TM_KERNEL_ENTRY;\
> > addi    r3,r1,STACK_FRAME_OVERHEAD; \
> > bl  hdlr;   \
> > b   ret
> > @@ -689,6 +713,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
> > EXCEPTION_PROLOG_COMMON_3(trap);\
> > /* Volatile regs are potentially clobbered here */  \
> > additions;  \
> > +   TM_KERNEL_ENTRY;\
> > addi    r3,r1,STACK_FRAME_OVERHEAD; \
> > bl  hdlr
> >  
> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> > index 2cb5109a7ea3..107c15c6f48b 100644
> > --- a/arch/powerpc/kernel/entry_64.S
> > +++ b/arch/powerpc/kernel/entry_64.S
> > @@ -126,6 +126,11 @@ BEGIN_FW_FTR_SECTION
> >  33:
> >  END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
> >  #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE && CONFIG_PPC_SPLPAR */
> > +   TM_KERNEL_ENTRY
> > +   REST_GPR(0,r1)
> > +   REST_4GPRS(3,r1)
> > +   REST_2GPRS(7,r1)
> > +   addi    r9,r1,STACK_FRAME_OVERHEAD
> 
> Why are we doing these restores here now?

The syscall handler expects the syscall params to still be in their
respective regs.
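[Editorial note: the likely rationale, spelled out as an assumption rather than stated in the thread — tm_reclaim_current is an ordinary function call, and under the 64-bit ELF ABI the volatile GPRs (r0, r3-r12) may be clobbered across it, while syscall dispatch still expects the syscall number in r0 and arguments in r3-r8. A userspace C model of the reload, with a plain array standing in for the GPR file:]

```c
#include <stdint.h>

#define NGPRS 32

/* Mirror of REST_GPR(0,r1) / REST_4GPRS(3,r1) / REST_2GPRS(7,r1):
 * reload r0 (syscall number) and r3-r8 (arguments) from the saved
 * pt_regs frame after a C call may have clobbered the volatiles. */
static void restore_syscall_regs(uint64_t gprs[NGPRS], const uint64_t frame[NGPRS])
{
	gprs[0] = frame[0];
	for (int i = 3; i <= 8; i++)
		gprs[i] = frame[i];
}
```

(r9 is not reloaded here because the macro recomputes it directly with `addi r9,r1,STACK_FRAME_OVERHEAD`.)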

> 
> >  
> > /*
> >  * A syscall should always be called with interrupts enabled
> > diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> > index 77dc6d8288eb..ea75da0fd506 100644
> > --- a/arch/powerpc/kernel/process.c
> > +++ b/arch/powerpc/kernel/process.c
> > @@ -951,6 +951,23 @@ void tm_recheckpoint(struct thread_struct *thread)
> > if (!(thread->regs->msr & MSR_TM))
> > return;
> >  
> > +   /*
> > +* This is 'that' comment.
> 
> I think I'm in the loop here but I don't actually know what this means. 
> 
> Senior Mikey moment or Crazy Cyril comments? I'll let the peanut gallery 
> decide.
> 

Oh quite possibly crazy Cyril comment that will have to be...
normalised. I should actually delete this and see if that's still the
case.

> > +*
> > +* If we get where with tm suspended or active then something
> 
> s/where/here/
> 
> > +* has gone wrong

Re: [RFC PATCH 05/12] [WIP] powerpc/tm: Reclaim/recheckpoint on entry/exit

2018-02-19 Thread Michael Neuling
On Tue, 2018-02-20 at 11:22 +1100, Cyril Bur wrote:


The comment from the cover sheet should be here

> ---
>  arch/powerpc/include/asm/exception-64s.h | 25 +
>  arch/powerpc/kernel/entry_64.S   |  5 +
>  arch/powerpc/kernel/process.c | 37 
> 
>  3 files changed, 63 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/exception-64s.h 
> b/arch/powerpc/include/asm/exception-64s.h
> index 471b2274fbeb..f904f19a9ec2 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -35,6 +35,7 @@
>   * implementations as possible.
>   */
>  #include 
> +#include 
>  
>  /* PACA save area offsets (exgen, exmc, etc) */
>  #define EX_R9  0
> @@ -127,6 +128,26 @@
>   hrfid;  \
>   b   hrfi_flush_fallback
>  
> +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> +#define TM_KERNEL_ENTRY  
> \
> + ld  r3,_MSR(r1);\
> + /* Probably don't need to check if coming from user/kernel */   \
> + /* If TM is suspended or active then we must have come from*/   \
> + /* userspace */ \
> + andi.   r0,r3,MSR_PR;   \
> + beq 1f; \
> + rldicl. r3,r3,(64-MSR_TS_LG),(64-2); /* SUSPENDED or ACTIVE*/   \
> + beql+   1f; /* Not SUSPENDED or ACTIVE */   \
> + bl  save_nvgprs;\
> + RECONCILE_IRQ_STATE(r10,r11);   \
> + li  r3,TM_CAUSE_MISC;   \
> + bl  tm_reclaim_current; /* uint8 cause */   \
> +1:
> +
> +#else /* CONFIG_PPC_TRANSACTIONAL_MEM */
> +#define TM_KERNEL_ENTRY
> +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
> +
>  #ifdef CONFIG_RELOCATABLE
>  #define __EXCEPTION_RELON_PROLOG_PSERIES_1(label, h) \
>   mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
> @@ -675,6 +696,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
>   EXCEPTION_PROLOG_COMMON(trap, area);\
>   /* Volatile regs are potentially clobbered here */  \
>   additions;  \
> + /* This is going to need to go somewhere else as well */\
> + /* See comment in tm_recheckpoint()   */\
> + TM_KERNEL_ENTRY;\
>   addi    r3,r1,STACK_FRAME_OVERHEAD; \
>   bl  hdlr;   \
>   b   ret
> @@ -689,6 +713,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
>   EXCEPTION_PROLOG_COMMON_3(trap);\
>   /* Volatile regs are potentially clobbered here */  \
>   additions;  \
> + TM_KERNEL_ENTRY;\
>   addi    r3,r1,STACK_FRAME_OVERHEAD; \
>   bl  hdlr
>  
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 2cb5109a7ea3..107c15c6f48b 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -126,6 +126,11 @@ BEGIN_FW_FTR_SECTION
>  33:
>  END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
>  #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE && CONFIG_PPC_SPLPAR */
> + TM_KERNEL_ENTRY
> + REST_GPR(0,r1)
> + REST_4GPRS(3,r1)
> + REST_2GPRS(7,r1)
> + addi    r9,r1,STACK_FRAME_OVERHEAD

Why are we doing these restores here now?

>  
>   /*
>* A syscall should always be called with interrupts enabled
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 77dc6d8288eb..ea75da0fd506 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -951,6 +951,23 @@ void tm_recheckpoint(struct thread_struct *thread)
>   if (!(thread->regs->msr & MSR_TM))
>   return;
>  
> + /*
> +  * This is 'that' comment.

I think I'm in the loop here but I don't actually know what this means. 

Senior Mikey moment or Crazy Cyril comments? I'll let the peanut gallery decide.

> +  *
> +  * If we get where with tm suspended or active then something

s/where/here/

> +  * has gone wrong. I've added this now as a proof of concept.
> +  *
> +  * The problem I'm seeing without it is an attempt to
> +  * recheckpoint a CPU without a previous reclaim.
> +  *
> +  * I've probably missed an exception entry with the
> +  * TM_KERNEL_ENTRY macro. Should be easy enough to find.
> +  */
> + if (MSR_TM_ACTIVE(mfmsr()))
> + return;

I don't really get this.  Wouldn't this test
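[Editorial note: both the `rldicl.`/`beql+` pair in TM_KERNEL_ENTRY and the MSR_TM_ACTIVE() guard questioned here reduce to extracting the 2-bit MSR[TS] field. A C rendering, with the bit position assumed rather than taken from asm/reg.h:]

```c
#include <stdint.h>

#define MSR_TS_LG 33   /* assumed low bit of the 2-bit TS field */

/* Equivalent of rldicl. rD,rS,(64-MSR_TS_LG),(64-2): rotate the TS
 * field down to bits 1:0 and mask everything else off.
 * 0 = no transaction, 1 = suspended, 2 = transactional. */
static unsigned int msr_ts(uint64_t msr)
{
	return (unsigned int)((msr >> MSR_TS_LG) & 0x3);
}
```

With this, the reclaim is skipped when `msr_ts()` is 0, and the guard in tm_recheckpoint() bails out when it is non-zero.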

[RFC PATCH 05/12] [WIP] powerpc/tm: Reclaim/recheckpoint on entry/exit

2018-02-19 Thread Cyril Bur
---
 arch/powerpc/include/asm/exception-64s.h | 25 +
 arch/powerpc/kernel/entry_64.S   |  5 +
 arch/powerpc/kernel/process.c | 37 
 3 files changed, 63 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h 
b/arch/powerpc/include/asm/exception-64s.h
index 471b2274fbeb..f904f19a9ec2 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -35,6 +35,7 @@
  * implementations as possible.
  */
 #include 
+#include 
 
 /* PACA save area offsets (exgen, exmc, etc) */
 #define EX_R9  0
@@ -127,6 +128,26 @@
hrfid;  \
b   hrfi_flush_fallback
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+#define TM_KERNEL_ENTRY
\
+   ld  r3,_MSR(r1);\
+   /* Probably don't need to check if coming from user/kernel */   \
+   /* If TM is suspended or active then we must have come from*/   \
+   /* userspace */ \
+   andi.   r0,r3,MSR_PR;   \
+   beq 1f; \
+   rldicl. r3,r3,(64-MSR_TS_LG),(64-2); /* SUSPENDED or ACTIVE*/   \
+   beql+   1f; /* Not SUSPENDED or ACTIVE */   \
+   bl  save_nvgprs;\
+   RECONCILE_IRQ_STATE(r10,r11);   \
+   li  r3,TM_CAUSE_MISC;   \
+   bl  tm_reclaim_current; /* uint8 cause */   \
+1:
+
+#else /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#define TM_KERNEL_ENTRY
+#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+
 #ifdef CONFIG_RELOCATABLE
 #define __EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)   \
mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
@@ -675,6 +696,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
EXCEPTION_PROLOG_COMMON(trap, area);\
/* Volatile regs are potentially clobbered here */  \
additions;  \
+   /* This is going to need to go somewhere else as well */\
+   /* See comment in tm_recheckpoint()   */\
+   TM_KERNEL_ENTRY;\
	addi	r3,r1,STACK_FRAME_OVERHEAD; \
bl  hdlr;   \
b   ret
@@ -689,6 +713,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
EXCEPTION_PROLOG_COMMON_3(trap);\
/* Volatile regs are potentially clobbered here */  \
additions;  \
+   TM_KERNEL_ENTRY;\
	addi	r3,r1,STACK_FRAME_OVERHEAD; \
bl  hdlr
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 2cb5109a7ea3..107c15c6f48b 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -126,6 +126,11 @@ BEGIN_FW_FTR_SECTION
 33:
 END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE && CONFIG_PPC_SPLPAR */
+   TM_KERNEL_ENTRY
+   REST_GPR(0,r1)
+   REST_4GPRS(3,r1)
+   REST_2GPRS(7,r1)
+   addi    r9,r1,STACK_FRAME_OVERHEAD
 
/*
 * A syscall should always be called with interrupts enabled
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 77dc6d8288eb..ea75da0fd506 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -951,6 +951,23 @@ void tm_recheckpoint(struct thread_struct *thread)
if (!(thread->regs->msr & MSR_TM))
return;
 
+   /*
+* This is 'that' comment.
+*
+* If we get where with tm suspended or active then something
+* has gone wrong. I've added this now as a proof of concept.
+*
+* The problem I'm seeing without it is an attempt to
+* recheckpoint a CPU without a previous reclaim.
+*
+* I've probably missed an exception entry with the
+* TM_KERNEL_ENTRY macro. Should be easy enough to find.
+*/
+   if (MSR_TM_ACTIVE(mfmsr()))
+   return;
+
+   tm_enable();
+
/* We really can't be interrupted here as the TEXASR registers can't
 * change and later in the trecheckpoint code, we have a userspace R1.
 * So let's hard disable over this region.
@@ -1009,6 +1026,13 @@ static inline void tm_recheckpoint_new_task(struct 
task_struct *new)
 static inline void __switch_to_tm(struct task_struct *prev,
struct task_struct *new)
 {
+   /*
+* So, with the rewor