Re: [PATCH] macintosh: Add module license to ans-lcd

2018-01-29 Thread Daniel Axtens
Hi,

That matches the SPDX identifier from the top of the file, so:

Reviewed-by: Daniel Axtens 

Regards,
Daniel

Larry Finger  writes:

> In kernel 4.15, the modprobe step on my PowerBook G5 started complaining that
> there was no module license for ans-lcd.
>
> Signed-off-by: Larry Finger 
> ---
>  drivers/macintosh/ans-lcd.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/macintosh/ans-lcd.c b/drivers/macintosh/ans-lcd.c
> index 1de81d922d8a..c8e078b911c7 100644
> --- a/drivers/macintosh/ans-lcd.c
> +++ b/drivers/macintosh/ans-lcd.c
> @@ -201,3 +201,4 @@ anslcd_exit(void)
>  
>  module_init(anslcd_init);
>  module_exit(anslcd_exit);
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.16.1


Re: [PATCH 25/26] KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM.

2018-01-29 Thread Simon Guo
Hi Paul,
On Wed, Jan 24, 2018 at 03:02:58PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:38PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > Currently the guest kernel doesn't handle TAR facility unavailable and
> > always runs with the TAR bit on. PR KVM will lazily enable TAR. TAR is
> > not a frequently used register and is not included in the SVCPU struct.
> > 
> > To make it work for transactional memory in PR KVM:
> > 1) Flush/giveup TAR at kvmppc_save_tm_pr().
> > 2) If we receive a TAR facility-unavailable exception inside a
> > transaction, the checkpointed TAR might be a TAR value from another
> > process. So we need to treclaim the transaction, load the desired TAR
> > value into the register, and then perform trecheckpoint.
> > 3) Load the TAR facility at kvmppc_restore_tm_pr() when TM is active.
> > The reason we always load TAR when restoring TM is that, if we don't,
> > a TAR facility-unavailable exception during TM active falls into one of
> > two cases:
> > case 1: it is the 1st TAR fac unavail exception after tbegin.
> > vcpu->arch.tar should be reloaded as the checkpoint TAR val.
> > case 2: it is the 2nd or later TAR fac unavail exception after tbegin.
> > vcpu->arch.tar_tm should be reloaded as the checkpoint TAR val.
> > Distinguishing the above 2 cases adds unnecessary difficulty.
> > 
> > At the end of emulating treclaim., the correct TAR val needs to be
> > loaded into the register if the FSCR_TAR bit is on. At the beginning of
> > emulating trechkpt., TAR needs to be flushed so that the right TAR val
> > can be copied into tar_tm.
> 
> Would it be simpler always to load up TAR when guest_MSR[TM] is 1?
> 
> Paul.
Sure, it will be a similar solution to the one for the math regs;
something like the sketch below.
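A rough sketch of that direction (my reading of the suggestion, with a
hypothetical helper name; not the actual patch):

/* Sketch: before entering the guest, keep TAR loaded whenever the
 * guest MSR has TM set, so a TAR facility-unavailable interrupt can
 * never arrive in the middle of a transaction. */
static void kvmppc_preload_tar_for_tm(struct kvm_vcpu *vcpu) /* hypothetical */
{
	if (kvmppc_get_msr(vcpu) & MSR_TM) {
		mtspr(SPRN_TAR, vcpu->arch.tar); /* make the guest TAR live */
		vcpu->arch.fscr |= FSCR_TAR;     /* assumed: keep facility on */
	}
}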
Thanks for the suggestion,

BR
- Simon


Re: [PATCH 23/26] KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 08:44:16PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:36PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > Currently a privileged guest runs with TM disabled.
> > 
> > Although the privileged guest cannot initiate a new transaction,
> > it can use tabort to terminate its problem state's transaction.
> > So it is still necessary to emulate tabort. for a privileged guest.
> > 
> > This patch adds emulation of tabort. for a privileged guest.
> > 
> > Tested with:
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
> > 
> > Signed-off-by: Simon Guo 
> > ---
> >  arch/powerpc/include/asm/kvm_book3s.h |  1 +
> >  arch/powerpc/kvm/book3s_emulate.c | 31 +++
> >  arch/powerpc/kvm/book3s_pr.c  |  2 +-
> >  3 files changed, 33 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
> > b/arch/powerpc/include/asm/kvm_book3s.h
> > index 524cd82..8bd454c 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -258,6 +258,7 @@ extern void kvmppc_copy_from_svcpu(struct kvm_vcpu 
> > *vcpu,
> >  void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> >  void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> >  void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu);
> > +void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu);
> 
> Why do you add this declaration, and change it from "static inline" to
> "inline" below, when this patch doesn't use it?  Also, making it
> "inline" is pointless if it has a caller outside the source file where
> it's defined (if gcc wants to inline uses of it inside the same source
> file, it will do so anyway even without the "inline" keyword.)
> 
> Paul.
It is a leftover from my previous rework. Sorry, I will remove it.

Thanks,
- Simon


Re: [PATCH 21/26] KVM: PPC: Book3S PR: adds emulation for treclaim.

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 08:23:23PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:34PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > This patch adds support for "treclaim." emulation when a PR KVM guest
> > executes treclaim. and traps to the host.
> > 
> > We first perform treclaim. to save the TM checkpoint. Then it is
> > necessary to update the vcpu's current register content with the
> > checkpointed vals. When we rfid into the guest again, that current
> > register content (now the checkpoint vals) will be loaded into the
> > regs.
> > 
> > Signed-off-by: Simon Guo 
> > ---
> >  arch/powerpc/include/asm/reg.h|  4 +++
> >  arch/powerpc/kvm/book3s_emulate.c | 66 
> > ++-
> >  2 files changed, 69 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> > index 6c293bc..b3bcf6b 100644
> > --- a/arch/powerpc/include/asm/reg.h
> > +++ b/arch/powerpc/include/asm/reg.h
> > @@ -244,12 +244,16 @@
> >  #define SPRN_TEXASR0x82/* Transaction EXception & Summary */
> >  #define SPRN_TEXASRU   0x83/* ''  ''  ''Upper 32  */
> >  #define TEXASR_FC_LG   (63 - 7)/* Failure Code */
> > +#define TEXASR_AB_LG   (63 - 31)   /* Abort */
> > +#define TEXASR_SU_LG   (63 - 32)   /* Suspend */
> >  #define TEXASR_HV_LG   (63 - 34)   /* Hypervisor state*/
> >  #define TEXASR_PR_LG   (63 - 35)   /* Privilege level */
> >  #define TEXASR_FS_LG   (63 - 36)   /* failure summary */
> >  #define TEXASR_EX_LG   (63 - 37)   /* TFIAR exact bit */
> >  #define TEXASR_ROT_LG  (63 - 38)   /* ROT bit */
> >  #define TEXASR_FC  (ASM_CONST(0xFF) << TEXASR_FC_LG)
> > +#define TEXASR_AB  __MASK(TEXASR_AB_LG)
> > +#define TEXASR_SU  __MASK(TEXASR_SU_LG)
> >  #define TEXASR_HV  __MASK(TEXASR_HV_LG)
> >  #define TEXASR_PR  __MASK(TEXASR_PR_LG)
> >  #define TEXASR_FS  __MASK(TEXASR_FS_LG)
> 
> It would be good to collect up all the modifications you need to make
> to reg.h into a single patch at the beginning of the patch series --
> that will make it easier to merge it all.
> 
OK.

> > diff --git a/arch/powerpc/kvm/book3s_emulate.c 
> > b/arch/powerpc/kvm/book3s_emulate.c
> > index 1eb1900..51c0e20 100644
> > --- a/arch/powerpc/kvm/book3s_emulate.c
> > +++ b/arch/powerpc/kvm/book3s_emulate.c
> 
> [snip]
> 
> > @@ -127,6 +130,42 @@ void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
> > vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
> >  }
> >  
> > +static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
> > +{
> > +   unsigned long guest_msr = kvmppc_get_msr(vcpu);
> > +   int fc_val = ra_val ? ra_val : 1;
> > +
> > +   kvmppc_save_tm_pr(vcpu);
> > +
> > +   preempt_disable();
> > +   kvmppc_copyfrom_vcpu_tm(vcpu);
> > +   preempt_enable();
> > +
> > +   /*
> > +* treclaim need quit to non-transactional state.
> > +*/
> > +   guest_msr &= ~(MSR_TS_MASK);
> > +   kvmppc_set_msr(vcpu, guest_msr);
> > +
> > +   preempt_disable();
> > +   tm_enable();
> > +   vcpu->arch.texasr = mfspr(SPRN_TEXASR);
> > +   vcpu->arch.texasr &= ~TEXASR_FC;
> > +   vcpu->arch.texasr |= ((u64)fc_val << TEXASR_FC_LG);
> 
> You're doing failure recording here unconditionally, but the
> architecture says that treclaim. only does failure recording if
> TEXASR_FS is not already set.
> 
I need to add that, and the CR0 setting is also missing; roughly as in
the sketch below. Thanks for the catch.
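A sketch of the conditional failure recording (my assumption of the
shape, not the final patch; the CR0 setting is a separate fix):

	tm_enable();
	vcpu->arch.texasr = mfspr(SPRN_TEXASR);
	/* Per the ISA, treclaim. performs failure recording only if
	 * TEXASR[FS] is not already set. */
	if (!(vcpu->arch.texasr & TEXASR_FS)) {
		vcpu->arch.texasr &= ~TEXASR_FC;
		vcpu->arch.texasr |= ((u64)fc_val << TEXASR_FC_LG) | TEXASR_FS;
		mtspr(SPRN_TEXASR, vcpu->arch.texasr);
	}
	tm_disable();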

[snip]

BR,
- Simon


Re: [PATCH 22/26] KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 08:36:44PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:35PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > This patch adds host emulation for when a PR KVM guest executes
> > "trechkpt.", which is a privileged instruction and will trap into
> > the host.
> > 
> > We first copy the vcpu's ongoing content into the vcpu's TM
> > checkpoint content, then perform kvmppc_restore_tm_pr() to do
> > trechkpt. with the updated vcpu TM checkpoint vals.
> > 
> > Signed-off-by: Simon Guo 
> 
> [snip]
> 
> > +static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
> > +{
> > +   unsigned long guest_msr = kvmppc_get_msr(vcpu);
> > +
> > +   preempt_disable();
> > +   vcpu->arch.save_msr_tm = MSR_TS_S;
> > +   vcpu->arch.save_msr_tm &= ~(MSR_FP | MSR_VEC | MSR_VSX);
> 
> This looks odd, since you are clearing bits when you have just set
> save_msr_tm to a constant value that doesn't have these bits set.
> This could be taken as a sign that the previous line has a bug and you
> meant "|=" or something similar instead of "=".  I think you probably
> did mean "=", in which case you should remove the line clearing
> FP/VEC/VSX.

I will rework and remove "save_msr_tm" from the code.

Thanks,
- Simon


Re: [PATCH 19/26] KVM: PPC: Book3S PR: always fail transaction in guest privilege state

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 07:30:33PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:32PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > Currently the kernel doesn't use transactional memory.
> > And there is an issue for a privileged guest:
> > the tbegin/tsuspend/tresume/tabort TM instructions can impact MSR TM bits
> > without trapping into the PR host. So the following code will lead to a
> > false mfmsr result:
> > tbegin  <- MSR bits update to Transaction active.
> > beq <- failover handler branch
> > mfmsr   <- still read MSR bits from magic page with
> > transaction inactive.
> > 
> > It is not an issue for a non-privileged guest since its mfmsr is not
> > patched with the magic page and will always trap into the PR host.
> > 
> > This patch always fails the tbegin attempt for a privileged guest, so that
> > the above issue is prevented. It is benign since currently the (guest)
> > kernel doesn't initiate a transaction.
> > 
> > Test case:
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
> > 
> > Signed-off-by: Simon Guo 
> 
> You need to handle the case where MSR_TM is not set in the guest MSR,
> and give the guest a facility unavailable interrupt.
Thanks for the catch.
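For illustration, the check could look roughly like this (a sketch
under my assumptions; the exact interrupt-delivery helper may differ,
and setting the interrupt cause for TM is omitted):

	/* In the tbegin. emulation path: a guest with MSR[TM]=0 should
	 * get a facility-unavailable interrupt, not a failed transaction. */
	if (!(kvmppc_get_msr(vcpu) & MSR_TM)) {
		kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL);
		return EMULATE_AGAIN;
	}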

> 
> [snip]
> 
> > --- a/arch/powerpc/kvm/book3s_pr.c
> > +++ b/arch/powerpc/kvm/book3s_pr.c
> > @@ -255,7 +255,7 @@ static inline void kvmppc_save_tm_sprs(struct kvm_vcpu 
> > *vcpu)
> > tm_disable();
> >  }
> >  
> > -static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
> > +inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
> 
> You should probably remove the 'inline' here too.
OK.

BR,
- Simon



Re: [PATCH 18/26] KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on active TM SPRs

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 07:17:45PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:31PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > mfspr/mtspr on the TM SPRs (TEXASR/TFIAR/TFHAR) are non-privileged
> > instructions and can be executed by a PR KVM guest in problem state
> > without trapping into the host. We only emulate mtspr/mfspr of
> > texasr/tfiar/tfhar in the guest PR=0 state.
> > 
> > When we emulate mtspr of TM SPRs in the guest PR=0 state, the emulation
> > result needs to be visible to the guest PR=1 state. That is, the actual
> > TM SPR val should be loaded into the actual registers.
> > 
> > We already flush the TM SPRs into the vcpu when switching out of the
> > CPU, and load the TM SPRs when switching back.
> > 
> > This patch corrects the mfspr()/mtspr() emulation for TM SPRs so that
> > the actual source/dest is based on the actual TM SPRs.
> > 
> > Signed-off-by: Simon Guo 
> > ---
> >  arch/powerpc/kvm/book3s_emulate.c | 35 +++
> >  1 file changed, 27 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/powerpc/kvm/book3s_emulate.c 
> > b/arch/powerpc/kvm/book3s_emulate.c
> > index e096d01..c2836330 100644
> > --- a/arch/powerpc/kvm/book3s_emulate.c
> > +++ b/arch/powerpc/kvm/book3s_emulate.c
> > @@ -521,13 +521,26 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu 
> > *vcpu, int sprn, ulong spr_val)
> > break;
> >  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > case SPRN_TFHAR:
> > -   vcpu->arch.tfhar = spr_val;
> > -   break;
> > case SPRN_TEXASR:
> > -   vcpu->arch.texasr = spr_val;
> > -   break;
> > case SPRN_TFIAR:
> > -   vcpu->arch.tfiar = spr_val;
> > +   if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu))) {
> > +   /* it is illegal to mtspr() TM regs in
> > +* other than non-transactional state.
> > +*/
> > +   kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
> > +   emulated = EMULATE_AGAIN;
> > +   break;
> > +   }
> 
> We also need to check that the guest has TM enabled in the guest MSR,
> and give them a facility unavailable interrupt if not.
> 
> > +
> > +   tm_enable();
> > +   if (sprn == SPRN_TFHAR)
> > +   mtspr(SPRN_TFHAR, spr_val);
> > +   else if (sprn == SPRN_TEXASR)
> > +   mtspr(SPRN_TEXASR, spr_val);
> > +   else
> > +   mtspr(SPRN_TFIAR, spr_val);
> > +   tm_disable();
> 
> I haven't seen any checks that we are on a CPU that has TM.  What
> happens if a guest does a mtmsrd with TM=1 and then a mtspr to TEXASR
> when running on a POWER7 (assuming the host kernel was compiled with
> CONFIG_PPC_TRANSACTIONAL_MEM=y)?
> 
> Ideally, if the host CPU does not have TM functionality, these mtsprs
> would be treated as no-ops and attempts to set the TM or TS fields in
> the guest MSR would be ignored.
> 
> > +
> > break;
> >  #endif
> >  #endif
> > @@ -674,13 +687,19 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu 
> > *vcpu, int sprn, ulong *spr_val
> > break;
> >  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > case SPRN_TFHAR:
> > -   *spr_val = vcpu->arch.tfhar;
> > +   tm_enable();
> > +   *spr_val = mfspr(SPRN_TFHAR);
> > +   tm_disable();
> > break;
> > case SPRN_TEXASR:
> > -   *spr_val = vcpu->arch.texasr;
> > +   tm_enable();
> > +   *spr_val = mfspr(SPRN_TEXASR);
> > +   tm_disable();
> > break;
> > case SPRN_TFIAR:
> > -   *spr_val = vcpu->arch.tfiar;
> > +   tm_enable();
> > +   *spr_val = mfspr(SPRN_TFIAR);
> > +   tm_disable();
> > break;
> 
> These need to check MSR_TM in the guest MSR, and become no-ops on
> machines without TM capability.

Thanks for the above catches. I will rework later.
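A sketch of the guards being asked for, shown on the mfspr TEXASR case
(assumptions: cpu_has_feature(CPU_FTR_TM) for the hardware check, and
a facility-unavailable interrupt for the guest-MSR check):

	case SPRN_TEXASR:
		if (!cpu_has_feature(CPU_FTR_TM))
			break;	/* no TM hardware: treat as a no-op */
		if (!(kvmppc_get_msr(vcpu) & MSR_TM)) {
			kvmppc_book3s_queue_irqprio(vcpu,
					BOOK3S_INTERRUPT_FAC_UNAVAIL);
			break;
		}
		tm_enable();
		*spr_val = mfspr(SPRN_TEXASR);
		tm_disable();
		break;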

BR,
- Simon


Re: [PATCH 17/26] KVM: PPC: Book3S PR: add math support for PR KVM HTM

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 06:29:27PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:30PM +0800, wei.guo.si...@gmail.com wrote:
> > 
> > From: Simon Guo 
> > 
> > The math registers will be saved into vcpu->arch.fp/vr and corresponding
> > vcpu->arch.fp_tm/vr_tm area.
> > 
> > We flush or giveup the math regs into vcpu->arch.fp/vr before saving
> > transaction. After transaction is restored, the math regs will be loaded
> > back into regs.
> 
> It looks to me that you are loading up the math regs on every vcpu
> load, not just those with an active transaction.  That seems like
> overkill.
> 
> > If there is a FP/VEC/VSX unavailable exception during transaction active
> > state, the math checkpoint content might be incorrect and we need to do
> > treclaim./load the correct checkpoint val/trechkpt. sequence to retry the
> > transaction.
> 
> I would prefer a simpler approach where just before entering the
> guest, we check if the guest MSR TM bit is set, and if so we make sure
> that whichever math regs are enabled in the guest MSR are actually
> loaded on the CPU, that is, that guest_owned_ext has the same bits set
> as the guest MSR.  Then we never have to handle a FP/VEC/VSX
> unavailable interrupt with a transaction active (other than by simply
> passing it on to the guest).

Good idea. I will rework it that way.
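A minimal sketch of that approach (my assumption of the shape; reusing
kvmppc_handle_ext() here is hypothetical):

	/* Just before entering the guest: if TM is enabled in the guest
	 * MSR, make sure every math facility the guest MSR enables is
	 * actually loaded, i.e. guest_owned_ext matches the guest MSR. */
	ulong guest_msr = kvmppc_get_msr(vcpu);
	ulong ext = guest_msr & (MSR_FP | MSR_VEC | MSR_VSX);

	if ((guest_msr & MSR_TM) &&
	    (vcpu->arch.guest_owned_ext & ext) != ext)
		kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, ext);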

Thanks,
- Simon


Re: [PATCH 16/26] KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for PR KVM

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 05:04:09PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:29PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > The transactional memory checkpoint area save/restore is triggered
> > when the vcpu's qemu process is switched out of/onto a CPU, i.e.
> > at kvmppc_core_vcpu_put_pr() and kvmppc_core_vcpu_load_pr().
> > 
> > MSR TM active state is determined by the TS bits:
> > active: 10 (transactional) or 01 (suspended)
> > inactive: 00 (non-transactional)
> > We don't "fake" TM functionality for the guest. We "sync" the guest
> > virtual MSR TM active state (10 or 01) with the shadow MSR. That is to
> > say, we don't emulate a transactional guest with a TM-inactive MSR.
> > 
> > TM SPR support(TFIAR/TFAR/TEXASR) has already been supported by
> > commit 9916d57e64a4 ("KVM: PPC: Book3S PR: Expose TM registers").
> > Math register support (FPR/VMX/VSX) will be done at subsequent
> > patch.
> > 
> > - TM save:
> > When kvmppc_save_tm_pr() is invoked, whether the TM context needs to
> > be saved can be determined from the current host MSR state:
> > * TM active - save the TM context
> > * TM inactive - no need to do so; only save the TM SPRs.
> > 
> > - TM restore:
> > However, when kvmppc_restore_tm_pr() is invoked, there is an
> > issue in determining whether a TM restore should be performed.
> > The TM-active host MSR val saved on the kernel stack is not loaded yet.
> 
> I don't follow this exactly.  What is the value saved on the kernel
> stack?
> 
> I get that we may not have done the sync from the shadow MSR back to
> the guest MSR, since that is done in kvmppc_handle_exit_pr() with
> interrupts enabled and we might be unloading because we got
> preempted.  In that case we would have svcpu->in_use = 1, and we
> should in fact do the sync of the TS bits from shadow_msr to the vcpu
> MSR value in kvmppc_copy_from_svcpu().  If you did that then both the
> load and put functions could just rely on the vcpu's MSR value.
> 
Yes, that looks cleaner and simpler!
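For reference, a sketch of that sync in kvmppc_copy_from_svcpu()
(assuming the TS bits are taken from vcpu->arch.shadow_msr):

	/* Carry the TS bits from the shadow MSR back into the guest
	 * MSR, so vcpu load/put can rely on the vcpu's MSR alone. */
	unsigned long old_msr = kvmppc_get_msr(vcpu);

	old_msr &= ~MSR_TS_MASK;
	old_msr |= vcpu->arch.shadow_msr & MSR_TS_MASK;
	kvmppc_set_msr_fast(vcpu, old_msr);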

> > We don't know whether there is a transaction to be restored from the
> > current host MSR TM status at kvmppc_restore_tm_pr(). To solve this
> > issue, we save the current MSR into vcpu->arch.save_msr_tm at
> > kvmppc_save_tm_pr(), and kvmppc_restore_tm_pr() checks the TS bits of
> > vcpu->arch.save_msr_tm to decide whether to do a TM restore.
> > 
> > Signed-off-by: Simon Guo 
> > Suggested-by: Paul Mackerras 
> > ---
> >  arch/powerpc/include/asm/kvm_book3s.h |  6 +
> >  arch/powerpc/include/asm/kvm_host.h   |  1 +
> >  arch/powerpc/kvm/book3s_pr.c  | 41 
> > +++
> >  3 files changed, 48 insertions(+)
> > 
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
> > b/arch/powerpc/include/asm/kvm_book3s.h
> > index 9a66700..d8dbfa5 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -253,6 +253,12 @@ extern void kvmppc_copy_to_svcpu(struct 
> > kvmppc_book3s_shadow_vcpu *svcpu,
> >  struct kvm_vcpu *vcpu);
> >  extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
> >struct kvmppc_book3s_shadow_vcpu *svcpu);
> > +
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> > +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> > +#endif
> 
> It would be cleaner at the point where you use these if you added a
> #else clause to define a null version for the case when transactional
> memory support is not configured, like this:
> 
> +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> +void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> +#else
> +static inline void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu) {}
> +static inline void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu) {}
> +#endif
> 
> That way you don't need the #ifdef at the call site.
> 
Thanks for the tip.

> > @@ -131,6 +135,10 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu 
> > *vcpu)
> > if (kvmppc_is_split_real(vcpu))
> > kvmppc_unfixup_split_real(vcpu);
> >  
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +   kvmppc_save_tm_pr(vcpu);
> > +#endif
> > +
> > kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
> > kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
> 
> I think you should do these giveup_ext/giveup_fac calls before calling
> kvmppc_save_tm_pr, because the treclaim in kvmppc_save_tm_pr will
> modify all the FP/VEC/VSX registers and the TAR.
I handled giveup_ext()/giveup_fac() within kvmppc_save_tm_pr() so that
other places (like kvmppc_emulate_treclaim()) can invoke
kvmppc_save_tm_pr() easily. But I think moving the call sequence as
you suggested above will be more readable.

Thanks,
- Simon


Re: [PATCH 04/26] KVM: PPC: Book3S PR: add C function wrapper for _kvmppc_save/restore_tm()

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 04:49:16PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:17PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > Currently _kvmppc_save/restore_tm() APIs can only be invoked from
> > assembly function. This patch adds C function wrappers for them so
> > that they can be safely called from C function.
> > 
> > Signed-off-by: Simon Guo 
> 
> [snip]
> 
> > --- a/arch/powerpc/include/asm/asm-prototypes.h
> > +++ b/arch/powerpc/include/asm/asm-prototypes.h
> > @@ -126,4 +126,11 @@ unsigned long __init prom_init(unsigned long r3, 
> > unsigned long r4,
> >  void _mcount(void);
> >  unsigned long prepare_ftrace_return(unsigned long parent, unsigned long 
> > ip);
> >  
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +/* Transaction memory related */
> > +struct kvm_vcpu;
> > +void _kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
> > +void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
> > +#endif
> 
> It's not generally necessary to have ifdefs around function
> declarations.  If the function is never defined because the feature
> is not configured in, that is fine.
> 
Got it. Thanks.

> > @@ -149,6 +149,58 @@ _GLOBAL(kvmppc_save_tm)
> > blr
> >  
> >  /*
> > + * _kvmppc_save_tm() is a wrapper around __kvmppc_save_tm(), so that it can
> > + * be invoked from C function by PR KVM only.
> > + */
> > +_GLOBAL(_kvmppc_save_tm_pr)
> > +   mflrr5
> > +   std r5, PPC_LR_STKOFF(r1)
> > +   stdur1, -SWITCH_FRAME_SIZE(r1)
> > +   SAVE_NVGPRS(r1)
> > +
> > +   /* save MSR since TM/math bits might be impacted
> > +* by __kvmppc_save_tm().
> > +*/
> > +   mfmsr   r5
> > +   SAVE_GPR(5, r1)
> > +
> > +   /* also save DSCR/CR so that it can be recovered later */
> > +   mfspr   r6, SPRN_DSCR
> > +   SAVE_GPR(6, r1)
> > +
> > +   mfcrr7
> > +   stw r7, _CCR(r1)
> > +
> > +   /* allocate stack frame for __kvmppc_save_tm since
> > +* it will save LR into its stackframe and we don't
> > +* want to corrupt _kvmppc_save_tm_pr's.
> > +*/
> > +   stdur1, -PPC_MIN_STKFRM(r1)
> 
> You don't need to do this.  In the PowerPC ELF ABI, functions always
> save their LR (i.e. their return address) in their *caller's* stack
> frame, not their own.  You have established a stack frame for
> _kvmppc_save_tm_pr above, and that is sufficient.  Same comment
> applies for _kvmppc_restore_tm_pr.
Ah, yes. I need to remove that.

Thanks,
- Simon


Re: [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm()

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 04:42:09PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:15PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > HV KVM and PR KVM need different MSR source to indicate whether
> > treclaim. or trecheckpoint. is necessary.
> > 
> > This patch add new parameter (guest MSR) for these kvmppc_save_tm/
> > kvmppc_restore_tm() APIs:
> > - For HV KVM, it is VCPU_MSR
> > - For PR KVM, it is current host MSR or VCPU_SHADOW_SRR1
> > 
> > This enhancement enables these 2 APIs to be reused by PR KVM later.
> > And the patch keeps HV KVM logic unchanged.
> > 
> > This patch also reworks kvmppc_save_tm()/kvmppc_restore_tm() to
> > have a clean ABI: r3 for vcpu and r4 for guest_msr.
> > 
> > Signed-off-by: Simon Guo 
> 
> Question: why do you switch from using HSTATE_HOST_R1 to HSTATE_SCRATCH2
> 
> > @@ -42,11 +45,11 @@ _GLOBAL(kvmppc_save_tm)
> > rldimi  r8, r0, MSR_TM_LG, 63-MSR_TM_LG
> > mtmsrd  r8
> >  
> > -   ld  r5, VCPU_MSR(r9)
> > -   rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
> > +   rldicl. r4, r4, 64 - MSR_TS_S_LG, 62
> > beq 1f  /* TM not active in guest. */
> >  
> > -   std r1, HSTATE_HOST_R1(r13)
> > +   std r1, HSTATE_SCRATCH2(r13)
> 
> ... here?
> 
> > @@ -166,17 +173,17 @@ _GLOBAL(kvmppc_restore_tm)
> >  * The user may change these outside of a transaction, so they must
> >  * always be context switched.
> >  */
> > -   ld  r5, VCPU_TFHAR(r4)
> > -   ld  r6, VCPU_TFIAR(r4)
> > -   ld  r7, VCPU_TEXASR(r4)
> > +   ld  r5, VCPU_TFHAR(r3)
> > +   ld  r6, VCPU_TFIAR(r3)
> > +   ld  r7, VCPU_TEXASR(r3)
> > mtspr   SPRN_TFHAR, r5
> > mtspr   SPRN_TFIAR, r6
> > mtspr   SPRN_TEXASR, r7
> >  
> > -   ld  r5, VCPU_MSR(r4)
> > +   mr  r5, r4
> > rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
> > beqlr   /* TM not active in guest */
> > -   std r1, HSTATE_HOST_R1(r13)
> > +   std r1, HSTATE_SCRATCH2(r13)
> 
> and here?
> 
> Please add a paragraph to the patch description explaining why you are
> making that change.
In subsequent patches, kvmppc_save_tm()/kvmppc_restore_tm() will be
invoked by wrapper functions that set up an additional stack frame and
update R1 (and then update HSTATE_HOST_R1 with an additional offset).
Although HSTATE_HOST_R1 is currently used safely (always PPC_STL before
entering the guest and PPC_LL in kvmppc_interrupt_pr()), I worried that
a future use might make an assumption about the HSTATE_HOST_R1 value
and cause trouble.

As a result, for the kvmppc_save_tm()/kvmppc_restore_tm() case, I chose
HSTATE_SCRATCH2 to save/restore r1. I will update the commit message.


Thanks,
- Simon




> 
> Paul.


Re: [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.

2018-01-29 Thread Simon Guo
Hi Paul,
On Tue, Jan 23, 2018 at 04:52:19PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:26PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo 
> > 
> > This patch adds 2 new APIs: kvmppc_copyto_vcpu_tm() and
> > kvmppc_copyfrom_vcpu_tm().  These 2 APIs will be used to copy from/to TM
> > data between VCPU_TM/VCPU area.
> > 
> > PR KVM will use these APIs for treclaim. or trchkpt. emulation.
> > 
> > Signed-off-by: Simon Guo 
> > Reviewed-by: Paul Mackerras 
> 
> Actually, I take that back.  You have missed XER. :)
Thanks for the catch. I will fix that.
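The fix should just be the missing pair of lines, roughly:

	vcpu->arch.xer_tm = vcpu->arch.xer;	/* in kvmppc_copyto_vcpu_tm() */
	vcpu->arch.xer = vcpu->arch.xer_tm;	/* in kvmppc_copyfrom_vcpu_tm() */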

> 
> Paul.

BR,
- Simon


Re: [PATCH 07/11] powerpc/64s: Add support for RFI flush of L1-D cache

2018-01-29 Thread Michael Ellerman
Christian Zigotzky  writes:

> FYI:
>
> A-EON AmigaOne X1000 (CPU P.A. Semi PWRficient PA6T-1682M with two PA6T 
> cores):
>
> /sys/devices/system/cpu/vulnerabilities/
>
> -r--r--r-- 1 root root 4096 Jan 25 09:38 meltdown
> -r--r--r-- 1 root root 4096 Jan 25 09:38 spectre_v1
> -r--r--r-- 1 root root 4096 Jan 25 09:38 spectre_v2
>
> meltdown Vulnerable
> spectre_v1 Not affected
> spectre_v2 Not affected

Which may or may not be true, this is still all a work in progress.

cheers


[PATCH v2 1/1] KVM: PPC: Book3S: Add MMIO emulation for VMX instructions

2018-01-29 Thread Jose Ricardo Ziviani
This patch provides the MMIO load/store vector indexed
X-Form emulation.

Instructions implemented:
lvx: the quadword in storage addressed by the result of EA & ~0xF
is loaded into VRT.

stvx: the contents of VRS are stored into the quadword in storage
addressed by the result of EA & ~0xF.

Signed-off-by: Jose Ricardo Ziviani 
---
 arch/powerpc/include/asm/kvm_host.h   |   2 +
 arch/powerpc/include/asm/kvm_ppc.h|   4 +
 arch/powerpc/include/asm/ppc-opcode.h |   6 ++
 arch/powerpc/kvm/emulate_loadstore.c  |  34 
 arch/powerpc/kvm/powerpc.c| 153 +-
 5 files changed, 198 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h 
b/arch/powerpc/include/asm/kvm_host.h
index 3aa5b577cd60..2c14a78c61a4 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -690,6 +690,7 @@ struct kvm_vcpu_arch {
u8 mmio_vsx_offset;
u8 mmio_vsx_copy_type;
u8 mmio_vsx_tx_sx_enabled;
+   u8 mmio_vmx_copy_nums;
u8 osi_needed;
u8 osi_enabled;
u8 papr_enabled;
@@ -800,6 +801,7 @@ struct kvm_vcpu_arch {
 #define KVM_MMIO_REG_QPR   0x0040
 #define KVM_MMIO_REG_FQPR  0x0060
 #define KVM_MMIO_REG_VSX   0x0080
+#define KVM_MMIO_REG_VMX   0x00a0
 
 #define __KVM_HAVE_ARCH_WQP
 #define __KVM_HAVE_CREATE_DEVICE
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index 9db18287b5f4..7765a800ddae 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -81,6 +81,10 @@ extern int kvmppc_handle_loads(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
 extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes,
int is_default_endian, int mmio_sign_extend);
+extern int kvmppc_handle_load128_by2x64(struct kvm_run *run,
+   struct kvm_vcpu *vcpu, unsigned int rt, int is_default_endian);
+extern int kvmppc_handle_store128_by2x64(struct kvm_run *run,
+   struct kvm_vcpu *vcpu, unsigned int rs, int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
   u64 val, unsigned int bytes,
   int is_default_endian);
diff --git a/arch/powerpc/include/asm/ppc-opcode.h 
b/arch/powerpc/include/asm/ppc-opcode.h
index ab5c1588b487..f1083bcf449c 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -156,6 +156,12 @@
 #define OP_31_XOP_LFDX  599
 #define OP_31_XOP_LFDUX631
 
+/* VMX Vector Load Instructions */
+#define OP_31_XOP_LVX   103
+
+/* VMX Vector Store Instructions */
+#define OP_31_XOP_STVX  231
+
 #define OP_LWZ  32
 #define OP_STFS 52
 #define OP_STFSU 53
diff --git a/arch/powerpc/kvm/emulate_loadstore.c 
b/arch/powerpc/kvm/emulate_loadstore.c
index af833531af31..7c92b6867f3e 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -58,6 +58,18 @@ static bool kvmppc_check_vsx_disabled(struct kvm_vcpu *vcpu)
 }
 #endif /* CONFIG_VSX */
 
+#ifdef CONFIG_ALTIVEC
+static bool kvmppc_check_altivec_disabled(struct kvm_vcpu *vcpu)
+{
+   if (!(kvmppc_get_msr(vcpu) & MSR_VEC)) {
+   kvmppc_core_queue_vec_unavail(vcpu);
+   return true;
+   }
+
+   return false;
+}
+#endif /* CONFIG_ALTIVEC */
+
 /*
  * XXX to do:
  * lfiwax, lfiwzx
@@ -98,6 +110,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_NONE;
vcpu->arch.mmio_sp64_extend = 0;
vcpu->arch.mmio_sign_extend = 0;
+   vcpu->arch.mmio_vmx_copy_nums = 0;
 
switch (get_op(inst)) {
case 31:
@@ -459,6 +472,27 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 rs, 4, 1);
break;
 #endif /* CONFIG_VSX */
+
+#ifdef CONFIG_ALTIVEC
+   case OP_31_XOP_LVX:
+   if (kvmppc_check_altivec_disabled(vcpu))
+   return EMULATE_DONE;
+   vcpu->arch.vaddr_accessed &= ~0xFULL;
+   vcpu->arch.mmio_vmx_copy_nums = 2;
+   emulated = kvmppc_handle_load128_by2x64(run, vcpu,
+   KVM_MMIO_REG_VMX|rt, 1);
+   break;
+
+   case OP_31_XOP_STVX:
+   if (kvmppc_check_altivec_disabled(vcpu))
+   return EMULATE_DONE;
+   vcpu->arch.vaddr_accessed &= ~0xFULL;
+   vcpu->arch.mmio_vmx_copy_nums = 2;
+   emulated = kvmppc_handle_store128_by2x64(run, vcpu,
+   rs, 

[PATCH v2 0/1] Implements MMIO emulation for lvx/stvx instructions

2018-01-29 Thread Jose Ricardo Ziviani
v2:
  - kvmppc_get_vsr_word_offset() moved back to its original place
  - EA AND ~0xF, following ISA.
  - fixed BE/LE cases

TESTS:

For testing purposes I wrote a small program that performs stvx/lvx both on
the program's virtual memory and via MMIO. Load/store to virtual memory is
the reference model I use to check that the MMIO results are correct
(because only MMIO is emulated by KVM). The core of such a check is
sketched below.
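A hypothetical sketch of the core of that check (not the actual test
program; both pointers are assumed 16-byte aligned, with 'io' an
mmap()ed MMIO region):

#include <altivec.h>
#include <string.h>

static int mmio_matches_ram(unsigned char *ram, volatile unsigned char *io)
{
	/* Store the same vector to normal RAM and to the MMIO mapping. */
	vector unsigned char v = vec_splats((unsigned char)0x21);

	__asm__ volatile("stvx %0,0,%1" : : "v"(v), "r"(ram) : "memory");
	__asm__ volatile("stvx %0,0,%1" : : "v"(v), "r"(io) : "memory");

	/* If KVM's MMIO emulation is correct, both must read back equal. */
	return memcmp(ram, (const void *)io, 16) == 0;
}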

Results:

HOST LE - GUEST BE
address: 0x10034850010
0x2143658778563412
io_address: 0x3fff89a2
0x2143658778563412

HOST LE - GUEST LE
address: 0x10033a20010
0x1234567887654321
io_address: 0x3fffb538
0x1234567887654321

HOST BE - GUEST BE
address: 0x1002c4a0010
0x2143658778563412
io_address: 0x3ffface4
0x2143658778563412

HOST BR - GUEST LE
address: 0x100225e0010
0x1234567887654321
io_address: 0x3fff7fcb
0x1234567887654321

This patch implements MMIO emulation for two instructions: lvx and stvx.

Jose Ricardo Ziviani (1):
  KVM: PPC: Book3S: Add MMIO emulation for VMX instructions

 arch/powerpc/include/asm/kvm_host.h   |   2 +
 arch/powerpc/include/asm/kvm_ppc.h|   4 +
 arch/powerpc/include/asm/ppc-opcode.h |   6 ++
 arch/powerpc/kvm/emulate_loadstore.c  |  34 
 arch/powerpc/kvm/powerpc.c| 153 +-
 5 files changed, 198 insertions(+), 1 deletion(-)

-- 
2.14.3



Re: [PATCH] macintosh: Add module license to ans-lcd

2018-01-29 Thread Larry Finger

On 01/29/2018 04:49 PM, Gabriel Paubert wrote:

On Mon, Jan 29, 2018 at 01:33:08PM -0600, Larry Finger wrote:

In kernel 4.15, the modprobe step on my PowerBook G5 started complaining that
 
PowerBook G5? Really, could you send a pic! :-)


That was a typo. It is a G4 Aluminum.

Larry


[PATCH] Mark ams driver as orphaned in MAINTAINERS

2018-01-29 Thread Michael Hanselmann
I no longer have any hardware with the Apple motion sensor and thus
relinquish maintainership of the driver.

Signed-off-by: Michael Hanselmann 
---
 MAINTAINERS | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index a9c9c9ff7..6a07de631 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -780,8 +780,8 @@ F:  drivers/net/ethernet/amd/xgbe/
 F: arch/arm64/boot/dts/amd/amd-seattle-xgbe*.dtsi
 
 AMS (Apple Motion Sensor) DRIVER
-M: Michael Hanselmann 
-S: Supported
+L: linuxppc-dev@lists.ozlabs.org
+S: Orphan
 F: drivers/macintosh/ams/
 
 ANALOG DEVICES INC AD9389B DRIVER
-- 
2.11.0



Re: macintosh: change some data types from int to bool

2018-01-29 Thread Michael Ellerman
"Gustavo A. R. Silva"  writes:

> Hi Michael,
>
> Quoting Michael Ellerman :
>
>> On Wed, 2018-01-24 at 01:42:28 UTC, "Gustavo A. R. Silva" wrote:
>>> Change the data type of the following variables from int to bool
>>> across all macintosh drivers:
>>>
>>> started
>>> slots_started
>>> pm121_started
>>> wf_smu_started
>>>
>>> Some of these issues were detected with the help of Coccinelle.
>>>
>>> Suggested-by: Michael Ellerman 
>>> Signed-off-by: Gustavo A. R. Silva 
>>
>> Applied to powerpc next, thanks.
>>
>> https://git.kernel.org/powerpc/c/4f256d561447c6e1bf8b70e19daae0
>>
>> cheers
>
> Awesome.
>
> If I can help out with anything else, please let me know.

Sure thing.

We have a TODO list of sorts on github, some of them are easy, some are
not, feel free to ask here or on an individual issue for help:

  https://github.com/linuxppc/linux/issues

cheers


Re: [PATCH] macintosh: Add module license to ans-lcd

2018-01-29 Thread Gabriel Paubert
On Mon, Jan 29, 2018 at 01:33:08PM -0600, Larry Finger wrote:
> In kernel 4.15, the modprobe step on my PowerBook G5 started complaining that

PowerBook G5? Really, could you send a pic! :-)

> there was no module license for ans-lcd.
> 
> Signed-off-by: Larry Finger 
> ---
>  drivers/macintosh/ans-lcd.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/macintosh/ans-lcd.c b/drivers/macintosh/ans-lcd.c
> index 1de81d922d8a..c8e078b911c7 100644
> --- a/drivers/macintosh/ans-lcd.c
> +++ b/drivers/macintosh/ans-lcd.c
> @@ -201,3 +201,4 @@ anslcd_exit(void)
>  
>  module_init(anslcd_init);
>  module_exit(anslcd_exit);
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.16.1
> 


Re: linux-next: manual merge of the nvdimm tree with the powerpc tree

2018-01-29 Thread Dan Williams
On Sun, Jan 28, 2018 at 10:04 PM, Stephen Rothwell  
wrote:
> Hi Dan,
>
> Today's linux-next merge of the nvdimm tree got a conflict in:
>
>   arch/powerpc/sysdev/axonram.c
>
> between commit:
>
>   1d65b1c886be ("powerpc/cell: Remove axonram driver")
>
> from the powerpc tree and commit:
>
>   785a3fab4adb ("mm, dax: introduce pfn_t_special()")
>
> from the nvdimm tree.
>
> I fixed it up (I just removed the file) and can carry the fix as
> necessary. This is now fixed as far as linux-next is concerned, but any
> non trivial conflicts should be mentioned to your upstream maintainer
> when your tree is submitted for merging.  You may also want to consider
> cooperating with the maintainer of the conflicting tree to minimise any
> particularly complex conflicts.

Thanks Stephen, resolution looks good to me.


[PATCH for 4.16 v7 02/11] powerpc: membarrier: Skip memory barrier in switch_mm()

2018-01-29 Thread Mathieu Desnoyers
Allow PowerPC to skip the full memory barrier in switch_mm(), and
only issue the barrier when scheduling into a task belonging to a
process that has registered to use expedited private.

Threads targeting the same VM but which belong to different thread
groups is a tricky case. It has a few consequences:

It turns out that we cannot rely on get_nr_threads(p) to count the
number of threads using a VM. We can use
(atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1)
instead to skip the synchronize_sched() for cases where the VM only has
a single user, and that user only has a single thread.

It also turns out that we cannot use for_each_thread() to set
thread flags in all threads using a VM, as it only iterates on the
thread group.

Therefore, test the membarrier state variable directly rather than
relying on thread flags. This means
membarrier_register_private_expedited() needs to set the
MEMBARRIER_STATE_PRIVATE_EXPEDITED flag, issue synchronize_sched(), and
only then set MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY which allows
private expedited membarrier commands to succeed.
membarrier_arch_switch_mm() now tests for the
MEMBARRIER_STATE_PRIVATE_EXPEDITED flag.
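The arch hook itself stays small; a sketch of its shape based on the
description above (assuming an atomic membarrier_state field on
mm_struct):

static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	/* Only pay for the barrier when the incoming mm registered
	 * private expedited membarrier. */
	if (likely(!(atomic_read(&next->membarrier_state) &
		     MEMBARRIER_STATE_PRIVATE_EXPEDITED) || !prev))
		return;

	/* The membarrier system call requires a full memory barrier
	 * after storing to rq->curr, before returning to user-space. */
	smp_mb();
}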

Signed-off-by: Mathieu Desnoyers 
Acked-by: Peter Zijlstra (Intel) 
CC: Paul E. McKenney 
CC: Boqun Feng 
CC: Andrew Hunter 
CC: Maged Michael 
CC: Avi Kivity 
CC: Benjamin Herrenschmidt 
CC: Paul Mackerras 
CC: Michael Ellerman 
CC: Dave Watson 
CC: Alan Stern 
CC: Will Deacon 
CC: Andy Lutomirski 
CC: Ingo Molnar 
CC: Alexander Viro 
CC: Nicholas Piggin 
CC: linuxppc-dev@lists.ozlabs.org
CC: linux-a...@vger.kernel.org
---
Changes since v1:
- Use test_ti_thread_flag(next, ...) instead of test_thread_flag() in
  powerpc membarrier_arch_sched_in(), given that we want to specifically
  check the next thread state.
- Add missing ARCH_HAS_MEMBARRIER_HOOKS in Kconfig.
- Use task_thread_info() to pass thread_info from task to
  *_ti_thread_flag().

Changes since v2:
- Move membarrier_arch_sched_in() call to finish_task_switch().
- Check for NULL t->mm in membarrier_arch_fork().
- Use membarrier_sched_in() in generic code, which invokes the
  arch-specific membarrier_arch_sched_in(). This fixes allnoconfig
  build on PowerPC.
- Move asm/membarrier.h include under CONFIG_MEMBARRIER, fixing
  allnoconfig build on PowerPC.
- Build and runtime tested on PowerPC.

Changes since v3:
- Simply rely on copy_mm() to copy the membarrier_private_expedited mm
  field on fork.
- powerpc: test thread flag instead of reading
  membarrier_private_expedited in membarrier_arch_fork().
- powerpc: skip memory barrier in membarrier_arch_sched_in() if coming
  from kernel thread, since mmdrop() implies a full barrier.
- Set membarrier_private_expedited to 1 only after arch registration
  code, thus eliminating a race where concurrent commands could succeed
  when they should fail if issued concurrently with process
  registration.
- Use READ_ONCE() for membarrier_private_expedited field access in
  membarrier_private_expedited. Matches WRITE_ONCE() performed in
  process registration.

Changes since v4:
- Move powerpc hook from sched_in() to switch_mm(), based on feedback
  from Nicholas Piggin.

Changes since v5:
- Rebase on v4.14-rc6.
- Fold "Fix: membarrier: Handle CLONE_VM + !CLONE_THREAD correctly on
  powerpc (v2)"

Changes since v6:
- Rename MEMBARRIER_STATE_SWITCH_MM to MEMBARRIER_STATE_PRIVATE_EXPEDITED.
---
 MAINTAINERS   |  1 +
 arch/powerpc/Kconfig  |  1 +
 arch/powerpc/include/asm/membarrier.h | 26 ++
 arch/powerpc/mm/mmu_context.c |  7 +++
 include/linux/sched/mm.h  | 13 -
 init/Kconfig  |  3 +++
 kernel/sched/core.c   | 10 --
 kernel/sched/membarrier.c |  8 
 8 files changed, 58 insertions(+), 11 deletions(-)
 create mode 100644 arch/powerpc/include/asm/membarrier.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 845fc25812f1..34c1ecd5a5d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8929,6 +8929,7 @@ L:linux-ker...@vger.kernel.org
 S: Supported
 F: kernel/sched/membarrier.c
 F: include/uapi/linux/membarrier.h
+F: arch/powerpc/include/asm/membarrier.h
 
 MEMORY MANAGEMENT
 L: linux...@kvack.org
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2ed525a44734..09b02180b8a0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -140,6 +140,7 @@ config PPC
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PMEM_API

[PATCH] soc/fsl/qbman: Check if CPU is offline when initializing portals

2018-01-29 Thread Roy Pledge
If the affine portal for a specific CPU is offline at boot time,
affine its interrupt to CPU 0 instead. If the CPU is later brought
online, the hotplug handler will correctly adjust the affinity.

Signed-off-by: Roy Pledge 
---
 drivers/soc/fsl/qbman/bman.c | 17 +
 drivers/soc/fsl/qbman/qman.c | 18 +-
 2 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/drivers/soc/fsl/qbman/bman.c b/drivers/soc/fsl/qbman/bman.c
index f9485ce..2e6e682 100644
--- a/drivers/soc/fsl/qbman/bman.c
+++ b/drivers/soc/fsl/qbman/bman.c
@@ -562,10 +562,19 @@ static int bman_create_portal(struct bman_portal *portal,
dev_err(c->dev, "request_irq() failed\n");
goto fail_irq;
}
-   if (c->cpu != -1 && irq_can_set_affinity(c->irq) &&
-   irq_set_affinity(c->irq, cpumask_of(c->cpu))) {
-   dev_err(c->dev, "irq_set_affinity() failed\n");
-   goto fail_affinity;
+   if (cpu_online(c->cpu) && c->cpu != -1 &&
+   irq_can_set_affinity(c->irq)) {
+   if (irq_set_affinity(c->irq, cpumask_of(c->cpu))) {
+   dev_err(c->dev, "irq_set_affinity() failed %d\n",
+   c->cpu);
+   goto fail_affinity;
+   }
+   } else {
+   /* CPU is offline, direct IRQ to CPU 0 */
+   if (irq_set_affinity(c->irq, cpumask_of(0))) {
+   dev_err(c->dev, "irq_set_affinity() cpu 0 failed\n");
+   goto fail_affinity;
+   }
}
 
/* Need RCR to be empty before continuing */
diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
index e4f5bb0..463e65d 100644
--- a/drivers/soc/fsl/qbman/qman.c
+++ b/drivers/soc/fsl/qbman/qman.c
@@ -935,7 +935,6 @@ static inline int qm_mc_result_timeout(struct qm_portal 
*portal,
break;
udelay(1);
} while (--timeout);
-
return timeout;
 }
 
@@ -1209,10 +1208,19 @@ static int qman_create_portal(struct qman_portal 
*portal,
dev_err(c->dev, "request_irq() failed\n");
goto fail_irq;
}
-   if (c->cpu != -1 && irq_can_set_affinity(c->irq) &&
-   irq_set_affinity(c->irq, cpumask_of(c->cpu))) {
-   dev_err(c->dev, "irq_set_affinity() failed\n");
-   goto fail_affinity;
+   if (cpu_online(c->cpu) && c->cpu != -1 &&
+   irq_can_set_affinity(c->irq)) {
+   if (irq_set_affinity(c->irq, cpumask_of(c->cpu))) {
+   dev_err(c->dev, "irq_set_affinity() failed %d\n",
+   c->cpu);
+   goto fail_affinity;
+   }
+   } else {
+   /* CPU is offline, direct IRQ to CPU 0 */
+   if (irq_set_affinity(c->irq, cpumask_of(0))) {
+   dev_err(c->dev, "irq_set_affinity() cpu 0 failed\n");
+   goto fail_affinity;
+   }
}
 
/* Need EQCR to be empty before continuing */
-- 
2.7.4



[PATCH] macintosh: Add module license to ans-lcd

2018-01-29 Thread Larry Finger
In kernel 4.15, the modprobe step on my PowerBook G5 started complaining that
there was no module license for ans-lcd.

Signed-off-by: Larry Finger 
---
 drivers/macintosh/ans-lcd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/macintosh/ans-lcd.c b/drivers/macintosh/ans-lcd.c
index 1de81d922d8a..c8e078b911c7 100644
--- a/drivers/macintosh/ans-lcd.c
+++ b/drivers/macintosh/ans-lcd.c
@@ -201,3 +201,4 @@ anslcd_exit(void)
 
 module_init(anslcd_init);
 module_exit(anslcd_exit);
+MODULE_LICENSE("GPL v2");
-- 
2.16.1



Re: [PATCH RESEND V2 ] powerpc/numa: Invalidate numa_cpu_lookup_table on cpu remove

2018-01-29 Thread Nathan Fontenot
On 01/27/2018 02:58 AM, Michael Ellerman wrote:
> Nathan Fontenot  writes:
> 
>> When DLPAR removing a CPU, the unmapping of the cpu from a node in
>> unmap_cpu_from_node() should also invalidate the CPU's entry in the
>> numa_cpu_lookup_table. There is no guarantee that on a subsequent
>> DLPAR add of the CPU the associativity will be the same, and thus
>> it could be in a different node. Invalidating the entry in the
>> numa_cpu_lookup_table causes the associativity to be read from the
>> device tree at the time of the add.
> 
> This last part seems to contradict the change log of commit d4edc5b6c480
> ("powerpc: Fix the setup of CPU-to-Node mappings during CPU online"),
> which seems to say that we shouldn't be looking at the device tree.
> 
> Can you explain to me what I'm missing?

The commit you refer to addresses CPU online/offline behavior, and for that
case it is correct that we shouldn't reference the device tree. The
cpu-to-node mapping shouldn't change across an offline/online operation
since the CPU remains assigned to the partition the entire time.

This patch addresses CPUs that have been DLPAR removed, and as such the CPU
is no longer assigned to the partition. Given this we don't have a guarantee
that the CPU will have the same node-to-cpu mapping when it is assigned
back to the partition on a subsequent DLPAR add operation.

Without this patch, the CPU is put back in the node it was in previously
which may not match the node firmware states it belongs to.
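A sketch of the invalidation being described (an assumption on my
part, using the names from arch/powerpc/mm/numa.c):

	static void unmap_cpu_from_node(unsigned long cpu)
	{
		int node = numa_cpu_lookup_table[cpu];

		if (cpumask_test_cpu(cpu, node_to_cpumask_map[node]))
			cpumask_clear_cpu(cpu, node_to_cpumask_map[node]);

		/* New: forget the mapping so a later DLPAR add re-reads
		 * the associativity from the device tree. */
		numa_cpu_lookup_table[cpu] = -1;
	}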

> 
> Also when did this break, always? Which commit should I mark this as
> fixing?
As far as I know this has always been broken. I've looked at the git logs
for the numa and pseries CPU hotplug code and don't see a specific
commit I can point at as breaking this.

-Nathan



Re: [PATCH V2] powerpc/kernel: Add 'ibm,thread-groups' property for CPU allocation

2018-01-29 Thread Michael Bringmann
You are correct.  I found a problem with multiple cores last week,
and I have a new patch in testing.  I will resubmit it after more
testing.

Sorry for the inconvenience.

Michael

On 01/27/2018 03:52 AM, Michael Ellerman wrote:
> Michael Bringmann  writes:
> 
>> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
>> index b15bae2..0a49231 100644
>> --- a/arch/powerpc/kernel/prom.c
>> +++ b/arch/powerpc/kernel/prom.c
>> @@ -303,6 +306,71 @@ static void __init 
>> check_cpu_feature_properties(unsigned long node)
>>  }
>>  }
>>  
>> +static void __init early_init_setup_thread_group_mask(unsigned long node,
>> +cpumask_t *thread_group_mask)
>> +{
>> +const __be32 *thrgrp;
>> +int len, rc = 0;
>> +u32 cc_type = 0, no_split = 0, thr_per_split = 0;
>> +int j, k;
>> +
>> +cpumask_clear(thread_group_mask);
>> +
>> +thrgrp = of_get_flat_dt_prop(node, "ibm,thread-groups", &len);
>> +if (!thrgrp)
>> +return;
> 
> This breaks booting on all my systems.
> 
> cheers
> 

-- 
Michael W. Bringmann
Linux Technology Center
IBM Corporation
Tie-Line  363-5196
External: (512) 286-5196
Cell:   (512) 466-0650
m...@linux.vnet.ibm.com



Re: macintosh: change some data types from int to bool

2018-01-29 Thread Gustavo A. R. Silva

Hi Michael,

Quoting Michael Ellerman :


On Wed, 2018-01-24 at 01:42:28 UTC, "Gustavo A. R. Silva" wrote:

Change the data type of the following variables from int to bool
across all macintosh drivers:

started
slots_started
pm121_started
wf_smu_started

Some of these issues were detected with the help of Coccinelle.

Suggested-by: Michael Ellerman 
Signed-off-by: Gustavo A. R. Silva 


Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/4f256d561447c6e1bf8b70e19daae0

cheers


Awesome.

If I can help out with anything else, please let me know.

Thank you
--
Gustavo








Re: [PATCH 3/3] perf trace powerpc: Use generated syscall table

2018-01-29 Thread Arnaldo Carvalho de Melo
Em Mon, Jan 29, 2018 at 02:04:17PM +0530, Ravi Bangoria escreveu:
> +++ b/tools/perf/util/syscalltbl.c
> @@ -30,6 +30,10 @@
>  #include <asm/syscalls_64.c>
>  const int syscalltbl_native_max_id = SYSCALLTBL_S390_64_MAX_ID;
>  static const char **syscalltbl_native = syscalltbl_s390_64;
> +#elif defined(__powerpc64__)
> +#include <asm/syscalls_64.c>
> +const int syscalltbl_native_max_id = SYSCALLTBL_POWERPC_64_MAX_ID;
> +static const char **syscalltbl_native = syscalltbl_powerpc_64;
>  #endif

This is so cool! Thanks!

At some point we'll remove these #elif, have all of then linked, so that
we can do cross-platform interpreting of perf.data files generated with
'perf trace record', i.e. 'perf trace -i perf.data.recorded.on.s390' on
a powerpc64 or x86 machine.

We're paving the way to that with patches like yours and those for
s/390, thanks again!

- Arnaldo


Re: [PATCH v3 4/5] powerpc/mm: Allow up to 64 low slices

2018-01-29 Thread Christophe LEROY



Le 29/01/2018 à 07:29, Aneesh Kumar K.V a écrit :

Christophe Leroy  writes:


While the implementation of the "slices" address space allows
a significant amount of high slices, it limits the number of
low slices to 16 due to the use of a single u64 low_slices_psize
element in struct mm_context_t

On the 8xx, the minimum slice size is the size of the area
covered by a single PMD entry, ie 4M in 4K pages mode and 64M in
16K pages mode. This means we could have at least 64 slices.

In order to override this limitation, this patch switches the
handling of low_slices_psize to char array as done already for
high_slices_psize. This allows to increase the number of low
slices to 64 on the 8xx.



Maybe update the subject to "make low slice also a bitmap". Also indicate
that the bitmap functions optimize the operation if the bitmap size is
<= the size of a long?


v3 doesn't use bitmap functions anymore for low_slices. In this version, 
only low_slices_psize has been reworked to allow up to 64 slices instead 
of 16. I have kept low_slices as is (ie as a u64), hence allowing up to 
64 slices, which is big enough.
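For reference, the nibble encoding low_slices_psize then shares with
high_slices_psize looks roughly like this (a sketch using the names
from the patch):

	static unsigned int low_slice_psize(mm_context_t *ctx, unsigned long addr)
	{
		unsigned char *lpsizes = ctx->low_slices_psize;
		unsigned int index = GET_LOW_SLICE_INDEX(addr);

		/* two page-size indices per byte, one nibble each */
		return (lpsizes[index >> 1] >> ((index & 1) * 4)) & 0xf;
	}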




Also switch the 8xx to higher value in the another patch?


One separate patch just for changing the value of SLICE_LOW_SHIFT from 
28 to 26 on the 8xx ?






Signed-off-by: Christophe Leroy 
---
  v2: Usign slice_bitmap_xxx() macros instead of bitmap_xxx() functions.
  v3: keep low_slices as a u64, this allows 64 slices which is enough.
  
  arch/powerpc/include/asm/book3s/64/mmu.h |  3 +-

  arch/powerpc/include/asm/mmu-8xx.h   |  7 +++-
  arch/powerpc/include/asm/paca.h  |  2 +-
  arch/powerpc/include/asm/slice.h |  1 -
  arch/powerpc/include/asm/slice_32.h  |  2 ++
  arch/powerpc/include/asm/slice_64.h  |  2 ++
  arch/powerpc/kernel/paca.c   |  3 +-
  arch/powerpc/mm/hash_utils_64.c  | 13 
  arch/powerpc/mm/slb_low.S|  8 +++--
  arch/powerpc/mm/slice.c  | 57 +---
  10 files changed, 56 insertions(+), 42 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
b/arch/powerpc/include/asm/book3s/64/mmu.h
index c9448e19847a..b076a2d74c69 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -91,7 +91,8 @@ typedef struct {
struct npu_context *npu_context;
  
  #ifdef CONFIG_PPC_MM_SLICES

-   u64 low_slices_psize;   /* SLB page size encodings */
+/* SLB page size encodings*/
+   unsigned char low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
unsigned long slb_addr_limit;
  #else
diff --git a/arch/powerpc/include/asm/mmu-8xx.h 
b/arch/powerpc/include/asm/mmu-8xx.h
index 5f89b6010453..5f37ba06b56c 100644
--- a/arch/powerpc/include/asm/mmu-8xx.h
+++ b/arch/powerpc/include/asm/mmu-8xx.h
@@ -164,6 +164,11 @@
   */
  #define SPRN_M_TW 799
  
+#ifdef CONFIG_PPC_MM_SLICES

+#include <asm/slice_32.h>
+#define SLICE_ARRAY_SIZE   (1 << (32 - SLICE_LOW_SHIFT - 1))
+#endif
+
  #ifndef __ASSEMBLY__
  typedef struct {
unsigned int id;
@@ -171,7 +176,7 @@ typedef struct {
unsigned long vdso_base;
  #ifdef CONFIG_PPC_MM_SLICES
u16 user_psize; /* page size index */
-   u64 low_slices_psize;   /* page size encodings */
+   unsigned char low_slices_psize[SLICE_ARRAY_SIZE];
unsigned char high_slices_psize[0];
unsigned long slb_addr_limit;
  #endif
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 23ac7fc0af23..a3e531fe9ac7 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -141,7 +141,7 @@ struct paca_struct {
  #ifdef CONFIG_PPC_BOOK3S
mm_context_id_t mm_ctx_id;
  #ifdef CONFIG_PPC_MM_SLICES
-   u64 mm_ctx_low_slices_psize;
+   unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
unsigned long mm_ctx_slb_addr_limit;
  #else
diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
index 2b4b70de7e71..b67ba8faa507 100644
--- a/arch/powerpc/include/asm/slice.h
+++ b/arch/powerpc/include/asm/slice.h
@@ -16,7 +16,6 @@
  #define HAVE_ARCH_UNMAPPED_AREA
  #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
  
-#define SLICE_LOW_SHIFT		28

  #define SLICE_LOW_TOP (0x1ull)
  #define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
  #define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
diff --git a/arch/powerpc/include/asm/slice_32.h 
b/arch/powerpc/include/asm/slice_32.h
index 7e27c0dfb913..349187c20100 100644
--- a/arch/powerpc/include/asm/slice_32.h
+++ b/arch/powerpc/include/asm/slice_32.h
@@ -2,6 +2,8 @@
  #ifndef _ASM_POWERPC_SLICE_32_H
  #define _ASM_POWERPC_SLICE_32_H
  
+#define SLICE_LOW_SHIFT		26	/* 64 slices */

+
  #define SLICE_HIGH_SHIFT  0
  

[PATCH 3/3] perf trace powerpc: Use generated syscall table

2018-01-29 Thread Ravi Bangoria
This should speed up accessing new system calls introduced with the
kernel rather than waiting for libaudit updates to include them.

It also enables users to specify wildcards, for example, perf trace -e
'open*', just as was already possible on x86 and s390.

Signed-off-by: Ravi Bangoria 
---
 tools/perf/Makefile.config   | 2 ++
 tools/perf/util/syscalltbl.c | 4 
 2 files changed, 6 insertions(+)

diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
index 0dfdaa9..577a5d2 100644
--- a/tools/perf/Makefile.config
+++ b/tools/perf/Makefile.config
@@ -27,6 +27,8 @@ NO_SYSCALL_TABLE := 1
 # Additional ARCH settings for ppc
 ifeq ($(SRCARCH),powerpc)
   NO_PERF_REGS := 0
+  NO_SYSCALL_TABLE := 0
+  CFLAGS += -I$(OUTPUT)arch/powerpc/include/generated
   LIBUNWIND_LIBS := -lunwind -lunwind-ppc64
 endif
 
diff --git a/tools/perf/util/syscalltbl.c b/tools/perf/util/syscalltbl.c
index 303bdb8..b12c5f5 100644
--- a/tools/perf/util/syscalltbl.c
+++ b/tools/perf/util/syscalltbl.c
@@ -30,6 +30,10 @@
 #include <asm/syscalls_64.c>
 const int syscalltbl_native_max_id = SYSCALLTBL_S390_64_MAX_ID;
 static const char **syscalltbl_native = syscalltbl_s390_64;
+#elif defined(__powerpc64__)
+#include <asm/syscalls_64.c>
+const int syscalltbl_native_max_id = SYSCALLTBL_POWERPC_64_MAX_ID;
+static const char **syscalltbl_native = syscalltbl_powerpc_64;
 #endif
 
 struct syscall {
-- 
1.8.3.1



[PATCH 2/3] perf powerpc: Generate system call table from asm/unistd.h

2018-01-29 Thread Ravi Bangoria
This should speed up accessing new system calls introduced with
the kernel rather than waiting for libaudit updates to include
them.

Signed-off-by: Ravi Bangoria 
---
 tools/perf/arch/powerpc/Makefile   | 21 +
 .../perf/arch/powerpc/entry/syscalls/mksyscalltbl  | 35 ++
 2 files changed, 56 insertions(+)
 create mode 100755 tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl

diff --git a/tools/perf/arch/powerpc/Makefile b/tools/perf/arch/powerpc/Makefile
index 42dab7c..c93e8f4 100644
--- a/tools/perf/arch/powerpc/Makefile
+++ b/tools/perf/arch/powerpc/Makefile
@@ -6,3 +6,24 @@ endif
 HAVE_KVM_STAT_SUPPORT := 1
 PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
 PERF_HAVE_JITDUMP := 1
+
+#
+# Syscall table generation for perf
+#
+
+out:= $(OUTPUT)arch/powerpc/include/generated/asm
+header := $(out)/syscalls_64.c
+sysdef := $(srctree)/tools/arch/powerpc/include/uapi/asm/unistd.h
+sysprf := $(srctree)/tools/perf/arch/powerpc/entry/syscalls/
+systbl := $(sysprf)/mksyscalltbl
+
+# Create output directory if not already present
+_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
+
+$(header): $(sysdef) $(systbl)
+   $(Q)$(SHELL) '$(systbl)' '$(CC)' $(sysdef) > $@
+
+clean::
+   $(call QUIET_CLEAN, powerpc) $(RM) $(header)
+
+archheaders: $(header)
diff --git a/tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl 
b/tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl
new file mode 100755
index 000..975947c
--- /dev/null
+++ b/tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl
@@ -0,0 +1,35 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Generate system call table for perf. Derived from
+# s390 script.
+#
+# Copyright IBM Corp. 2017
+# Author(s):  Hendrik Brueckner 
+# Changed by: Ravi Bangoria 
+
+gcc=$1
+input=$2
+
+if ! test -r $input; then
+   echo "Could not read input file" >&2
+   exit 1
+fi
+
+create_table()
+{
+   local max_nr
+
+   echo 'static const char *syscalltbl_powerpc_64[] = {'
+   while read sc nr; do
+   printf '\t[%d] = "%s",\n' $nr $sc
+   max_nr=$nr
+   done
+   echo '};'
+   echo "#define SYSCALLTBL_POWERPC_64_MAX_ID $max_nr"
+}
+
+$gcc -m64 -E -dM -x c  $input \
+   |sed -ne 's/^#define __NR_//p' \
+   |sort -t' ' -k2 -nu\
+   |create_table
-- 
1.8.3.1



[PATCH 1/3] tools include powerpc: Grab a copy of arch/powerpc/include/uapi/asm/unistd.h

2018-01-29 Thread Ravi Bangoria
Will be used for generating the syscall id/string translation table.

Signed-off-by: Ravi Bangoria 
---
 tools/arch/powerpc/include/uapi/asm/unistd.h | 399 +++
 tools/perf/check-headers.sh  |   1 +
 2 files changed, 400 insertions(+)
 create mode 100644 tools/arch/powerpc/include/uapi/asm/unistd.h

diff --git a/tools/arch/powerpc/include/uapi/asm/unistd.h 
b/tools/arch/powerpc/include/uapi/asm/unistd.h
new file mode 100644
index 000..df8684f3
--- /dev/null
+++ b/tools/arch/powerpc/include/uapi/asm/unistd.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * This file contains the system call numbers.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _UAPI_ASM_POWERPC_UNISTD_H_
+#define _UAPI_ASM_POWERPC_UNISTD_H_
+
+
+#define __NR_restart_syscall 0
+#define __NR_exit1
+#define __NR_fork2
+#define __NR_read3
+#define __NR_write   4
+#define __NR_open5
+#define __NR_close   6
+#define __NR_waitpid 7
+#define __NR_creat   8
+#define __NR_link9
+#define __NR_unlink 10
+#define __NR_execve 11
+#define __NR_chdir  12
+#define __NR_time   13
+#define __NR_mknod  14
+#define __NR_chmod  15
+#define __NR_lchown 16
+#define __NR_break  17
+#define __NR_oldstat18
+#define __NR_lseek  19
+#define __NR_getpid 20
+#define __NR_mount  21
+#define __NR_umount 22
+#define __NR_setuid 23
+#define __NR_getuid 24
+#define __NR_stime  25
+#define __NR_ptrace 26
+#define __NR_alarm  27
+#define __NR_oldfstat   28
+#define __NR_pause  29
+#define __NR_utime  30
+#define __NR_stty   31
+#define __NR_gtty   32
+#define __NR_access 33
+#define __NR_nice   34
+#define __NR_ftime  35
+#define __NR_sync   36
+#define __NR_kill   37
+#define __NR_rename 38
+#define __NR_mkdir  39
+#define __NR_rmdir  40
+#define __NR_dup41
+#define __NR_pipe   42
+#define __NR_times  43
+#define __NR_prof   44
+#define __NR_brk45
+#define __NR_setgid 46
+#define __NR_getgid 47
+#define __NR_signal 48
+#define __NR_geteuid49
+#define __NR_getegid50
+#define __NR_acct   51
+#define __NR_umount252
+#define __NR_lock   53
+#define __NR_ioctl  54
+#define __NR_fcntl  55
+#define __NR_mpx56
+#define __NR_setpgid57
+#define __NR_ulimit 58
+#define __NR_oldolduname59
+#define __NR_umask  60
+#define __NR_chroot 61
+#define __NR_ustat  62
+#define __NR_dup2   63
+#define __NR_getppid64
+#define __NR_getpgrp65
+#define __NR_setsid 66
+#define __NR_sigaction  67
+#define __NR_sgetmask   68
+#define __NR_ssetmask   69
+#define __NR_setreuid   70
+#define __NR_setregid   71
+#define __NR_sigsuspend 72
+#define __NR_sigpending 73
+#define __NR_sethostname74
+#define __NR_setrlimit  75
+#define __NR_getrlimit  76
+#define __NR_getrusage  77
+#define __NR_gettimeofday   78
+#define __NR_settimeofday   79
+#define __NR_getgroups  80
+#define __NR_setgroups  81
+#define __NR_select 82
+#define __NR_symlink83
+#define __NR_oldlstat   84
+#define __NR_readlink   85
+#define __NR_uselib 86
+#define __NR_swapon 87
+#define __NR_reboot 88
+#define __NR_readdir89
+#define __NR_mmap   90
+#define __NR_munmap 91
+#define __NR_truncate   92
+#define __NR_ftruncate  93
+#define __NR_fchmod 94
+#define __NR_fchown 95
+#define __NR_getpriority96
+#define __NR_setpriority97
+#define __NR_profil 98
+#define __NR_statfs 99
+#define __NR_fstatfs   100
+#define __NR_ioperm101
+#define __NR_socketcall102
+#define __NR_syslog103
+#define __NR_setitimer 104
+#define __NR_getitimer 105
+#define __NR_stat  106
+#define __NR_lstat 107
+#define 

[PATCH 0/3] perf trace powerpc: Remove libaudit dependency for syscalls

2018-01-29 Thread Ravi Bangoria
This is an almost identical set of patches to the ones recently done
for s390.

With this, users can run perf trace without libaudit on powerpc
as well. For example:

  $ make
... libaudit: [ OFF ]

  $ ./perf trace ls
0.221 ( 0.005 ms): ls/43330 open(filename: 0xac1e2778, flags: CLOEXEC   ) = 
3
0.227 ( 0.003 ms): ls/43330 read(fd: 3, buf: 0x39c4d678, count: 832 ) = 
832
0.233 ( 0.002 ms): ls/43330 fstat(fd: 3, statbuf: 0x39c4d4b0) = 0
...

  $ ./perf trace -e "open*" ls
0.000 ( 0.014 ms): ls/43342 open(filename: 0x793d8978, flags: CLOEXEC   ) = 
3
0.038 ( 0.006 ms): ls/43342 open(filename: 0x793f2778, flags: CLOEXEC   ) = 
3
...

Ravi Bangoria (3):
  tools include powerpc: Grab a copy of
arch/powerpc/include/uapi/asm/unistd.h
  perf powerpc: Generate system call table from asm/unistd.h
  perf trace powerpc: Use generated syscall table

 tools/arch/powerpc/include/uapi/asm/unistd.h   | 399 +
 tools/perf/Makefile.config |   2 +
 tools/perf/arch/powerpc/Makefile   |  21 ++
 .../perf/arch/powerpc/entry/syscalls/mksyscalltbl  |  35 ++
 tools/perf/check-headers.sh|   1 +
 tools/perf/util/syscalltbl.c   |   4 +
 6 files changed, 462 insertions(+)
 create mode 100644 tools/arch/powerpc/include/uapi/asm/unistd.h
 create mode 100755 tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl

-- 
1.8.3.1