Re: [PATCH] powerpc/iommu/debug: Remove redundant NULL check

2021-03-25 Thread Daniel Axtens
Hi Jiapeng Chong,

Jiapeng Chong  writes:

> Fix the following coccicheck warnings:
>
> ./fs/io_uring.c:5989:4-9: WARNING: NULL check before some freeing
> functions is not needed.

This looks correct to me, and matches the description of debugfs_remove
in Documentation/filesystems/debugfs.rst.

If you have a number of similar fixes it might be helpful to do them in
a single bigger patch, but I'm not sure if coccicheck reports much else
as I don't have coccinelle installed at the moment.
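For reference, the pattern this class of coccicheck warning flags can be sketched in plain user-space C — free(), like debugfs_remove(), is documented to be safe on a NULL argument, so the guard is dead weight (my illustration, not code from the patch):

```c
#include <stdlib.h>

/* Redundant: free() is specified to be a no-op on a NULL pointer,
 * just as debugfs_remove() accepts a NULL or error-valued dentry. */
void release_redundant(char *buf)
{
	if (buf)		/* unnecessary NULL check */
		free(buf);
}

/* Equivalent, and what the coccicheck rule suggests. */
void release(char *buf)
{
	free(buf);
}
```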

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel


Re: [PATCH] [v2] tools: testing: Remove duplicate includes

2021-03-25 Thread Daniel Axtens
Wan Jiabing  writes:

> sched.h has been included at line 33, so remove the 
> duplicate one at line 36.

> pthread.h has been included at line 17, so remove the 
> duplicate one at line 20.

I can see that both of these are correct from the diff.

> inttypes.h has been included at line 19, so remove the 
> duplicate one at line 23.

For this one I checked the file. Indeed there is another inttypes.h, so
this is also correct.

Again, all the automated checks pass (although I don't think any of the
automated builds include selftests).
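As an aside, the reason these duplicate-include removals are pure cleanups is that header guards already make a repeated #include harmless; it is only noise. A minimal illustration (mine, not from the patch):

```c
#include <stdint.h>
#include <stdint.h>	/* duplicate: harmless thanks to header guards, but noise */

/* uint32_t is still defined exactly once, and this compiles cleanly. */
uint32_t add_one(uint32_t x)
{
	return x + 1;
}
```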

So:
Reviewed-by: Daniel Axtens 

Kind regards,
Daniel

>
> Signed-off-by: Wan Jiabing 
> ---
>  tools/testing/selftests/powerpc/mm/tlbie_test.c | 1 -
>  tools/testing/selftests/powerpc/tm/tm-poison.c  | 1 -
>  tools/testing/selftests/powerpc/tm/tm-vmx-unavail.c | 1 -
>  3 files changed, 3 deletions(-)
>
> diff --git a/tools/testing/selftests/powerpc/mm/tlbie_test.c b/tools/testing/selftests/powerpc/mm/tlbie_test.c
> index f85a0938ab25..48344a74b212 100644
> --- a/tools/testing/selftests/powerpc/mm/tlbie_test.c
> +++ b/tools/testing/selftests/powerpc/mm/tlbie_test.c
> @@ -33,7 +33,6 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  #include 
>  #include 
>  #include 
> diff --git a/tools/testing/selftests/powerpc/tm/tm-poison.c b/tools/testing/selftests/powerpc/tm/tm-poison.c
> index 29e5f26af7b9..27c083a03d1f 100644
> --- a/tools/testing/selftests/powerpc/tm/tm-poison.c
> +++ b/tools/testing/selftests/powerpc/tm/tm-poison.c
> @@ -20,7 +20,6 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  
>  #include "tm.h"
>  
> diff --git a/tools/testing/selftests/powerpc/tm/tm-vmx-unavail.c b/tools/testing/selftests/powerpc/tm/tm-vmx-unavail.c
> index e2a0c07e8362..9ef37a9836ac 100644
> --- a/tools/testing/selftests/powerpc/tm/tm-vmx-unavail.c
> +++ b/tools/testing/selftests/powerpc/tm/tm-vmx-unavail.c
> @@ -17,7 +17,6 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  
>  #include "tm.h"
>  #include "utils.h"
> -- 
> 2.25.1


Re: [PATCH] [v2] arch: powerpc: Remove duplicate includes

2021-03-25 Thread Daniel Axtens
Wan Jiabing  writes:

> mmu-hash.h: asm/bug.h has been included at line 12, so remove 
> the duplicate one at line 21.

Looking at the file I had wondered if this was due to a #ifdef being
removed, but no, the second one was just added in commit 891121e6c02c
("powerpc/mm: Differentiate between hugetlb and THP during page
walk"). How odd!

Anyway, all of these look good to me, and the automated checks at
http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210323062916.295346-1-wanjiab...@vivo.com/
have all passed.

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel


Re: [PATCH v2 1/1] hotplug-cpu.c: show 'last online CPU' error in dlpar_cpu_offline()

2021-03-25 Thread Daniel Axtens
Hi Daniel,

Two small nitpicks:

> This patch adds a 'last online' check in dlpar_cpu_offline() to catch
> the 'last online CPU' offline error, eturning a more informative error
   ^--- s/eturning/returning/;


> + /* device_offline() will return -EBUSY (via cpu_down())
> +  * if there is only one CPU left. Check it here to fail
> +  * earlier and with a more informative error message,
> +  * while also retaining the cpu_add_remove_lock to be sure
> +  * that no CPUs are being online/offlined during this
> +  * check. */

Checkpatch has a small issue with this comment:

WARNING: Block comments use a trailing */ on a separate line
#50: FILE: arch/powerpc/platforms/pseries/hotplug-cpu.c:279:
+* check. */

Apart from that, this patch seems sane to me, but I haven't been able to
test it.

Kind regards,
Daniel


Re: [PATCH -next] powerpc/eeh: Remove unused inline function eeh_dev_phb_init_dynamic()

2021-03-25 Thread Daniel Axtens
Hi,

> commit 475028efc708 ("powerpc/eeh: Remove eeh_dev_phb_init_dynamic()")
> left behind this, so can remove it.

I had a look: the inline that you are removing here is for the
!CONFIG_EEH case, which explains why it was missed the first time.

This looks like a good change. Out of interest, what tool are you using
to find these unused inlines? If there are many more, it might make
sense to combine future patches removing them into a single patch, but
I'm not sure.

checkpatch likes this patch, so that's also good :)

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel


Re: [PATCH -next] powerpc/smp: Remove unused inline functions

2021-03-25 Thread Daniel Axtens
Hi,

> commit 441c19c8a290 ("powerpc/kvm/book3s_hv: Rework the secondary inhibit code")
> left behind this, so can remove it.
>

Interesting: that commit removed some instances of
(un)inhibit_secondary_onlining, but it seems to have missed the ones for
the uni-processor case, which your patch removes. This seems like a good
change.

Checkpatch does have one small complaint about your commit message:

| WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
| #6: 
| commit 441c19c8a290 ("powerpc/kvm/book3s_hv: Rework the secondary inhibit code")

I don't think this warrants another revision, I think leaving the commit
name on one line makes sense.

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel

> Signed-off-by: YueHaibing 
> ---
>  arch/powerpc/include/asm/smp.h | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
> index 7a13bc20f0a0..ad7129a19e8f 100644
> --- a/arch/powerpc/include/asm/smp.h
> +++ b/arch/powerpc/include/asm/smp.h
> @@ -189,8 +189,6 @@ extern void __cpu_die(unsigned int cpu);
>  #define hard_smp_processor_id()  get_hard_smp_processor_id(0)
>  #define smp_setup_cpu_maps()
>  #define thread_group_shares_l2  0
> -static inline void inhibit_secondary_onlining(void) {}
> -static inline void uninhibit_secondary_onlining(void) {}
>  static inline const struct cpumask *cpu_sibling_mask(int cpu)
>  {
>   return cpumask_of(cpu);
> -- 
> 2.17.1


Re: [PATCH] selftests: powerpc: unmark non-kernel-doc comments

2021-03-25 Thread Daniel Axtens
Randy Dunlap  writes:

> Drop the 'beginning of kernel-doc' notation markers (/**)
> in places that are not in kernel-doc format.

This looks good to me. Arguably we don't need the comments at all, but
it doesn't seem to hurt to keep them.

checkpatch is OK with the entire file, so there's nothing else we'd
really want to clean up while you're doing cleanups.
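For anyone unfamiliar with the distinction being fixed here, a minimal sketch (my illustration — the function name is made up, not from the patch):

```c
/**
 * add_two() - starts with the kernel-doc marker, so scripts/kernel-doc
 * parses it and the comment must follow the kernel-doc format.
 * @a: first addend
 * @b: second addend
 *
 * Return: the sum of @a and @b.
 */
int add_two(int a, int b)
{
	return a + b;
}

/*
 * A plain block comment starts with a single asterisk, so the
 * kernel-doc tooling ignores it and no particular format is required.
 */
```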

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel

>
> Signed-off-by: Randy Dunlap 
> Cc: Michael Ellerman 
> Cc: linuxppc-dev@lists.ozlabs.org
> ---
>  tools/testing/selftests/powerpc/tm/tm-trap.c |4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- linux-next-20210323.orig/tools/testing/selftests/powerpc/tm/tm-trap.c
> +++ linux-next-20210323/tools/testing/selftests/powerpc/tm/tm-trap.c
> @@ -66,7 +66,7 @@ void trap_signal_handler(int signo, sigi
>   /* Get thread endianness: extract bit LE from MSR */
>   thread_endianness = MSR_LE & ucp->uc_mcontext.gp_regs[PT_MSR];
>  
> - /***
> + /*
>* Little-Endian Machine
>*/
>  
> @@ -126,7 +126,7 @@ void trap_signal_handler(int signo, sigi
>   }
>   }
>  
> - /***
> + /*
>* Big-Endian Machine
>*/
>  


Re: [PATCH] soc/fsl: qbman: fix conflicting alignment attributes

2021-03-25 Thread Li Yang
On Tue, Mar 23, 2021 at 8:17 AM Arnd Bergmann  wrote:
>
> From: Arnd Bergmann 
>
> When building with W=1, gcc points out that the __packed attribute
> on struct qm_eqcr_entry conflicts with the 8-byte alignment
> attribute on struct qm_fd inside it:
>
> drivers/soc/fsl/qbman/qman.c:189:1: error: alignment 1 of 'struct qm_eqcr_entry' is less than 8 [-Werror=packed-not-aligned]
>
> I assume that the alignment attribute is the correct one, and
> that qm_eqcr_entry cannot actually be unaligned in memory,
> so add the same alignment on the outer struct.
>
> Fixes: c535e923bb97 ("soc/fsl: Introduce DPAA 1.x QMan device driver")
> Signed-off-by: Arnd Bergmann 
> ---
>  drivers/soc/fsl/qbman/qman.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
> index a1b9be1d105a..fde4edd83c14 100644
> --- a/drivers/soc/fsl/qbman/qman.c
> +++ b/drivers/soc/fsl/qbman/qman.c
> @@ -186,7 +186,7 @@ struct qm_eqcr_entry {
> __be32 tag;
> struct qm_fd fd;
> u8 __reserved3[32];
> -} __packed;
> +} __packed __aligned(8);

According to the manual, the EQCR structure is actually 64-byte aligned,
but 8 is probably enough to stop the compiler complaining.
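The conflict gcc warns about can be reproduced in a few lines of user-space C (simplified stand-ins for the qbman structs, not the real layout):

```c
#include <stdint.h>

/* A member type carrying an 8-byte alignment attribute, like struct qm_fd. */
struct fd_like {
	uint64_t data;
} __attribute__((aligned(8)));

/* Packing the outer struct drops its alignment below that of the member,
 * which is what -Wpacked-not-aligned complains about under W=1. */
struct entry_packed {
	uint8_t verb;
	struct fd_like fd;
} __attribute__((packed));

/* The patch's fix: keep __packed for layout, but restore the outer
 * alignment with __aligned(8). */
struct entry_fixed {
	uint8_t verb;
	struct fd_like fd;
} __attribute__((packed, aligned(8)));
```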

>  #define QM_EQCR_VERB_VBIT  0x80
>  #define QM_EQCR_VERB_CMD_MASK  0x61/* but only one value; */
>  #define QM_EQCR_VERB_CMD_ENQUEUE   0x01
> --
> 2.29.2
>


Re: [PATCH v4 24/46] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC

2021-03-25 Thread Alexey Kardashevskiy




On 23/03/2021 12:02, Nicholas Piggin wrote:

On processors that don't suppress the HDEC exceptions when LPCR[HDICE]=0,
this could help reduce needless guest exits due to leftover exceptions on
entering the guest.

Reviewed-by: Alexey Kardashevskiy 
Signed-off-by: Nicholas Piggin 



ERROR: modpost: "decrementer_max" [arch/powerpc/kvm/kvm-hv.ko] undefined!


need this:

--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -89,6 +89,7 @@ static struct clocksource clocksource_timebase = {

 #define DECREMENTER_DEFAULT_MAX 0x7FFF
 u64 decrementer_max = DECREMENTER_DEFAULT_MAX;
+EXPORT_SYMBOL_GPL(decrementer_max);



---
  arch/powerpc/include/asm/time.h | 2 ++
  arch/powerpc/kvm/book3s_hv.c| 3 ++-
  2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 8dd3cdb25338..68d94711811e 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -18,6 +18,8 @@
  #include 
  
  /* time.c */

+extern u64 decrementer_max;
+
  extern unsigned long tb_ticks_per_jiffy;
  extern unsigned long tb_ticks_per_usec;
  extern unsigned long tb_ticks_per_sec;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8215430e6d5e..bb30c5ab53d1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3658,7 +3658,8 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit,
vc->tb_offset_applied = 0;
}
  
-	mtspr(SPRN_HDEC, 0x7fff);

+   /* HDEC must be at least as large as DEC, so decrementer_max fits */
+   mtspr(SPRN_HDEC, decrementer_max);
  
  	switch_mmu_to_host_radix(kvm, host_pidr);
  



--
Alexey


Re: [PATCH v2 3/7] powerpc: convert config files to generic cmdline

2021-03-25 Thread Rob Herring
On Thu, Mar 25, 2021 at 2:00 PM Daniel Walker  wrote:
>
> On Thu, Mar 25, 2021 at 01:03:55PM +0100, Christophe Leroy wrote:
> >
> > Ok, so you agree we don't need to provide two CMDLINE, one to be appended 
> > and one to be prepended.
> >
> > Let's only provide once CMDLINE as of today, and ask the user to select
> > whether he wants it appended or prepended or replacee. Then no need to
> > change all existing config to rename CONFIG_CMDLINE into either of the new
> > ones.
> >
> > That's the main difference between my series and Daniel's series. So I'll
> > finish taking Will's comment into account and we'll send out a v3 soon.
>
> It doesn't solve the needs of Cisco, I've stated many times your changes have
> little value. Please stop submitting them.

Can you please outline what those needs are which aren't met?

Rob


Re: [PATCH v2 07/15] powerpc/uaccess: Call might_fault() inconditionaly

2021-03-25 Thread Daniel Axtens
Daniel Axtens  writes:

> Hi Christophe,
>
>> Commit 6bfd93c32a50 ("powerpc: Fix incorrect might_sleep in
>> __get_user/__put_user on kernel addresses") added a check to not call
>> might_sleep() on kernel addresses. This was to enable the use of
>> __get_user() in the alignment exception handler for any address.
>>
>> Then commit 95156f0051cb ("lockdep, mm: fix might_fault() annotation")
>> added a check of the address space in might_fault(), based on
>> set_fs() logic. But this didn't solve the powerpc alignment exception
>> case as it didn't call set_fs(KERNEL_DS).
>>
>> Nowadays, set_fs() is gone, previous patch fixed the alignment
>> exception handler and __get_user/__put_user are not supposed to be
>> used anymore to read kernel memory.
>>
>> Therefore the is_kernel_addr() check has become useless and can be
>> removed.
>
> While I agree that __get_user/__put_user should not be used on kernel
> memory, I'm not sure that we have covered every case where they might be
> used on kernel memory yet. I did a git grep for __get_user - there are
> several callers in arch/powerpc and it looks like at least lib/sstep.c
> might be using __get_user to read kernel memory while single-stepping.
>
> I am not sure if might_sleep has got logic to cover the powerpc case -
> it uses uaccess_kernel, but we don't supply a definition for that on
> powerpc, so if we do end up calling __get_user on a kernel address, I
> think we might now throw a warning. (Unless we are saved by
> pagefault_disabled()?)

Ah, I just re-read some of my earlier emails and was reminded that yes,
if we are calling __get/put, we must have disabled page faults.

So yes, this is good.

>
> (But I haven't tested this yet, so it's possible I misunderstood
> something.)
>
> Do you expect any consequences if we've missed a case where
> __(get|put)_user is called on a kernel address because it hasn't been
> converted to use better helpers yet?
>
> Kind regards,
> Daniel
>
>>
>> Signed-off-by: Christophe Leroy 
>> ---
>>  arch/powerpc/include/asm/uaccess.h | 9 -
>>  1 file changed, 4 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>> index eaa828a6a419..c4bbc64758a0 100644
>> --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -77,8 +77,7 @@ __pu_failed:						\
>>  __typeof__(*(ptr)) __pu_val = (x);  \
>>  __typeof__(size) __pu_size = (size);\
>>  \
>> -if (!is_kernel_addr((unsigned long)__pu_addr))  \
>> -might_fault();  \
>> +might_fault();  \
>>  __chk_user_ptr(__pu_addr);  \
>>  __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err);  \
>>  \
>> @@ -238,12 +237,12 @@ do {					\
>>  __typeof__(size) __gu_size = (size);\
>>  \
>>  __chk_user_ptr(__gu_addr);  \
>> -	if (do_allow && !is_kernel_addr((unsigned long)__gu_addr)) \
>> +	if (do_allow) {						\
>>  		might_fault();					\
>> -	if (do_allow)						\
>>  		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err);	\
>> -	else							\
>> +	} else {						\
>>  		__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
>> +	}							\

One microscopic nit: these changes throw the '\'s further out of
alignment.

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel

>>  (x) = (__typeof__(*(ptr)))__gu_val; \
>>  \
>>  __gu_err;   \
>> -- 
>> 2.25.0


Re: [PATCH v2 07/15] powerpc/uaccess: Call might_fault() inconditionaly

2021-03-25 Thread Daniel Axtens
Hi Christophe,

> Commit 6bfd93c32a50 ("powerpc: Fix incorrect might_sleep in
> __get_user/__put_user on kernel addresses") added a check to not call
> might_sleep() on kernel addresses. This was to enable the use of
> __get_user() in the alignment exception handler for any address.
>
> Then commit 95156f0051cb ("lockdep, mm: fix might_fault() annotation")
> added a check of the address space in might_fault(), based on
> set_fs() logic. But this didn't solve the powerpc alignment exception
> case as it didn't call set_fs(KERNEL_DS).
>
> Nowadays, set_fs() is gone, previous patch fixed the alignment
> exception handler and __get_user/__put_user are not supposed to be
> used anymore to read kernel memory.
>
> Therefore the is_kernel_addr() check has become useless and can be
> removed.

While I agree that __get_user/__put_user should not be used on kernel
memory, I'm not sure that we have covered every case where they might be
used on kernel memory yet. I did a git grep for __get_user - there are
several callers in arch/powerpc and it looks like at least lib/sstep.c
might be using __get_user to read kernel memory while single-stepping.

I am not sure if might_sleep has got logic to cover the powerpc case -
it uses uaccess_kernel, but we don't supply a definition for that on
powerpc, so if we do end up calling __get_user on a kernel address, I
think we might now throw a warning. (Unless we are saved by
pagefault_disabled()?)

(But I haven't tested this yet, so it's possible I misunderstood
something.)

Do you expect any consequences if we've missed a case where
__(get|put)_user is called on a kernel address because it hasn't been
converted to use better helpers yet?

Kind regards,
Daniel

>
> Signed-off-by: Christophe Leroy 
> ---
>  arch/powerpc/include/asm/uaccess.h | 9 -
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index eaa828a6a419..c4bbc64758a0 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -77,8 +77,7 @@ __pu_failed:						\
>   __typeof__(*(ptr)) __pu_val = (x);  \
>   __typeof__(size) __pu_size = (size);\
>   \
> - if (!is_kernel_addr((unsigned long)__pu_addr))  \
> - might_fault();  \
> + might_fault();  \
>   __chk_user_ptr(__pu_addr);  \
>   __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err);  \
>   \
> @@ -238,12 +237,12 @@ do {					\
>   __typeof__(size) __gu_size = (size);\
>   \
>   __chk_user_ptr(__gu_addr);  \
> -	if (do_allow && !is_kernel_addr((unsigned long)__gu_addr)) \
> +	if (do_allow) {						\
>  		might_fault();					\
> -	if (do_allow)						\
>  		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err);	\
> -	else							\
> +	} else {						\
>  		__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
> +	}							\
>   (x) = (__typeof__(*(ptr)))__gu_val; \
>   \
>   __gu_err;   \
> -- 
> 2.25.0


Re: [PATCH v2 06/15] powerpc/align: Don't use __get_user_instr() on kernel addresses

2021-03-25 Thread Daniel Axtens
Christophe Leroy  writes:

> In the old days, when we didn't have kernel userspace access
> protection and had set_fs(), it was wise to use __get_user()
> and friends to read kernel memory.
>
> Nowadays, get_user() is granting userspace access and is exclusively
> for userspace access.
>
> In alignment exception handler, use probe_kernel_read_inst()
> instead of __get_user_instr() for reading instructions in kernel.
>
> This will allow to remove the is_kernel_addr() check in
> __get/put_user() in a following patch.
>

Looks good to me!

Reviewed-by: Daniel Axtens 

Kind regards,
Daniel

> Signed-off-by: Christophe Leroy 
> ---
>  arch/powerpc/kernel/align.c | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
> index c4d7b445b459..8d4c7af262e2 100644
> --- a/arch/powerpc/kernel/align.c
> +++ b/arch/powerpc/kernel/align.c
> @@ -310,7 +310,11 @@ int fix_alignment(struct pt_regs *regs)
>*/
>   CHECK_FULL_REGS(regs);
>  
> - if (unlikely(__get_user_instr(instr, (void __user *)regs->nip)))
> + if (is_kernel_addr(regs->nip))
> + r = probe_kernel_read_inst(, (void *)regs->nip);
> + else
> + r = __get_user_instr(instr, (void __user *)regs->nip);
> + if (unlikely(r))
>   return -EFAULT;
>   if ((regs->msr & MSR_LE) != (MSR_KERNEL & MSR_LE)) {
>   /* We don't handle PPC little-endian any more... */
> -- 
> 2.25.0


Re: [PATCH v2 05/15] powerpc/uaccess: Move get_user_instr helpers in asm/inst.h

2021-03-25 Thread Daniel Axtens
Hi Christophe,

> Those helpers use get_user helpers but they don't participate
> in their implementation, so they do not belong to asm/uaccess.h
>
> Move them in asm/inst.h

Hmm, is asm/inst.h the right place for this?

asm/inst.h seems to be entirely concerned with the ppc_inst type:
converting things to and from ppc_inst, print ppc_inst as a string,
dealing with prefixed instructs, etc., etc. The only things currently
that look at memory are the probe_user_read_inst and
probe_kernel_read_inst prototypes...

Having said that, I'm not sure quite where else to put it, and none of
the other places in arch/powerpc/include that currently reference
ppc_inst seem any better...

If we do use asm/inst.h, I think maybe it makes sense to put the
code towards the end rather than at the start, as it uses structs and calls
macros that are defined later on in the file.

Kind regards,
Daniel

>
> Signed-off-by: Christophe Leroy 
> ---
>  arch/powerpc/include/asm/inst.h| 34 ++
>  arch/powerpc/include/asm/uaccess.h | 34 --
>  2 files changed, 34 insertions(+), 34 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/inst.h b/arch/powerpc/include/asm/inst.h
> index cc73c1267572..19e18af2fac9 100644
> --- a/arch/powerpc/include/asm/inst.h
> +++ b/arch/powerpc/include/asm/inst.h
> @@ -4,6 +4,40 @@
>  
>  #include 
>  
> +#ifdef CONFIG_PPC64
> +
> +#define ___get_user_instr(gu_op, dest, ptr)  \
> +({   \
> + long __gui_ret = 0; \
> + unsigned long __gui_ptr = (unsigned long)ptr;   \
> + struct ppc_inst __gui_inst; \
> + unsigned int __prefix, __suffix;\
> + __gui_ret = gu_op(__prefix, (unsigned int __user *)__gui_ptr);  \
> + if (__gui_ret == 0) {   \
> + if ((__prefix >> 26) == OP_PREFIX) {\
> + __gui_ret = gu_op(__suffix, \
> + (unsigned int __user *)__gui_ptr + 1);  \
> + __gui_inst = ppc_inst_prefix(__prefix,  \
> +  __suffix); \
> + } else {\
> + __gui_inst = ppc_inst(__prefix);\
> + }   \
> + if (__gui_ret == 0) \
> + (dest) = __gui_inst;\
> + }   \
> + __gui_ret;  \
> +})
> +#else /* !CONFIG_PPC64 */
> +#define ___get_user_instr(gu_op, dest, ptr)  \
> + gu_op((dest).val, (u32 __user *)(ptr))
> +#endif /* CONFIG_PPC64 */
> +
> +#define get_user_instr(x, ptr) \
> + ___get_user_instr(get_user, x, ptr)
> +
> +#define __get_user_instr(x, ptr) \
> + ___get_user_instr(__get_user, x, ptr)
> +
>  /*
>   * Instruction data type for POWER
>   */
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index 01aea0df4dd0..eaa828a6a419 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -53,40 +53,6 @@ static inline bool __access_ok(unsigned long addr, 
> unsigned long size)
>  #define __put_user(x, ptr) \
>   __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))
>  
> -#ifdef CONFIG_PPC64
> -
> -#define ___get_user_instr(gu_op, dest, ptr)  \
> -({   \
> - long __gui_ret = 0; \
> - unsigned long __gui_ptr = (unsigned long)ptr;   \
> - struct ppc_inst __gui_inst; \
> - unsigned int __prefix, __suffix;\
> - __gui_ret = gu_op(__prefix, (unsigned int __user *)__gui_ptr);  \
> - if (__gui_ret == 0) {   \
> - if ((__prefix >> 26) == OP_PREFIX) {\
> - __gui_ret = gu_op(__suffix, \
> - (unsigned int __user *)__gui_ptr + 1);  \
> - __gui_inst = ppc_inst_prefix(__prefix,  \
> -  __suffix); \
> - } else {\
> - __gui_inst = ppc_inst(__prefix);\
> - }   \
> - if (__gui_ret == 0)   

[PATCH] selftests: powerpc: unmark non-kernel-doc comments

2021-03-25 Thread Randy Dunlap
Drop the 'beginning of kernel-doc' notation markers (/**)
in places that are not in kernel-doc format.

Signed-off-by: Randy Dunlap 
Cc: Michael Ellerman 
Cc: linuxppc-dev@lists.ozlabs.org
---
 tools/testing/selftests/powerpc/tm/tm-trap.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- linux-next-20210323.orig/tools/testing/selftests/powerpc/tm/tm-trap.c
+++ linux-next-20210323/tools/testing/selftests/powerpc/tm/tm-trap.c
@@ -66,7 +66,7 @@ void trap_signal_handler(int signo, sigi
/* Get thread endianness: extract bit LE from MSR */
thread_endianness = MSR_LE & ucp->uc_mcontext.gp_regs[PT_MSR];
 
-   /***
+   /*
 * Little-Endian Machine
 */
 
@@ -126,7 +126,7 @@ void trap_signal_handler(int signo, sigi
}
}
 
-   /***
+   /*
 * Big-Endian Machine
 */
 


Re: [PATCH v2 4/7] CMDLINE: powerpc: convert to generic builtin command line

2021-03-25 Thread Daniel Walker
On Wed, Mar 24, 2021 at 04:31:35PM +0100, Christophe Leroy wrote:
> 
> 
> On 09/03/2021 at 22:40, Daniel Walker wrote:
> > On Tue, Mar 09, 2021 at 08:56:47AM +0100, Christophe Leroy wrote:
> > > 
> > > 
> > > On 09/03/2021 at 01:02, Daniel Walker wrote:
> > > > This updates the powerpc code to use the CONFIG_GENERIC_CMDLINE
> > > > option.
> > > > 
> > > > Cc: xe-linux-exter...@cisco.com
> > > > Signed-off-by: Ruslan Ruslichenko 
> > > > Signed-off-by: Ruslan Bilovol 
> > > > Signed-off-by: Daniel Walker 
> > > > ---
> > > >arch/powerpc/Kconfig| 37 
> > > > +
> > > >arch/powerpc/kernel/prom.c  |  1 +
> > > >arch/powerpc/kernel/prom_init.c | 35 ++-
> > > >3 files changed, 23 insertions(+), 50 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > > > index 107bb4319e0e..276b06d5c961 100644
> > > > --- a/arch/powerpc/Kconfig
> > > > +++ b/arch/powerpc/Kconfig
> > > > @@ -167,6 +167,7 @@ config PPC
> > > > select EDAC_SUPPORT
> > > > select GENERIC_ATOMIC64 if PPC32
> > > > select GENERIC_CLOCKEVENTS_BROADCASTif SMP
> > > > +   select GENERIC_CMDLINE
> > > > select GENERIC_CMOS_UPDATE
> > > > select GENERIC_CPU_AUTOPROBE
> > > > select GENERIC_CPU_VULNERABILITIES  if PPC_BARRIER_NOSPEC
> > > > @@ -906,42 +907,6 @@ config PPC_DENORMALISATION
> > > >   Add support for handling denormalisation of single precision
> > > >   values.  Useful for bare metal only.  If unsure say Y here.
> > > > -config CMDLINE
> > > > -   string "Initial kernel command string"
> > > > -   default ""
> > > > -   help
> > > > - On some platforms, there is currently no way for the boot loader to
> > > > - pass arguments to the kernel. For these platforms, you can supply
> > > > - some command-line options at build time by entering them here.  In
> > > > - most cases you will need to specify the root device here.
> > > > -
> > > > -choice
> > > > -   prompt "Kernel command line type" if CMDLINE != ""
> > > > -   default CMDLINE_FROM_BOOTLOADER
> > > > -
> > > > -config CMDLINE_FROM_BOOTLOADER
> > > > -   bool "Use bootloader kernel arguments if available"
> > > > -   help
> > > > - Uses the command-line options passed by the boot loader. If
> > > > - the boot loader doesn't provide any, the default kernel command
> > > > - string provided in CMDLINE will be used.
> 
> I can't see how the above is supported in the generic builtin.
> 
> Taking into account that it is the default on powerpc, I'm having a hard time
> with that.

Hmm, so this ignores the built-in changes. You just don't enable it, or you
don't add PREPEND or APPEND.

> Any feedback on the proposed changes I made on the 13th ? I know it is
> partly buggy but that was more for the principle. I can make clean working
> patch if it helps.


The reason I added it into the function parameters is that I get free type
checking on the functions. If you use macros, then you don't know whether the
function is compatible.
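The point about type checking can be illustrated in isolation (hypothetical names, not the actual cmdline API):

```c
#include <string.h>

/* As a real function, the prototype pins the parameter type: passing an
 * incompatible argument here is a direct compile-time diagnostic. */
size_t cmdline_len(const char *cmdline)
{
	return strlen(cmdline);
}

/* As a macro, there is no declared parameter type; a bad argument is
 * only diagnosed (if at all) deep inside the expansion. */
#define CMDLINE_LEN(cmdline) strlen(cmdline)
```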

Daniel


Re: [PATCH v2 3/7] powerpc: convert config files to generic cmdline

2021-03-25 Thread Daniel Walker
On Thu, Mar 25, 2021 at 01:03:55PM +0100, Christophe Leroy wrote:
> 
> Ok, so you agree we don't need to provide two CMDLINE, one to be appended and 
> one to be prepended.
> 
> Let's only provide one CMDLINE as of today, and ask the user to select
> whether he wants it appended or prepended or replaced. Then no need to
> change all existing config to rename CONFIG_CMDLINE into either of the new
> ones.
> 
> That's the main difference between my series and Daniel's series. So I'll
> finish taking Will's comment into account and we'll send out a v3 soon.

It doesn't solve the needs of Cisco, I've stated many times your changes have
little value. Please stop submitting them.

Daniel



Re: [PATCH v2 3/7] powerpc: convert config files to generic cmdline

2021-03-25 Thread Daniel Walker
On Wed, Mar 24, 2021 at 05:59:59PM +0100, Christophe Leroy wrote:
> > I think my changes maintain most of this due to the override of
> > CONFIG_CMDLINE_PREPEND. This is an upgrade, and the inflexibility in powerpc
> > is an example of why these changes were created in the first place.
> 
> "inflexibility in powerpc" : Can you elaborate ?
 
the prom environment.

> > 
> > For example, say the default command line is "root=/dev/issblk0" from the
> > iss476 platform, and the bootloader adds "root=/dev/sda1".
> > 
> > The result is .
> 
> 
> I'm still having hard time understanding the benefit of having both  
> and .
> Could you please provide a complete exemple from real life, ie what exactly
> the problem is and what it solves ?
 
Say the boot loader of an old product is released with a command line of
"root=/dev/sda" and per the needs of the company or product the boot loader can
not be upgraded to change this command line. To change this behavior you would
need append or EXTEND.

Below I detail an example of PREPEND due to your list question.

> > 
> > Then you have,
> > 
> > root=/dev/issblk0 root=/dev/sda1
> > 
> > and the bootloader takes precedence over the default command line. So root=
> > in the above cases is defined by the bootloader.

A person could input a command line into a boot loader, and it would override
the PREPEND values.

Imagine you have a default command line which makes root=/dev/issblk0, but that
doesn't work for your testing purposes. So you input root=/dev/sda1 into the
boot loader; since the default input in the bootloader is OVERRIDEABLE, you can
do this without re-compiling, just by entering the single root= option into the
bootloader.
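Daniel's scenario can be modelled in a few lines of user-space C: a built-in default prepended ahead of the bootloader string, combined with a last-option-wins parse, lets the bootloader override the default without a rebuild (names and semantics here are a sketch of the idea, not the kernel implementation):

```c
#include <stdio.h>
#include <string.h>

#define CMDLINE_BUF 256

/* Model of CMDLINE_PREPEND: the built-in default comes first, the
 * bootloader-provided string after it. */
void build_cmdline(char *out, const char *builtin, const char *bootloader)
{
	snprintf(out, CMDLINE_BUF, "%s %s", builtin, bootloader);
}

/* A left-to-right parser where the last occurrence wins: the final
 * "root=" option is the one that takes effect. */
const char *effective_root(const char *cmdline)
{
	const char *p = cmdline, *last = NULL;

	while ((p = strstr(p, "root=")) != NULL) {
		last = p;
		p += strlen("root=");
	}
	return last;
}
```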

Daniel


Re: [PATCH v2 6/7] cmdline: Gives architectures opportunity to use generically defined boot cmdline manipulation

2021-03-25 Thread Will Deacon
On Thu, Mar 25, 2021 at 12:18:38PM +0100, Christophe Leroy wrote:
> 
> 
> On 03/03/2021 at 18:57, Will Deacon wrote:
> > On Tue, Mar 02, 2021 at 05:25:22PM +, Christophe Leroy wrote:
> > > Most architectures have similar boot command line manipulation
> > > options. This patchs adds the definition in init/Kconfig, gated by
> > > CONFIG_HAVE_CMDLINE that the architectures can select to use them.
> > > 
> > > In order to use this, a few architectures will have to change their
> > > CONFIG options:
> > > - riscv has to replace CMDLINE_FALLBACK by CMDLINE_FROM_BOOTLOADER
> > > - architectures using CONFIG_CMDLINE_OVERRIDE or
> > > CONFIG_CMDLINE_OVERWRITE have to replace them by CONFIG_CMDLINE_FORCE.
> > > 
> > > Architectures also have to define CONFIG_DEFAULT_CMDLINE.
> > > 
> > > Signed-off-by: Christophe Leroy 
> > > ---
> > >   init/Kconfig | 56 
> > >   1 file changed, 56 insertions(+)
> > > 
> > > diff --git a/init/Kconfig b/init/Kconfig
> > > index 22946fe5ded9..a0f2ad9467df 100644
> > > --- a/init/Kconfig
> > > +++ b/init/Kconfig
> > > @@ -117,6 +117,62 @@ config INIT_ENV_ARG_LIMIT
> > > Maximum of each of the number of arguments and environment
> > > variables passed to init from the kernel command line.
> > > +config HAVE_CMDLINE
> > > + bool
> > > +
> > > +config CMDLINE_BOOL
> > > + bool "Default bootloader kernel arguments"
> > > + depends on HAVE_CMDLINE
> > > + help
> > > +   On some platforms, there is currently no way for the boot loader to
> > > +   pass arguments to the kernel. For these platforms, you can supply
> > > +   some command-line options at build time by entering them here.  In
> > > +   most cases you will need to specify the root device here.
> > 
> > Why is this needed as well as CMDLINE_FROM_BOOTLOADER? IIUC, the latter
> > will use CONFIG_CMDLINE if it fails to get anything from the bootloader,
> > which sounds like the same scenario.
> > 
> > > +config CMDLINE
> > > + string "Initial kernel command string"
> > 
> > s/Initial/Default
> > 
> > which is then consistent with the rest of the text here.
> > 
> > > + depends on CMDLINE_BOOL
> > 
> > Ah, so this is a bit different and I don't think lines-up with the
> > CMDLINE_BOOL help text.
> > 
> > > + default DEFAULT_CMDLINE
> > > + help
> > > +   On some platforms, there is currently no way for the boot loader to
> > > +   pass arguments to the kernel. For these platforms, you can supply
> > > +   some command-line options at build time by entering them here.  In
> > > +   most cases you will need to specify the root device here.
> > 
> > (same stale text)
> > 
> > > +choice
> > > + prompt "Kernel command line type" if CMDLINE != ""
> > > + default CMDLINE_FROM_BOOTLOADER
> > > + help
> > > +   Selects the way you want to use the default kernel arguments.
> > 
> > How about:
> > 
> > "Determines how the default kernel arguments are combined with any
> >   arguments passed by the bootloader"
> > 
> > > +config CMDLINE_FROM_BOOTLOADER
> > > + bool "Use bootloader kernel arguments if available"
> > > + help
> > > +   Uses the command-line options passed by the boot loader. If
> > > +   the boot loader doesn't provide any, the default kernel command
> > > +   string provided in CMDLINE will be used.
> > > +
> > > +config CMDLINE_EXTEND
> > 
> > Can we rename this to CMDLINE_APPEND, please? There is code in the tree
> > which disagrees about what CMDLINE_EXTEND means, so that will need
> > to be updated to be consistent (e.g. the EFI stub parsing order). Having
> > the generic option with a different name means we won't accidentally end
> > up with the same inconsistent behaviours.
> 
> Argh, yes. Seems like the problem is even larger than that IIUC:
> 
> - For ARM it means to append the bootloader arguments to the CONFIG_CMDLINE
> - For Powerpc it means to append the CONFIG_CMDLINE to the bootloader 
> arguments
> - For SH  it means to append the CONFIG_CMDLINE to the bootloader arguments
> - For EFI it means to append the bootloader arguments to the CONFIG_CMDLINE
> - For OF it means to append the CONFIG_CMDLINE to the bootloader arguments
> 
> So what happens on ARM, for instance, when it selects CONFIG_OF ?

I think ARM gets different behaviour depending on whether it uses ATAGs or
FDT.

> Or should we consider that EXTEND means APPEND or PREPEND, no matter which ?
> Because EXTEND is for instance used for:
> 
>   config INITRAMFS_FORCE
>   bool "Ignore the initramfs passed by the bootloader"
>   depends on CMDLINE_EXTEND || CMDLINE_FORCE

Oh man, I didn't spot that one :(

I think I would make the generic options explicit: either APPEND or PREPEND.
Then architectures which choose to define CMDLINE_EXTEND in their Kconfigs
can select the generic option that matches their behaviour.

INITRAMFS_FORCE sounds like it should depend on APPEND (assuming that means
CONFIG_CMDLINE is appended to the bootloader 

Re: VDSO ELF header

2021-03-25 Thread Laurent Dufour

Le 25/03/2021 à 17:56, Laurent Dufour a écrit :

Le 25/03/2021 à 17:46, Christophe Leroy a écrit :

Hi Laurent

Le 25/03/2021 à 17:11, Laurent Dufour a écrit :

Hi Christophe,

Since v5.11 and the changes you made to the VDSO code, it no longer exposes 
the ELF header at the beginning of the VDSO mapping in user space.


This is confusing CRIU which is checking for this ELF header cookie 
(https://github.com/checkpoint-restore/criu/issues/1417).


How does it do on other architectures ?


Good question, I'll double check the CRIU code.


On x86, there are 2 VDSO entries:
77fcb000-77fce000 r--p  00:00 0  [vvar]
77fce000-77fcf000 r-xp  00:00 0  [vdso]

And the VDSO is starting with the ELF header.







I'm not an expert in loading and ELF part and reading the change you made, I 
can't identify how this could work now as I'm expecting the loader to need 
that ELF header to do the relocation.


I think the loader is able to find it at the expected place.


Actually, it seems the loader relies on the AUX vector AT_SYSINFO_EHDR. I guess 
CRIU should do the same.




 From my investigation it seems that the first bytes of the VDSO area are now 
the vdso_arch_data.


Is the ELF header put somewhere else?
How could the loader process the VDSO without that ELF header?



Like most other architectures, we now have the data section as first page and 
the text section follows. So you will likely find the elf header on the second 
page.


I'm wondering if the data section you're referring to is the vvar section I can 
see on x86.





Done in this commit: 
https://github.com/linuxppc/linux/commit/511157ab641eb6bedd00d62673388e78a4f871cf


I'll double check on x86, but anyway, I think CRIU should rely on 
AT_SYSINFO_EHDR and not assume that the ELF header is at the beginning of VDSO 
mapping.


Thanks for your help.
Laurent.





Re: [PATCH] docs: powerpc: Fix a typo

2021-03-25 Thread Jonathan Corbet
Bhaskar Chowdhury  writes:

> s/struture/structure/
>
> Signed-off-by: Bhaskar Chowdhury 
> ---
>  Documentation/powerpc/firmware-assisted-dump.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/Documentation/powerpc/firmware-assisted-dump.rst 
> b/Documentation/powerpc/firmware-assisted-dump.rst
> index 20ea8cdee0aa..6c0ae070ba67 100644
> --- a/Documentation/powerpc/firmware-assisted-dump.rst
> +++ b/Documentation/powerpc/firmware-assisted-dump.rst
> @@ -171,7 +171,7 @@ that were present in CMA region::
> (meta area)|
>|
>|
> -  Metadata: This area holds a metadata struture whose
> +  Metadata: This area holds a metadata structure whose
>address is registered with f/w and retrieved in the
>second kernel after crash, on platforms that support

Applied, thanks.

jon


Re: VDSO ELF header

2021-03-25 Thread Laurent Dufour

Le 25/03/2021 à 17:46, Christophe Leroy a écrit :

Hi Laurent

Le 25/03/2021 à 17:11, Laurent Dufour a écrit :

Hi Christophe,

Since v5.11 and the changes you made to the VDSO code, it no longer exposes the 
ELF header at the beginning of the VDSO mapping in user space.


This is confusing CRIU which is checking for this ELF header cookie 
(https://github.com/checkpoint-restore/criu/issues/1417).


How does it do on other architectures ?


Good question, I'll double check the CRIU code.





I'm not an expert in loading and ELF part and reading the change you made, I 
can't identify how this could work now as I'm expecting the loader to need 
that ELF header to do the relocation.


I think the loader is able to find it at the expected place.


Actually, it seems the loader relies on the AUX vector AT_SYSINFO_EHDR. I guess 
CRIU should do the same.




 From my investigation it seems that the first bytes of the VDSO area are now 
the vdso_arch_data.


Is the ELF header put somewhere else?
How could the loader process the VDSO without that ELF header?



Like most other architectures, we now have the data section as first page and 
the text section follows. So you will likely find the elf header on the second 
page.


Done in this commit: 
https://github.com/linuxppc/linux/commit/511157ab641eb6bedd00d62673388e78a4f871cf


I'll double check on x86, but anyway, I think CRIU should rely on 
AT_SYSINFO_EHDR and not assume that the ELF header is at the beginning of VDSO 
mapping.


Thanks for your help.
Laurent.



Re: VDSO ELF header

2021-03-25 Thread Christophe Leroy

Hi Laurent

Le 25/03/2021 à 17:11, Laurent Dufour a écrit :

Hi Christophe,

Since v5.11 and the changes you made to the VDSO code, it no longer exposes the ELF header at the 
beginning of the VDSO mapping in user space.


This is confusing CRIU which is checking for this ELF header cookie 
(https://github.com/checkpoint-restore/criu/issues/1417).


How does it do on other architectures ?



I'm not an expert in loading and ELF part and reading the change you made, I can't identify how this 
could work now as I'm expecting the loader to need that ELF header to do the relocation.


I think the loader is able to find it at the expected place.



 From my investigation it seems that the first bytes of the VDSO area are now 
the vdso_arch_data.

Is the ELF header put somewhere else?
How could the loader process the VDSO without that ELF header?



Like most other architectures, we now have the data section as first page and the text section 
follows. So you will likely find the elf header on the second page.


Done in this commit: 
https://github.com/linuxppc/linux/commit/511157ab641eb6bedd00d62673388e78a4f871cf


Christophe


Re: [PATCH V2 1/5] powerpc/perf: Expose processor pipeline stage cycles using PERF_SAMPLE_WEIGHT_STRUCT

2021-03-25 Thread Arnaldo



On March 25, 2021 11:38:01 AM GMT-03:00, Peter Zijlstra  
wrote:
>On Thu, Mar 25, 2021 at 10:01:35AM -0300, Arnaldo Carvalho de Melo
>wrote:.
>> > > Also for CPU_FTR_ARCH_31, capture the two cycle counter
>information in
>> > > two 16 bit fields of perf_sample_weight structure.
>> > 
>> > Changes looks fine to me.
>> > 
>> > Reviewed-by: Madhavan Srinivasan 
>> 
>> So who will process the kernel bits? I'm merging the tooling parts,
>
>I was sorta expecting these to go through the powerpc tree. Let me know
>if you want them in tip/perf/core instead.

Shouldn't matter by which tree it gets upstream, as long as it gets picked :-)

- Arnaldo

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [PATCH V2 1/5] powerpc/perf: Expose processor pipeline stage cycles using PERF_SAMPLE_WEIGHT_STRUCT

2021-03-25 Thread Peter Zijlstra
On Thu, Mar 25, 2021 at 10:01:35AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Mar 24, 2021 at 10:05:23AM +0530, Madhavan Srinivasan escreveu:
> > 
> > On 3/22/21 8:27 PM, Athira Rajeev wrote:
> > > Performance Monitoring Unit (PMU) registers in powerpc provides
> > > information on cycles elapsed between different stages in the
> > > pipeline. This can be used for application tuning. On ISA v3.1
> > > platform, this information is exposed by sampling registers.
> > > Patch adds kernel support to capture two of the cycle counters
> > > as part of perf sample using the sample type:
> > > PERF_SAMPLE_WEIGHT_STRUCT.
> > > 
> > > The power PMU function 'get_mem_weight' currently uses 64 bit weight
> > > field of perf_sample_data to capture memory latency. But following the
> > > introduction of PERF_SAMPLE_WEIGHT_TYPE, weight field could contain
> > > 64-bit or 32-bit value depending on the architecture support for
> > > PERF_SAMPLE_WEIGHT_STRUCT. Patch uses WEIGHT_STRUCT to expose the
> > > pipeline stage cycles info. Hence update the ppmu functions to work for
> > > 64-bit and 32-bit weight values.
> > > 
> > > If the sample type is PERF_SAMPLE_WEIGHT, use the 64-bit weight field.
> > > if the sample type is PERF_SAMPLE_WEIGHT_STRUCT, memory subsystem
> > > latency is stored in the low 32bits of perf_sample_weight structure.
> > > Also for CPU_FTR_ARCH_31, capture the two cycle counter information in
> > > two 16 bit fields of perf_sample_weight structure.
> > 
> > Changes looks fine to me.
> > 
> > Reviewed-by: Madhavan Srinivasan 
> 
> So who will process the kernel bits? I'm merging the tooling parts,

I was sorta expecting these to go through the powerpc tree. Let me know
if you want them in tip/perf/core instead.


Re: [PATCH v2 3/7] powerpc: convert config files to generic cmdline

2021-03-25 Thread Rob Herring
On Thu, Mar 25, 2021 at 6:06 AM Christophe Leroy
 wrote:
>
>
>
> Le 24/03/2021 à 18:32, Rob Herring a écrit :
> > On Wed, Mar 24, 2021 at 11:01 AM Christophe Leroy
> >  wrote:
> >>
> >>
> >>
> >> Le 09/03/2021 à 22:29, Daniel Walker a écrit :
> >>> On Tue, Mar 09, 2021 at 08:47:09AM +0100, Christophe Leroy wrote:
> 
> 
>  Le 09/03/2021 à 01:02, Daniel Walker a écrit :
> > This is a scripted mass convert of the config files to use
> > the new generic cmdline. There is a bit of a trim effect here.
> > It would seem that some of the configs haven't been trimmed in
> > a while.
> 
>  If you do that in a separate patch, you lose bisectability.
> 
>  I think it would have been better to do things in a different way, more 
>  or less like I did in my series:
>  1/ Provide GENERIC cmdline at the same functionality level as what is
>  spread in the different architectures
>  2/ Convert architectures to the generic with least churn.
>  3/ Add new features to the generic
> >>>
> >>> You have to have the churn eventually, no matter how you do it. The only 
> >>> way you
> >>> don't have churn is if you never upgrade the feature set.
> >>>
> >>>
> >
> > The bash script used to convert is as follows,
> >
> > if [[ -z "$1" || -z "$2" ]]; then
> >echo "Two arguments are needed."
> >exit 1
> > fi
> > mkdir $1
> > cp $2 $1/.config
> > sed -i 
> > 's/CONFIG_CMDLINE=/CONFIG_CMDLINE_BOOL=y\nCONFIG_CMDLINE_PREPEND=/g' 
> > $1/.config
> 
>  This is not correct.
> 
>  By default, on powerpc the provided command line is used only if the 
>  bootloader doesn't provide one.
> 
>  Otherwise:
>  - the builtin command line is appended to the one provided by the 
>  bootloader
>  if CONFIG_CMDLINE_EXTEND is selected
>  - the builtin command line replaces the one provided by the 
>  bootloader if
>  CONFIG_CMDLINE_FORCE is selected
> >>>
> >>> I think my changes maintain most of this due to the override of
> >>> CONFIG_CMDLINE_PREPEND. This is an upgrade and the inflexibility in 
> >>> powerpc is
> >>> an example of why these changes were created in the first place.
> >>
> >> "inflexibility in powerpc" : Can you elaborate ?
> >>
> >>>
> >>> For example, say the default command line is "root=/dev/issblk0" from 
> >>> iss476
> >>> platform. And the bootloader adds "root=/dev/sda1"
> >>>
> >>> The result is .
> >>
> >>
> >> I'm still having a hard time understanding the benefit of having both 
> >>  and .
> >> Could you please provide a complete example from real life, i.e. what 
> >> exactly the problem is and what
> >> it solves ?
> >
> > It doesn't matter. We already have both cases and 'extend' has meant either 
> > one.
> >
> > What someone wants is policy and the kernel shouldn't be defining the 
> > policy.
> >
>
> Ok, so you agree we don't need to provide two CMDLINE, one to be appended and 
> one to be prepended.

Well, I wasn't thinking about that part of it, but yes as long as no
arch currently needs that.

> Let's only provide one CMDLINE as of today, and ask the user to select 
> whether he wants it appended
> or prepended or replaced. Then no need to change all existing configs to 
> rename CONFIG_CMDLINE into
> either of the new ones.
>
> That's the main difference between my series and Daniel's series. So I'll 
> finish taking Will's
> comment into account and we'll send out a v3 soon.

Great.

Rob


Re: [PATCH V2 1/5] powerpc/perf: Expose processor pipeline stage cycles using PERF_SAMPLE_WEIGHT_STRUCT

2021-03-25 Thread Arnaldo Carvalho de Melo
Em Wed, Mar 24, 2021 at 10:05:23AM +0530, Madhavan Srinivasan escreveu:
> 
> On 3/22/21 8:27 PM, Athira Rajeev wrote:
> > Performance Monitoring Unit (PMU) registers in powerpc provides
> > information on cycles elapsed between different stages in the
> > pipeline. This can be used for application tuning. On ISA v3.1
> > platform, this information is exposed by sampling registers.
> > Patch adds kernel support to capture two of the cycle counters
> > as part of perf sample using the sample type:
> > PERF_SAMPLE_WEIGHT_STRUCT.
> > 
> > The power PMU function 'get_mem_weight' currently uses 64 bit weight
> > field of perf_sample_data to capture memory latency. But following the
> > introduction of PERF_SAMPLE_WEIGHT_TYPE, weight field could contain
> > 64-bit or 32-bit value depending on the architecture support for
> > PERF_SAMPLE_WEIGHT_STRUCT. Patch uses WEIGHT_STRUCT to expose the
> > pipeline stage cycles info. Hence update the ppmu functions to work for
> > 64-bit and 32-bit weight values.
> > 
> > If the sample type is PERF_SAMPLE_WEIGHT, use the 64-bit weight field.
> > if the sample type is PERF_SAMPLE_WEIGHT_STRUCT, memory subsystem
> > latency is stored in the low 32bits of perf_sample_weight structure.
> > Also for CPU_FTR_ARCH_31, capture the two cycle counter information in
> > two 16 bit fields of perf_sample_weight structure.
> 
> Changes looks fine to me.

You mean just the kernel part or can I add your Reviewed-by to all the
patchset?
 
> Reviewed-by: Madhavan Srinivasan 
> 
> 
> > Signed-off-by: Athira Rajeev 
> > ---
> >   arch/powerpc/include/asm/perf_event_server.h |  2 +-
> >   arch/powerpc/perf/core-book3s.c  |  4 ++--
> >   arch/powerpc/perf/isa207-common.c| 29 
> > +---
> >   arch/powerpc/perf/isa207-common.h|  6 +-
> >   4 files changed, 34 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/perf_event_server.h 
> > b/arch/powerpc/include/asm/perf_event_server.h
> > index 00e7e671bb4b..112cf092d7b3 100644
> > --- a/arch/powerpc/include/asm/perf_event_server.h
> > +++ b/arch/powerpc/include/asm/perf_event_server.h
> > @@ -43,7 +43,7 @@ struct power_pmu {
> > u64 alt[]);
> > void(*get_mem_data_src)(union perf_mem_data_src *dsrc,
> > u32 flags, struct pt_regs *regs);
> > -   void(*get_mem_weight)(u64 *weight);
> > +   void(*get_mem_weight)(u64 *weight, u64 type);
> > unsigned long   group_constraint_mask;
> > unsigned long   group_constraint_val;
> > u64 (*bhrb_filter_map)(u64 branch_sample_type);
> > diff --git a/arch/powerpc/perf/core-book3s.c 
> > b/arch/powerpc/perf/core-book3s.c
> > index 766f064f00fb..6936763246bd 100644
> > --- a/arch/powerpc/perf/core-book3s.c
> > +++ b/arch/powerpc/perf/core-book3s.c
> > @@ -2206,9 +2206,9 @@ static void record_and_restart(struct perf_event 
> > *event, unsigned long val,
> > ppmu->get_mem_data_src)
> > ppmu->get_mem_data_src(_src, ppmu->flags, 
> > regs);
> > -   if (event->attr.sample_type & PERF_SAMPLE_WEIGHT &&
> > +   if (event->attr.sample_type & PERF_SAMPLE_WEIGHT_TYPE &&
> > ppmu->get_mem_weight)
> > -   ppmu->get_mem_weight();
> > +   ppmu->get_mem_weight(, 
> > event->attr.sample_type);
> > if (perf_event_overflow(event, , regs))
> > power_pmu_stop(event, 0);
> > diff --git a/arch/powerpc/perf/isa207-common.c 
> > b/arch/powerpc/perf/isa207-common.c
> > index e4f577da33d8..5dcbdbd54598 100644
> > --- a/arch/powerpc/perf/isa207-common.c
> > +++ b/arch/powerpc/perf/isa207-common.c
> > @@ -284,8 +284,10 @@ void isa207_get_mem_data_src(union perf_mem_data_src 
> > *dsrc, u32 flags,
> > }
> >   }
> > -void isa207_get_mem_weight(u64 *weight)
> > +void isa207_get_mem_weight(u64 *weight, u64 type)
> >   {
> > +   union perf_sample_weight *weight_fields;
> > +   u64 weight_lat;
> > u64 mmcra = mfspr(SPRN_MMCRA);
> > u64 exp = MMCRA_THR_CTR_EXP(mmcra);
> > u64 mantissa = MMCRA_THR_CTR_MANT(mmcra);
> > @@ -296,9 +298,30 @@ void isa207_get_mem_weight(u64 *weight)
> > mantissa = P10_MMCRA_THR_CTR_MANT(mmcra);
> > if (val == 0 || val == 7)
> > -   *weight = 0;
> > +   weight_lat = 0;
> > else
> > -   *weight = mantissa << (2 * exp);
> > +   weight_lat = mantissa << (2 * exp);
> > +
> > +   /*
> > +* Use 64 bit weight field (full) if sample type is
> > +* WEIGHT.
> > +*
> > +* if sample type is WEIGHT_STRUCT:
> > +* - store memory latency in the lower 32 bits.
> > +* - For ISA v3.1, use remaining two 16 bit fields of
> > +*   perf_sample_weight to store cycle counter values
> > +*   from sier2.
> > +*/
> > +   weight_fields = (union perf_sample_weight 

Re: [PATCH V2 1/5] powerpc/perf: Expose processor pipeline stage cycles using PERF_SAMPLE_WEIGHT_STRUCT

2021-03-25 Thread Arnaldo Carvalho de Melo
Em Wed, Mar 24, 2021 at 10:05:23AM +0530, Madhavan Srinivasan escreveu:
> 
> On 3/22/21 8:27 PM, Athira Rajeev wrote:
> > Performance Monitoring Unit (PMU) registers in powerpc provides
> > information on cycles elapsed between different stages in the
> > pipeline. This can be used for application tuning. On ISA v3.1
> > platform, this information is exposed by sampling registers.
> > Patch adds kernel support to capture two of the cycle counters
> > as part of perf sample using the sample type:
> > PERF_SAMPLE_WEIGHT_STRUCT.
> > 
> > The power PMU function 'get_mem_weight' currently uses 64 bit weight
> > field of perf_sample_data to capture memory latency. But following the
> > introduction of PERF_SAMPLE_WEIGHT_TYPE, weight field could contain
> > 64-bit or 32-bit value depending on the architecture support for
> > PERF_SAMPLE_WEIGHT_STRUCT. Patch uses WEIGHT_STRUCT to expose the
> > pipeline stage cycles info. Hence update the ppmu functions to work for
> > 64-bit and 32-bit weight values.
> > 
> > If the sample type is PERF_SAMPLE_WEIGHT, use the 64-bit weight field.
> > if the sample type is PERF_SAMPLE_WEIGHT_STRUCT, memory subsystem
> > latency is stored in the low 32bits of perf_sample_weight structure.
> > Also for CPU_FTR_ARCH_31, capture the two cycle counter information in
> > two 16 bit fields of perf_sample_weight structure.
> 
> Changes looks fine to me.
> 
> Reviewed-by: Madhavan Srinivasan 

So who will process the kernel bits? I'm merging the tooling parts,

Thanks,

- Arnaldo
 
> 
> > Signed-off-by: Athira Rajeev 
> > ---
> >   arch/powerpc/include/asm/perf_event_server.h |  2 +-
> >   arch/powerpc/perf/core-book3s.c  |  4 ++--
> >   arch/powerpc/perf/isa207-common.c| 29 
> > +---
> >   arch/powerpc/perf/isa207-common.h|  6 +-
> >   4 files changed, 34 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/perf_event_server.h 
> > b/arch/powerpc/include/asm/perf_event_server.h
> > index 00e7e671bb4b..112cf092d7b3 100644
> > --- a/arch/powerpc/include/asm/perf_event_server.h
> > +++ b/arch/powerpc/include/asm/perf_event_server.h
> > @@ -43,7 +43,7 @@ struct power_pmu {
> > u64 alt[]);
> > void(*get_mem_data_src)(union perf_mem_data_src *dsrc,
> > u32 flags, struct pt_regs *regs);
> > -   void(*get_mem_weight)(u64 *weight);
> > +   void(*get_mem_weight)(u64 *weight, u64 type);
> > unsigned long   group_constraint_mask;
> > unsigned long   group_constraint_val;
> > u64 (*bhrb_filter_map)(u64 branch_sample_type);
> > diff --git a/arch/powerpc/perf/core-book3s.c 
> > b/arch/powerpc/perf/core-book3s.c
> > index 766f064f00fb..6936763246bd 100644
> > --- a/arch/powerpc/perf/core-book3s.c
> > +++ b/arch/powerpc/perf/core-book3s.c
> > @@ -2206,9 +2206,9 @@ static void record_and_restart(struct perf_event 
> > *event, unsigned long val,
> > ppmu->get_mem_data_src)
> > ppmu->get_mem_data_src(_src, ppmu->flags, 
> > regs);
> > -   if (event->attr.sample_type & PERF_SAMPLE_WEIGHT &&
> > +   if (event->attr.sample_type & PERF_SAMPLE_WEIGHT_TYPE &&
> > ppmu->get_mem_weight)
> > -   ppmu->get_mem_weight();
> > +   ppmu->get_mem_weight(, 
> > event->attr.sample_type);
> > if (perf_event_overflow(event, , regs))
> > power_pmu_stop(event, 0);
> > diff --git a/arch/powerpc/perf/isa207-common.c 
> > b/arch/powerpc/perf/isa207-common.c
> > index e4f577da33d8..5dcbdbd54598 100644
> > --- a/arch/powerpc/perf/isa207-common.c
> > +++ b/arch/powerpc/perf/isa207-common.c
> > @@ -284,8 +284,10 @@ void isa207_get_mem_data_src(union perf_mem_data_src 
> > *dsrc, u32 flags,
> > }
> >   }
> > -void isa207_get_mem_weight(u64 *weight)
> > +void isa207_get_mem_weight(u64 *weight, u64 type)
> >   {
> > +   union perf_sample_weight *weight_fields;
> > +   u64 weight_lat;
> > u64 mmcra = mfspr(SPRN_MMCRA);
> > u64 exp = MMCRA_THR_CTR_EXP(mmcra);
> > u64 mantissa = MMCRA_THR_CTR_MANT(mmcra);
> > @@ -296,9 +298,30 @@ void isa207_get_mem_weight(u64 *weight)
> > mantissa = P10_MMCRA_THR_CTR_MANT(mmcra);
> > if (val == 0 || val == 7)
> > -   *weight = 0;
> > +   weight_lat = 0;
> > else
> > -   *weight = mantissa << (2 * exp);
> > +   weight_lat = mantissa << (2 * exp);
> > +
> > +   /*
> > +* Use 64 bit weight field (full) if sample type is
> > +* WEIGHT.
> > +*
> > +* if sample type is WEIGHT_STRUCT:
> > +* - store memory latency in the lower 32 bits.
> > +* - For ISA v3.1, use remaining two 16 bit fields of
> > +*   perf_sample_weight to store cycle counter values
> > +*   from sier2.
> > +*/
> > +   weight_fields = (union 

Re: [PATCH v7] powerpc/irq: Inline call_do_irq() and call_do_softirq()

2021-03-25 Thread Segher Boessenkool
On Wed, Mar 24, 2021 at 11:26:01PM +1100, Michael Ellerman wrote:
> Christophe Leroy  writes:
> > Hmm. It is the first time we use named parameters in powerpc assembly, 
> > isn't it ?

> Yeah I'd like us to use it more, I think it helps readability a lot.

..in some cases.  Not in most cases :-(

> > Wondering, how would the below look like with named parameters (from 
> > __put_user_asm2_goto) ?
> >
> > stw%X1 %L0, %L1
> 
> Not sure, possibly that's too complicated for it :)

asm("stw%X[name1] %L[name0],%L[name1]" :: [name0]"r"(x), [name1]"m"(p));

Yes, it is not more readable *at all*.


Segher


Re: [PATCH v2 3/7] powerpc: convert config files to generic cmdline

2021-03-25 Thread Christophe Leroy




Le 24/03/2021 à 18:32, Rob Herring a écrit :

On Wed, Mar 24, 2021 at 11:01 AM Christophe Leroy
 wrote:




Le 09/03/2021 à 22:29, Daniel Walker a écrit :

On Tue, Mar 09, 2021 at 08:47:09AM +0100, Christophe Leroy wrote:



Le 09/03/2021 à 01:02, Daniel Walker a écrit :

This is a scripted mass convert of the config files to use
the new generic cmdline. There is a bit of a trim effect here.
It would seem that some of the configs haven't been trimmed in
a while.


If you do that in a separate patch, you lose bisectability.

I think it would have been better to do things in a different way, more or less 
like I did in my series:
1/ Provide GENERIC cmdline at the same functionality level as what is
spread in the different architectures
2/ Convert architectures to the generic with least churn.
3/ Add new features to the generic


You have to have the churn eventually, no matter how you do it. The only way you
don't have churn is if you never upgrade the feature set.




The bash script used to convert is as follows,

if [[ -z "$1" || -z "$2" ]]; then
   echo "Two arguments are needed."
   exit 1
fi
mkdir $1
cp $2 $1/.config
sed -i 's/CONFIG_CMDLINE=/CONFIG_CMDLINE_BOOL=y\nCONFIG_CMDLINE_PREPEND=/g' 
$1/.config


This is not correct.

By default, on powerpc the provided command line is used only if the bootloader 
doesn't provide one.

Otherwise:
- the builtin command line is appended to the one provided by the bootloader
if CONFIG_CMDLINE_EXTEND is selected
- the builtin command line replaces the one provided by the bootloader if
CONFIG_CMDLINE_FORCE is selected


I think my changes maintain most of this due to the override of
CONFIG_CMDLINE_PREPEND. This is an upgrade and the inflexibility in powerpc is
an example of why these changes were created in the first place.


"inflexibility in powerpc" : Can you elaborate ?



For example, say the default command line is "root=/dev/issblk0" from iss476
platform. And the bootloader adds "root=/dev/sda1"

The result is .



I'm still having a hard time understanding the benefit of having both  and 
.
Could you please provide a complete example from real life, i.e. what exactly the 
problem is and what
it solves ?


It doesn't matter. We already have both cases and 'extend' has meant either one.

What someone wants is policy and the kernel shouldn't be defining the policy.



Ok, so you agree we don't need to provide two CMDLINE, one to be appended and 
one to be prepended.

Let's only provide one CMDLINE as of today, and ask the user to select whether he wants it appended 
or prepended or replaced. Then no need to change all existing configs to rename CONFIG_CMDLINE into 
either of the new ones.


That's the main difference between my series and Daniel's series. So I'll finish taking Will's 
comment into account and we'll send out a v3 soon.


Thanks
Christophe


[PATCH v3 1/2] powerpc/perf: Infrastructure to support checking of attr.config*

2021-03-25 Thread Madhavan Srinivasan
Introduce code to support the checking of attr.config* for
values which are reserved for a given platform.
Performance Monitoring Unit (PMU) configuration registers
have fields that are reserved and some specific values for
bit fields are reserved. For ex., MMCRA[61:62] is
Random Sampling Mode (SM) and value of 0b11 for this field
is reserved.

Writing non-zero or invalid values in these fields will
have unknown behaviours.

Patch adds a generic call-back function "check_attr_config"
in "struct power_pmu", to be called in event_init to
check for attr.config* values for a given platform.
"check_attr_config" is valid only for raw event type.

Signed-off-by: Madhavan Srinivasan 
---
Changelog v2:
-Fixed commit message

Changelog v1:
-Fixed commit message and in-code comments

 arch/powerpc/include/asm/perf_event_server.h |  6 ++
 arch/powerpc/perf/core-book3s.c  | 14 ++
 2 files changed, 20 insertions(+)

diff --git a/arch/powerpc/include/asm/perf_event_server.h 
b/arch/powerpc/include/asm/perf_event_server.h
index 00e7e671bb4b..dde97d7d9253 100644
--- a/arch/powerpc/include/asm/perf_event_server.h
+++ b/arch/powerpc/include/asm/perf_event_server.h
@@ -67,6 +67,12 @@ struct power_pmu {
 * the pmu supports extended perf regs capability
 */
int capabilities;
+   /*
+* Function to check event code for values which are
+* reserved. Function takes struct perf_event as input,
+* since event code could be spread in attr.config*
+*/
+   int (*check_attr_config)(struct perf_event *ev);
 };
 
 /*
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 6817331e22ff..c6eeb4fdc5fd 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -1958,6 +1958,20 @@ static int power_pmu_event_init(struct perf_event *event)
 
if (ppmu->blacklist_ev && is_event_blacklisted(ev))
return -EINVAL;
+   /*
+* PMU config registers have fields that are
+* reserved and some specific values for bit fields are reserved.
+* For ex., MMCRA[61:62] is Random Sampling Mode (SM)
+* and value of 0b11 for this field is reserved.
+*
+* This check is needed only for raw event type,
+* since tools like fuzzer use raw event type to
+* provide randomized event code values for test.
+*
+*/
+   if (ppmu->check_attr_config &&
+   ppmu->check_attr_config(event))
+   return -EINVAL;
break;
default:
return -ENOENT;
-- 
2.26.2



[PATCH v3 2/2] powerpc/perf: Add platform specific check_attr_config

2021-03-25 Thread Madhavan Srinivasan
Add platform specific attr.config value checks. Patch
includes checks for both power9 and power10.

Signed-off-by: Madhavan Srinivasan 
---
Changelog v2:
- Changed function name as suggested.
- Added name of source document referred for reserved values

Changelog v1:
- No changes
 arch/powerpc/perf/isa207-common.c | 42 +++
 arch/powerpc/perf/isa207-common.h |  2 ++
 arch/powerpc/perf/power10-pmu.c   | 13 ++
 arch/powerpc/perf/power9-pmu.c| 13 ++
 4 files changed, 70 insertions(+)

diff --git a/arch/powerpc/perf/isa207-common.c 
b/arch/powerpc/perf/isa207-common.c
index e4f577da33d8..358a0e95ba5f 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -694,3 +694,45 @@ int isa207_get_alternatives(u64 event, u64 alt[], int 
size, unsigned int flags,
 
return num_alt;
 }
+
+int isa3XX_check_attr_config(struct perf_event *ev)
+{
+   u64 val, sample_mode;
+   u64 event = ev->attr.config;
+
+   val = (event >> EVENT_SAMPLE_SHIFT) & EVENT_SAMPLE_MASK;
+   sample_mode = val & 0x3;
+
+   /*
+* MMCRA[61:62] is Random Sampling Mode (SM).
+* value of 0b11 is reserved.
+*/
+   if (sample_mode == 0x3)
+   return -EINVAL;
+
+   /*
+* Check for all reserved value
+* Source: Performance Monitoring Unit User Guide
+*/
+   switch (val) {
+   case 0x5:
+   case 0x9:
+   case 0xD:
+   case 0x19:
+   case 0x1D:
+   case 0x1A:
+   case 0x1E:
+   return -EINVAL;
+   }
+
+   /*
+* MMCRA[48:51]/[52:55] Threshold Start/Stop
+* Events Selection.
+* 0b11110000/0b00001111 is reserved.
+*/
+   val = (event >> EVENT_THR_CTL_SHIFT) & EVENT_THR_CTL_MASK;
+   if (((val & 0xF0) == 0xF0) || ((val & 0xF) == 0xF))
+   return -EINVAL;
+
+   return 0;
+}
diff --git a/arch/powerpc/perf/isa207-common.h 
b/arch/powerpc/perf/isa207-common.h
index 1af0e8c97ac7..b4d2a2b2b346 100644
--- a/arch/powerpc/perf/isa207-common.h
+++ b/arch/powerpc/perf/isa207-common.h
@@ -280,4 +280,6 @@ void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, 
u32 flags,
struct pt_regs *regs);
 void isa207_get_mem_weight(u64 *weight);
 
+int isa3XX_check_attr_config(struct perf_event *ev);
+
 #endif
diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
index a901c1348cad..f9d64c63bb4a 100644
--- a/arch/powerpc/perf/power10-pmu.c
+++ b/arch/powerpc/perf/power10-pmu.c
@@ -106,6 +106,18 @@ static int power10_get_alternatives(u64 event, unsigned 
int flags, u64 alt[])
return num_alt;
 }
 
+static int power10_check_attr_config(struct perf_event *ev)
+{
+   u64 val;
+   u64 event = ev->attr.config;
+
+   val = (event >> EVENT_SAMPLE_SHIFT) & EVENT_SAMPLE_MASK;
+   if (val == 0x10 || isa3XX_check_attr_config(ev))
+   return -EINVAL;
+
+   return 0;
+}
+
 GENERIC_EVENT_ATTR(cpu-cycles, PM_RUN_CYC);
 GENERIC_EVENT_ATTR(instructions,   PM_RUN_INST_CMPL);
 GENERIC_EVENT_ATTR(branch-instructions,PM_BR_CMPL);
@@ -559,6 +571,7 @@ static struct power_pmu power10_pmu = {
.attr_groups= power10_pmu_attr_groups,
.bhrb_nr= 32,
.capabilities   = PERF_PMU_CAP_EXTENDED_REGS,
+   .check_attr_config  = power10_check_attr_config,
 };
 
 int init_power10_pmu(void)
diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
index 2a57e93a79dc..ff3382140d7e 100644
--- a/arch/powerpc/perf/power9-pmu.c
+++ b/arch/powerpc/perf/power9-pmu.c
@@ -151,6 +151,18 @@ static int power9_get_alternatives(u64 event, unsigned int 
flags, u64 alt[])
return num_alt;
 }
 
+static int power9_check_attr_config(struct perf_event *ev)
+{
+   u64 val;
+   u64 event = ev->attr.config;
+
+   val = (event >> EVENT_SAMPLE_SHIFT) & EVENT_SAMPLE_MASK;
+   if (val == 0xC || isa3XX_check_attr_config(ev))
+   return -EINVAL;
+
+   return 0;
+}
+
 GENERIC_EVENT_ATTR(cpu-cycles, PM_CYC);
 GENERIC_EVENT_ATTR(stalled-cycles-frontend,PM_ICT_NOSLOT_CYC);
 GENERIC_EVENT_ATTR(stalled-cycles-backend, PM_CMPLU_STALL);
@@ -437,6 +449,7 @@ static struct power_pmu power9_pmu = {
.attr_groups= power9_pmu_attr_groups,
.bhrb_nr= 32,
.capabilities   = PERF_PMU_CAP_EXTENDED_REGS,
+   .check_attr_config  = power9_check_attr_config,
 };
 
 int init_power9_pmu(void)
-- 
2.26.2



[PATCH AUTOSEL 4.9 06/13] powerpc: Force inlining of cpu_has_feature() to avoid build failure

2021-03-25 Thread Sasha Levin
From: Christophe Leroy 

[ Upstream commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a ]

The code relies on constant folding of cpu_has_feature() based
on possible and always true values as defined per
CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE.

Build failure is encountered with, for instance,
book3e_all_defconfig on kisskb in the AMDGPU driver, which uses
cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call
kernel_enable_vsx() or not.

The failure is due to cpu_has_feature() not being inlined with
that configuration with gcc 4.9.

In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of
mmu_has_feature to fix build failure"), force inlining of
cpu_has_feature().

Signed-off-by: Christophe Leroy 
Signed-off-by: Michael Ellerman 
Link: 
https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.le...@csgroup.eu
Signed-off-by: Sasha Levin 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 6e834caa3720..7b10b3ef7739 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -6,7 +6,7 @@
 #include 
 #include 
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
return !!((CPU_FTRS_ALWAYS & feature) ||
  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -45,7 +45,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
return early_cpu_has_feature(feature);
 }
-- 
2.30.1



[PATCH AUTOSEL 4.14 09/16] powerpc: Force inlining of cpu_has_feature() to avoid build failure

2021-03-25 Thread Sasha Levin
From: Christophe Leroy 

[ Upstream commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a ]

The code relies on constant folding of cpu_has_feature() based
on possible and always true values as defined per
CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE.

Build failure is encountered with, for instance,
book3e_all_defconfig on kisskb in the AMDGPU driver, which uses
cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call
kernel_enable_vsx() or not.

The failure is due to cpu_has_feature() not being inlined with
that configuration with gcc 4.9.

In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of
mmu_has_feature to fix build failure"), force inlining of
cpu_has_feature().

Signed-off-by: Christophe Leroy 
Signed-off-by: Michael Ellerman 
Link: 
https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.le...@csgroup.eu
Signed-off-by: Sasha Levin 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 7897d16e0990..727d4b321937 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -7,7 +7,7 @@
 #include 
 #include 
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
return !!((CPU_FTRS_ALWAYS & feature) ||
  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
return early_cpu_has_feature(feature);
 }
-- 
2.30.1



[PATCH AUTOSEL 4.19 11/20] powerpc: Force inlining of cpu_has_feature() to avoid build failure

2021-03-25 Thread Sasha Levin
From: Christophe Leroy 

[ Upstream commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a ]

The code relies on constant folding of cpu_has_feature() based
on possible and always true values as defined per
CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE.

Build failure is encountered with, for instance,
book3e_all_defconfig on kisskb in the AMDGPU driver, which uses
cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call
kernel_enable_vsx() or not.

The failure is due to cpu_has_feature() not being inlined with
that configuration with gcc 4.9.

In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of
mmu_has_feature to fix build failure"), force inlining of
cpu_has_feature().

Signed-off-by: Christophe Leroy 
Signed-off-by: Michael Ellerman 
Link: 
https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.le...@csgroup.eu
Signed-off-by: Sasha Levin 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 7897d16e0990..727d4b321937 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -7,7 +7,7 @@
 #include 
 #include 
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
return !!((CPU_FTRS_ALWAYS & feature) ||
  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
return early_cpu_has_feature(feature);
 }
-- 
2.30.1



[PATCH AUTOSEL 5.4 15/24] powerpc: Force inlining of cpu_has_feature() to avoid build failure

2021-03-25 Thread Sasha Levin
From: Christophe Leroy 

[ Upstream commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a ]

The code relies on constant folding of cpu_has_feature() based
on possible and always true values as defined per
CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE.

Build failure is encountered with, for instance,
book3e_all_defconfig on kisskb in the AMDGPU driver, which uses
cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call
kernel_enable_vsx() or not.

The failure is due to cpu_has_feature() not being inlined with
that configuration with gcc 4.9.

In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of
mmu_has_feature to fix build failure"), force inlining of
cpu_has_feature().

Signed-off-by: Christophe Leroy 
Signed-off-by: Michael Ellerman 
Link: 
https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.le...@csgroup.eu
Signed-off-by: Sasha Levin 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 7897d16e0990..727d4b321937 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -7,7 +7,7 @@
 #include 
 #include 
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
return !!((CPU_FTRS_ALWAYS & feature) ||
  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
return early_cpu_has_feature(feature);
 }
-- 
2.30.1



[PATCH AUTOSEL 5.10 21/39] powerpc: Force inlining of cpu_has_feature() to avoid build failure

2021-03-25 Thread Sasha Levin
From: Christophe Leroy 

[ Upstream commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a ]

The code relies on constant folding of cpu_has_feature() based
on possible and always true values as defined per
CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE.

Build failure is encountered with, for instance,
book3e_all_defconfig on kisskb in the AMDGPU driver, which uses
cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call
kernel_enable_vsx() or not.

The failure is due to cpu_has_feature() not being inlined with
that configuration with gcc 4.9.

In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of
mmu_has_feature to fix build failure"), force inlining of
cpu_has_feature().

Signed-off-by: Christophe Leroy 
Signed-off-by: Michael Ellerman 
Link: 
https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.le...@csgroup.eu
Signed-off-by: Sasha Levin 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 7897d16e0990..727d4b321937 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -7,7 +7,7 @@
 #include 
 #include 
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
return !!((CPU_FTRS_ALWAYS & feature) ||
  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
return early_cpu_has_feature(feature);
 }
-- 
2.30.1



[PATCH AUTOSEL 5.11 22/44] powerpc: Force inlining of cpu_has_feature() to avoid build failure

2021-03-25 Thread Sasha Levin
From: Christophe Leroy 

[ Upstream commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a ]

The code relies on constant folding of cpu_has_feature() based
on possible and always true values as defined per
CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE.

Build failure is encountered with, for instance,
book3e_all_defconfig on kisskb in the AMDGPU driver, which uses
cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call
kernel_enable_vsx() or not.

The failure is due to cpu_has_feature() not being inlined with
that configuration with gcc 4.9.

In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of
mmu_has_feature to fix build failure"), force inlining of
cpu_has_feature().

Signed-off-by: Christophe Leroy 
Signed-off-by: Michael Ellerman 
Link: 
https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.le...@csgroup.eu
Signed-off-by: Sasha Levin 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 7897d16e0990..727d4b321937 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -7,7 +7,7 @@
 #include 
 #include 
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
return !!((CPU_FTRS_ALWAYS & feature) ||
  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
return early_cpu_has_feature(feature);
 }
-- 
2.30.1



Re: [PATCH v2 6/7] cmdline: Gives architectures opportunity to use generically defined boot cmdline manipulation

2021-03-25 Thread Christophe Leroy




Le 03/03/2021 à 18:57, Will Deacon a écrit :

On Tue, Mar 02, 2021 at 05:25:22PM +, Christophe Leroy wrote:

Most architectures have similar boot command line manipulation
options. This patch adds the definitions in init/Kconfig, gated by
CONFIG_HAVE_CMDLINE, which architectures can select in order to use them.

In order to use this, a few architectures will have to change their
CONFIG options:
- riscv has to replace CMDLINE_FALLBACK by CMDLINE_FROM_BOOTLOADER
- architectures using CONFIG_CMDLINE_OVERRIDE or
CONFIG_CMDLINE_OVERWRITE have to replace them by CONFIG_CMDLINE_FORCE.

Architectures also have to define CONFIG_DEFAULT_CMDLINE.

Signed-off-by: Christophe Leroy 
---
  init/Kconfig | 56 
  1 file changed, 56 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index 22946fe5ded9..a0f2ad9467df 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -117,6 +117,62 @@ config INIT_ENV_ARG_LIMIT
  Maximum of each of the number of arguments and environment
  variables passed to init from the kernel command line.
  
+config HAVE_CMDLINE

+   bool
+
+config CMDLINE_BOOL
+   bool "Default bootloader kernel arguments"
+   depends on HAVE_CMDLINE
+   help
+ On some platforms, there is currently no way for the boot loader to
+ pass arguments to the kernel. For these platforms, you can supply
+ some command-line options at build time by entering them here.  In
+ most cases you will need to specify the root device here.


Why is this needed as well as CMDLINE_FROM_BOOTLOADER? IIUC, the latter
will use CONFIG_CMDLINE if it fails to get anything from the bootloader,
which sounds like the same scenario.


+config CMDLINE
+   string "Initial kernel command string"


s/Initial/Default

which is then consistent with the rest of the text here.


+   depends on CMDLINE_BOOL


Ah, so this is a bit different and I don't think lines-up with the
CMDLINE_BOOL help text.


+   default DEFAULT_CMDLINE
+   help
+ On some platforms, there is currently no way for the boot loader to
+ pass arguments to the kernel. For these platforms, you can supply
+ some command-line options at build time by entering them here.  In
+ most cases you will need to specify the root device here.


(same stale text)


+choice
+   prompt "Kernel command line type" if CMDLINE != ""
+   default CMDLINE_FROM_BOOTLOADER
+   help
+ Selects the way you want to use the default kernel arguments.


How about:

"Determines how the default kernel arguments are combined with any
  arguments passed by the bootloader"


+config CMDLINE_FROM_BOOTLOADER
+   bool "Use bootloader kernel arguments if available"
+   help
+ Uses the command-line options passed by the boot loader. If
+ the boot loader doesn't provide any, the default kernel command
+ string provided in CMDLINE will be used.
+
+config CMDLINE_EXTEND


Can we rename this to CMDLINE_APPEND, please? There is code in the tree
which disagrees about what CMDLINE_EXTEND means, so that will need be
to be updated to be consistent (e.g. the EFI stub parsing order). Having
the generic option with a different name means we won't accidentally end
up with the same inconsistent behaviours.


Argh, yes. Seems like the problem is even larger than that IIUC:

- For ARM it means to append the bootloader arguments to the CONFIG_CMDLINE
- For Powerpc it means to append the CONFIG_CMDLINE to the bootloader arguments
- For SH  it means to append the CONFIG_CMDLINE to the bootloader arguments
- For EFI it means to append the bootloader arguments to the CONFIG_CMDLINE
- For OF it means to append the CONFIG_CMDLINE to the bootloader arguments

So what happens on ARM, for instance, when it selects CONFIG_OF?
Or should we consider that EXTEND means APPEND or PREPEND, no matter which?
Because EXTEND is for instance used for:

config INITRAMFS_FORCE
bool "Ignore the initramfs passed by the bootloader"
depends on CMDLINE_EXTEND || CMDLINE_FORCE


Christophe


[PATCH v2] pseries: prevent free CPU ids to be reused on another node

2021-03-25 Thread Laurent Dufour
When a CPU is hot added, the CPU ids are taken from the available mask,
starting from the lowest possible set. If that set of values was previously
used for CPUs attached to a different node, this appears to applications as
if these CPUs had migrated from one node to another, which is not expected
in real life.

To prevent this, the CPU ids used for each node need to be recorded and not
reused on another node. However, to prevent CPU hot plug from failing when
the CPU ids are exhausted on a node, the capability to reuse other nodes'
free CPU ids is kept. A warning is displayed in such a case to warn the
user.

A new CPU bit mask (node_recorded_ids_map) is introduced for each possible
node. It is populated with the CPUs onlined at boot time, and then whenever
a CPU is hot plugged to a node. The bits in that mask remain set when the
CPU is hot unplugged, to record that these CPU ids have been used for this
node.

If no id set is found, a retry is made without excluding the ids used on
the other nodes, so they can be reused. This is the way ids were allocated
prior to this patch.

The effect of this patch can be seen by removing and adding CPUs using the
QEMU monitor. In the following case, the first CPU of node 2 is removed,
then the first one of node 1 is removed too. Later, the first CPU of node 2
is added back. Without this patch, the kernel numbers these CPUs using the
first available CPU ids, which are the ones freed when removing the first
CPU of node 1. This causes CPU ids 16-23 to move from node 1 to node 2.
With the patch applied, CPU ids 32-39 are used since they are the lowest
free ones which have not been used on another node.

At boot time:
[root@vm40 ~]# numactl -H | grep cpus
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47

Vanilla kernel, after the CPU hot unplug/plug operations:
[root@vm40 ~]# numactl -H | grep cpus
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 1 cpus: 24 25 26 27 28 29 30 31
node 2 cpus: 16 17 18 19 20 21 22 23 40 41 42 43 44 45 46 47

Patched kernel, after the CPU hot unplug/plug operations:
[root@vm40 ~]# numactl -H | grep cpus
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 1 cpus: 24 25 26 27 28 29 30 31
node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47

Changes since V1 (no functional changes):
 - update the test's output in the commit's description
 - node_recorded_ids_map should be static

Signed-off-by: Laurent Dufour 
---
 arch/powerpc/platforms/pseries/hotplug-cpu.c | 83 ++--
 1 file changed, 76 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c 
b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index 12cbffd3c2e3..48c7943b25b0 100644
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -39,6 +39,8 @@
 /* This version can't take the spinlock, because it never returns */
 static int rtas_stop_self_token = RTAS_UNKNOWN_SERVICE;
 
+static cpumask_var_t node_recorded_ids_map[MAX_NUMNODES];
+
 static void rtas_stop_self(void)
 {
static struct rtas_args args;
@@ -151,29 +153,61 @@ static void pseries_cpu_die(unsigned int cpu)
  */
 static int pseries_add_processor(struct device_node *np)
 {
-   unsigned int cpu;
+   unsigned int cpu, node;
cpumask_var_t candidate_mask, tmp;
-   int err = -ENOSPC, len, nthreads, i;
+   int err = -ENOSPC, len, nthreads, i, nid;
const __be32 *intserv;
+   bool force_reusing = false;
 
	intserv = of_get_property(np, "ibm,ppc-interrupt-server#s", &len);
if (!intserv)
return 0;
 
-   zalloc_cpumask_var(&candidate_mask, GFP_KERNEL);
-   zalloc_cpumask_var(&tmp, GFP_KERNEL);
+   alloc_cpumask_var(&candidate_mask, GFP_KERNEL);
+   alloc_cpumask_var(&tmp, GFP_KERNEL);
+
+   /*
+* Fetch from the DT nodes read by dlpar_configure_connector() the NUMA
+* node id the added CPU belongs to.
+*/
+   nid = of_node_to_nid(np);
+   if (nid < 0 || !node_possible(nid))
+   nid = first_online_node;
 
nthreads = len / sizeof(u32);
-   for (i = 0; i < nthreads; i++)
-   cpumask_set_cpu(i, tmp);
 
cpu_maps_update_begin();
 
BUG_ON(!cpumask_subset(cpu_present_mask, cpu_possible_mask));
 
+again:
+   cpumask_clear(candidate_mask);
+   cpumask_clear(tmp);
+   for (i = 0; i < nthreads; i++)
+   cpumask_set_cpu(i, tmp);
+
/* Get a bitmap of unoccupied slots. */
cpumask_xor(candidate_mask, cpu_possible_mask, cpu_present_mask);
+
+   /*
+* Remove free ids previously assigned on the other nodes. We can walk
+* only online nodes because once a node has become online, it is never
+* turned offline again.
+*/
+   if (!force_reusing)
+   for_each_online_node(node) {
+ 

Re: [PATCH V3 -next] powerpc: kernel/time.c - cleanup warnings

2021-03-25 Thread Alexandre Belloni
On 24/03/2021 17:46:19+0800, heying (H) wrote:
> Many thanks for your suggestion. As you suggest, rtc_lock should be local to
> platforms.
> 
> Does it mean not only powerpc but also all other platforms should adapt this
> change?

Not all the other ones, in the current state, x86 still needs it. I'll
work on that. Again, the patch is fine as is.

Reviewed-by: Alexandre Belloni 



-- 
Alexandre Belloni, co-owner and COO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


Re: [PATCH 15/17] iommu: remove DOMAIN_ATTR_NESTING

2021-03-25 Thread Christoph Hellwig
On Thu, Mar 25, 2021 at 06:12:37AM +, Tian, Kevin wrote:
> Agree. The vSVA series is still undergoing a refactor according to Jason's
> comment thus won't be ready in short term. It's better to let this one
> go in first.

Would be great to get a few more reviews while we're at it :)


RE: [PATCH 15/17] iommu: remove DOMAIN_ATTR_NESTING

2021-03-25 Thread Tian, Kevin
> From: Auger Eric
> Sent: Monday, March 15, 2021 3:52 PM
> To: Christoph Hellwig 
> Cc: k...@vger.kernel.org; Will Deacon ; linuxppc-
> d...@lists.ozlabs.org; dri-de...@lists.freedesktop.org; Li Yang
> ; io...@lists.linux-foundation.org;
> 
> Hi Christoph,
> 
> On 3/14/21 4:58 PM, Christoph Hellwig wrote:
> > On Sun, Mar 14, 2021 at 11:44:52AM +0100, Auger Eric wrote:
> >> As mentionned by Robin, there are series planning to use
> >> DOMAIN_ATTR_NESTING to get info about the nested caps of the iommu
> (ARM
> >> and Intel):
> >>
> >> [Patch v8 00/10] vfio: expose virtual Shared Virtual Addressing to VMs
> >> patches 1, 2, 3
> >>
> >> Is the plan to introduce a new domain_get_nesting_info ops then?
> >
> > The plan as usual would be to add it the series adding that support.
> > Not sure what the merge plans are - if the series is ready to be
> > merged I could rebase on top of it, otherwise that series will need
> > to add the method.
> OK I think your series may be upstreamed first.
> 

Agree. The vSVA series is still undergoing a refactor according to Jason's
comment thus won't be ready in short term. It's better to let this one
go in first.

Thanks
Kevin