Re: arch/powerpc/kvm/book3s_hv_nested.c:264:6: error: stack frame size of 2304 bytes in function 'kvmhv_enter_nested_guest'

2021-06-20 Thread Nathan Chancellor

On 6/20/2021 4:59 PM, Nicholas Piggin wrote:

Excerpts from kernel test robot's message of April 3, 2021 8:47 pm:

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   d93a0d43e3d0ba9e19387be4dae4a8d5b175a8d7
commit: 97e4910232fa1f81e806aa60c25a0450276d99a2 linux/compiler-clang.h: define HAVE_BUILTIN_BSWAP*
date:   3 weeks ago
config: powerpc64-randconfig-r006-20210403 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 0fe8af94688aa03c01913c2001d6a1a911f42ce6)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc64 cross compiling tool for clang build
        # apt-get install binutils-powerpc64-linux-gnu
        # https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=97e4910232fa1f81e806aa60c25a0450276d99a2
        git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
        git fetch --no-tags linus master
        git checkout 97e4910232fa1f81e806aa60c25a0450276d99a2
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):


>> arch/powerpc/kvm/book3s_hv_nested.c:264:6: error: stack frame size of 2304 bytes in function 'kvmhv_enter_nested_guest' [-Werror,-Wframe-larger-than=]
   long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
        ^
   1 error generated.


vim +/kvmhv_enter_nested_guest +264 arch/powerpc/kvm/book3s_hv_nested.c


Not much changed here recently. It's not that big a concern because it's
only called in the KVM ioctl path, not in any deep IO paths or anything,
and doesn't recurse. Might be a bit of inlining or stack spilling put it
over the edge.


It appears to be the fact that LLVM's PowerPC backend does not emit 
efficient byteswap assembly:


https://github.com/ClangBuiltLinux/linux/issues/1292

https://bugs.llvm.org/show_bug.cgi?id=49610


powerpc does make it an error though, would be good to avoid that so the
robot doesn't keep tripping over.


Marking byteswap_pt_regs as 'noinline_for_stack' drastically reduces the 
stack usage. If that is an acceptable solution, I can send it along 
tomorrow.


Cheers,
Nathan


Thanks,
Nick




afe75049303f75 Ravi Bangoria        2020-12-16  263
360cae313702cd Paul Mackerras       2018-10-08 @264  long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
360cae313702cd Paul Mackerras       2018-10-08  265  {
360cae313702cd Paul Mackerras       2018-10-08  266          long int err, r;
360cae313702cd Paul Mackerras       2018-10-08  267          struct kvm_nested_guest *l2;
360cae313702cd Paul Mackerras       2018-10-08  268          struct pt_regs l2_regs, saved_l1_regs;
afe75049303f75 Ravi Bangoria        2020-12-16  269          struct hv_guest_state l2_hv = {0}, saved_l1_hv;
360cae313702cd Paul Mackerras       2018-10-08  270          struct kvmppc_vcore *vc = vcpu->arch.vcore;
360cae313702cd Paul Mackerras       2018-10-08  271          u64 hv_ptr, regs_ptr;
360cae313702cd Paul Mackerras       2018-10-08  272          u64 hdec_exp;
360cae313702cd Paul Mackerras       2018-10-08  273          s64 delta_purr, delta_spurr, delta_ic, delta_vtb;
360cae313702cd Paul Mackerras       2018-10-08  274          u64 mask;
360cae313702cd Paul Mackerras       2018-10-08  275          unsigned long lpcr;
360cae313702cd Paul Mackerras       2018-10-08  276
360cae313702cd Paul Mackerras       2018-10-08  277          if (vcpu->kvm->arch.l1_ptcr == 0)
360cae313702cd Paul Mackerras       2018-10-08  278                  return H_NOT_AVAILABLE;
360cae313702cd Paul Mackerras       2018-10-08  279
360cae313702cd Paul Mackerras       2018-10-08  280          /* copy parameters in */
360cae313702cd Paul Mackerras       2018-10-08  281          hv_ptr = kvmppc_get_gpr(vcpu, 4);
1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  282          regs_ptr = kvmppc_get_gpr(vcpu, 5);
1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  283          vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
afe75049303f75 Ravi Bangoria        2020-12-16  284          err = kvmhv_read_guest_state_and_regs(vcpu, &l2_hv, &l2_regs,
afe75049303f75 Ravi Bangoria        2020-12-16  285                                                hv_ptr, regs_ptr);
1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  286          srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
360cae313702cd Paul Mackerras       2018-10-08  287          if (err)
360cae313702cd Paul Mackerras       2018-10-08  288                  return H_PARAMETER;
1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  289
10b5022db7861a Suraj Jitindar Singh 2018-10-08  290          if (kvmppc_need_byteswap(vcpu))
10b5022db7861a Suraj Jitindar Singh 2018-10-08  291                  byteswap_hv_regs(&l2_hv);
afe75049303f75 Ravi Bangoria        2020-12-16  292

Re: [PATCH 0/2] powerpc/perf: Add instruction and data address registers to extended regs

2021-06-20 Thread Nageswara Sastry




On 20/06/21 8:15 pm, Athira Rajeev wrote:

Patch set adds PMU registers namely Sampled Instruction Address Register
(SIAR) and Sampled Data Address Register (SDAR) as part of extended regs
in PowerPC. These registers provide the instruction/data address, and
adding them to the extended regs helps with debugging.

Patch 1/2 adds SIAR and SDAR as part of the extended regs mask.
Patch 2/2 includes perf tools side changes to add the SPRs to
sample_reg_mask to use with -I? option.

Athira Rajeev (2):
   powerpc/perf: Expose instruction and data address registers as part of
 extended regs
   tools/perf: Add perf tools support to expose instruction and data
 address registers as part of extended regs



Tested with the following scenarios on P9 and P10 in a PowerVM environment:
1. perf record -I? - shows the added sdar, siar
2. perf record -I  and perf report -D - shows the added sdar,
siar, with and without counts.


Tested-by: Nageswara R Sastry 



  arch/powerpc/include/uapi/asm/perf_regs.h       | 12 +++-
  arch/powerpc/perf/perf_regs.c                   |  4 ++++
  tools/arch/powerpc/include/uapi/asm/perf_regs.h | 12 +++-
  tools/perf/arch/powerpc/include/perf_regs.h     |  2 ++
  tools/perf/arch/powerpc/util/perf_regs.c        |  2 ++
  5 files changed, 22 insertions(+), 10 deletions(-)



--
Thanks and Regards
R.Nageswara Sastry


[powerpc:next-test] BUILD SUCCESS 41075908e941f30636a607e841c08d7941966e1b

2021-06-20 Thread kernel test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next-test
branch HEAD: 41075908e941f30636a607e841c08d7941966e1b  powerpc: Enable KFENCE on BOOK3S/64

elapsed time: 723m

configs tested: 132
configs skipped: 3

The following configs have been built successfully.
More configs may be tested in the coming days.

gcc tested configs:
arm        defconfig
arm64      allyesconfig
arm64      defconfig
arm        allyesconfig
arm        allmodconfig
powerpc    chrp32_defconfig
arm        lpd270_defconfig
riscv      rv32_defconfig
powerpc    makalu_defconfig
m68k       defconfig
mips       loongson2k_defconfig
powerpc    amigaone_defconfig
powerpc    asp8347_defconfig
powerpc    sbc8548_defconfig
powerpc    tqm8560_defconfig
powerpc    mpc8315_rdb_defconfig
powerpc    mpc5200_defconfig
h8300      allyesconfig
sh         se7724_defconfig
m68k       amcore_defconfig
m68k       mvme147_defconfig
mips       qi_lb60_defconfig
riscv      nommu_k210_sdcard_defconfig
mips       allyesconfig
powerpc    warp_defconfig
powerpc    mpc83xx_defconfig
arm        multi_v5_defconfig
arm        sunxi_defconfig
arm        zeus_defconfig
sh         sh03_defconfig
powerpc    ppc64e_defconfig
powerpc    mpc7448_hpc2_defconfig
ia64       bigsur_defconfig
sh         sh7710voipgw_defconfig
sh         espt_defconfig
powerpc    fsp2_defconfig
arc        hsdk_defconfig
csky       defconfig
powerpc    kilauea_defconfig
arm        eseries_pxa_defconfig
arm        tct_hammer_defconfig
sparc64    defconfig
riscv      defconfig
nios2      alldefconfig
powerpc    mpc8540_ads_defconfig
xtensa     smp_lx200_defconfig
arm        lpc32xx_defconfig
powerpc    stx_gp3_defconfig
sh         rsk7203_defconfig
arm        aspeed_g5_defconfig
powerpc    mpc8560_ads_defconfig
arm        mvebu_v7_defconfig
sh         kfr2r09-romimage_defconfig
m68k       m5475evb_defconfig
sh         r7780mp_defconfig
powerpc    arches_defconfig
arm        dove_defconfig
x86_64     allnoconfig
ia64       allmodconfig
ia64       defconfig
ia64       allyesconfig
m68k       allmodconfig
m68k       allyesconfig
nios2      defconfig
arc        allyesconfig
nds32      allnoconfig
nds32      defconfig
nios2      allyesconfig
alpha      defconfig
alpha      allyesconfig
xtensa     allyesconfig
arc        defconfig
sh         allmodconfig
parisc     defconfig
s390       allyesconfig
s390       allmodconfig
parisc     allyesconfig
s390       defconfig
i386       allyesconfig
sparc      allyesconfig
sparc      defconfig
i386       defconfig
mips       allmodconfig
powerpc    allyesconfig
powerpc    allmodconfig
powerpc    allnoconfig
i386       randconfig-a001-20210620
i386       randconfig-a002-20210620
i386       randconfig-a003-20210620
i386       randconfig-a006-20210620
i386       randconfig-a005-20210620
i386       randconfig-a004-20210620
x86_64     randconfig-a012-20210620
x86_64     randconfig-a016-20210620
x86_64     randconfig-a015-20210620
x86_64     randconfig-a014-20210620
x86_64     randconfig-a013-20210620
x86_64     randconfig-a011-20210620
i386       randconfig-a011-20210620
i386       randconfig-a014-20210620
i386       randconfig

Re: [RESEND PATCH v4 08/11] powerpc: Initialize and use a temporary mm for patching

2021-06-20 Thread Daniel Axtens
Hi Chris,

> + /*
> +  * Choose a randomized, page-aligned address from the range:
> +  * [PAGE_SIZE, DEFAULT_MAP_WINDOW - PAGE_SIZE]
> +  * The lower address bound is PAGE_SIZE to avoid the zero-page.
> +  * The upper address bound is DEFAULT_MAP_WINDOW - PAGE_SIZE to stay
> +  * under DEFAULT_MAP_WINDOW with the Book3s64 Hash MMU.
> +  */
> + patching_addr = PAGE_SIZE + ((get_random_long() & PAGE_MASK)
> + % (DEFAULT_MAP_WINDOW - 2 * PAGE_SIZE));

I checked and poking_init() comes after the functions that init the RNG,
so this should be fine. The maths - while a bit fiddly to reason about -
does check out.

> +
> + /*
> +  * PTE allocation uses GFP_KERNEL which means we need to pre-allocate
> +  * the PTE here. We cannot do the allocation during patching with IRQs
> +  * disabled (ie. "atomic" context).
> +  */
> +	ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
> + BUG_ON(!ptep);
> + pte_unmap_unlock(ptep, ptl);
> +}
>  
>  #if IS_BUILTIN(CONFIG_LKDTM)
>  unsigned long read_cpu_patching_addr(unsigned int cpu)
>  {
> - return (unsigned long)(per_cpu(text_poke_area, cpu))->addr;
> + return patching_addr;
>  }
>  #endif
>  
> -static int text_area_cpu_up(unsigned int cpu)
> +struct patch_mapping {
> + spinlock_t *ptl; /* for protecting pte table */
> + pte_t *ptep;
> + struct temp_mm temp_mm;
> +};
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +
> +static inline int hash_prefault_mapping(pgprot_t pgprot)
>  {
> - struct vm_struct *area;
> + int err;
>  
> - area = get_vm_area(PAGE_SIZE, VM_ALLOC);
> - if (!area) {
> - WARN_ONCE(1, "Failed to create text area for cpu %d\n",
> - cpu);
> - return -1;
> - }
> - this_cpu_write(text_poke_area, area);
> + if (radix_enabled())
> + return 0;
>  
> - return 0;
> -}
> + err = slb_allocate_user(patching_mm, patching_addr);
> + if (err)
> + pr_warn("map patch: failed to allocate slb entry\n");
>  

Here if slb_allocate_user() fails, you'll print a warning and then fall
through to the rest of the function. You do return err, but there's a
later call to hash_page_mm() that also sets err. Can slb_allocate_user()
fail while hash_page_mm() succeeds, and would that be a problem?

> -static int text_area_cpu_down(unsigned int cpu)
> -{
> - free_vm_area(this_cpu_read(text_poke_area));
> - return 0;
> + err = hash_page_mm(patching_mm, patching_addr, pgprot_val(pgprot), 0,
> +HPTE_USE_KERNEL_KEY);
> + if (err)
> + pr_warn("map patch: failed to insert hashed page\n");
> +
> + /* See comment in switch_slb() in mm/book3s64/slb.c */
> + isync();
> +

The comment reads:

/*
 * Synchronize slbmte preloads with possible subsequent user memory
 * address accesses by the kernel (user mode won't happen until
 * rfid, which is safe).
 */
 isync();

I have to say having read the description of isync I'm not 100% sure why
that's enough (don't we also need stores to complete?) but I'm happy to
take commit 5434ae74629a ("powerpc/64s/hash: Add a SLB preload cache")
on trust here!

I think it does make sense for you to have that barrier here: you are
potentially about to start poking at the memory mapped through that SLB
entry so you should make sure you're fully synchronised.

> + return err;
>  }
>  

> +	init_temp_mm(&patch_mapping->temp_mm, patching_mm);
> +	use_temporary_mm(&patch_mapping->temp_mm);
>  
> - pmdp = pmd_offset(pudp, addr);
> - if (unlikely(!pmdp))
> - return -EINVAL;
> + /*
> +  * On Book3s64 with the Hash MMU we have to manually insert the SLB
> +  * entry and HPTE to prevent taking faults on the patching_addr later.
> +  */
> + return(hash_prefault_mapping(pgprot));

hmm, `return hash_prefault_mapping(pgprot);` or
`return (hash_prefault_mapping(pgprot));` maybe?

Kind regards,
Daniel


Re: [RESEND PATCH v4 05/11] powerpc/64s: Add ability to skip SLB preload

2021-06-20 Thread Daniel Axtens
"Christopher M. Riedl"  writes:

> Switching to a different mm with Hash translation causes SLB entries to
> be preloaded from the current thread_info. This reduces SLB faults, for
> example when threads share a common mm but operate on different address
> ranges.
>
> Preloading entries from the thread_info struct may not always be
> appropriate - such as when switching to a temporary mm. Introduce a new
> boolean in mm_context_t to skip the SLB preload entirely. Also move the
> SLB preload code into a separate function since switch_slb() is already
> quite long. The default behavior (preloading SLB entries from the
> current thread_info struct) remains unchanged.
>
> Signed-off-by: Christopher M. Riedl 
>
> ---
>
> v4:  * New to series.
> ---
>  arch/powerpc/include/asm/book3s/64/mmu.h |  3 ++
>  arch/powerpc/include/asm/mmu_context.h   | 13 ++
>  arch/powerpc/mm/book3s64/mmu_context.c   |  2 +
>  arch/powerpc/mm/book3s64/slb.c   | 56 ++--
>  4 files changed, 50 insertions(+), 24 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
> b/arch/powerpc/include/asm/book3s/64/mmu.h
> index eace8c3f7b0a1..b23a9dcdee5af 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -130,6 +130,9 @@ typedef struct {
>   u32 pkey_allocation_map;
>   s16 execute_only_pkey; /* key holding execute-only protection */
>  #endif
> +
> + /* Do not preload SLB entries from thread_info during switch_slb() */
> + bool skip_slb_preload;
>  } mm_context_t;
>  
>  static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
> diff --git a/arch/powerpc/include/asm/mmu_context.h 
> b/arch/powerpc/include/asm/mmu_context.h
> index 4bc45d3ed8b0e..264787e90b1a1 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -298,6 +298,19 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
>   return 0;
>  }
>  
> +#ifdef CONFIG_PPC_BOOK3S_64
> +
> +static inline void skip_slb_preload_mm(struct mm_struct *mm)
> +{
> + mm->context.skip_slb_preload = true;
> +}
> +
> +#else
> +
> +static inline void skip_slb_preload_mm(struct mm_struct *mm) {}
> +
> +#endif /* CONFIG_PPC_BOOK3S_64 */
> +
>  #include 
>  
>  #endif /* __KERNEL__ */
> diff --git a/arch/powerpc/mm/book3s64/mmu_context.c 
> b/arch/powerpc/mm/book3s64/mmu_context.c
> index c10fc8a72fb37..3479910264c59 100644
> --- a/arch/powerpc/mm/book3s64/mmu_context.c
> +++ b/arch/powerpc/mm/book3s64/mmu_context.c
> @@ -202,6 +202,8 @@ int init_new_context(struct task_struct *tsk, struct 
> mm_struct *mm)
>   atomic_set(&mm->context.active_cpus, 0);
>   atomic_set(&mm->context.copros, 0);
>  
> + mm->context.skip_slb_preload = false;
> +
>   return 0;
>  }
>  
> diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
> index c91bd85eb90e3..da0836cb855af 100644
> --- a/arch/powerpc/mm/book3s64/slb.c
> +++ b/arch/powerpc/mm/book3s64/slb.c
> @@ -441,10 +441,39 @@ static void slb_cache_slbie_user(unsigned int index)
>   asm volatile("slbie %0" : : "r" (slbie_data));
>  }
>  
> +static void preload_slb_entries(struct task_struct *tsk, struct mm_struct *mm)

Should this be explicitly inline or even __always_inline? I'm thinking
switch_slb is probably a fairly hot path on hash?

> +{
> + struct thread_info *ti = task_thread_info(tsk);
> + unsigned char i;
> +
> + /*
> +  * We gradually age out SLBs after a number of context switches to
> +  * reduce reload overhead of unused entries (like we do with FP/VEC
> +  * reload). Each time we wrap 256 switches, take an entry out of the
> +  * SLB preload cache.
> +  */
> + tsk->thread.load_slb++;
> + if (!tsk->thread.load_slb) {
> + unsigned long pc = KSTK_EIP(tsk);
> +
> + preload_age(ti);
> + preload_add(ti, pc);
> + }
> +
> + for (i = 0; i < ti->slb_preload_nr; i++) {
> + unsigned char idx;
> + unsigned long ea;
> +
> + idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
> + ea = (unsigned long)ti->slb_preload_esid[idx] << SID_SHIFT;
> +
> + slb_allocate_user(mm, ea);
> + }
> +}
> +
>  /* Flush all user entries from the segment table of the current processor. */
>  void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>  {
> - struct thread_info *ti = task_thread_info(tsk);
>   unsigned char i;
>  
>   /*
> @@ -502,29 +531,8 @@ void switch_slb(struct task_struct *tsk, struct 
> mm_struct *mm)
>  
>   copy_mm_to_paca(mm);
>  
> - /*
> -  * We gradually age out SLBs after a number of context switches to
> -  * reduce reload overhead of unused entries (like we do with FP/VEC
> -  * reload). Each time we wrap 256 switches, take an entry out of the
> -  * SLB preload cache.
> -  */
> - tsk->thread.load_slb++;
> - if (!tsk->thread.load_slb) {
> -   

Re: [PATCH] watchdog: Remove MV64x60 watchdog driver

2021-06-20 Thread gituser
Hi All,

On Mon, Jun 07, 2021 at 04:29:50AM -0700, Guenter Roeck wrote:
> On Mon, Jun 07, 2021 at 11:43:26AM +1000, Michael Ellerman wrote:
> > Guenter Roeck  writes:
> > > On 5/17/21 4:17 AM, Michael Ellerman wrote:
> > >> Guenter Roeck  writes:
> > >>> On 3/18/21 10:25 AM, Christophe Leroy wrote:
> >  Commit 92c8c16f3457 ("powerpc/embedded6xx: Remove C2K board support")
> >  removed the last selector of CONFIG_MV64X60.
> > 
> >  Therefore CONFIG_MV64X60_WDT cannot be selected anymore and
> >  can be removed.
> > 
> >  Signed-off-by: Christophe Leroy 
> > >>>
> > >>> Reviewed-by: Guenter Roeck 
> > >>>
> >  ---
> >    drivers/watchdog/Kconfig   |   4 -
> >    drivers/watchdog/Makefile  |   1 -
> >    drivers/watchdog/mv64x60_wdt.c | 324 
> >  -
> >    include/linux/mv643xx.h|   8 -
> >    4 files changed, 337 deletions(-)
> >    delete mode 100644 drivers/watchdog/mv64x60_wdt.c
> > >> 
> > >> I assumed this would go via the watchdog tree, but seems like I
> > >> misinterpreted.
> > >> 
> > >
> > > Wim didn't send a pull request this time around.
> > >
> > > Guenter
> > >
> > >> Should I take this via the powerpc tree for v5.14 ?
> > 
> > I still don't see this in the watchdog tree, should I take it?
> > 
> It is in my personal watchdog-next tree, but afaics Wim hasn't picked any
> of it up yet. Wim ?

Picking it up right now.

Kind regards,
Wim.



Re: arch/powerpc/kvm/book3s_hv_nested.c:264:6: error: stack frame size of 2304 bytes in function 'kvmhv_enter_nested_guest'

2021-06-20 Thread Nicholas Piggin
Excerpts from kernel test robot's message of April 3, 2021 8:47 pm:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> head:   d93a0d43e3d0ba9e19387be4dae4a8d5b175a8d7
> commit: 97e4910232fa1f81e806aa60c25a0450276d99a2 linux/compiler-clang.h: define HAVE_BUILTIN_BSWAP*
> date:   3 weeks ago
> config: powerpc64-randconfig-r006-20210403 (attached as .config)
> compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 0fe8af94688aa03c01913c2001d6a1a911f42ce6)
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # install powerpc64 cross compiling tool for clang build
>         # apt-get install binutils-powerpc64-linux-gnu
>         # https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=97e4910232fa1f81e806aa60c25a0450276d99a2
>         git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
>         git fetch --no-tags linus master
>         git checkout 97e4910232fa1f81e806aa60c25a0450276d99a2
>         # save the attached .config to linux build tree
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc64
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot 
> 
> All errors (new ones prefixed by >>):
> 
>>> arch/powerpc/kvm/book3s_hv_nested.c:264:6: error: stack frame size of 2304 bytes in function 'kvmhv_enter_nested_guest' [-Werror,-Wframe-larger-than=]
>    long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
>         ^
>    1 error generated.
> 
> 
> vim +/kvmhv_enter_nested_guest +264 arch/powerpc/kvm/book3s_hv_nested.c

Not much changed here recently. It's not that big a concern because it's 
only called in the KVM ioctl path, not in any deep IO paths or anything,
and doesn't recurse. Might be a bit of inlining or stack spilling put it
over the edge.

powerpc does make it an error though, would be good to avoid that so the
robot doesn't keep tripping over.

Thanks,
Nick


> 
> afe75049303f75 Ravi Bangoria        2020-12-16  263
> 360cae313702cd Paul Mackerras       2018-10-08 @264  long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
> 360cae313702cd Paul Mackerras       2018-10-08  265  {
> 360cae313702cd Paul Mackerras       2018-10-08  266          long int err, r;
> 360cae313702cd Paul Mackerras       2018-10-08  267          struct kvm_nested_guest *l2;
> 360cae313702cd Paul Mackerras       2018-10-08  268          struct pt_regs l2_regs, saved_l1_regs;
> afe75049303f75 Ravi Bangoria        2020-12-16  269          struct hv_guest_state l2_hv = {0}, saved_l1_hv;
> 360cae313702cd Paul Mackerras       2018-10-08  270          struct kvmppc_vcore *vc = vcpu->arch.vcore;
> 360cae313702cd Paul Mackerras       2018-10-08  271          u64 hv_ptr, regs_ptr;
> 360cae313702cd Paul Mackerras       2018-10-08  272          u64 hdec_exp;
> 360cae313702cd Paul Mackerras       2018-10-08  273          s64 delta_purr, delta_spurr, delta_ic, delta_vtb;
> 360cae313702cd Paul Mackerras       2018-10-08  274          u64 mask;
> 360cae313702cd Paul Mackerras       2018-10-08  275          unsigned long lpcr;
> 360cae313702cd Paul Mackerras       2018-10-08  276
> 360cae313702cd Paul Mackerras       2018-10-08  277          if (vcpu->kvm->arch.l1_ptcr == 0)
> 360cae313702cd Paul Mackerras       2018-10-08  278                  return H_NOT_AVAILABLE;
> 360cae313702cd Paul Mackerras       2018-10-08  279
> 360cae313702cd Paul Mackerras       2018-10-08  280          /* copy parameters in */
> 360cae313702cd Paul Mackerras       2018-10-08  281          hv_ptr = kvmppc_get_gpr(vcpu, 4);
> 1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  282          regs_ptr = kvmppc_get_gpr(vcpu, 5);
> 1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  283          vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
> afe75049303f75 Ravi Bangoria        2020-12-16  284          err = kvmhv_read_guest_state_and_regs(vcpu, &l2_hv, &l2_regs,
> afe75049303f75 Ravi Bangoria        2020-12-16  285                                                hv_ptr, regs_ptr);
> 1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  286          srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
> 360cae313702cd Paul Mackerras       2018-10-08  287          if (err)
> 360cae313702cd Paul Mackerras       2018-10-08  288                  return H_PARAMETER;
> 1508c22f112ce1 Alexey Kardashevskiy 2020-06-09  289
> 10b5022db7861a Suraj Jitindar Singh 2018-10-08  290          if (kvmppc_need_byteswap(vcpu))
> 10b5022db7861a Suraj Jitindar Singh 2018-10-08  291                  byteswap_hv_regs(&l2_hv);
> afe75049303f75 Ravi Bangoria        2020-12-16  292          if (l2_hv.version > HV_GUEST_STATE_VERSION)
> 360cae313702cd Paul Mackerras       2018-10-08  293                  return H_P2;
> 360cae313702cd Paul Mackerras       2018-10-08  294
> 10b5022db7861a Suraj Jitindar Singh 2018-10-08  295          if (kvmppc_need_byteswap(vcpu))
> 10b5022db7861a Suraj 

Re: [PATCH v2 2/9] powerpc: Add Microwatt device tree

2021-06-20 Thread Paul Mackerras
On Sat, Jun 19, 2021 at 09:26:16AM -0500, Segher Boessenkool wrote:
> On Fri, Jun 18, 2021 at 01:44:16PM +1000, Paul Mackerras wrote:
> > Microwatt currently runs with MSR[HV] = 0,
> 
> That isn't compliant though?  If your implementation does not have LPAR
> it must set MSR[HV]=1 always.

True - but if I actually do that, Linux starts trying to use hrfid
(for example in masked_Hinterrupt), which Microwatt doesn't have.
Something for Nick to fix. :)

Paul.


Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.13-6 tag

2021-06-20 Thread pr-tracker-bot
The pull request you sent on Sun, 20 Jun 2021 09:40:38 +1000:

> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git tags/powerpc-5.13-6

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/b84a7c286cecf0604a5f8bd5dfcd5e1ca7233e15

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


[PATCH 0/2] powerpc/perf: Add instruction and data address registers to extended regs

2021-06-20 Thread Athira Rajeev
Patch set adds PMU registers namely Sampled Instruction Address Register
(SIAR) and Sampled Data Address Register (SDAR) as part of extended regs
in PowerPC. These registers provide the instruction/data address, and
adding them to the extended regs helps with debugging.

Patch 1/2 adds SIAR and SDAR as part of the extended regs mask.
Patch 2/2 includes perf tools side changes to add the SPRs to
sample_reg_mask to use with -I? option.

Athira Rajeev (2):
  powerpc/perf: Expose instruction and data address registers as part of
extended regs
  tools/perf: Add perf tools support to expose instruction and data
address registers as part of extended regs

 arch/powerpc/include/uapi/asm/perf_regs.h       | 12 +++-
 arch/powerpc/perf/perf_regs.c                   |  4 ++++
 tools/arch/powerpc/include/uapi/asm/perf_regs.h | 12 +++-
 tools/perf/arch/powerpc/include/perf_regs.h     |  2 ++
 tools/perf/arch/powerpc/util/perf_regs.c        |  2 ++
 5 files changed, 22 insertions(+), 10 deletions(-)

-- 
1.8.3.1



[PATCH 1/2] powerpc/perf: Expose instruction and data address registers as part of extended regs

2021-06-20 Thread Athira Rajeev
Patch adds support to include Sampled Instruction Address Register
(SIAR) and Sampled Data Address Register (SDAR) SPRs as part of extended
registers. Update the definition of PERF_REG_PMU_MASK_300/31 and
PERF_REG_EXTENDED_MAX to include these SPRs.

Signed-off-by: Athira Rajeev 
---
 arch/powerpc/include/uapi/asm/perf_regs.h | 12 +++-
 arch/powerpc/perf/perf_regs.c             |  4 ++++
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/uapi/asm/perf_regs.h 
b/arch/powerpc/include/uapi/asm/perf_regs.h
index 578b3ee..cf5eee5 100644
--- a/arch/powerpc/include/uapi/asm/perf_regs.h
+++ b/arch/powerpc/include/uapi/asm/perf_regs.h
@@ -61,6 +61,8 @@ enum perf_event_powerpc_regs {
PERF_REG_POWERPC_PMC4,
PERF_REG_POWERPC_PMC5,
PERF_REG_POWERPC_PMC6,
+   PERF_REG_POWERPC_SDAR,
+   PERF_REG_POWERPC_SIAR,
/* Max regs without the extended regs */
PERF_REG_POWERPC_MAX = PERF_REG_POWERPC_MMCRA + 1,
 };
@@ -72,16 +74,16 @@ enum perf_event_powerpc_regs {
 
 /*
  * PERF_REG_EXTENDED_MASK value for CPU_FTR_ARCH_300
- * includes 9 SPRS from MMCR0 to PMC6 excluding the
+ * includes 11 SPRS from MMCR0 to SIAR excluding the
  * unsupported SPRS in PERF_EXCLUDE_REG_EXT_300.
  */
-#define PERF_REG_PMU_MASK_300   ((0xfffULL << PERF_REG_POWERPC_MMCR0) - PERF_EXCLUDE_REG_EXT_300)
+#define PERF_REG_PMU_MASK_300   ((0x3fffULL << PERF_REG_POWERPC_MMCR0) - PERF_EXCLUDE_REG_EXT_300)
 
 /*
  * PERF_REG_EXTENDED_MASK value for CPU_FTR_ARCH_31
- * includes 12 SPRs from MMCR0 to PMC6.
+ * includes 14 SPRs from MMCR0 to SIAR.
  */
-#define PERF_REG_PMU_MASK_31   (0xfffULL << PERF_REG_POWERPC_MMCR0)
+#define PERF_REG_PMU_MASK_31   (0x3fffULL << PERF_REG_POWERPC_MMCR0)
 
-#define PERF_REG_EXTENDED_MAX  (PERF_REG_POWERPC_PMC6 + 1)
+#define PERF_REG_EXTENDED_MAX  (PERF_REG_POWERPC_SIAR + 1)
 #endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */
diff --git a/arch/powerpc/perf/perf_regs.c b/arch/powerpc/perf/perf_regs.c
index b931eed..51d31b6 100644
--- a/arch/powerpc/perf/perf_regs.c
+++ b/arch/powerpc/perf/perf_regs.c
@@ -90,7 +90,11 @@ static u64 get_ext_regs_value(int idx)
return mfspr(SPRN_SIER2);
case PERF_REG_POWERPC_SIER3:
return mfspr(SPRN_SIER3);
+   case PERF_REG_POWERPC_SDAR:
+   return mfspr(SPRN_SDAR);
 #endif
+   case PERF_REG_POWERPC_SIAR:
+   return mfspr(SPRN_SIAR);
default: return 0;
}
 }
-- 
1.8.3.1



[PATCH 2/2] tools/perf: Add perf tools support to expose instruction and data address registers as part of extended regs

2021-06-20 Thread Athira Rajeev
Patch enables presenting the Sampled Instruction Address Register (SIAR)
and Sampled Data Address Register (SDAR) SPRs as part of extended registers
for the perf tool. Add these SPRs to sample_reg_mask on the tool side (to
use with the -I? option).

Signed-off-by: Athira Rajeev 
---
 tools/arch/powerpc/include/uapi/asm/perf_regs.h | 12 +++-
 tools/perf/arch/powerpc/include/perf_regs.h |  2 ++
 tools/perf/arch/powerpc/util/perf_regs.c        |  2 ++
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/tools/arch/powerpc/include/uapi/asm/perf_regs.h 
b/tools/arch/powerpc/include/uapi/asm/perf_regs.h
index 578b3ee..cf5eee5 100644
--- a/tools/arch/powerpc/include/uapi/asm/perf_regs.h
+++ b/tools/arch/powerpc/include/uapi/asm/perf_regs.h
@@ -61,6 +61,8 @@ enum perf_event_powerpc_regs {
PERF_REG_POWERPC_PMC4,
PERF_REG_POWERPC_PMC5,
PERF_REG_POWERPC_PMC6,
+   PERF_REG_POWERPC_SDAR,
+   PERF_REG_POWERPC_SIAR,
/* Max regs without the extended regs */
PERF_REG_POWERPC_MAX = PERF_REG_POWERPC_MMCRA + 1,
 };
@@ -72,16 +74,16 @@ enum perf_event_powerpc_regs {
 
 /*
  * PERF_REG_EXTENDED_MASK value for CPU_FTR_ARCH_300
- * includes 9 SPRS from MMCR0 to PMC6 excluding the
+ * includes 11 SPRS from MMCR0 to SIAR excluding the
  * unsupported SPRS in PERF_EXCLUDE_REG_EXT_300.
  */
-#define PERF_REG_PMU_MASK_300   ((0xfffULL << PERF_REG_POWERPC_MMCR0) - PERF_EXCLUDE_REG_EXT_300)
+#define PERF_REG_PMU_MASK_300   ((0x3fffULL << PERF_REG_POWERPC_MMCR0) - PERF_EXCLUDE_REG_EXT_300)
 
 /*
  * PERF_REG_EXTENDED_MASK value for CPU_FTR_ARCH_31
- * includes 12 SPRs from MMCR0 to PMC6.
+ * includes 14 SPRs from MMCR0 to SIAR.
  */
-#define PERF_REG_PMU_MASK_31   (0xfffULL << PERF_REG_POWERPC_MMCR0)
+#define PERF_REG_PMU_MASK_31   (0x3fffULL << PERF_REG_POWERPC_MMCR0)
 
-#define PERF_REG_EXTENDED_MAX  (PERF_REG_POWERPC_PMC6 + 1)
+#define PERF_REG_EXTENDED_MAX  (PERF_REG_POWERPC_SIAR + 1)
 #endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */
diff --git a/tools/perf/arch/powerpc/include/perf_regs.h 
b/tools/perf/arch/powerpc/include/perf_regs.h
index 04e5dc0..93339d1 100644
--- a/tools/perf/arch/powerpc/include/perf_regs.h
+++ b/tools/perf/arch/powerpc/include/perf_regs.h
@@ -77,6 +77,8 @@
[PERF_REG_POWERPC_PMC4] = "pmc4",
[PERF_REG_POWERPC_PMC5] = "pmc5",
[PERF_REG_POWERPC_PMC6] = "pmc6",
+   [PERF_REG_POWERPC_SDAR] = "sdar",
+   [PERF_REG_POWERPC_SIAR] = "siar",
 };
 
 static inline const char *__perf_reg_name(int id)
diff --git a/tools/perf/arch/powerpc/util/perf_regs.c 
b/tools/perf/arch/powerpc/util/perf_regs.c
index 8116a25..8d07a78 100644
--- a/tools/perf/arch/powerpc/util/perf_regs.c
+++ b/tools/perf/arch/powerpc/util/perf_regs.c
@@ -74,6 +74,8 @@
SMPL_REG(pmc4, PERF_REG_POWERPC_PMC4),
SMPL_REG(pmc5, PERF_REG_POWERPC_PMC5),
SMPL_REG(pmc6, PERF_REG_POWERPC_PMC6),
+   SMPL_REG(sdar, PERF_REG_POWERPC_SDAR),
+   SMPL_REG(siar, PERF_REG_POWERPC_SIAR),
SMPL_REG_END
 };
 
-- 
1.8.3.1



[powerpc:merge] BUILD SUCCESS 7f030e9d57b8ff6025bde4162f42378e6081126a

2021-06-20 Thread kernel test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git merge
branch HEAD: 7f030e9d57b8ff6025bde4162f42378e6081126a  Automatic merge of 'fixes' into merge (2021-06-20 09:27)

elapsed time: 723m

configs tested: 114
configs skipped: 2

The following configs have been built successfully.
More configs may be tested in the coming days.

gcc tested configs:
arm          defconfig
arm          allyesconfig
arm          allmodconfig
arm64        allyesconfig
arm64        defconfig
sh           se7206_defconfig
sh           apsh4a3a_defconfig
sparc        sparc64_defconfig
arm          mv78xx0_defconfig
sh           migor_defconfig
powerpc      akebono_defconfig
powerpc      klondike_defconfig
arm          neponset_defconfig
powerpc      icon_defconfig
powerpc      mpc834x_itx_defconfig
arm          at91_dt_defconfig
x86_64       allnoconfig
sh           ul2_defconfig
arm          sunxi_defconfig
arm          imote2_defconfig
mips         loongson2k_defconfig
mips         malta_qemu_32r6_defconfig
sh           secureedge5410_defconfig
powerpc      cell_defconfig
powerpc      maple_defconfig
arm          stm32_defconfig
mips         rs90_defconfig
arm          assabet_defconfig
arm          davinci_all_defconfig
arc          haps_hs_defconfig
arm          mvebu_v5_defconfig
sparc64      alldefconfig
sh           sh7785lcr_32bit_defconfig
arm          shannon_defconfig
powerpc      tqm8548_defconfig
sh           landisk_defconfig
powerpc      ppc6xx_defconfig
mips         decstation_r4k_defconfig
powerpc      mpc8540_ads_defconfig
parisc       generic-64bit_defconfig
m68k         amcore_defconfig
ia64         allmodconfig
ia64         defconfig
ia64         allyesconfig
m68k         allmodconfig
m68k         defconfig
m68k         allyesconfig
nds32        defconfig
nios2        allyesconfig
csky         defconfig
alpha        defconfig
alpha        allyesconfig
xtensa       allyesconfig
h8300        allyesconfig
arc          defconfig
sh           allmodconfig
parisc       defconfig
s390         allyesconfig
s390         allmodconfig
parisc       allyesconfig
s390         defconfig
i386         allyesconfig
sparc        allyesconfig
sparc        defconfig
i386         defconfig
nios2        defconfig
arc          allyesconfig
nds32        allnoconfig
mips         allyesconfig
mips         allmodconfig
powerpc      allyesconfig
powerpc      allmodconfig
powerpc      allnoconfig
i386         randconfig-a001-20210620
i386         randconfig-a002-20210620
i386         randconfig-a003-20210620
i386         randconfig-a006-20210620
i386         randconfig-a005-20210620
i386         randconfig-a004-20210620
x86_64       randconfig-a016-20210620
x86_64       randconfig-a015-20210620
x86_64       randconfig-a014-20210620
x86_64       randconfig-a012-20210620
x86_64       randconfig-a013-20210620
x86_64       randconfig-a011-20210620
i386         randconfig-a011-20210620
i386         randconfig-a014-20210620
i386         randconfig-a013-20210620
i386         randconfig-a015-20210620
i386         randconfig-a012-20210620
i386         randconfig-a016-20210620
riscv        nommu_k210_defconfig
riscv        nommu_virt_defconfig
riscv        rv32_defconfig
riscv        allyesconfig
riscv        allnoconfig
riscv        defconfig
riscv        allmodconfig
um           x86_64_defconfig
um           i386_defconfig
um

[powerpc:next-test] BUILD REGRESSION 77ba1e2abc7474c5321cbf8d90366ec69150d0a2

2021-06-20 Thread kernel test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next-test
branch HEAD: 77ba1e2abc7474c5321cbf8d90366ec69150d0a2  powerpc: Enable KFENCE on BOOK3S/64

Error/Warning in current branch:

arch/powerpc/kernel/interrupt.c:36:20: error: unused function 'exit_must_hard_disable' [-Werror,-Wunused-function]
arch/powerpc/lib/code-patching.c:76:12: error: no previous prototype for 'poking_init' [-Werror=missing-prototypes]
arch/powerpc/lib/code-patching.c:76:12: warning: no previous prototype for 'poking_init' [-Wmissing-prototypes]

Error/Warning ids grouped by kconfigs:

gcc_recent_errors
|-- powerpc-mpc885_ads_defconfig
|   `-- arch-powerpc-lib-code-patching.c:error:no-previous-prototype-for-poking_init
`-- powerpc64-randconfig-r011-20210620
`-- arch-powerpc-lib-code-patching.c:warning:no-previous-prototype-for-poking_init

clang_recent_errors
`-- powerpc-randconfig-r034-20210620
`-- arch-powerpc-kernel-interrupt.c:error:unused-function-exit_must_hard_disable-Werror-Wunused-function

elapsed time: 723m

configs tested: 108
configs skipped: 5

gcc tested configs:
arm64        defconfig
arm          defconfig
arm64        allyesconfig
arm          allyesconfig
arm          allmodconfig
powerpc      mpc885_ads_defconfig
sh           j2_defconfig
sh           se7712_defconfig
mips         rb532_defconfig
xtensa       common_defconfig
powerpc      akebono_defconfig
powerpc      klondike_defconfig
arm          neponset_defconfig
powerpc      icon_defconfig
powerpc      obs600_defconfig
arm          aspeed_g5_defconfig
powerpc      skiroot_defconfig
arm          imote2_defconfig
powerpc      linkstation_defconfig
powerpc      currituck_defconfig
microblaze   mmu_defconfig
mips         loongson2k_defconfig
mips         malta_qemu_32r6_defconfig
sh           secureedge5410_defconfig
powerpc      cell_defconfig
sh           urquell_defconfig
x86_64       allyesconfig
arm          imx_v6_v7_defconfig
arm          pxa168_defconfig
mips         decstation_r4k_defconfig
arm          davinci_all_defconfig
powerpc      redwood_defconfig
ia64         bigsur_defconfig
powerpc      mpc834x_mds_defconfig
mips         nlm_xlp_defconfig
x86_64       allnoconfig
ia64         allmodconfig
ia64         defconfig
ia64         allyesconfig
m68k         allmodconfig
m68k         defconfig
m68k         allyesconfig
nds32        defconfig
nios2        allyesconfig
csky         defconfig
alpha        defconfig
alpha        allyesconfig
xtensa       allyesconfig
h8300        allyesconfig
arc          defconfig
sh           allmodconfig
parisc       defconfig
s390         allyesconfig
s390         allmodconfig
parisc       allyesconfig
s390         defconfig
nios2        defconfig
arc          allyesconfig
nds32        allnoconfig
i386         allyesconfig
sparc        allyesconfig
sparc        defconfig
i386         defconfig
mips         allyesconfig
mips         allmodconfig
powerpc      allyesconfig
powerpc      allmodconfig
powerpc      allnoconfig
i386         randconfig-a001-20210620
i386         randconfig-a002-20210620
i386         randconfig-a003-20210620
i386         randconfig-a006-20210620
i386         randconfig-a005-20210620
i386         randconfig-a004-20210620
x86_64       randconfig-a012-20210620
x86_64       randconfig-a014-20210620
x86_64       randconfig-a013-20210620
x86_64       randconfig-a011-20210620
x86_64       randconfig-a016-20210620
x86_64       randconfig-a015-20210620
i386         randconfig-a011-20210620
i386         randconfig-a014-20210620
i386         randconfig-a013-20210620
i386         randconfig-a012-20210620
i386

[powerpc:fixes-test] BUILD SUCCESS 60b7ed54a41b550d50caf7f2418db4a7e75b5bdc

2021-06-20 Thread kernel test robot
i386         randconfig-a001-20210620
i386 randconfig-a002-20210620
i386 randconfig-a003-20210620
i386 randconfig-a005-20210620
i386 randconfig-a004-20210620
i386 randconfig-a006-20210620
x86_64   randconfig-a016-20210620
x86_64   randconfig-a015-20210620
x86_64   randconfig-a014-20210620
x86_64   randconfig-a012-20210620
x86_64   randconfig-a013-20210620
x86_64   randconfig-a011-20210620
i386 randconfig-a016-20210620
i386 randconfig-a014-20210620
i386 randconfig-a015-20210620
i386 randconfig-a011-20210620
i386 randconfig-a013-20210620
i386 randconfig-a012-20210620
riscv        nommu_k210_defconfig
riscv        allnoconfig
riscv        allmodconfig
riscv        allyesconfig
riscv        defconfig
riscv        nommu_virt_defconfig
riscv        rv32_defconfig
um           x86_64_defconfig
um           i386_defconfig
um           kunit_defconfig
x86_64       allyesconfig
x86_64       rhel-8.3-kselftests
x86_64       defconfig
x86_64       rhel-8.3
x86_64       rhel-8.3-kbuiltin
x86_64       kexec

clang tested configs:
x86_64       randconfig-b001-20210620
x86_64       randconfig-a005-20210620
x86_64       randconfig-a004-20210620
x86_64       randconfig-a006-20210620
x86_64       randconfig-a002-20210620
x86_64       randconfig-a001-20210620
x86_64       randconfig-a003-20210620

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


Re: [PATCH v15 4/4] kasan: use MAX_PTRS_PER_* for early shadow tables

2021-06-20 Thread Andrey Konovalov
On Thu, Jun 17, 2021 at 12:30 PM Daniel Axtens  wrote:
>
> powerpc has a variable number of PTRS_PER_*, set at runtime based
> on the MMU that the kernel is booted under.
>
> This means the PTRS_PER_* values are no longer compile-time constants, which
> breaks the build. Switch to using MAX_PTRS_PER_*, which are constant.
>
> Suggested-by: Christophe Leroy 
> Suggested-by: Balbir Singh 
> Reviewed-by: Christophe Leroy 
> Reviewed-by: Balbir Singh 
> Reviewed-by: Marco Elver 
> Signed-off-by: Daniel Axtens 
> ---
>  include/linux/kasan.h | 6 +++---
>  mm/kasan/init.c   | 6 +++---
>  2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 768d7d342757..5310e217bd74 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -41,9 +41,9 @@ struct kunit_kasan_expectation {
>  #endif
>
>  extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
> -extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS];
> -extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
> -extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
> +extern pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE + PTE_HWTABLE_PTRS];
> +extern pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD];
> +extern pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD];
>  extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>
>  int kasan_populate_early_shadow(const void *shadow_start,
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index 348f31d15a97..cc64ed6858c6 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -41,7 +41,7 @@ static inline bool kasan_p4d_table(pgd_t pgd)
>  }
>  #endif
>  #if CONFIG_PGTABLE_LEVELS > 3
> -pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss;
> +pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD] __page_aligned_bss;
>  static inline bool kasan_pud_table(p4d_t p4d)
>  {
> return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud));
> @@ -53,7 +53,7 @@ static inline bool kasan_pud_table(p4d_t p4d)
>  }
>  #endif
>  #if CONFIG_PGTABLE_LEVELS > 2
> -pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss;
> +pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD] __page_aligned_bss;
>  static inline bool kasan_pmd_table(pud_t pud)
>  {
> return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd));
> @@ -64,7 +64,7 @@ static inline bool kasan_pmd_table(pud_t pud)
> return false;
>  }
>  #endif
> -pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]
> +pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE + PTE_HWTABLE_PTRS]
> __page_aligned_bss;
>
>  static inline bool kasan_pte_table(pmd_t pmd)
> --
> 2.30.2
>

Reviewed-by: Andrey Konovalov 


Re: [PATCH v15 3/4] mm: define default MAX_PTRS_PER_* in include/pgtable.h

2021-06-20 Thread Andrey Konovalov
On Thu, Jun 17, 2021 at 12:30 PM Daniel Axtens  wrote:
>
> Commit c65e774fb3f6 ("x86/mm: Make PGDIR_SHIFT and PTRS_PER_P4D variable")
> made PTRS_PER_P4D variable on x86 and introduced MAX_PTRS_PER_P4D as a
> constant for cases which need a compile-time constant (e.g. fixed-size
> arrays).
>
> powerpc likewise has boot-time selectable MMU features which can cause
> other mm "constants" to vary. For KASAN, we have some static
> PTE/PMD/PUD/P4D arrays so we need compile-time maximums for all these
> constants. Extend the MAX_PTRS_PER_ idiom, and place default definitions
> in include/pgtable.h. These define MAX_PTRS_PER_x to be PTRS_PER_x unless
> an architecture has defined MAX_PTRS_PER_x in its arch headers.
>
> Clean up pgtable-nop4d.h and s390's MAX_PTRS_PER_P4D definitions while
> we're at it: both can just pick up the default now.
>
> Reviewed-by: Christophe Leroy 
> Reviewed-by: Marco Elver 
> Signed-off-by: Daniel Axtens 
>
> ---
>
> s390 was compile tested only.
> ---
>  arch/s390/include/asm/pgtable.h |  2 --
>  include/asm-generic/pgtable-nop4d.h |  1 -
>  include/linux/pgtable.h | 22 ++
>  3 files changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 7c66ae5d7e32..cf05954ce013 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -342,8 +342,6 @@ static inline int is_module_addr(void *addr)
>  #define PTRS_PER_P4D   _CRST_ENTRIES
>  #define PTRS_PER_PGD   _CRST_ENTRIES
>
> -#define MAX_PTRS_PER_P4D   PTRS_PER_P4D
> -
>  /*
>   * Segment table and region3 table entry encoding
>   * (R = read-only, I = invalid, y = young bit):
> diff --git a/include/asm-generic/pgtable-nop4d.h b/include/asm-generic/pgtable-nop4d.h
> index ce2cbb3c380f..2f6b1befb129 100644
> --- a/include/asm-generic/pgtable-nop4d.h
> +++ b/include/asm-generic/pgtable-nop4d.h
> @@ -9,7 +9,6 @@
>  typedef struct { pgd_t pgd; } p4d_t;
>
>  #define P4D_SHIFT  PGDIR_SHIFT
> -#define MAX_PTRS_PER_P4D   1
>  #define PTRS_PER_P4D   1
>  #define P4D_SIZE   (1UL << P4D_SHIFT)
>  #define P4D_MASK   (~(P4D_SIZE-1))
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 9e6f71265f72..69700e3e615f 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1625,4 +1625,26 @@ typedef unsigned int pgtbl_mod_mask;
>  #define pte_leaf_size(x) PAGE_SIZE
>  #endif
>
> +/*
> + * Some architectures have MMUs that are configurable or selectable at boot
> + * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
> + * helps to have a static maximum value.
> + */
> +
> +#ifndef MAX_PTRS_PER_PTE
> +#define MAX_PTRS_PER_PTE PTRS_PER_PTE
> +#endif
> +
> +#ifndef MAX_PTRS_PER_PMD
> +#define MAX_PTRS_PER_PMD PTRS_PER_PMD
> +#endif
> +
> +#ifndef MAX_PTRS_PER_PUD
> +#define MAX_PTRS_PER_PUD PTRS_PER_PUD
> +#endif
> +
> +#ifndef MAX_PTRS_PER_P4D
> +#define MAX_PTRS_PER_P4D PTRS_PER_P4D
> +#endif
> +
>  #endif /* _LINUX_PGTABLE_H */
> --
> 2.30.2
>

Acked-by: Andrey Konovalov 


Re: [PATCH v15 2/4] kasan: allow architectures to provide an outline readiness check

2021-06-20 Thread Andrey Konovalov
On Thu, Jun 17, 2021 at 12:30 PM Daniel Axtens  wrote:
>
> Allow architectures to define a kasan_arch_is_ready() hook that bails
> out of any function that's about to touch the shadow unless the arch
> says that it is ready for the memory to be accessed. This is fairly
> non-invasive and should have a negligible performance penalty.
>
> This will only work in outline mode, so an arch must specify
> ARCH_DISABLE_KASAN_INLINE if it requires this.
>
> Cc: Balbir Singh 
> Cc: Aneesh Kumar K.V 
> Suggested-by: Christophe Leroy 
> Reviewed-by: Marco Elver 
> Signed-off-by: Daniel Axtens 
>
> --
>
> Both previous RFCs for ppc64 - by 2 different people - have
> needed this trick! See:
>  - https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
>  - https://patchwork.ozlabs.org/patch/795211/  # ppc radix series
>
> Build tested on arm64 with SW_TAGS and x86 with INLINE: the error fires
> if I add a kasan_arch_is_ready define.
> ---
>  mm/kasan/common.c  | 4 
>  mm/kasan/generic.c | 3 +++
>  mm/kasan/kasan.h   | 6 ++
>  mm/kasan/shadow.c  | 8 
>  4 files changed, 21 insertions(+)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 10177cc26d06..0ad615f3801d 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -331,6 +331,10 @@ static inline bool kasan_slab_free(struct kmem_cache *cache, void *object,
> u8 tag;
> void *tagged_object;
>
> +   /* Bail if the arch isn't ready */

This comment brings no value. The fact that we bail is clear from the
following line. The comment should explain why we bail.

> +   if (!kasan_arch_is_ready())
> +   return false;

Have you considered including these checks into the high-level
wrappers in include/linux/kasan.h? Would that work?


> +
> tag = get_tag(object);
> tagged_object = object;
> object = kasan_reset_tag(object);
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 53cbf28859b5..c3f5ba7a294a 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -163,6 +163,9 @@ static __always_inline bool check_region_inline(unsigned long addr,
> size_t size, bool write,
> unsigned long ret_ip)
>  {
> +   if (!kasan_arch_is_ready())
> +   return true;
> +
> if (unlikely(size == 0))
> return true;
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8f450bc28045..4dbc8def64f4 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -449,6 +449,12 @@ static inline void kasan_poison_last_granule(const void *address, size_t size) {
>
>  #endif /* CONFIG_KASAN_GENERIC */
>
> +#ifndef kasan_arch_is_ready
> +static inline bool kasan_arch_is_ready(void)   { return true; }
> +#elif !defined(CONFIG_KASAN_GENERIC) || !defined(CONFIG_KASAN_OUTLINE)
> +#error kasan_arch_is_ready only works in KASAN generic outline mode!
> +#endif
> +
>  /*
>   * Exported functions for interfaces called from assembly or from generated
>   * code. Declarations here to avoid warning about missing declarations.
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 082ee5b6d9a1..3c7f7efe6f68 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -73,6 +73,10 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>  {
> void *shadow_start, *shadow_end;
>
> +   /* Don't touch the shadow memory if arch isn't ready */
> +   if (!kasan_arch_is_ready())
> +   return;
> +
> /*
>  * Perform shadow offset calculation based on untagged address, as
>  * some of the callers (e.g. kasan_poison_object_data) pass tagged
> @@ -99,6 +103,10 @@ EXPORT_SYMBOL(kasan_poison);
>  #ifdef CONFIG_KASAN_GENERIC
>  void kasan_poison_last_granule(const void *addr, size_t size)
>  {
> +   /* Don't touch the shadow memory if arch isn't ready */
> +   if (!kasan_arch_is_ready())
> +   return;
> +
> if (size & KASAN_GRANULE_MASK) {
> u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
> *shadow = size & KASAN_GRANULE_MASK;
> --
> 2.30.2
>


Re: [PATCH v15 1/4] kasan: allow an architecture to disable inline instrumentation

2021-06-20 Thread Andrey Konovalov
On Thu, Jun 17, 2021 at 12:30 PM Daniel Axtens  wrote:
>
> For annoying architectural reasons, it's very difficult to support inline
> instrumentation on powerpc64.*
>
> Add a Kconfig flag to allow an arch to disable inline. (It's a bit
> annoying to be 'backwards', but I'm not aware of any way to have
> an arch force a symbol to be 'n', rather than 'y'.)
>
> We also disable stack instrumentation in this case as it does things that
> are functionally equivalent to inline instrumentation, namely adding
> code that touches the shadow directly without going through a C helper.
>
> * on ppc64 atm, the shadow lives in virtual memory and isn't accessible in
> real mode. However, before we turn on virtual memory, we parse the device
> tree to determine which platform and MMU we're running under. That calls
> generic DT code, which is instrumented. Inline instrumentation in DT would
> unconditionally attempt to touch the shadow region, which we won't have
> set up yet, and would crash. We can make outline mode wait for the arch to
> be ready, but we can't change what the compiler inserts for inline mode.
>
> Reviewed-by: Marco Elver 
> Signed-off-by: Daniel Axtens 
> ---
>  lib/Kconfig.kasan | 14 ++
>  1 file changed, 14 insertions(+)
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index cffc2ebbf185..cb5e02d09e11 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -12,6 +12,15 @@ config HAVE_ARCH_KASAN_HW_TAGS
>  config HAVE_ARCH_KASAN_VMALLOC
> bool
>
> +config ARCH_DISABLE_KASAN_INLINE
> +   bool
> +   help
> + Sometimes an architecture might not be able to support inline
> + instrumentation but might be able to support outline instrumentation.
> + This option allows an architecture to prevent inline and stack
> + instrumentation from being enabled.

This seems too wordy.

How about: "An architecture might not support inline instrumentation.
When this option is selected, inline and stack instrumentation are
disabled."

> +
> +

Drop the extra empty line.

>  config CC_HAS_KASAN_GENERIC
> def_bool $(cc-option, -fsanitize=kernel-address)
>
> @@ -130,6 +139,7 @@ config KASAN_OUTLINE
>
>  config KASAN_INLINE
> bool "Inline instrumentation"
> +   depends on !ARCH_DISABLE_KASAN_INLINE
> help
>   Compiler directly inserts code checking shadow memory before
>   memory accesses. This is faster than outline (in some workloads
> @@ -141,6 +151,7 @@ endchoice
>  config KASAN_STACK
> bool "Enable stack instrumentation (unsafe)" if CC_IS_CLANG && 
> !COMPILE_TEST
> depends on KASAN_GENERIC || KASAN_SW_TAGS
> +   depends on !ARCH_DISABLE_KASAN_INLINE
> default y if CC_IS_GCC
> help
>   The LLVM stack address sanitizer has a know problem that
> @@ -154,6 +165,9 @@ config KASAN_STACK
>   but clang users can still enable it for builds without
>   CONFIG_COMPILE_TEST.  On gcc it is assumed to always be safe
>   to use and enabled by default.
> + If the architecture disables inline instrumentation, this is

this => stack instrumentation



> + also disabled as it adds inline-style instrumentation that
> + is run unconditionally.
>
>  config KASAN_SW_TAGS_IDENTIFY
> bool "Enable memory corruption identification"
> --
> 2.30.2
>
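For completeness, the consumer side of the new symbol is just a select from the arch Kconfig; KASAN_INLINE and KASAN_STACK then become unavailable through their "depends on !ARCH_DISABLE_KASAN_INLINE" lines. A hypothetical fragment (not from the patch; the exact symbol and condition an arch would use may differ):

```
# Hypothetical arch/<arch>/Kconfig usage:
config PPC
	...
	select ARCH_DISABLE_KASAN_INLINE if PPC_RADIX_MMU
```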


Re: [PATCH v2 6/9] powerpc/microwatt: Add support for hardware random number generator

2021-06-20 Thread Nicholas Piggin
Excerpts from Segher Boessenkool's message of June 20, 2021 12:36 am:
> On Sat, Jun 19, 2021 at 01:08:51PM +1000, Nicholas Piggin wrote:
>> Excerpts from Paul Mackerras's message of June 18, 2021 1:47 pm:
>> > Microwatt's hardware RNG is accessed using the DARN instruction.
>> 
>> I think we're getting a platforms/book3s soon with the VAS patches, might be a place to add the get_random_darn function.
>> 
>> Huh, DARN is unprivileged right?
> 
> It is, that's the whole point: to make it very very cheap for user
> software it has to be an unprivileged instruction.

Right, I was just doing a double-take. In that case we should enable it 
in the pseries random number code as well, so it really would be a 
generic isa 3.0 function that all (microwatt, powernv, pseries) could
use AFAIKS.

Thanks,
Nick
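For anyone following along, the existing powernv helper that such a generic ISA 3.0 function would generalise looks roughly like this. This is a from-memory sketch, not a verbatim copy of the arch/powerpc code: it only builds for powerpc64 with a DARN-capable assembler, and the kernel proper uses the PPC_DARN() opcode macro from ppc-opcode.h rather than the bare mnemonic.

```c
#define DARN_ERR 0xFFFFFFFFFFFFFFFFul

/* DARN with L=1 returns a 64-bit conditioned random number, or all-ones
 * on failure; the instruction is unprivileged on ISA 3.0 CPUs, which is
 * why it is usable from pseries, powernv and microwatt alike. */
static int get_random_darn(unsigned long *v)
{
	unsigned long val;
	int i;

	/* Retry a few times: DARN can transiently fail. */
	for (i = 0; i < 10; i++) {
		asm volatile("darn %0, 1" : "=r" (val));
		if (val != DARN_ERR) {
			*v = val;
			return 1;
		}
	}
	return 0;
}
```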