Re: [alsa-devel] [PATCH 0/3] Add new module driver for new ASRC

2020-02-11 Thread Shengjiu Wang
On Wed, Feb 12, 2020 at 1:13 PM Randy Dunlap  wrote:
>
> On 2/11/20 8:30 PM, Shengjiu Wang wrote:
> > Add new module driver for new ASRC in i.MX815/865
> >
> > Shengjiu Wang (3):
> >   ASoC: fsl_asrc: Move common definition to fsl_asrc_common
> >   ASoC: dt-bindings: fsl_easrc: Add document for EASRC
> >   ASoC: fsl_easrc: Add EASRC ASoC CPU DAI and platform drivers
> >
> >  .../devicetree/bindings/sound/fsl,easrc.txt   |   57 +
> >  sound/soc/fsl/fsl_asrc.h  |   11 +-
> >  sound/soc/fsl/fsl_asrc_common.h   |   22 +
> >  sound/soc/fsl/fsl_easrc.c | 2265 +
> >  sound/soc/fsl/fsl_easrc.h |  668 +
> >  sound/soc/fsl/fsl_easrc_dma.c |  440 
> >  6 files changed, 3453 insertions(+), 10 deletions(-)
> >  create mode 100644 Documentation/devicetree/bindings/sound/fsl,easrc.txt
> >  create mode 100644 sound/soc/fsl/fsl_asrc_common.h
> >  create mode 100644 sound/soc/fsl/fsl_easrc.c
> >  create mode 100644 sound/soc/fsl/fsl_easrc.h
> >  create mode 100644 sound/soc/fsl/fsl_easrc_dma.c
> >
>
> Hi,
>
> Is this patch series missing Kconfig, Makefile, and possibly
> MAINTAINERS patches?
>
Yes, Kconfig and Makefile are missing; I will add them in the next version.
There is no MAINTAINERS patch.

best regards
wang shengjiu


Re: [PATCH v6 4/4] powerpc: Book3S 64-bit "heavyweight" KASAN support

2020-02-11 Thread Christophe Leroy

Le 12/02/2020 à 06:47, Daniel Axtens a écrit :

diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index fbff9ff9032e..2911fdd3a6a0 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -2,6 +2,8 @@
  #ifndef __ASM_KASAN_H
  #define __ASM_KASAN_H
  
+#include 

+
  #ifdef CONFIG_KASAN
  #define _GLOBAL_KASAN(fn) _GLOBAL(__##fn)
  #define _GLOBAL_TOC_KASAN(fn) _GLOBAL_TOC(__##fn)
@@ -14,29 +16,41 @@
  
  #ifndef __ASSEMBLY__
  
-#include 

-
  #define KASAN_SHADOW_SCALE_SHIFT  3
  
   #define KASAN_SHADOW_START	(KASAN_SHADOW_OFFSET + \
				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
  
+#ifdef CONFIG_KASAN_SHADOW_OFFSET

  #define KASAN_SHADOW_OFFSET   ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
+#endif
  
+#ifdef CONFIG_PPC32

  #define KASAN_SHADOW_END  0UL
  
-#define KASAN_SHADOW_SIZE	(KASAN_SHADOW_END - KASAN_SHADOW_START)

+#ifdef CONFIG_KASAN
+void kasan_late_init(void);
+#else
+static inline void kasan_late_init(void) { }
+#endif
+
+#endif
+
+#ifdef CONFIG_PPC_BOOK3S_64
+#define KASAN_SHADOW_END	(KASAN_SHADOW_OFFSET + \
+				 (RADIX_VMEMMAP_END >> KASAN_SHADOW_SCALE_SHIFT))
+
+static inline void kasan_late_init(void) { }
+#endif
  
  #ifdef CONFIG_KASAN

  void kasan_early_init(void);
  void kasan_mmu_init(void);
  void kasan_init(void);
-void kasan_late_init(void);
  #else
  static inline void kasan_init(void) { }
  static inline void kasan_mmu_init(void) { }
-static inline void kasan_late_init(void) { }
  #endif


Why modify all this kasan_late_init() stuff ?

This function is only called from kasan/init_32.c; it is never called on
PPC64, so you should not need to modify anything here at all.


Christophe



[PATCH v6 4/4] powerpc: Book3S 64-bit "heavyweight" KASAN support

2020-02-11 Thread Daniel Axtens
KASAN support on Book3S is a bit tricky to get right:

 - It would be good to support inline instrumentation so as to be able to
   catch stack issues that cannot be caught with outline mode.

 - Inline instrumentation requires a fixed offset.

 - Book3S runs code in real mode after booting. Most notably a lot of KVM
   runs in real mode, and it would be good to be able to instrument it.

 - Because code runs in real mode after boot, the offset has to point to
   valid memory both in and out of real mode.

[ppc64 mm note: The kernel installs a linear mapping at effective
address c000... onward. This is a one-to-one mapping with physical
memory from ... onward. Because of how memory accesses work on
powerpc 64-bit Book3S, a kernel pointer in the linear map accesses the
same memory both with translations on (accessing as an 'effective
address'), and with translations off (accessing as a 'real
address'). This works in both guests and the hypervisor. For more
details, see s5.7 of Book III of version 3 of the ISA, in particular
the Storage Control Overview, s5.7.3, and s5.7.5 - noting that this
KASAN implementation currently only supports Radix.]
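As a reminder of the generic KASAN scheme this builds on (a sketch of the
generic mm/kasan translation, not code from this series), every access is
checked against shadow memory found at a fixed offset from the address:

static inline void *kasan_mem_to_shadow(const void *addr)
{
	/* one shadow byte covers 2^KASAN_SHADOW_SCALE_SHIFT (= 8) bytes */
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}

For this to keep working in real mode, the resulting shadow pointer must be
valid with translation both on and off, i.e. it must land in the linear map
described above.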

One approach is just to give up on inline instrumentation. This way all
checks can be delayed until after everything is set up correctly, and the
address-to-shadow calculations can be overridden. However, the features and
speed boost provided by inline instrumentation are worth trying to do
better.

If _at compile time_ it is known how much contiguous physical memory a
system has, the top 1/8th of the first block of physical memory can be set
aside for the shadow. This is a big hammer and comes with 3 big
consequences:

 - there's no nice way to handle physically discontiguous memory, so only
   the first physical memory block can be used.

 - kernels will simply fail to boot on machines with less memory than
   specified when compiling.

 - kernels running on machines with more memory than specified when
   compiling will simply ignore the extra memory.

Implement and document KASAN this way. The current implementation is Radix
only.
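To illustrate the reservation (a rough sketch with a hypothetical helper, not
the actual patch - it assumes the Kconfig value is in megabytes and that the
first memory block starts at physical address 0):

/* Reserve the top 1/8th of the compile-time-declared memory for shadow. */
static void __init kasan_reserve_shadow_sketch(void)
{
	phys_addr_t total = (phys_addr_t)CONFIG_PHYS_MEM_SIZE_FOR_KASAN << 20;
	phys_addr_t shadow_size = total >> KASAN_SHADOW_SCALE_SHIFT; /* 1/8th */

	memblock_reserve(total - shadow_size, shadow_size);
}

A machine with less memory than the configured size cannot satisfy this
reservation, and memory beyond it is never used for shadow, which is where
the consequences listed above come from.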

Despite the limitations, it can still find bugs,
e.g. http://patchwork.ozlabs.org/patch/1103775/

At the moment, this physical memory limit must be set _even for outline
mode_. This may be changed in a later series - a different implementation
could be added for outline mode that dynamically allocates shadow at a
fixed offset. For example, see https://patchwork.ozlabs.org/patch/795211/

Suggested-by: Michael Ellerman 
Cc: Balbir Singh  # ppc64 out-of-line radix version
Cc: Christophe Leroy  # ppc32 version
Signed-off-by: Daniel Axtens 

---
Changes since v5:
 - rebase on powerpc/merge, with Christophe's latest changes integrating
   kasan-vmalloc
 - documentation tweaks based on latest 32-bit changes

Changes since v4:
 - fix some ppc32 build issues
 - support ptdump
 - clean up the header file. It turns out we don't need or use
   KASAN_SHADOW_SIZE, so just dump it, and make KASAN_SHADOW_END the thing
   that varies between 32 and 64 bit. As part of this, make sure
   KASAN_SHADOW_OFFSET is only configured for 32 bit - it is calculated in
   the Makefile for ppc64.
 - various cleanups

Changes since v3:
 - Address further feedback from Christophe.
 - Drop changes to stack walking, it looks like the issue I observed is
   related to that particular stack, not stack-walking generally.

Changes since v2:

 - Address feedback from Christophe around cleanups and docs.
 - Address feedback from Balbir: at this point I don't have a good solution
   for the issues you identify around the limitations of the inline
   implementation, but I think that it's worth trying to get the stack
   instrumentation support. I'm happy to have an alternative and more
   flexible outline mode - I had envisioned this would be called
   'lightweight' mode as it imposes fewer restrictions. I've linked to your
   implementation. I think it's best to add it in a follow-up series.
 - Made the default PHYS_MEM_SIZE_FOR_KASAN value 1024MB. I think most
   people have guests with at least that much memory in the Radix 64s case
   so it's a much saner default - it means that if you just turn on KASAN
   without reading the docs you're much more likely to have a bootable
   kernel, which you will never have if the value is set to zero! I'm happy
   to bikeshed the value if we want.

Changes since v1:
 - Landed kasan vmalloc support upstream
 - Lots of feedback from Christophe.

Changes since the rfc:

 - Boots real and virtual hardware, kvm works.

 - disabled reporting when we're checking the stack for exception
   frames. The behaviour isn't wrong, just incompatible with KASAN.

 - Documentation!

 - Dropped old module stuff in favour of KASAN_VMALLOC.

The bugs with ftrace and kuap were due to kernel bloat pushing
prom_init calls to be done via the plt. Because we did not have
a relocatable kernel, and they are done very early, 

[PATCH v6 3/4] powerpc/mm/kasan: rename kasan_init_32.c to init_32.c

2020-02-11 Thread Daniel Axtens
kasan is already implied by the directory name, we don't need to
repeat it.

Suggested-by: Christophe Leroy 
Signed-off-by: Daniel Axtens 
---
 arch/powerpc/mm/kasan/Makefile   | 2 +-
 arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} (100%)

diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index 6577897673dd..36a4e1b10b2d 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -2,4 +2,4 @@
 
 KASAN_SANITIZE := n
 
-obj-$(CONFIG_PPC32)   += kasan_init_32.o
+obj-$(CONFIG_PPC32)   += init_32.o
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/init_32.c
similarity index 100%
rename from arch/powerpc/mm/kasan/kasan_init_32.c
rename to arch/powerpc/mm/kasan/init_32.c
-- 
2.20.1



[PATCH v6 2/4] kasan: Document support on 32-bit powerpc

2020-02-11 Thread Daniel Axtens
KASAN is supported on 32-bit powerpc and the docs should reflect this.

Document s390 support while we're at it.

Suggested-by: Christophe Leroy 
Reviewed-by: Christophe Leroy 
Signed-off-by: Daniel Axtens 

---

Changes since v5:
 - rebase - riscv has now got support.
 - document s390 support while we're at it
 - clarify when kasan_vmalloc support is required
---
 Documentation/dev-tools/kasan.rst |  7 +--
 Documentation/powerpc/kasan.txt   | 12 
 2 files changed, 17 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/powerpc/kasan.txt

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index c652d740735d..012ef3d91d1f 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -22,7 +22,8 @@ global variables yet.
 Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
 
 Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and
-riscv architectures, and tag-based KASAN is supported only for arm64.
+riscv architectures. It is also supported on 32-bit powerpc kernels. Tag-based 
+KASAN is supported only on arm64.
 
 Usage
 -
@@ -255,7 +256,9 @@ CONFIG_KASAN_VMALLOC
 
 
 With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
-cost of greater memory usage. Currently this is only supported on x86.
+cost of greater memory usage. Currently this is supported on x86, s390
+and 32-bit powerpc. It is optional, except on 32-bit powerpc kernels
+with module support, where it is required.
 
 This works by hooking into vmalloc and vmap, and dynamically
 allocating real shadow memory to back the mappings.
diff --git a/Documentation/powerpc/kasan.txt b/Documentation/powerpc/kasan.txt
new file mode 100644
index ..26bb0e8bb18c
--- /dev/null
+++ b/Documentation/powerpc/kasan.txt
@@ -0,0 +1,12 @@
+KASAN is supported on powerpc on 32-bit only.
+
+32 bit support
+==
+
+KASAN is supported on both hash and nohash MMUs on 32-bit.
+
+The shadow area sits at the top of the kernel virtual memory space above the
+fixmap area and occupies one eighth of the total kernel virtual memory space.
+
+Instrumentation of the vmalloc area is optional, unless built with modules,
+in which case it is required.
-- 
2.20.1



[PATCH v6 1/4] kasan: define and use MAX_PTRS_PER_* for early shadow tables

2020-02-11 Thread Daniel Axtens
powerpc has a variable number of PTRS_PER_*, set at runtime based
on the MMU that the kernel is booted under.

This means the PTRS_PER_* values are no longer compile-time constants,
which breaks the build.

Define default MAX_PTRS_PER_*s in the same style as MAX_PTRS_PER_P4D.
As KASAN is the only user at the moment, just define them in the kasan
header, and have them default to PTRS_PER_* unless overridden in arch
code.
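For example (purely illustrative, not part of this patch), an architecture
whose PTRS_PER_PTE depends on the boot-time MMU could provide a constant
upper bound in its own headers:

/* hypothetical arch override: the largest PTRS_PER_PTE any supported MMU uses */
#define MAX_PTRS_PER_PTE	(1 << 12)

The default definitions added by this patch then only apply when the arch has
not supplied its own value.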

Suggested-by: Christophe Leroy 
Suggested-by: Balbir Singh 
Reviewed-by: Christophe Leroy 
Reviewed-by: Balbir Singh 
Signed-off-by: Daniel Axtens 
---
 include/linux/kasan.h | 18 +++---
 mm/kasan/init.c   |  6 +++---
 2 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5cde9e7c2664..b3a4500633f5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -14,10 +14,22 @@ struct task_struct;
 #include 
 #include 
 
+#ifndef MAX_PTRS_PER_PTE
+#define MAX_PTRS_PER_PTE PTRS_PER_PTE
+#endif
+
+#ifndef MAX_PTRS_PER_PMD
+#define MAX_PTRS_PER_PMD PTRS_PER_PMD
+#endif
+
+#ifndef MAX_PTRS_PER_PUD
+#define MAX_PTRS_PER_PUD PTRS_PER_PUD
+#endif
+
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
-extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
-extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
-extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
+extern pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE];
+extern pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD];
+extern pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 
 int kasan_populate_early_shadow(const void *shadow_start,
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ce45c491ebcd..8b54a96d3b3e 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -46,7 +46,7 @@ static inline bool kasan_p4d_table(pgd_t pgd)
 }
 #endif
 #if CONFIG_PGTABLE_LEVELS > 3
-pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss;
+pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD] __page_aligned_bss;
 static inline bool kasan_pud_table(p4d_t p4d)
 {
return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud));
@@ -58,7 +58,7 @@ static inline bool kasan_pud_table(p4d_t p4d)
 }
 #endif
 #if CONFIG_PGTABLE_LEVELS > 2
-pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss;
+pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD] __page_aligned_bss;
 static inline bool kasan_pmd_table(pud_t pud)
 {
return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd));
@@ -69,7 +69,7 @@ static inline bool kasan_pmd_table(pud_t pud)
return false;
 }
 #endif
-pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
+pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE] __page_aligned_bss;
 
 static inline bool kasan_pte_table(pmd_t pmd)
 {
-- 
2.20.1



[PATCH v6 0/4] KASAN for powerpc64 radix

2020-02-11 Thread Daniel Axtens
Building on the work of Christophe, Aneesh and Balbir, I've ported
KASAN to 64-bit Book3S kernels running on the Radix MMU.

This provides full inline instrumentation on radix, but does require
that you be able to specify the amount of physically contiguous memory
on the system at compile time. More details in patch 4.

v6: Rebase on the latest changes in powerpc/merge. Minor tweaks
  to the documentation. Small tweaks to the header to work
  with the kasan_late_init() function that Christophe added
  for 32-bit kasan-vmalloc support.
No functional change.

v5: ptdump support. More cleanups, tweaks and fixes, thanks
Christophe. Details in patch 4.

I have seen another stack walk splat, but I don't think it's
related to the patch set, I think there's a bug somewhere else,
probably in stack frame manipulation in the kernel or (more
unlikely) in the compiler.

v4: More cleanups, split renaming out, clarify bits and bobs.
Drop the stack walk disablement, that isn't needed. No other
functional change.

v3: Reduce the overly ambitious scope of the MAX_PTRS change.
Document more things, including around why some of the
restrictions apply.
Clean up the code more, thanks Christophe.

v2: The big change is the introduction of tree-wide(ish)
MAX_PTRS_PER_{PTE,PMD,PUD} macros in preference to the previous
approach, which was for the arch to override the page table array
definitions with their own. (And I squashed the annoying
intermittent crash!)

Apart from that there's just a lot of cleanup. Christophe, I've
addressed most of what you asked for and I will reply to your v1
emails to clarify what remains unchanged.



Re: [PATCH] powerpc: setup_64: hack around kcov + devicetree limitations

2020-02-11 Thread Daniel Axtens
> So: create a fake task and preload it into our fake PACA. Load the paca
> just into r13 (local_paca) before we call into dt_cpu_ftrs_init. This fake
> task persists just for the first part of the setup process before we set
> up the real PACAs.

mpe has asked for this to be fixed in a different way, so I'll respin
with that change.

Daniel

>
> Translations get switched on once we leave early_setup, so I think we'd
> already catch any other cases where the PACA or task aren't set up.
>
> Fixes: fb0b0a73b223 ("powerpc: Enable kcov")
> Cc: Andrew Donnellan 
> Signed-off-by: Daniel Axtens 
>
> ---
>
> I haven't made the setup conditional on kcov being compiled in, but I
> guess I could if we think it's worth it?
> ---
>  arch/powerpc/kernel/setup_64.c | 13 -
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index e05e6dd67ae6..26f1b8539f8e 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -281,7 +281,18 @@ void __init record_spr_defaults(void)
>  
>  void __init early_setup(unsigned long dt_ptr)
>  {
> - static __initdata struct paca_struct boot_paca;
> + /*
> +  * We need to get something valid into local_paca/r13 asap if we
> +  * are using kcov. dt_cpu_ftrs_init will call coverage-enabled code
> +  * in the generic dt library, and that will try to call in_task().
> +  * We need a minimal paca that at least provides a valid __current.
> +  * We can't use the usual initialise/setup/fixup path as that relies
> +  * on a CPU feature.
> +  */
> + static __initdata struct task_struct task = {};
> + static __initdata struct paca_struct boot_paca = { .__current = &task };
> +
> + local_paca = &boot_paca;
>  
>   /*  printk is _NOT_ safe to use here ! --- */
>  
> -- 
> 2.20.1


Re: [PATCH 0/3] Add new module driver for new ASRC

2020-02-11 Thread Randy Dunlap
On 2/11/20 8:30 PM, Shengjiu Wang wrote:
> Add new module driver for new ASRC in i.MX815/865
> 
> Shengjiu Wang (3):
>   ASoC: fsl_asrc: Move common definition to fsl_asrc_common
>   ASoC: dt-bindings: fsl_easrc: Add document for EASRC
>   ASoC: fsl_easrc: Add EASRC ASoC CPU DAI and platform drivers
> 
>  .../devicetree/bindings/sound/fsl,easrc.txt   |   57 +
>  sound/soc/fsl/fsl_asrc.h  |   11 +-
>  sound/soc/fsl/fsl_asrc_common.h   |   22 +
>  sound/soc/fsl/fsl_easrc.c | 2265 +
>  sound/soc/fsl/fsl_easrc.h |  668 +
>  sound/soc/fsl/fsl_easrc_dma.c |  440 
>  6 files changed, 3453 insertions(+), 10 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/sound/fsl,easrc.txt
>  create mode 100644 sound/soc/fsl/fsl_asrc_common.h
>  create mode 100644 sound/soc/fsl/fsl_easrc.c
>  create mode 100644 sound/soc/fsl/fsl_easrc.h
>  create mode 100644 sound/soc/fsl/fsl_easrc_dma.c
> 

Hi,

Is this patch series missing Kconfig, Makefile, and possibly
MAINTAINERS patches?

thanks.
-- 
~Randy



[PATCH] powerpc: setup_64: hack around kcov + devicetree limitations

2020-02-11 Thread Daniel Axtens
kcov instrumentation is collected via the __sanitizer_cov_trace_pc hook in
kernel/kcov.c. The compiler inserts these hooks into every basic block
unless kcov is disabled for that file.

We then have a deep call-chain (condensed into a code sketch after this list):
 - __sanitizer_cov_trace_pc calls check_kcov_mode()
 - check_kcov_mode() (kernel/kcov.c) calls in_task()
 - in_task() (include/linux/preempt.h) calls preempt_count().
 - preempt_count() (include/asm-generic/preempt.h) calls
 current_thread_info()
 - because powerpc has THREAD_INFO_IN_TASK, current_thread_info()
 (include/linux/thread_info.h) is defined to 'current'
 - current (arch/powerpc/include/asm/current.h) is defined to
 get_current().
 - get_current (same file) loads an offset of r13.
 - arch/powerpc/include/asm/paca.h makes r13 a register variable
 called local_paca - it is the PACA for the current CPU, so
 this has the effect of loading the current task from PACA.
 - get_current returns the current task from PACA,
 - current_thread_info returns the task cast to a thread_info
 - preempt_count dereferences the thread_info to load preempt_count
 - that value is used by in_task and so on up the chain
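Condensed into code, the end of that chain looks roughly like this
(illustrative only, with a hypothetical helper name - not verbatim kernel
code):

/* every coverage hook ends up dereferencing r13 (the PACA) to find current */
static bool sketch_in_task(void)
{
	struct task_struct *tsk = local_paca->__current;      /* get_current() */
	int pc = ((struct thread_info *)tsk)->preempt_count;  /* THREAD_INFO_IN_TASK */

	return !(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET));
}

So any instrumented code that runs before r13 points at a usable PACA with a
valid __current will dereference garbage.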

The problem is:

 - kcov instrumentation is enabled for arch/powerpc/kernel/dt_cpu_ftrs.c

 - even if it were not, dt_cpu_ftrs_init calls generic dt parsing code
   which should definitely have instrumentation enabled.

 - setup_64.c calls dt_cpu_ftrs_init before it sets up a PACA.

 - It's not clear that we can move PACA setup before dt_cpu_ftrs_init as
   the PACA setup refers to CPU features - setup_paca() looks at
   early_cpu_has_feature(CPU_FTR_HVMODE)

 - If we don't set up a paca, r13 will contain unpredictable data.

 - In a zImage compiled with kcov and KASAN, we see r13 containing a value
   that leads to dereferencing invalid memory (something like
   912a72603d420015).

 - Weirdly, the same kernel as a vmlinux loaded directly by qemu does not
   crash. Investigating with gdb, it seems that in the vmlinux boot case,
   r13 is near enough to zero that we just happen to be able to read that
   part of memory (we're operating with translation off at this point) and
   the current pointer also happens to land in readable memory and
   everything just works.

There's no generic kill switch for kcov (as far as I can tell), and we
don't want to have to turn off instrumentation in the generic dt parsing
code (which lives outside arch/powerpc/) just because we don't have a real
paca or task yet.

So: create a fake task and preload it into our fake PACA. Load the paca
just into r13 (local_paca) before we call into dt_cpu_ftrs_init. This fake
task persists just for the first part of the setup process before we set
up the real PACAs.

Translations get switched on once we leave early_setup, so I think we'd
already catch any other cases where the PACA or task aren't set up.

Fixes: fb0b0a73b223 ("powerpc: Enable kcov")
Cc: Andrew Donnellan 
Signed-off-by: Daniel Axtens 

---

I haven't made the setup conditional on kcov being compiled in, but I
guess I could if we think it's worth it?
---
 arch/powerpc/kernel/setup_64.c | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index e05e6dd67ae6..26f1b8539f8e 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -281,7 +281,18 @@ void __init record_spr_defaults(void)
 
 void __init early_setup(unsigned long dt_ptr)
 {
-   static __initdata struct paca_struct boot_paca;
+   /*
+* We need to get something valid into local_paca/r13 asap if we
+* are using kcov. dt_cpu_ftrs_init will call coverage-enabled code
+* in the generic dt library, and that will try to call in_task().
+* We need a minimal paca that at least provides a valid __current.
+* We can't use the usual initialise/setup/fixup path as that relies
+* on a CPU feature.
+*/
+   static __initdata struct task_struct task = {};
+   static __initdata struct paca_struct boot_paca = { .__current = &task };
+
+   local_paca = &boot_paca;
 
/*  printk is _NOT_ safe to use here ! --- */
 
-- 
2.20.1



[PATCH 3/3] ASoC: fsl_easrc: Add EASRC ASoC CPU DAI and platform drivers

2020-02-11 Thread Shengjiu Wang
EASRC (Enhanced Asynchronous Sample Rate Converter) is a new IP module
found on i.MX815. It is different from the old ASRC module.

The primary features for the EASRC are as follows:
- 4 Contexts - groups of channels with an independent time base
- Fully independent and concurrent context control
- Simultaneous processing of up to 32 audio channels
- Programmable filter characteristics for each context
- 32, 24, 20, and 16-bit fixed point audio sample support
- 32-bit floating point audio sample support
- 8kHz to 384kHz sample rate
- 1/16 to 8x sample rate conversion ratio

Signed-off-by: Shengjiu Wang 
---
 sound/soc/fsl/fsl_asrc_common.h |1 +
 sound/soc/fsl/fsl_easrc.c   | 2265 +++
 sound/soc/fsl/fsl_easrc.h   |  668 +
 sound/soc/fsl/fsl_easrc_dma.c   |  440 ++
 4 files changed, 3374 insertions(+)
 create mode 100644 sound/soc/fsl/fsl_easrc.c
 create mode 100644 sound/soc/fsl/fsl_easrc.h
 create mode 100644 sound/soc/fsl/fsl_easrc_dma.c

diff --git a/sound/soc/fsl/fsl_asrc_common.h b/sound/soc/fsl/fsl_asrc_common.h
index 8acc55778ff2..c2056b661f15 100644
--- a/sound/soc/fsl/fsl_asrc_common.h
+++ b/sound/soc/fsl/fsl_asrc_common.h
@@ -16,6 +16,7 @@ enum asrc_pair_index {
ASRC_PAIR_A = 0,
ASRC_PAIR_B = 1,
ASRC_PAIR_C = 2,
+   ASRC_PAIR_D = 3,
 };
 
 #endif /* _FSL_ASRC_COMMON_H */
diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
new file mode 100644
index ..6fe2953317f2
--- /dev/null
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -0,0 +1,2265 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright 2019 NXP
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "fsl_easrc.h"
+#include "imx-pcm.h"
+
+#define FSL_EASRC_FORMATS   (SNDRV_PCM_FMTBIT_S16_LE | \
+SNDRV_PCM_FMTBIT_U16_LE | \
+SNDRV_PCM_FMTBIT_S24_LE | \
+SNDRV_PCM_FMTBIT_S24_3LE | \
+SNDRV_PCM_FMTBIT_U24_LE | \
+SNDRV_PCM_FMTBIT_U24_3LE | \
+SNDRV_PCM_FMTBIT_S32_LE | \
+SNDRV_PCM_FMTBIT_U32_LE | \
+SNDRV_PCM_FMTBIT_S20_3LE | \
+SNDRV_PCM_FMTBIT_U20_3LE | \
+SNDRV_PCM_FMTBIT_FLOAT_LE)
+
+static int fsl_easrc_iec958_put_bits(struct snd_kcontrol *kcontrol,
+struct snd_ctl_elem_value *ucontrol)
+{
+   struct snd_soc_component *comp = snd_kcontrol_chip(kcontrol);
+   struct fsl_easrc *easrc = snd_soc_component_get_drvdata(comp);
+   struct soc_mreg_control *mc =
+   (struct soc_mreg_control *)kcontrol->private_value;
+   unsigned int regval = ucontrol->value.integer.value[0];
+
+   easrc->bps_iec958[mc->regbase] = regval;
+
+   return 0;
+}
+
+static int fsl_easrc_iec958_get_bits(struct snd_kcontrol *kcontrol,
+struct snd_ctl_elem_value *ucontrol)
+{
+   struct snd_soc_component *comp = snd_kcontrol_chip(kcontrol);
+   struct fsl_easrc *easrc = snd_soc_component_get_drvdata(comp);
+   struct soc_mreg_control *mc =
+   (struct soc_mreg_control *)kcontrol->private_value;
+
+   ucontrol->value.enumerated.item[0] = easrc->bps_iec958[mc->regbase];
+
+   return 0;
+}
+
+int fsl_easrc_get_reg(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+   struct snd_soc_component *component = snd_kcontrol_chip(kcontrol);
+   struct soc_mreg_control *mc =
+   (struct soc_mreg_control *)kcontrol->private_value;
+   unsigned int regval;
+   int ret;
+
+   ret = snd_soc_component_read(component, mc->regbase, &regval);
+   if (ret < 0)
+   return ret;
+
+   ucontrol->value.integer.value[0] = regval;
+
+   return 0;
+}
+
+int fsl_easrc_set_reg(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+   struct snd_soc_component *component = snd_kcontrol_chip(kcontrol);
+   struct soc_mreg_control *mc =
+   (struct soc_mreg_control *)kcontrol->private_value;
+   unsigned int regval = ucontrol->value.integer.value[0];
+   int ret;
+
+   ret = snd_soc_component_write(component, mc->regbase, regval);
+   if (ret < 0)
+   return ret;
+
+   return 0;
+}
+
+#define SOC_SINGLE_REG_RW(xname, xreg) \
+{  .iface = SNDRV_CTL_ELEM_IFACE_PCM, .name = (xname), \
+   .access = SNDRV_CTL_ELEM_ACCESS_READWRITE, \
+   .info = snd_soc_info_xr_sx, .get = fsl_easrc_get_reg, \
+   .put = fsl_easrc_set_reg, \
+   

[PATCH 2/3] ASoC: dt-bindings: fsl_easrc: Add document for EASRC

2020-02-11 Thread Shengjiu Wang
EASRC (Enhanced Asynchronous Sample Rate Converter) is a new
IP module found on i.MX815.

Signed-off-by: Shengjiu Wang 
---
 .../devicetree/bindings/sound/fsl,easrc.txt   | 57 +++
 1 file changed, 57 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/sound/fsl,easrc.txt

diff --git a/Documentation/devicetree/bindings/sound/fsl,easrc.txt b/Documentation/devicetree/bindings/sound/fsl,easrc.txt
new file mode 100644
index ..0e8153165e3b
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/fsl,easrc.txt
@@ -0,0 +1,57 @@
+NXP Asynchronous Sample Rate Converter (ASRC) Controller
+
+The Asynchronous Sample Rate Converter (ASRC) converts the sampling rate of a
+signal associated with an input clock into a signal associated with a different
+output clock. The driver currently works as a Front End of DPCM with other Back
+End audio controllers such as ESAI, SSI and SAI. It has four contexts to support
+four substreams within a total of 32 channels.
+
+Required properties:
+- compatible:Contains "fsl,imx8mn-easrc".
+
+- reg:   Offset and length of the register set for the
+device.
+
+- interrupts:Contains the asrc interrupt.
+
+- dmas:  Generic dma devicetree binding as described in
+Documentation/devicetree/bindings/dma/dma.txt.
+
+- dma-names: Contains "ctx0_rx", "ctx0_tx",
+ "ctx1_rx", "ctx1_tx",
+ "ctx2_rx", "ctx2_tx",
+ "ctx3_rx", "ctx3_tx".
+
+- clocks:Contains an entry for each entry in clock-names.
+
+- clock-names:   "mem" - Peripheral clock to driver module.
+
+- fsl,easrc-ram-script-name: The coefficient table for the filters
+
+- fsl,asrc-rate: Defines a mutual sample rate used by DPCM Back
+Ends.
+
+- fsl,asrc-width:Defines a mutual sample width used by DPCM Back
+Ends.
+
+Example:
+
+easrc: easrc@300C0000 {
+   compatible = "fsl,imx8mn-easrc";
+   reg = <0x0 0x300C0000 0x0 0x10000>;
+   interrupts = ;
+   clocks = < IMX8MN_CLK_ASRC_ROOT>;
+   clock-names = "mem";
+   dmas = < 16 23 0> , < 17 23 0>,
+  < 18 23 0> , < 19 23 0>,
+  < 20 23 0> , < 21 23 0>,
+  < 22 23 0> , < 23 23 0>;
+   dma-names = "ctx0_rx", "ctx0_tx",
+   "ctx1_rx", "ctx1_tx",
+   "ctx2_rx", "ctx2_tx",
+   "ctx3_rx", "ctx3_tx";
+   fsl,easrc-ram-script-name = "imx/easrc/easrc-imx8mn.bin";
+   fsl,asrc-rate  = <8000>;
+   fsl,asrc-width = <16>;
+   status = "disabled";
+};
-- 
2.21.0



[PATCH 0/3] Add new module driver for new ASRC

2020-02-11 Thread Shengjiu Wang
Add new module driver for new ASRC in i.MX815/865

Shengjiu Wang (3):
  ASoC: fsl_asrc: Move common definition to fsl_asrc_common
  ASoC: dt-bindings: fsl_easrc: Add document for EASRC
  ASoC: fsl_easrc: Add EASRC ASoC CPU DAI and platform drivers

 .../devicetree/bindings/sound/fsl,easrc.txt   |   57 +
 sound/soc/fsl/fsl_asrc.h  |   11 +-
 sound/soc/fsl/fsl_asrc_common.h   |   22 +
 sound/soc/fsl/fsl_easrc.c | 2265 +
 sound/soc/fsl/fsl_easrc.h |  668 +
 sound/soc/fsl/fsl_easrc_dma.c |  440 
 6 files changed, 3453 insertions(+), 10 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/sound/fsl,easrc.txt
 create mode 100644 sound/soc/fsl/fsl_asrc_common.h
 create mode 100644 sound/soc/fsl/fsl_easrc.c
 create mode 100644 sound/soc/fsl/fsl_easrc.h
 create mode 100644 sound/soc/fsl/fsl_easrc_dma.c

-- 
2.21.0



[PATCH 1/3] ASoC: fsl_asrc: Move common definition to fsl_asrc_common

2020-02-11 Thread Shengjiu Wang
There is a new ASRC included in the i.MX series platforms, and some common
definitions can be shared between them. So move the common definitions to a
separate header file.

Signed-off-by: Shengjiu Wang 
---
 sound/soc/fsl/fsl_asrc.h| 11 +--
 sound/soc/fsl/fsl_asrc_common.h | 21 +
 2 files changed, 22 insertions(+), 10 deletions(-)
 create mode 100644 sound/soc/fsl/fsl_asrc_common.h

diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h
index 8a821132d9d0..e8abb27ffeda 100644
--- a/sound/soc/fsl/fsl_asrc.h
+++ b/sound/soc/fsl/fsl_asrc.h
@@ -10,8 +10,7 @@
 #ifndef _FSL_ASRC_H
 #define _FSL_ASRC_H
 
-#define IN 0
-#define OUT1
+#include "fsl_asrc_common.h"
 
 #define ASRC_DMA_BUFFER_NUM2
 #define ASRC_INPUTFIFO_THRESHOLD   32
@@ -283,14 +282,6 @@
 #define ASRMCR1i_OW16_MASK (1 << ASRMCR1i_OW16_SHIFT)
 #define ASRMCR1i_OW16(v)   ((v) << ASRMCR1i_OW16_SHIFT)
 
-
-enum asrc_pair_index {
-   ASRC_INVALID_PAIR = -1,
-   ASRC_PAIR_A = 0,
-   ASRC_PAIR_B = 1,
-   ASRC_PAIR_C = 2,
-};
-
 #define ASRC_PAIR_MAX_NUM  (ASRC_PAIR_C + 1)
 
 enum asrc_inclk {
diff --git a/sound/soc/fsl/fsl_asrc_common.h b/sound/soc/fsl/fsl_asrc_common.h
new file mode 100644
index ..8acc55778ff2
--- /dev/null
+++ b/sound/soc/fsl/fsl_asrc_common.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 NXP
+ *
+ */
+
+#ifndef _FSL_ASRC_COMMON_H
+#define _FSL_ASRC_COMMON_H
+
+/* directions */
+#define IN 0
+#define OUT1
+
+enum asrc_pair_index {
+   ASRC_INVALID_PAIR = -1,
+   ASRC_PAIR_A = 0,
+   ASRC_PAIR_B = 1,
+   ASRC_PAIR_C = 2,
+};
+
+#endif /* _FSL_ASRC_COMMON_H */
-- 
2.21.0



Re: [PATCH v3 1/3] powerpc/tm: Fix clearing MSR[TS] in current when reclaiming on signal delivery

2020-02-11 Thread Michael Neuling
> Found with tm-signal-context-force-tm kernel selftest.
> 
> v3: Subject and comment improvements.
> v2: Fix build failure when tm is disabled.
> 
> Fixes: 2b0a576d15e0 ("powerpc: Add new transactional memory state to the
> signal context")
> Cc: sta...@vger.kernel.org # v3.9
> Signed-off-by: Gustavo Luiz Duarte 

Acked-By: Michael Neuling 


Re: [PATCH v2 04/13] powerpc sstep: Add support for prefixed load/stores

2020-02-11 Thread Jordan Niethe
On Tue, Feb 11, 2020 at 5:05 PM Christophe Leroy
 wrote:
>
>
>
> Le 11/02/2020 à 06:33, Jordan Niethe a écrit :
> > This adds emulation support for the following prefixed integer
> > load/stores:
> >* Prefixed Load Byte and Zero (plbz)
> >* Prefixed Load Halfword and Zero (plhz)
> >* Prefixed Load Halfword Algebraic (plha)
> >* Prefixed Load Word and Zero (plwz)
> >* Prefixed Load Word Algebraic (plwa)
> >* Prefixed Load Doubleword (pld)
> >* Prefixed Store Byte (pstb)
> >* Prefixed Store Halfword (psth)
> >* Prefixed Store Word (pstw)
> >* Prefixed Store Doubleword (pstd)
> >* Prefixed Load Quadword (plq)
> >* Prefixed Store Quadword (pstq)
> >
> > the follow prefixed floating-point load/stores:
> >* Prefixed Load Floating-Point Single (plfs)
> >* Prefixed Load Floating-Point Double (plfd)
> >* Prefixed Store Floating-Point Single (pstfs)
> >* Prefixed Store Floating-Point Double (pstfd)
> >
> > and for the following prefixed VSX load/stores:
> >* Prefixed Load VSX Scalar Doubleword (plxsd)
> >* Prefixed Load VSX Scalar Single-Precision (plxssp)
> >* Prefixed Load VSX Vector [0|1]  (plxv, plxv0, plxv1)
> >* Prefixed Store VSX Scalar Doubleword (pstxsd)
> >* Prefixed Store VSX Scalar Single-Precision (pstxssp)
> >* Prefixed Store VSX Vector [0|1] (pstxv, pstxv0, pstxv1)
> >
> > Signed-off-by: Jordan Niethe 
> > ---
> > v2: - Combine all load/store patches
> >  - Fix the name of Type 01 instructions
> >  - Remove sign extension flag from pstd/pld
> >  - Rename sufx -> suffix
> > ---
> >   arch/powerpc/lib/sstep.c | 165 +++
> >   1 file changed, 165 insertions(+)
> >
> > diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> > index 65143ab1bf64..0e21c21ff2be 100644
> > --- a/arch/powerpc/lib/sstep.c
> > +++ b/arch/powerpc/lib/sstep.c
> > @@ -187,6 +187,44 @@ static nokprobe_inline unsigned long xform_ea(unsigned 
> > int instr,
> >   return ea;
> >   }
> >
> > +/*
> > + * Calculate effective address for a MLS:D-form / 8LS:D-form
> > + * prefixed instruction
> > + */
> > +static nokprobe_inline unsigned long mlsd_8lsd_ea(unsigned int instr,
> > +   unsigned int suffix,
> > +   const struct pt_regs *regs)
> > +{
> > + int ra, prefix_r;
> > + unsigned int  dd;
> > + unsigned long ea, d0, d1, d;
> > +
> > + prefix_r = instr & (1ul << 20);
> > + ra = (suffix >> 16) & 0x1f;
> > +
> > + d0 = instr & 0x3;
> > + d1 = suffix & 0x;
> > + d = (d0 << 16) | d1;
> > +
> > + /*
> > +  * sign extend a 34 bit number
> > +  */
> > + dd = (unsigned int) (d >> 2);
> > + ea = (signed int) dd;
> > + ea = (ea << 2) | (d & 0x3);
> > +
> > + if (!prefix_r && ra)
> > + ea += regs->gpr[ra];
> > + else if (!prefix_r && !ra)
> > + ; /* Leave ea as is */
> > + else if (prefix_r && !ra)
> > + ea += regs->nip;
> > + else if (prefix_r && ra)
> > + ; /* Invalid form. Should already be checked for by caller! */
> > +
> > + return ea;
> > +}
> > +
> >   /*
> >* Return the largest power of 2, not greater than sizeof(unsigned long),
> >* such that x is a multiple of it.
> > @@ -1166,6 +1204,7 @@ int analyse_instr(struct instruction_op *op, const 
> > struct pt_regs *regs,
> > unsigned int instr, unsigned int suffix)
> >   {
> >   unsigned int opcode, ra, rb, rc, rd, spr, u;
> > + unsigned int suffixopcode, prefixtype, prefix_r;
> >   unsigned long int imm;
> >   unsigned long int val, val2;
> >   unsigned int mb, me, sh;
> > @@ -2652,6 +2691,132 @@ int analyse_instr(struct instruction_op *op, const 
> > struct pt_regs *regs,
> >
> >   }
> >
> > +/*
> > + * Prefixed instructions
> > + */
> > + switch (opcode) {
> > + case 1:
>
> Why not include it in the above switch () ?
I was wanting to keep all the prefixed instructions together, but you
are right, these are all load/stores so it would be clearer for them
to go in the Load and Stores switch.
>
> Should it be enclosed by #ifdef __powerpc64__, or will this new ISA also
> apply to 32 bits processors ?
No at this time it will not affect 32bit processors. I will #ifdef it.
>
> > + prefix_r = instr & (1ul << 20);
> > + ra = (suffix >> 16) & 0x1f;
> > + op->update_reg = ra;
> > + rd = (suffix >> 21) & 0x1f;
> > + op->reg = rd;
> > + op->val = regs->gpr[rd];
> > +
> > + suffixopcode = suffix >> 26;
> > + prefixtype = (instr >> 24) & 0x3;
> > + switch (prefixtype) {
> > + case 0: /* Type 00  Eight-Byte Load/Store */
> > + if (prefix_r && ra)
> > + break;
> > + op->ea = mlsd_8lsd_ea(instr, suffix, 

Re: [PATCH v2 06/13] powerpc: Support prefixed instructions in alignment handler

2020-02-11 Thread Jordan Niethe
On Tue, Feb 11, 2020 at 5:14 PM Christophe Leroy
 wrote:
>
>
>
> Le 11/02/2020 à 06:33, Jordan Niethe a écrit :
> > Alignment interrupts can be caused by prefixed instructions accessing
> > memory. In the alignment handler the instruction that caused the
> > exception is loaded and attempted emulate. If the instruction is a
> > prefixed instruction load the prefix and suffix to emulate. After
> > emulating increment the NIP by 8.
> >
> > Prefixed instructions are not permitted to cross 64-byte boundaries. If
> > they do the alignment interrupt is invoked with SRR1 BOUNDARY bit set.
> > If this occurs send a SIGBUS to the offending process if in user mode.
> > If in kernel mode call bad_page_fault().
> >
> > Signed-off-by: Jordan Niethe 
> > ---
> > v2: - Move __get_user_instr() and __get_user_instr_inatomic() to this
> > commit (previously in "powerpc sstep: Prepare to support prefixed
> > instructions").
> >  - Rename sufx to suffix
> >  - Use a macro for calculating instruction length
> > ---
> >   arch/powerpc/include/asm/uaccess.h | 30 ++
> >   arch/powerpc/kernel/align.c|  8 +---
> >   arch/powerpc/kernel/traps.c| 21 -
> >   3 files changed, 55 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/uaccess.h 
> > b/arch/powerpc/include/asm/uaccess.h
> > index 2f500debae21..30f63a81c8d8 100644
> > --- a/arch/powerpc/include/asm/uaccess.h
> > +++ b/arch/powerpc/include/asm/uaccess.h
> > @@ -474,4 +474,34 @@ static __must_check inline bool 
> > user_access_begin(const void __user *ptr, size_t
> >   #define unsafe_copy_to_user(d, s, l, e) \
> >   unsafe_op_wrap(raw_copy_to_user_allowed(d, s, l), e)
> >
>
> Could it go close to other __get_user() and friends instead of being at
> the end of the file ?
Will do.
>
> > +/*
> > + * When reading an instruction iff it is a prefix, the suffix needs to be 
> > also
> > + * loaded.
> > + */
> > +#define __get_user_instr(x, y, ptr)  \
> > +({   \
> > + long __gui_ret = 0; \
> > + y = 0;  \
> > + __gui_ret = __get_user(x, ptr); \
> > + if (!__gui_ret) {   \
> > + if (IS_PREFIX(x))   \
>
> Does this apply to PPC32 ?
No, for now (and the foreseeable future) it will just affect 64s.
> If not, can we make sure IS_PREFIX is constant 0 on PPC32 so that the
> second read gets dropped at compile time ?
>
> Can we instead do :
>
> if (!__gui_ret && IS_PREFIX(x))
Will do.
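The combined macro might then look something like this (a sketch of the
suggested shape, not the posted v3):

#define __get_user_instr(x, y, ptr)			\
({							\
	long __gui_ret;					\
	y = 0;						\
	__gui_ret = __get_user(x, ptr);			\
	if (!__gui_ret && IS_PREFIX(x))			\
		__gui_ret = __get_user(y, ptr + 1);	\
	__gui_ret;					\
})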
>
> > + __gui_ret = __get_user(y, ptr + 1); \
> > + }   \
> > + \
> > + __gui_ret;  \
> > +})
> > +
> > +#define __get_user_instr_inatomic(x, y, ptr) \
> > +({   \
> > + long __gui_ret = 0; \
> > + y = 0;  \
> > + __gui_ret = __get_user_inatomic(x, ptr);\
> > + if (!__gui_ret) {   \
> > + if (IS_PREFIX(x))   \
>
> Same commments as above
>
> > + __gui_ret = __get_user_inatomic(y, ptr + 1);\
> > + }   \
> > + \
> > + __gui_ret;  \
> > +})
> > +
> >   #endif  /* _ARCH_POWERPC_UACCESS_H */
> > diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
> > index ba3bf5c3ab62..e42cfaa616d3 100644
> > --- a/arch/powerpc/kernel/align.c
> > +++ b/arch/powerpc/kernel/align.c
> > @@ -293,7 +293,7 @@ static int emulate_spe(struct pt_regs *regs, unsigned 
> > int reg,
> >
> >   int fix_alignment(struct pt_regs *regs)
> >   {
> > - unsigned int instr;
> > + unsigned int instr, suffix;
> >   struct instruction_op op;
> >   int r, type;
> >
> > @@ -303,13 +303,15 @@ int fix_alignment(struct pt_regs *regs)
> >*/
> >   CHECK_FULL_REGS(regs);
> >
> > - if (unlikely(__get_user(instr, (unsigned int __user *)regs->nip)))
> > + if (unlikely(__get_user_instr(instr, suffix,
> > +  (unsigned int __user *)regs->nip)))
> >   return -EFAULT;
> >   if ((regs->msr & MSR_LE) != (MSR_KERNEL & MSR_LE)) {
> >   /* We don't handle PPC little-endian any more... */
> >   if (cpu_has_feature(CPU_FTR_PPC_LE))
> >   return -EIO;
> >   instr = swab32(instr);
> > + suffix = swab32(suffix);
> >   }
> >
> >   #ifdef CONFIG_SPE
> > @@ -334,7 +336,7 @@ int fix_alignment(struct pt_regs *regs)
> >   if ((instr & 0xfc0006fe) == 

Re: [PATCH v2 10/13] powerpc/kprobes: Support kprobes on prefixed instructions

2020-02-11 Thread Jordan Niethe
On Tue, Feb 11, 2020 at 5:46 PM Christophe Leroy
 wrote:
>
>
>
> Le 11/02/2020 à 06:33, Jordan Niethe a écrit :
> > A prefixed instruction is composed of a word prefix followed by a word
> > suffix. It does not make sense to be able to have a kprobe on the suffix
> > of a prefixed instruction, so make this impossible.
> >
> > Kprobes work by replacing an instruction with a trap and saving that
> > instruction to be single stepped out of place later. Currently there is
> > not enough space allocated to keep a prefixed instruction for single
> > stepping. Increase the amount of space allocated for holding the
> > instruction copy.
> >
> > kprobe_post_handler() expects all instructions to be 4 bytes long which
> > means that it does not function correctly for prefixed instructions.
> > Add checks for prefixed instructions which will use a length of 8 bytes
> > instead.
> >
> > For optprobes we normally patch in loading the instruction we put a
> > probe on into r4 before calling emulate_step(). We now make space and
> > patch in loading the suffix into r5 as well.
> >
> > Signed-off-by: Jordan Niethe 
> > ---
> >   arch/powerpc/include/asm/kprobes.h   |  5 +--
> >   arch/powerpc/kernel/kprobes.c| 47 +---
> >   arch/powerpc/kernel/optprobes.c  | 32 ++-
> >   arch/powerpc/kernel/optprobes_head.S |  6 
> >   4 files changed, 63 insertions(+), 27 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/kprobes.h 
> > b/arch/powerpc/include/asm/kprobes.h
> > index 66b3f2983b22..0d44ce8a3163 100644
> > --- a/arch/powerpc/include/asm/kprobes.h
> > +++ b/arch/powerpc/include/asm/kprobes.h
> > @@ -38,12 +38,13 @@ extern kprobe_opcode_t optprobe_template_entry[];
> >   extern kprobe_opcode_t optprobe_template_op_address[];
> >   extern kprobe_opcode_t optprobe_template_call_handler[];
> >   extern kprobe_opcode_t optprobe_template_insn[];
> > +extern kprobe_opcode_t optprobe_template_suffix[];
> >   extern kprobe_opcode_t optprobe_template_call_emulate[];
> >   extern kprobe_opcode_t optprobe_template_ret[];
> >   extern kprobe_opcode_t optprobe_template_end[];
> >
> > -/* Fixed instruction size for powerpc */
> > -#define MAX_INSN_SIZE1
> > +/* Prefixed instructions are two words */
> > +#define MAX_INSN_SIZE2
> >   #define MAX_OPTIMIZED_LENGTHsizeof(kprobe_opcode_t) /* 4 bytes */
> >   #define MAX_OPTINSN_SIZE(optprobe_template_end - 
> > optprobe_template_entry)
> >   #define RELATIVEJUMP_SIZE   sizeof(kprobe_opcode_t) /* 4 bytes */
> > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > index 24a56f062d9e..b061deba4fe7 100644
> > --- a/arch/powerpc/kernel/kprobes.c
> > +++ b/arch/powerpc/kernel/kprobes.c
> > @@ -104,17 +104,30 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, 
> > unsigned int offset)
> >
> >   int arch_prepare_kprobe(struct kprobe *p)
> >   {
> > + int len;
> >   int ret = 0;
> > + struct kprobe *prev;
> >   kprobe_opcode_t insn = *p->addr;
> > + kprobe_opcode_t prefix = *(p->addr - 1);
> >
> > + preempt_disable();
> >   if ((unsigned long)p->addr & 0x03) {
> >   printk("Attempt to register kprobe at an unaligned 
> > address\n");
> >   ret = -EINVAL;
> >   } else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) {
> >   printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n");
> >   ret = -EINVAL;
> > + } else if (IS_PREFIX(prefix)) {
> > + printk("Cannot register a kprobe on the second word of 
> > prefixed instruction\n");
> > + ret = -EINVAL;
> > + }
> > + prev = get_kprobe(p->addr - 1);
> > + if (prev && IS_PREFIX(*prev->ainsn.insn)) {
> > + printk("Cannot register a kprobe on the second word of 
> > prefixed instruction\n");
> > + ret = -EINVAL;
> >   }
> >
> > +
> >   /* insn must be on a special executable page on ppc64.  This is
> >* not explicitly required on ppc32 (right now), but it doesn't hurt 
> > */
> >   if (!ret) {
> > @@ -124,14 +137,18 @@ int arch_prepare_kprobe(struct kprobe *p)
> >   }
> >
> >   if (!ret) {
> > - memcpy(p->ainsn.insn, p->addr,
> > - MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
> > + if (IS_PREFIX(insn))
> > + len = MAX_INSN_SIZE * sizeof(kprobe_opcode_t);
> > + else
> > + len = sizeof(kprobe_opcode_t);
> > + memcpy(p->ainsn.insn, p->addr, len);
>
> This code is about to get changed, see
> https://patchwork.ozlabs.org/patch/1232619/
Ah thank you for the heads up.
>
> >   p->opcode = *p->addr;
> >   flush_icache_range((unsigned long)p->ainsn.insn,
> >   (unsigned long)p->ainsn.insn + 
> > sizeof(kprobe_opcode_t));
> >   }
> >
> >   p->ainsn.boostable = 0;
> > + 

Re: [PATCH v2 09/13] powerpc/xmon: Dump prefixed instructions

2020-02-11 Thread Jordan Niethe
On Tue, Feb 11, 2020 at 5:39 PM Christophe Leroy
 wrote:
>
>
>
> Le 11/02/2020 à 06:33, Jordan Niethe a écrit :
> > Currently when xmon is dumping instructions it reads a word at a time
> > and then prints that instruction (either as a hex number or by
> > disassembling it). For prefixed instructions it would be nice to show
> > its prefix and suffix as together. Use read_instr() so that if a prefix
> > is encountered its suffix is loaded too. Then print these in the form:
> >  prefix:suffix
> > Xmon uses the disassembly routines from GNU binutils. These currently do
> > not support prefixed instructions so we will not disassemble the
> > prefixed instructions yet.
> >
> > Signed-off-by: Jordan Niethe 
> > ---
> > v2: Rename sufx to suffix
> > ---
> >   arch/powerpc/xmon/xmon.c | 50 +++-
> >   1 file changed, 39 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
> > index 0b085642bbe7..513901ee18b0 100644
> > --- a/arch/powerpc/xmon/xmon.c
> > +++ b/arch/powerpc/xmon/xmon.c
> > @@ -2903,6 +2903,21 @@ prdump(unsigned long adrs, long ndump)
> >   }
> >   }
> >
> > +static bool instrs_are_equal(unsigned long insta, unsigned long suffixa,
> > +  unsigned long instb, unsigned long suffixb)
> > +{
> > + if (insta != instb)
> > + return false;
> > +
> > + if (!IS_PREFIX(insta) && !IS_PREFIX(instb))
> > + return true;
> > +
> > + if (IS_PREFIX(insta) && IS_PREFIX(instb))
> > + return suffixa == suffixb;
> > +
> > + return false;
> > +}
> > +
> >   typedef int (*instruction_dump_func)(unsigned long inst, unsigned long 
> > addr);
> >
> >   static int
> > @@ -2911,12 +2926,11 @@ generic_inst_dump(unsigned long adr, long count, 
> > int praddr,
> >   {
> >   int nr, dotted;
> >   unsigned long first_adr;
> > - unsigned int inst, last_inst = 0;
> > - unsigned char val[4];
> > + unsigned int inst, suffix, last_inst = 0, last_suffix = 0;
> >
> >   dotted = 0;
> > - for (first_adr = adr; count > 0; --count, adr += 4) {
> > - nr = mread(adr, val, 4);
> > + for (first_adr = adr; count > 0; --count, adr += nr) {
> > + nr = read_instr(adr, , );
> >   if (nr == 0) {
> >   if (praddr) {
> >   const char *x = fault_chars[fault_type];
> > @@ -2924,8 +2938,9 @@ generic_inst_dump(unsigned long adr, long count, int 
> > praddr,
> >   }
> >   break;
> >   }
> > - inst = GETWORD(val);
> > - if (adr > first_adr && inst == last_inst) {
> > + if (adr > first_adr && instrs_are_equal(inst, suffix,
> > + last_inst,
> > + last_suffix)) {
> >   if (!dotted) {
> >   printf(" ...\n");
> >   dotted = 1;
> > @@ -2934,11 +2949,24 @@ generic_inst_dump(unsigned long adr, long count, 
> > int praddr,
> >   }
> >   dotted = 0;
> >   last_inst = inst;
> > - if (praddr)
> > - printf(REG"  %.8x", adr, inst);
> > - printf("\t");
> > - dump_func(inst, adr);
> > - printf("\n");
> > + last_suffix = suffix;
> > + if (IS_PREFIX(inst)) {
> > + if (praddr)
> > + printf(REG"  %.8x:%.8x", adr, inst, suffix);
> > + printf("\t");
> > + /*
> > +  * Just use this until binutils ppc disassembly
> > +  * prints prefixed instructions.
> > +  */
> > + printf("%.8x:%.8x", inst, suffix);
> > + printf("\n");
> > + } else {
> > + if (praddr)
> > + printf(REG"  %.8x", adr, inst);
> > + printf("\t");
> > + dump_func(inst, adr);
> > + printf("\n");
> > + }
>
> What about:
>
>
> if (pr_addr) {
> printf(REG"  %.8x", adr, inst);
> if (IS_PREFIX(inst))
> printf(":%.8x", suffix);
> }
> printf("\t");
> if (IS_PREFIX(inst))
> printf("%.8x:%.8x", inst, suffix);
> else
> dump_func(inst, adr);
> printf("\n");
>
Yeah that looks better.
> >   }
> >   return adr - first_adr;
> >   }
> >
>
> Christophe


Re: [PATCH v2 08/13] powerpc/xmon: Add initial support for prefixed instructions

2020-02-11 Thread Jordan Niethe
On Tue, Feb 11, 2020 at 5:32 PM Christophe Leroy
 wrote:
>
>
>
> Le 11/02/2020 à 06:33, Jordan Niethe a écrit :
> > A prefixed instruction is composed of a word prefix and a word suffix.
> > It does not make sense to be able to have a breakpoint on the suffix of
> > a prefixed instruction, so make this impossible.
> >
> > When leaving xmon_core() we check to see if we are currently at a
> > breakpoint. If this is the case, the breakpoint needs to be proceeded
> > from. Initially emulate_step() is tried, but if this fails then we need
> > to execute the saved instruction out of line. The NIP is set to the
> > address of bpt::instr[] for the current breakpoint.  bpt::instr[]
> > contains the instruction replaced by the breakpoint, followed by a trap
> > instruction.  After bpt::instr[0] is executed and we hit the trap we
> > enter back into xmon_bpt(). We know that if we got here and the offset
> > indicates we are at bpt::instr[1] then we have just executed out of line
> > so we can put the NIP back to the instruction after the breakpoint
> > location and continue on.
> >
> > Adding prefixed instructions complicates this as the bpt::instr[1] needs
> > to be used to hold the suffix. To deal with this make bpt::instr[] big
> > enough for three word instructions.  bpt::instr[2] contains the trap,
> > and in the case of word instructions pad bpt::instr[1] with a noop.
> >
> > No support for disassembling prefixed instructions.
> >
> > Signed-off-by: Jordan Niethe 
> > ---
> > v2: Rename sufx to suffix
> > ---
> >   arch/powerpc/xmon/xmon.c | 82 ++--
> >   1 file changed, 71 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
> > index 897e512c6379..0b085642bbe7 100644
> > --- a/arch/powerpc/xmon/xmon.c
> > +++ b/arch/powerpc/xmon/xmon.c
> > @@ -97,7 +97,8 @@ static long *xmon_fault_jmp[NR_CPUS];
> >   /* Breakpoint stuff */
> >   struct bpt {
> >   unsigned long   address;
> > - unsigned intinstr[2];
> > + /* Prefixed instructions can not cross 64-byte boundaries */
> > + unsigned intinstr[3] __aligned(64);
> >   atomic_tref_count;
> >   int enabled;
> >   unsigned long   pad;
> > @@ -113,6 +114,7 @@ static struct bpt bpts[NBPTS];
> >   static struct bpt dabr;
> >   static struct bpt *iabr;
> >   static unsigned bpinstr = 0x7fe00008;   /* trap */
> > +static unsigned nopinstr = 0x60000000;   /* nop */
>
> Use PPC_INST_NOP instead of 0x60000000
>
> And this nopinstr variable will never change. Why not use directly
> PPC_INST_NOP  in the code ?
True, I will do that.
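i.e. something like the following (a sketch; the exact call site may differ):

	/* pad the unused slot of a word instruction with a plain nop */
	patch_instruction(bp->instr + 1, PPC_INST_NOP);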
>
> >
> >   #define BP_NUM(bp)  ((bp) - bpts + 1)
> >
> > @@ -120,6 +122,7 @@ static unsigned bpinstr = 0x7fe8; /* trap */
> >   static int cmds(struct pt_regs *);
> >   static int mread(unsigned long, void *, int);
> >   static int mwrite(unsigned long, void *, int);
> > +static int read_instr(unsigned long, unsigned int *, unsigned int *);
> >   static int handle_fault(struct pt_regs *);
> >   static void byterev(unsigned char *, int);
> >   static void memex(void);
> > @@ -706,7 +709,7 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
> >   bp = at_breakpoint(regs->nip);
> >   if (bp != NULL) {
> >   int stepped = emulate_step(regs, bp->instr[0],
> > -PPC_NO_SUFFIX);
> > +bp->instr[1]);
> >   if (stepped == 0) {
> >   regs->nip = (unsigned long) >instr[0];
> >   atomic_inc(>ref_count);
> > @@ -761,8 +764,8 @@ static int xmon_bpt(struct pt_regs *regs)
> >
> >   /* Are we at the trap at bp->instr[1] for some bp? */
> >   bp = in_breakpoint_table(regs->nip, );
> > - if (bp != NULL && offset == 4) {
> > - regs->nip = bp->address + 4;
> > + if (bp != NULL && (offset == 4 || offset == 8)) {
> > + regs->nip = bp->address + offset;
> >   atomic_dec(>ref_count);
> >   return 1;
> >   }
> > @@ -864,7 +867,8 @@ static struct bpt *in_breakpoint_table(unsigned long 
> > nip, unsigned long *offp)
> >   return NULL;
> >   off %= sizeof(struct bpt);
> >   if (off != offsetof(struct bpt, instr[0])
> > - && off != offsetof(struct bpt, instr[1]))
> > + && off != offsetof(struct bpt, instr[1])
> > + && off != offsetof(struct bpt, instr[2]))
> >   return NULL;
> >   *offp = off - offsetof(struct bpt, instr[0]);
> >   return (struct bpt *) (nip - off);
> > @@ -881,9 +885,18 @@ static struct bpt *new_breakpoint(unsigned long a)
> >
> >   for (bp = bpts; bp < [NBPTS]; ++bp) {
> >   if (!bp->enabled && atomic_read(>ref_count) == 0) {
> > + /*
> > +  * Prefixed instructions are two words, but regular
> > +  

Re: [PATCH v2 03/13] powerpc sstep: Prepare to support prefixed instructions

2020-02-11 Thread Jordan Niethe
On Tue, Feb 11, 2020 at 4:57 PM Christophe Leroy
 wrote:
>
>
>
> Le 11/02/2020 à 06:33, Jordan Niethe a écrit :
> > Currently all instructions are a single word long. A future ISA version
> > will include prefixed instructions which have a double word length. The
> > functions used for analysing and emulating instructions need to be
> > modified so that they can handle these new instruction types.
> >
> > A prefixed instruction is a word prefix followed by a word suffix. All
> > prefixes uniquely have the primary op-code 1. Suffixes may be valid word
> > instructions or instructions that only exist as suffixes.
> >
> > In handling prefixed instructions it will be convenient to treat the
> > suffix and prefix as separate words. To facilitate this modify
> > analyse_instr() and emulate_step() to take a suffix as a
> > parameter. For word instructions it does not matter what is passed in
> > here - it will be ignored.
> >
> > We also define a new flag, PREFIXED, to be used in instruction_op:type.
> > This flag will indicate when emulating an analysed instruction if the
> > NIP should be advanced by word length or double word length.
> >
> > The callers of analyse_instr() and emulate_step() will need their own
> > changes to be able to support prefixed instructions. For now modify them
> > to pass in 0 as a suffix.
> >
> > Note that at this point no prefixed instructions are emulated or
> > analysed - this is just making it possible to do so.
> >
> > Signed-off-by: Jordan Niethe 
> > ---
> > v2: - Move definition of __get_user_instr() and
> > __get_user_instr_inatomic() to "powerpc: Support prefixed instructions
> > in alignment handler."
> >  - Use a macro for returning the length of an op
> >  - Rename sufx -> suffix
> >  - Define and use PPC_NO_SUFFIX instead of 0
> > ---
> >   arch/powerpc/include/asm/ppc-opcode.h |  5 +
> >   arch/powerpc/include/asm/sstep.h  |  9 ++--
> >   arch/powerpc/kernel/align.c   |  2 +-
> >   arch/powerpc/kernel/hw_breakpoint.c   |  4 ++--
> >   arch/powerpc/kernel/kprobes.c |  2 +-
> >   arch/powerpc/kernel/mce_power.c   |  2 +-
> >   arch/powerpc/kernel/optprobes.c   |  3 ++-
> >   arch/powerpc/kernel/uprobes.c |  2 +-
> >   arch/powerpc/kvm/emulate_loadstore.c  |  2 +-
> >   arch/powerpc/lib/sstep.c  | 12 ++-
> >   arch/powerpc/lib/test_emulate_step.c  | 30 +--
> >   arch/powerpc/xmon/xmon.c  |  5 +++--
> >   12 files changed, 46 insertions(+), 32 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/ppc-opcode.h 
> > b/arch/powerpc/include/asm/ppc-opcode.h
> > index c1df75edde44..72783bc92e50 100644
> > --- a/arch/powerpc/include/asm/ppc-opcode.h
> > +++ b/arch/powerpc/include/asm/ppc-opcode.h
> > @@ -377,6 +377,11 @@
> >   #define PPC_INST_VCMPEQUD   0x10c7
> >   #define PPC_INST_VCMPEQUB   0x1006
> >
> > +/* macro to check if a word is a prefix */
> > +#define IS_PREFIX(x) (((x) >> 26) == 1)
>
> Can you add an OP_PREFIX in the OP list and use it instead of '1' ?
Will do.
>
> > +#define  PPC_NO_SUFFIX   0
> > +#define  PPC_INST_LENGTH(x)  (IS_PREFIX(x) ? 8 : 4)
> > +
> >   /* macros to insert fields into opcodes */
> >   #define ___PPC_RA(a)(((a) & 0x1f) << 16)
> >   #define ___PPC_RB(b)(((b) & 0x1f) << 11)
> > diff --git a/arch/powerpc/include/asm/sstep.h 
> > b/arch/powerpc/include/asm/sstep.h
> > index 769f055509c9..9ea8904a1549 100644
> > --- a/arch/powerpc/include/asm/sstep.h
> > +++ b/arch/powerpc/include/asm/sstep.h
> > @@ -89,11 +89,15 @@ enum instruction_type {
> >   #define VSX_LDLEFT  4   /* load VSX register from left */
> >   #define VSX_CHECK_VEC   8   /* check MSR_VEC not MSR_VSX for reg 
> > >= 32 */
> >
> > +/* Prefixed flag, ORed in with type */
> > +#define PREFIXED 0x800
> > +
> >   /* Size field in type word */
> >   #define SIZE(n) ((n) << 12)
> >   #define GETSIZE(w)  ((w) >> 12)
> >
> >   #define GETTYPE(t)  ((t) & INSTR_TYPE_MASK)
> > +#define OP_LENGTH(t) (((t) & PREFIXED) ? 8 : 4)
>
> Is it worth naming it OP_LENGTH ? Can't it be mistaken as one of the
> OP_xxx from the list in asm/opcode.h ?
>
> What about GETLENGTH() instead to be consistent with the above lines ?
Good point, will do.
>
> Christophe
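
A minimal sketch of what the macros discussed above could look like with the
suggested names (OP_PREFIX for the bare '1', GETLENGTH() for OP_LENGTH());
illustrative only, not the actual v3 patch:

/* Sketch based on the review comments above; assumed, not final. */
#define OP_PREFIX               1

/* A word is a prefix when its primary op-code is OP_PREFIX. */
#define IS_PREFIX(x)            (((x) >> 26) == OP_PREFIX)

#define PPC_NO_SUFFIX           0
#define PPC_INST_LENGTH(x)      (IS_PREFIX(x) ? 8 : 4)

/* Prefixed flag, ORed in with type */
#define PREFIXED                0x800

/* Named like GETTYPE()/GETSIZE() for consistency. */
#define GETLENGTH(t)            (((t) & PREFIXED) ? 8 : 4)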


[Bug 206501] Kernel 5.6-rc1 fails to boot on a PowerMac G4 3,6 with CONFIG_VMAP_STACK=y: Oops! Machine check, sig: 7 [#1]

2020-02-11 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=206501

--- Comment #1 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 287313
  --> https://bugzilla.kernel.org/attachment.cgi?id=287313&action=edit
kernel .config (5.6.0-rc1, PowerMac G4 DP)

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

[Bug 206501] New: Kernel 5.6-rc1 fails to boot on a PowerMac G4 3,6 with CONFIG_VMAP_STACK=y: Oops! Machine check, sig: 7 [#1]

2020-02-11 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=206501

Bug ID: 206501
   Summary: Kernel 5.6-rc1 fails to boot on a PowerMac G4 3,6 with
CONFIG_VMAP_STACK=y: Oops! Machine check, sig: 7  [#1]
   Product: Platform Specific/Hardware
   Version: 2.5
Kernel Version: 5.6.0-rc1
  Hardware: PPC-32
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: PPC-32
  Assignee: platform_ppc...@kernel-bugs.osdl.org
  Reporter: erhar...@mailbox.org
Regression: No

Created attachment 287311
  --> https://bugzilla.kernel.org/attachment.cgi?id=287311&action=edit
screenshot

The G4 boots fine with CONFIG_VMAP_STACK=n, but fails to boot with
CONFIG_VMAP_STACK=y.

[...]
NIP [c001c194] create_hpte+0xa8/0x120
LR [c001c0c4] add_hash_page+0x88/0xb0
Call Trace:
[f101dde8] [c0181568] alloc_set_pte+0x184/0x214 (unreliable)
[f101de18] [c014d168] filemap_map_pages+0x21c/0x250
[f101de68] [c0181cf4] handle_mm_fault+0x66c/0x90c
[f101dee8] [c0019aac] do_page_fault+0x690/0x804
[f101df38] [c0014450] handle_page_fault+0x10/0x3c
--- interrupt: 401 at 0xb77ffd10
LR = 0x0
Instruction dump:
6c64003f 6884ffx0 3884fff8 7c0903a6 84x40008 7c062800 4002fff8 41a2008c
68a50040 7c0903a6 3883fff8 84c40008 <54c60001> 4002fff8 41a20070 3c80c08e
---[ end trace cd24dd23c7db9d53 ]---

Machine check in kernel mode.
Caused by (from SRR1=141020): Transfer error ack signal
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0007

(OCRed screenshot + corrections by hand)

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

Re: Problem booting a PowerBook G4 Aluminum after commit cd08f109 with CONFIG_VMAP_STACK=y

2020-02-11 Thread Christophe Leroy




Le 11/02/2020 à 17:06, Larry Finger a écrit :

On 2/11/20 12:55 AM, Christophe Leroy wrote:



Le 10/02/2020 à 13:55, Larry Finger a écrit :

On 2/9/20 12:19 PM, Christophe Leroy wrote:

Do you have CONFIG_TRACE_IRQFLAGS in your config ?
If so, can you try the patch below ?

https://patchwork.ozlabs.org/patch/1235081/

Otherwise, can you send me your .config and tell me exactly where it 
stops during the boot.


Christophe,

That patch did not work. My .config is attached.

It does boot if CONFIG_VMAP_STACK is not set.

The console display ends with the "DMA ranges" output. A screen shot 
is also appended.


Larry



Hi,

I tried your config under QEMU, it works.

In fact your console display is looping on itself, it ends at "printk: 
bootconsole [udbg0] disabled".


Looks like you get stuck at the time of switching to graphic mode. 
Need to understand why.


I'm not surprised that a real G4 differs from QEMU. For one thing, the 
real hardware uses i2c to connect to the graphics hardware.


I realized that the screen was not scrolling and output was missing. To 
see what was missed, I added a call to btext_clearscreen(). As you 
noted, it ends at the bootconsole disabled statement.


As I could not find any console output after that point, I then turned 
off the bootconsole disable. I realize this action may cause a different 
problem, but in this configuration, the computer hit a BUG Unable to 
handle kernel data access at 0x007a84fc. The faulting instruction 
address was 0x00013674. Those addresses look like physical, not virtual, 
addresses.




Can you send me a picture of that BUG Unable to handle kernel data 
access with all the register values etc., together with the matching 
vmlinux ?


First thing is to identify where we are when that happens. That means seeing 
what is at 0xc0013674. That can be done with 'ppc-linux-objdump -d vmlinux' 
(or whatever your PPC objdump is named) to get the function code.


Then we need to understand how we reach that function and why it tries 
to access a physical address.



Another thing I'm thinking about, not necessarily related to that 
problem: Some buggy drivers do DMA from stack. This doesn't work anymore 
with CONFIG_VMAP_STACK. Most of them can be detected with 
CONFIG_DEBUG_VIRTUAL so you should activate it.


Christophe
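
To illustrate the DMA-from-stack point above, a hedged sketch (the function
names are made up, not from this thread): with CONFIG_VMAP_STACK the kernel
stack lives in vmalloc space, so mapping an on-stack buffer for DMA is no
longer valid, and CONFIG_DEBUG_VIRTUAL warns when such an address gets
translated.

/* Hypothetical driver fragment, for illustration only. */
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int buggy_read(struct device *dev)
{
        u8 buf[64];                             /* BAD: DMA buffer on the stack */
        dma_addr_t handle;

        handle = dma_map_single(dev, buf, sizeof(buf), DMA_FROM_DEVICE);
        /* ... start transfer, wait, dma_unmap_single() ... */
        return 0;
}

static int fixed_read(struct device *dev)
{
        u8 *buf = kmalloc(64, GFP_KERNEL);      /* OK: linearly mapped memory */
        dma_addr_t handle;

        if (!buf)
                return -ENOMEM;
        handle = dma_map_single(dev, buf, 64, DMA_FROM_DEVICE);
        /* ... start transfer, wait, dma_unmap_single() ... */
        kfree(buf);
        return 0;
}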


Re: [PATCH v2] libnvdimm: Update persistence domain value for of_pmem and papr_scm device

2020-02-11 Thread Dan Williams
On Tue, Feb 11, 2020 at 6:57 AM Aneesh Kumar K.V
 wrote:
>
> On 2/10/20 11:48 PM, Dan Williams wrote:
> > On Mon, Feb 10, 2020 at 6:20 AM Aneesh Kumar K.V
> >  wrote:
> >>
> >> Dan Williams  writes:
> >>
> >>> On Tue, Feb 4, 2020 at 9:21 PM Aneesh Kumar K.V
> >>>  wrote:
> 
>  Currently, kernel shows the below values
>   "persistence_domain":"cpu_cache"
>   "persistence_domain":"memory_controller"
>   "persistence_domain":"unknown"
> 
>  "cpu_cache" indicates no extra instructions is needed to ensure the 
>  persistence
>  of data in the pmem media on power failure.
> 
>  "memory_controller" indicates platform provided instructions need to be 
>  issued
> >>>
> >>> No, it does not. The only requirement implied by "memory_controller"
> >>> is global visibility outside the cpu cache. If there are special
> >>> instructions beyond that then it isn't persistent memory, at least not
> >>> pmem that is safe for dax. virtio-pmem is an example of pmem-like
> >>> memory that is not enabled for userspace flushing (MAP_SYNC disabled).
> >>>
> >>
> >> Can you explain this more? The way I was expecting the application to
> >> interpret the value was, a regular store instruction doesn't guarantee
> >> persistence if you find the "memory_controller" value for
> >> persistence_domain. Instead, we need to make sure we flush data to the
> >> controller at which point the platform will take care of the persistence in
> >> case of power loss. How we flush data to the controller will also be
> >> defined by the platform.
> >
> > If the platform requires any flush mechanism outside of the base cpu
> > ISA of cache flushes and memory barriers then MAP_SYNC needs to be
> > explicitly disabled to force the application to call fsync()/msync().
> > Then those platform specific mechanisms need to be triggered through a
> > platform-aware driver.
> >
>
>
> Agreed. I was thinking we mark the persistence_domain: "Unknown" in that
> case. virtio-pmem marks it that way.

I would say the driver requirement case is persistence_domain "None",
not "Unknown". I.e. the platform provides no mechanism to flush data
to the persistence domain on power loss, it's back to typical storage
semantics.

>
>
> >>
> >>
>  as per documented sequence to make sure data get flushed so that it is
>  guaranteed to be on pmem media in case of system power loss.
> 
>  Based on the above use memory_controller for non volatile regions on 
>  ppc64.
> 
>  Signed-off-by: Aneesh Kumar K.V 
>  ---
>    arch/powerpc/platforms/pseries/papr_scm.c | 7 ++-
>    drivers/nvdimm/of_pmem.c  | 4 +++-
>    include/linux/libnvdimm.h | 1 -
>    3 files changed, 9 insertions(+), 3 deletions(-)
> 
>  diff --git a/arch/powerpc/platforms/pseries/papr_scm.c 
>  b/arch/powerpc/platforms/pseries/papr_scm.c
>  index 7525635a8536..ffcd0d7a867c 100644
>  --- a/arch/powerpc/platforms/pseries/papr_scm.c
>  +++ b/arch/powerpc/platforms/pseries/papr_scm.c
>  @@ -359,8 +359,13 @@ static int papr_scm_nvdimm_init(struct 
>  papr_scm_priv *p)
> 
>   if (p->is_volatile)
>   p->region = nvdimm_volatile_region_create(p->bus, 
>  &ndr_desc);
>  -   else
>  +   else {
>  +   /*
>  +* We need to flush things correctly to guarantee 
>  persistance
>  +*/
> >>>
> >>> There are never guarantees. If you're going to comment, what does
> >>> software need to flush, and how?
> >>
> >> Can you explain why you say there are never guarantees? If you follow the 
> >> platform
> >> recommended instruction sequence to flush data, we can be sure of data
> >> persistence in the pmem media.
> >
> > Because storage can always fail. You can reduce risk, but never
> > eliminate it. This is similar to SSDs that use latent capacitance to
> > flush their write caches on drive power loss. Even if the application
> > successfully flushes its writes to buffers that are protected by that
> > capacitance, that power source can still (and in practice does) fail.
> >
>
> ok guarantee is not the right term there. Can we say
>
> /* We need to flush things correctly to ensure persistence */

The definition of the "memory_controller" persistence domain is: "the
platform takes care to flush writes to media once they are globally
visible outside the cache".

>
>
> What I was trying to understand/clarify was the detail an application
> can infer by looking at the value of persistence_domain?
>
> Do you agree that below can be inferred from the "memory_controller"
> value of persistence_domain
>
> 1) Application needs to use cache flush instructions and that ensures
> data is persistent across power failure.
>
>
> Or are you suggesting that application should not infer any of those
> details looking at persistence_domain value? If so what is the purpose
> of exporting that attribute?
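
As background for the MAP_SYNC point earlier in this thread, a hedged
userspace sketch (illustrative only, not from the thread): when the kernel
refuses a MAP_SYNC mapping (as it does for virtio-pmem, where flushed CPU
stores alone cannot reach the persistence domain), the application has to
fall back to msync()/fsync() instead of relying on cache flush instructions.

/* Illustrative only.  Needs MAP_SYNC/MAP_SHARED_VALIDATE (glibc >= 2.28,
 * or pull the definitions from <linux/mman.h>). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static int write_persistent(const char *path, const char *msg, size_t len)
{
        int fd = open(path, O_RDWR);
        if (fd < 0)
                return -1;

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        int have_sync = (p != MAP_FAILED);

        if (!have_sync)         /* MAP_SYNC refused: plain mapping + msync() */
                p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                close(fd);
                return -1;
        }

        memcpy(p, msg, strlen(msg) + 1);

        if (have_sync) {
                /* CPU stores plus cache flushes reach the persistence domain;
                 * a library such as libpmem would flush the lines here. */
        } else {
                msync(p, len, MS_SYNC);  /* let the kernel flush for us */
        }

        munmap(p, len);
        close(fd);
        return 0;
}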

Re: Problem booting a PowerBook G4 Aluminum after commit cd08f109 with CONFIG_VMAP_STACK=y

2020-02-11 Thread Larry Finger

On 2/11/20 12:55 AM, Christophe Leroy wrote:



Le 10/02/2020 à 13:55, Larry Finger a écrit :

On 2/9/20 12:19 PM, Christophe Leroy wrote:

Do you have CONFIG_TRACE_IRQFLAGS in your config ?
If so, can you try the patch below ?

https://patchwork.ozlabs.org/patch/1235081/

Otherwise, can you send me your .config and tell me exactly where it stops 
during the boot.


Christophe,

That patch did not work. My .config is attached.

It does boot if CONFIG_VMAP_STACK is not set.

The console display ends with the "DMA ranges" output. A screen shot is also 
appended.


Larry



Hi,

I tried your config under QEMU, it works.

In fact your console display is looping on itself, it ends at "printk: 
bootconsole [udbg0] disabled".


Looks like you get stuck at the time of switching to graphic mode. Need to 
understand why.


I'm not surprised that a real G4 differs from QEMU. For one thing, the real 
hardware uses i2c to connect to the graphics hardware.


I realized that the screen was not scrolling and output was missing. To see what 
was missed, I added a call to btext_clearscreen(). As you noted, it ends at the 
bootconsole disabled statement.


As I could not find any console output after that point, I then turned off the 
bootconsole disable. I realize this action may cause a different problem, but in 
this configuration, the computer hit a BUG Unable to handle kernel data access 
at 0x007a84fc. The faulting instruction address was 0x00013674. Those addresses 
look like physical, not virtual, addresses.


I then added pr_info statements to bracket the failure. In file 
drivers/video/fbdev/core/fb_ddc.c, the code reaches line 66, which is

algo_data->setsda(algo_data->data, 1);
Both pointers seem OK with algo_data = 0xeedfb4bc, and algo_data->data = 
0xeedb25c. The code faults before returning. I then annotated that callback 
routine radeon_gpio_setsda(), and found that execution is OK to the end of the 
routine, but the fault happens on the return from this routine as though the 
stack were corrupted.


I will be busy for about 8 hours, but if you can think of any debugging I can do 
on this routine, please let me know.


Thanks,

Larry
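
For anyone reproducing this, the bracketing Larry describes is just wrapping
the suspect call in trace prints, roughly (illustrative, not code from the
thread):

        pr_info("fb_ddc: calling setsda\n");
        algo_data->setsda(algo_data->data, 1);
        pr_info("fb_ddc: setsda returned\n");   /* not reached if the return path is corrupted */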


Re: [PATCH v2] libnvdimm: Update persistence domain value for of_pmem and papr_scm device

2020-02-11 Thread Aneesh Kumar K.V

On 2/10/20 11:48 PM, Dan Williams wrote:

On Mon, Feb 10, 2020 at 6:20 AM Aneesh Kumar K.V
 wrote:


Dan Williams  writes:


On Tue, Feb 4, 2020 at 9:21 PM Aneesh Kumar K.V
 wrote:


Currently, kernel shows the below values
 "persistence_domain":"cpu_cache"
 "persistence_domain":"memory_controller"
 "persistence_domain":"unknown"

"cpu_cache" indicates no extra instructions is needed to ensure the persistence
of data in the pmem media on power failure.

"memory_controller" indicates platform provided instructions need to be issued


No, it does not. The only requirement implied by "memory_controller"
is global visibility outside the cpu cache. If there are special
instructions beyond that then it isn't persistent memory, at least not
pmem that is safe for dax. virtio-pmem is an example of pmem-like
memory that is not enabled for userspace flushing (MAP_SYNC disabled).



Can you explain this more? The way I was expecting the application to
interpret the value was, a regular store instruction doesn't guarantee
persistence if you find the "memory_controller" value for
persistence_domain. Instead, we need to make sure we flush data to the
controller at which point the platform will take care of the persistence in
case of power loss. How we flush data to the controller will also be
defined by the platform.


If the platform requires any flush mechanism outside of the base cpu
ISA of cache flushes and memory barriers then MAP_SYNC needs to be
explicitly disabled to force the application to call fsync()/msync().
Then those platform specific mechanisms need to be triggered through a
platform-aware driver.




Agreed. I was thinking we mark the persistence_domain: "Unknown" in that 
case. virtio-pmem marks it that way.







as per documented sequence to make sure data get flushed so that it is
guaranteed to be on pmem media in case of system power loss.

Based on the above use memory_controller for non volatile regions on ppc64.

Signed-off-by: Aneesh Kumar K.V 
---
  arch/powerpc/platforms/pseries/papr_scm.c | 7 ++-
  drivers/nvdimm/of_pmem.c  | 4 +++-
  include/linux/libnvdimm.h | 1 -
  3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/papr_scm.c 
b/arch/powerpc/platforms/pseries/papr_scm.c
index 7525635a8536..ffcd0d7a867c 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -359,8 +359,13 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)

 if (p->is_volatile)
 p->region = nvdimm_volatile_region_create(p->bus, &ndr_desc);
-   else
+   else {
+   /*
+* We need to flush things correctly to guarantee persistance
+*/


There are never guarantees. If you're going to comment, what does
software need to flush, and how?


Can you explain why you say there are never guarantees? If you follow the 
platform
recommended instruction sequence to flush data, we can be sure of data
persistence in the pmem media.


Because storage can always fail. You can reduce risk, but never
eliminate it. This is similar to SSDs that use latent capacitance to
flush their write caches on drive power loss. Even if the application
successfully flushes its writes to buffers that are protected by that
capacitance, that power source can still (and in practice does) fail.



ok guarantee is not the right term there. Can we say

/* We need to flush things correctly to ensure persistence */


What I was trying to understand/clarify was the detail an application 
can infer by looking at the value of persistence_domain?


Do you agree that below can be inferred from the "memory_controller" 
value of persistence_domain


1) Application needs to use cache flush instructions and that ensures 
data is persistent across power failure.



Or are you suggesting that application should not infer any of those 
details looking at persistence_domain value? If so what is the purpose 
of exporting that attribute?









+   set_bit(ND_REGION_PERSIST_MEMCTRL, &ndr_desc.flags);
 p->region = nvdimm_pmem_region_create(p->bus, &ndr_desc);
+   }
 if (!p->region) {
 dev_err(dev, "Error registering region %pR from %pOF\n",
 ndr_desc.res, p->dn);
diff --git a/drivers/nvdimm/of_pmem.c b/drivers/nvdimm/of_pmem.c
index 8224d1431ea9..6826a274a1f1 100644
--- a/drivers/nvdimm/of_pmem.c
+++ b/drivers/nvdimm/of_pmem.c
@@ -62,8 +62,10 @@ static int of_pmem_region_probe(struct platform_device *pdev)

 if (is_volatile)
 region = nvdimm_volatile_region_create(bus, &ndr_desc);
-   else
+   else {
+   set_bit(ND_REGION_PERSIST_MEMCTRL, &ndr_desc.flags);
 region = nvdimm_pmem_region_create(bus, &ndr_desc);
+   }

 if (!region)
 

Re: [PATCH v3 1/3] powerpc/tm: Fix clearing MSR[TS] in current when reclaiming on signal delivery

2020-02-11 Thread Sasha Levin
Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: 2b0a576d15e0 ("powerpc: Add new transactional memory state to 
the signal context").

The bot has tested the following trees: v5.5.2, v5.4.18, v4.19.102, v4.14.170, 
v4.9.213, v4.4.213.

v5.5.2: Build OK!
v4.19.102: Build OK!
v4.14.170: Failed to apply! Possible dependencies:
1c200e63d055 ("powerpc/tm: Fix endianness flip on trap")
92fb8690bd04 ("powerpc/tm: P9 disable transactionally suspended 
sigcontexts")

v4.9.213: Failed to apply! Possible dependencies:
1c200e63d055 ("powerpc/tm: Fix endianness flip on trap")
92fb8690bd04 ("powerpc/tm: P9 disable transactionally suspended 
sigcontexts")

v4.4.213: Failed to apply! Possible dependencies:
1c200e63d055 ("powerpc/tm: Fix endianness flip on trap")
92fb8690bd04 ("powerpc/tm: P9 disable transactionally suspended 
sigcontexts")
a7d623d4d053 ("powerpc: Move part of giveup_vsx into c")
b86fd2bd0302 ("powerpc: Simplify TM restore checks")
d11994314b2b ("powerpc: signals: Stop using current in signal code")
d96f234f47af ("powerpc: Avoid load hit store in setup_sigcontext()")
e1c0d66fcb17 ("powerpc: Set used_(vsr|vr|spe) in sigreturn path when MSR 
bits are active")


NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

-- 
Thanks,
Sasha


Re: [PATCH RESEND] macintosh: convert to i2c_new_scanned_device

2020-02-11 Thread Michael Ellerman
Wolfram Sang  writes:
> Move from the deprecated i2c_new_probed_device() to the new
> i2c_new_scanned_device(). No functional change for this driver because
> it doesn't check the return code anyhow.
>
> Signed-off-by: Wolfram Sang 
> ---
>
> I can take this via I2C tree if this makes things easier...

Yes please. Sorry I missed it before.

Acked-by: Michael Ellerman 

cheers

>  drivers/macintosh/therm_windtunnel.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/macintosh/therm_windtunnel.c 
> b/drivers/macintosh/therm_windtunnel.c
> index 8c744578122a..f15fec5e1cb6 100644
> --- a/drivers/macintosh/therm_windtunnel.c
> +++ b/drivers/macintosh/therm_windtunnel.c
> @@ -321,10 +321,10 @@ do_attach( struct i2c_adapter *adapter )
>  
>   memset(, 0, sizeof(struct i2c_board_info));
>   strlcpy(info.type, "therm_ds1775", I2C_NAME_SIZE);
> - i2c_new_probed_device(adapter, , scan_ds1775, NULL);
> + i2c_new_scanned_device(adapter, , scan_ds1775, NULL);
>  
>   strlcpy(info.type, "therm_adm1030", I2C_NAME_SIZE);
> - i2c_new_probed_device(adapter, , scan_adm1030, NULL);
> + i2c_new_scanned_device(adapter, , scan_adm1030, NULL);
>  
>   if( x.thermostat && x.fan ) {
>   x.running = 1;
> -- 
> 2.20.1
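
A hedged aside, not part of the patch: unlike i2c_new_probed_device(), which
returned NULL on failure, i2c_new_scanned_device() returns an ERR_PTR, so if
the driver ever wanted to report a missing sensor the call could be checked
roughly like this:

        struct i2c_client *client;

        client = i2c_new_scanned_device(adapter, &info, scan_ds1775, NULL);
        if (IS_ERR(client))
                pr_warn("therm_windtunnel: DS1775 not found (%ld)\n",
                        PTR_ERR(client));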


[Bug 201723] [Bisected][Regression] THERM_WINDTUNNEL not working any longer in kernel 4.19.x (PowerMac G4 MDD)

2020-02-11 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=201723

--- Comment #5 from Wolfram Sang (w...@the-dreams.de) ---
I contacted Erhard by email to gather some more debug output. If we make
substantial progress, I will report it here.

Sidenote: therm_windtunnel has its own ADM1030 and DS1775 handling, so it
doesn't need the separate drivers from HWMON. In theory, it should, but I guess
no one is up to refactoring all that old code.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

Re: [PATCH V13] mm/debug: Add tests validating architecture page table helpers

2020-02-11 Thread Russell King - ARM Linux admin
On Tue, Feb 11, 2020 at 06:33:47AM +0100, Christophe Leroy wrote:
> 
> 
> Le 11/02/2020 à 03:25, Anshuman Khandual a écrit :
> > 
> > 
> > On 02/10/2020 04:36 PM, Russell King - ARM Linux admin wrote:
> > > There are good reasons for the way ARM does stuff.  The generic crap was
> > > written without regard for the circumstances that ARM has, and thus is
> > > entirely unsuitable for 32-bit ARM.
> > 
> > Since we don't have an agreement here, let's just settle with disabling the
> > test for now on platforms where the build fails. CONFIG_EXPERT is enabling
> > this test for better adaptability and coverage, hence how about reframing
> > the config like this ? This at the least conveys the fact that EXPERT only
> > works when the platform is neither IA64 nor ARM.
> 
> Agreed
> 
> > 
> > config DEBUG_VM_PGTABLE
> > bool "Debug arch page table for semantics compliance"
> > depends on MMU
> > depends on ARCH_HAS_DEBUG_VM_PGTABLE || (EXPERT &&  !(IA64 || ARM))
> 
> I think it's maybe better to have a dedicated depends line:
> 
> depends on !IA64 && !ARM
> depends on ARCH_HAS_DEBUG_VM_PGTABLE || EXPERT
> 
> The day arm and/or ia64 is ready for building the test, we can remove that
> depends.

Never going to happen as it's technically infeasible, sorry.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up